Rapid AI development is putting security and privacy at risk

An AI-powered phone mockup
(Image credit: Shutterstock / ZinetroN)

Rapid development of artificial intelligence is subject to a number of threats including misdirection, data poisoning and privacy attacks according to the National Institute of Standards and Technology (NIST).

A report from NIST states that hostile actors can attack and confuse AI systems and there is no way to fully protect against it.

The publication is intended to promote the responsible development of AI tools and help industries recognize that all AI can be subject to attacks, so greater care should be taken in deploying AI.

Evade, poison, abuse

Due to the massive data sets used to train large language models (LLM), it is not possible to fully audit all of the data being fed to an AI to train it, leaving vulnerabilities in the accuracy of the data, its content, or how it will respond to certain queries.

AI can be targeted during its training in an attack known as poisoning, which involves the AI recognizing obscene language as a common part of communication by throwing in swear words and toxic language into training material. In the past, AI trained on poisoned data have quickly become racist and derogatory in their responses to certain questions

There are also concerns that evasion attacks could target AI post-deployment by changing its recognition of inputs, or how an AI responds to an input. One example given in the publication is adding additional markings to a stop sign at an intersection, causing a self-driving car to not recognize the sign, potentially causing an accident.

The publication also highlights that the sources used to train AI can be identified by reverse engineering its responses to queries, and then adding malicious examples or information to these sources prompting inappropriate responses from the AI.

Finally, it is possible for malicious actors to compromise a legitimate source of information used by the AI and edit its contents to change the AI’s behavior so that it no longer works within the context of its intended use.

The most worrying part of these attacks, the publication notes, is that these attacks can be done with “black-box” knowledge. Black-box implies that the attackers require very little knowledge of AI systems in order to carry out a successful attack. White-box would imply full knowledge of a system, and a partial knowledge is known as gray-box. 

One of the authors of the publication, NIST computer scientist Apostol Vassilev, said, “We are providing an overview of attack techniques and methodologies that consider all types of AI systems. 

“We also describe current mitigation strategies reported in the literature, but these available defenses currently lack robust assurances that they fully mitigate the risks. We are encouraging the community to come up with better defenses.”

More from TechRadar Pro

Benedict Collins
Staff Writer (Security)

Benedict has been writing about security issues for over 7 years, first focusing on geopolitics and international relations while at the University of Buckingham. During this time he studied BA Politics with Journalism, for which he received a second-class honours (upper division), then continuing his studies at a postgraduate level, achieving a distinction in MA Security, Intelligence and Diplomacy. Upon joining TechRadar Pro as a Staff Writer, Benedict transitioned his focus towards cybersecurity, exploring state-sponsored threat actors, malware, social engineering, and national security. Benedict is also an expert on B2B security products, including firewalls, antivirus, endpoint security, and password management.

Read more
An abstract image of digital security.
Identifying the evolving security threats to AI models
A hand reaching out to touch a futuristic rendering of an AI processor.
Balancing innovation and security in an era of intensifying global competition
An abstract image of digital security.
Looking before we leap: why security is essential to agentic AI success
A stylized depiction of a padlocked WiFi symbol sitting in the centre of an interlocking vault.
Sounding the alarm on AI-powered cybersecurity threats in 2025
A padlock resting on a keyboard.
AI-powered cyber threats demand enhanced security awareness for SMEs and supply chains
A person holding out their hand with a digital AI symbol.
How will the evolution of AI change its security?
Latest in Pro
An AI face in profile against a digital background.
How to harmonize the complexities of global AI regulation
Person using a laptop.
The hidden costs of your on-premise software
A hand reaching out to touch a futuristic rendering of an AI processor.
Driving innovation and reshaping the insurance landscape with AI
Hands typing on a keyboard surrounded by security icons
Outdated ID verification myths put businesses at risk
China
Chinese hackers targeting Juniper Networks routers, so patch now
Google Meet create custom backgrounds
More AI features are coming to Google Workspace
Latest in News
A collage of Tom Holland's unmasked Spider-Man and Sadie Sink's Max in Stranger Things season 4
Marvel reportedly casts Stranger Things star Sadie Sink in Spider-Man 4, but I don't want her to tackle the roles she's rumored to play
Google Gemini Robotics
Gemini just got physical and you should prepare for a robot revolution
Lilo & Stitch Official Trailer
Stitch crashes into earth and steals our hearts with the first trailer for the live-action Lilo & Stitch
GTA 5
GTA Online publisher Take-Two is gunning for a black market that’s basically heaven for cheaters
Y2K cast looking shocked
Y2K has a streaming release date on Max, so you can witness the technology uprising at home
The Discovery+ homepage
Discovery+ just got a big update to its streaming app that makes it more like Max – here are 5 great new features to try