How will the evolution of AI change its security?

A person holding out their hand with a digital AI symbol.
(Image credit: Shutterstock / LookerStudio)

We’re in the midst of an AI hype cycle. On either side are extremes: from solving all problems, accelerating humanity’s demise, to being massively overhyped.

In the middle is everyone else with varying degrees of awareness of both the opportunities and risks. There is still much uncertainty, partly due to the nature of the technology itself: AI systems are complex and often not fully understood, leading to significant unknowns.

It’s human nature to understand how things work, even if it is at a conceptual level. For example, you don’t need to be a Computer Scientist to understand that web-search is akin to looking up relevant keywords within a massive database for matches. The big difference here is how AI finds patterns and comes up with answers that are not readily intuitive nor explainable to the user, even for experts. So the opaqueness of AI is undermining the trust we might place in it.

It’s why we’re seeing efforts to establish ways of improving trust (such as the growing activity within the explainable AI field). For instance, the UK government has introduced an AI assurance platform, while the European Union’s AI Act aims to ensure better conditions for developing and using the technology.

And this trust is something that needs to be established soon. AI will become so pervasive that even if we tell workforces to proceed with extreme caution, most will struggle to even make a clear distinction between what is AI and what is not AI.

Dr.Peter Garraghan

CEO and co-founder of Mindgard.

The AI technology cycle

Why do I think AI will become that embedded when there are still big gaps in our understanding? For one thing, AI has been successfully used for decades in business for finding patterns or making predictions. It is only within the past few years after a research paper published in 2017 cracked a problem that gave rise to the LLMs that we see (and are discussing) today. No one can predict the future, however a lot of people are investing in it: 84% of CIOs expect to increase their funding in AI by 33% in 2025; the scale of these investments necessitates that companies look at five- or ten-year cycles.

It’s a balancing act between putting initiatives in place geared towards becoming AI-first, while trying to fix today’s problems and ensuring sustainability for the next few years. It’s a situation that most technologies now considered commonplace have been through. We go from idealization and realization to finding practical applications that solve real problems, followed by hype and excitement. Eventually, we recognize the technology’s limitations and address the challenges that emerge, and the technology becomes integrated into daily life in a way that moves beyond the hype cycles.

Where AI diverges from previous technologies is its intrinsically random and opaque nature, which is very different from traditional software. This has significant implications for all aspects of deploying AI. For instance, in security, while many current practices are applicable for securing AI, they serve more as analogies than direct solutions.

The doctor will see you now

Think of it like going to a doctor. When you walk in and say, "I don’t feel well," the doctor doesn’t reply, “Give me your DNA, and I’ll tell you everything wrong.” It doesn’t work like that because, aside from cost issues, DNA is an immensely complex system that we are still trying to understand, it can only reveal certain predispositions, and doesn’t capture environmental factors. Twins, for instance, have the same DNA but can develop different ailments.

Instead, doctors look at your family history, perform some tests, ask questions, and try different approaches to figure things out. Doctors are looking at the problem – your illness – through a socio-technical lens. The investigations into family history, your lifestyle, recent activities, and the social element: what’s going on in your life that could be contributing or the cause of the problem? The technical aspect is the fix – what medicine we have available to treat you – but the social element heavily influences it.

A truly integrated socio-technical approach

As time goes by, it is increasingly apparent that we need to apply the same logic to securing AI. Cybersecurity is recognized as a socio-technical field (most security issues are resultant of people, after all). Right now, there appears to be different framings between the social and the technical. We talk about social engineering, insider threats, educating employees on the risks of opening unfamiliar attachments. Separately, we deploy technical measures to provide security or mitigate the impact of attacks.

Where securing AI will differ is in the need to embed the social and the technical within the same techniques as opposed to viewing them as separate. We’re already seeing examples of where our expectations clash with what the AI models deliver: one recent example was Google Gemini telling someone to ‘please die’.

This highlights multiple points to consider. First, the opaque nature of AI: LLMs don't think like humans do (although it can be good at fooling us to believe otherwise), so it’s hard for us to understand how it could generate such a response based on an innocuous conversation around aging.

Second, if an LLM can output such a response through what appears to be an innocent conversation, what could happen when there is a deliberate attempt to generate malicious responses?

Finally, the Gemini incident underlines the importance of looking at the context in which AI tools are being used and how they are onboarded into an organization. This has a major social dimension, not within the system itself, but rather in how people interact with it.

Further complexity with pervasive AI

How is this any different from any other tool or technology solution? We’ve noticed a tendency for people to anthropomorphize AI to the degree that they haven’t done with any recent technology. The average user is having what they think are conversations with AI. They’re not writing code to generate a response or action, but talking like they’ve used search engines or even other people to find information or do something.

The biggest mistake we could make is assuming that, as AI models become commonplace, the level of attention devoted to their risks can drop off. Even with clear warnings, most users aren’t going to distinguish what is and what is not AI. Our focus needs to be on what informs our AI tools, the models and overlays they’re composed of, and where the weak spots are.

The challenge is significant, but so is the opportunity. With the right approaches we can ensure that AI enhances our world without compromising the trust that underpins it. The journey to secure AI isn’t just about protecting systems—it’s about shaping the future.

We've featured the best AI video editor.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Dr.Peter Garraghan is CEO and co-founder of Mindgard.

Read more
An abstract image of digital security.
Identifying the evolving security threats to AI models
An abstract image of digital security.
Looking before we leap: why security is essential to agentic AI success
A representative abstraction of artificial intelligence
Enterprises aren’t aligning AI governance and AI security. That’s a real problem
A hand reaching out to touch a futuristic rendering of an AI processor.
Balancing innovation and security in an era of intensifying global competition
Abstract image of cyber security in action.
Protectors of the modern world: defending against Shadow ML and Agentic AI
Closing the cybersecurity skills gap
AI security: establishing the first and last layer of defense
Latest in Pro
cybersecurity
What's the right type of web hosting for me?
Security padlock and circuit board to protect data
Trust in digital services around the world sees a massive drop as security worries continue
Hacker silhouette working on a laptop with North Korean flag on the background
North Korea unveils new military unit targeting AI attacks
An image of network security icons for a network encircling a digital blue earth.
US government warns agencies to make sure their backups are safe from NAKIVO security issue
Laptop computer displaying logo of WordPress, a free and open-source content management system (CMS)
This top WordPress plugin could be hiding a worrying security flaw, so be on your guard
construction
Building in the digital age: why construction’s future depends on scaling jobsite intelligence
Latest in News
Ray-Ban Meta Smart Glasses
Samsung's rumored smart specs may be launching before the end of 2025
Apple iPhone 16 Review
The latest iPhone 18 leak hints at a major chipset upgrade for all four models
Quordle on a smartphone held in a hand
Quordle hints and answers for Monday, March 24 (game #1155)
NYT Strands homescreen on a mobile phone screen, on a light blue background
NYT Strands hints and answers for Monday, March 24 (game #386)
NYT Connections homescreen on a phone, on a purple background
NYT Connections hints and answers for Monday, March 24 (game #652)
Quordle on a smartphone held in a hand
Quordle hints and answers for Sunday, March 23 (game #1154)