As AI changes cyber forever, training is the key to keeping pace


Artificial intelligence is already changing professional working patterns across almost every industry. It has the power to drastically reduce the time we spend on routine tasks and free us up to think more strategically in our day-to-day professional lives.

This is no different for the IT and cybersecurity sector. At ISACA, our survey of business and IT professionals in Europe found that almost three quarters (73%) of businesses report that their staff use AI at work.

Yet the key issue with AI, as transformative as it can be, is ensuring we use it responsibly and securely. After all, LLMs are trained on data that is often sensitive, and we need proper guardrails on these tools so that hallucinations do not compromise the integrity of our work. Despite this widespread use, only 17% of the organizations we surveyed have a formal, comprehensive AI policy in place that outlines the business's approach to these issues and provides best practices for use.

Chris Dimitriadis

Chief Global Strategy Officer at ISACA.

AI is changing the threat landscape

At the same time, cyber criminals also have access to AI, and they are using it to strengthen their criminal enterprises and capabilities, making their threats more convincing and effective than ever before. This poses a threat not only to individuals but also to businesses. Businesses are interconnected organizations with networks of suppliers and professional relationships - when one suffers a breach, every organization across the network is at risk.

The recent CrowdStrike IT outage highlights just how vulnerable businesses are should they experience even a single IT fault or cyber attack. When one service provider in the digital supply chain is affected, the whole chain can break, causing large-scale outages – a digital pandemic. One rogue update, the unfortunate result of a lack of foresight and expertise, sparked chaos across a number of critical industries, from aviation and healthcare to banking and broadcasting.

Sometimes such incidents are caused by unintentional mistakes when updating software; sometimes they are the result of a cyberattack. The irony is that cybersecurity companies are themselves part of the supply chain, and the very companies fighting to establish cyber resilience may become victims too, affecting service continuity.

Cyber professionals are acutely aware of this fact - when we asked our survey respondents about generative AI's potential to be exploited by bad actors, 61% were extremely or very worried that this might happen, a sentiment that has barely improved since last year's survey.

Training and upskilling are the key to long-term resilience

AI cuts both ways: bad actors are weaponizing the technology to develop more sophisticated attacks, while cyber professionals are using it to keep pace with the evolving threat landscape and to better detect and respond to those threats. Employees know that they need to keep pace with cyber criminals, upskill, and really get to grips with AI - yet when we asked our survey respondents how familiar they are with the technology, almost three quarters (74%) said they were only somewhat familiar or not very familiar at all.

The CrowdStrike incident has brought the need for a more robust and resilient digital infrastructure to the fore, and the rise of AI will only make cyber threats more significant. As an industry, we must invest in upskilling and training to avoid similar crises in the future, and advancements in technologies like AI could be the key to working more efficiently. The right protocols must be established well ahead of time so that organizations can move quickly when attacks and outages happen and minimize the damage and disruption. But this isn't possible without people with the skills to establish bespoke security frameworks and to ensure everyone involved is trained on how to follow them.

If businesses are to protect themselves and their partners in the long term as well as see the benefits of using AI, they need the right skills in place to identify new threat models, risks and controls. Training in AI across the cybersecurity sector is sorely needed - at the moment, 40% of businesses provide no training to employees in tech positions. Further, 34% of respondents believe they will need to increase their knowledge of AI within the next six months, and an overwhelming 86% feel this training will be necessary within the next two years.

By taking an approach to AI that prioritizes training and comprehensive workplace policies, businesses and employees alike can be confident they are harnessing AI's potential and keeping pace with evolving cyber threats in a secure and responsible manner, protecting both the business itself and every other enterprise within its wider network.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

