Cyber resilience in the age of AI
AI poses cybersecurity risks but also creates opportunities
The AI Safety Summit has been hailed as a “diplomatic breakthrough” by UK Prime Minister Rishi Sunak following the signing of the Bletchley Declaration, an international agreement to address the risks posed by the technology. Working in cybersecurity, I was pleased to see that this declaration, coupled with the release of the 'Capabilities and risks from frontier AI' paper, underscored the multifaceted concerns surrounding cybersecurity vulnerabilities and the potential for AI-driven targeted malware. But beyond identifying the risks to cybersecurity, it’s also important that we discuss how AI can supercharge security technology, so that we encourage innovation in this space.
The AI Safety Summit rightly acknowledged that AI-powered cyber attacks need to be high on the agendas of both private and public organizations. AI empowers cybercrime groups to streamline their attacks, making them more effective at targeting organizations and breaching defenses. According to the FIDO Alliance's recently published Online Authentication Barometer, 54% of respondents noted a surge in suspicious messages and scams, with 52% attributing the growing sophistication of phishing techniques to threat actors leveraging AI.
But the AI Safety Summit missed a beat: the opportunity to underscore how AI can be a catalyst for positive change in the cybersecurity industry. While it is important to address cybersecurity risks on a global scale, a negative discourse that exaggerates those threats distracts us from identifying real and actionable solutions to these challenges. As organizations struggle to keep pace with a rapidly evolving technology and threat landscape, they must address both the risks and the opportunities of AI in cybersecurity.
Misuse risks of AI
To safeguard against AI-powered attacks, organizations must understand how AI is changing the threat landscape. One of the primary concerns is zero-day attacks orchestrated by state-sponsored cybercrime groups. AI's ability to morph malware into variants with unknown signatures allows malicious actors to exploit these vulnerabilities while evading detection for prolonged periods.
AI's analytical prowess amplifies social engineering tactics like phishing by analyzing vast amounts of public data to personalize attacks. This leads to a higher success rate for phishing campaigns and an increased likelihood of breaches. Cybercrime groups can scale both malware attacks and targeted phishing campaigns, posing a severe threat.
Rather than fearing AI’s potential, understanding possible attack vectors for AI-powered threats enables organizations to create targeted prevention and response plans. Leveraging AI, including its analytical prowess, will aid the cybersecurity industry in neutralizing an unprecedented wave of AI-powered attacks.
Empowering cybersecurity: A shift to proactive threat hunting and enhanced zero-trust security
AI can help the cybersecurity industry keep pace with a rapidly evolving threat landscape, and it can also facilitate the adoption of best practices that safeguard against the growing misuse of AI.
Traditional security tools relying on existing knowledge of malware behavior may fall short against new AI-powered polymorphic malware. Embracing threat hunting over a passive approach enables organizations to keep pace with evolving threats.
AI enables enhanced threat detection and hunting. Algorithms analyze diverse datasets, ranging from system logs to network traffic, mapping out patterns in user behavior. This deep understanding of normal activity empowers cybersecurity professionals to proactively hunt for unknown threats by flagging any deviation from a baseline of normal user behavior.
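To make the idea concrete, here is a minimal sketch of baseline-driven anomaly detection, assuming a hypothetical per-user activity metric such as daily data transfer volume; the user names, values, and three-standard-deviation threshold are purely illustrative, not a reference to any specific product.

```python
# Illustrative sketch: flag deviations from a per-user activity baseline.
# All names, values, and thresholds here are hypothetical.
from statistics import mean, stdev

# Hypothetical historical activity (e.g., MB transferred per day) per user.
baseline_activity = {
    "alice": [120, 135, 110, 128, 140, 125, 130],
    "bob":   [300, 290, 310, 305, 295, 315, 300],
}

def is_anomalous(user: str, observed: float, threshold: float = 3.0) -> bool:
    """Return True if today's activity deviates sharply from the user's baseline."""
    history = baseline_activity[user]
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    z_score = abs(observed - mu) / sigma
    return z_score > threshold

# A sudden spike in Alice's data transfer is flagged for investigation.
print(is_anomalous("alice", 900))  # True: far outside her normal range
print(is_anomalous("bob", 305))    # False: consistent with his baseline
```

Real platforms build these baselines from far richer telemetry and learned models, but the principle is the same: model normal behavior first, then hunt for what falls outside it.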
AI also reduces the noise in alerts: by drawing rich context from various data points, it minimizes false positives. This not only streamlines the workflow for security analysts, who would otherwise have to sort through alerts manually, but also allows them to focus on isolating genuine threats.
These AI capabilities also reinforce a zero-trust model. Real-time anomaly detection can trigger the revocation of user access as soon as suspicious behavior is detected, and the algorithm continues to learn and adapt, improving its detection of suspicious entities over time.
Data analysis can also produce real-time risk assessments built from contextual data specific to individual users. Privileges to company resources can then be automatically adjusted in line with the risk score, simplifying adherence to the least-privilege principle of zero-trust security.
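As a rough illustration of how contextual risk signals might drive least-privilege decisions, the sketch below combines a few hypothetical factors into a score and maps it to a privilege tier; the factors, weights, and tiers are assumptions for illustration, and a production zero-trust system would use far richer signals.

```python
# Illustrative sketch: adjust a user's access level from a contextual risk score.
# The risk factors, weights, and privilege tiers are hypothetical.

def risk_score(new_device: bool, unusual_location: bool, off_hours: bool) -> float:
    """Combine contextual signals into a simple weighted risk score (0.0-1.0)."""
    return min(1.0, 0.4 * new_device + 0.4 * unusual_location + 0.2 * off_hours)

def access_level(score: float) -> str:
    """Map the risk score to a privilege tier, in line with least privilege."""
    if score >= 0.7:
        return "blocked: step-up authentication required"
    if score >= 0.4:
        return "restricted: read-only access to non-sensitive resources"
    return "standard: normal role-based privileges"

# A login from a new device in an unusual location scores as high risk.
score = risk_score(new_device=True, unusual_location=True, off_hours=False)
print(access_level(score))  # blocked: step-up authentication required
```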
While the AI Safety Summit rightly brought attention to the risks of AI in cybersecurity, it missed a crucial opportunity to emphasize the positive role AI can play in fortifying defenses. Understanding AI-powered attacks as real yet manageable threats creates space for organizations to safeguard against these risks through cybersecurity best practices and AI-powered cybersecurity tools. As organizations navigate the dual landscape of AI's risks and opportunities, a balanced approach is essential to harness the potential of AI in securing the digital future.
This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
Matt Wiseman is Senior Product Manager at OPSWAT.