Phishing scams playbook: Adapting to keep up with malicious AI

With the rapid advancement of technology, the scale and sophistication of cyberattacks are escalating. Anne Neuberger, the US Deputy National Security Advisor, highlighted this concern in 2023, referencing FBI and IMF data that forecast the annual global cost of cybercrime to surge beyond $23 trillion by 2027. This alarming projection underscores the urgent need for security systems to evolve at pace with the weaponization of AI, which has significantly enhanced the complexity and efficacy of scams.

Originally, phishing attacks were relatively simplistic: fraudsters impersonated legitimate entities via email to deceive individuals into disclosing sensitive information, such as passwords and credit card numbers. They also tricked victims into clicking malicious links or opening infected files, which automatically installed malware on their devices.

However, the advent of SMS business communications, QR codes, and advanced voice manipulation technologies has made phishing schemes increasingly difficult to detect and significantly broadened their potential for damage. This raises a critical question: how does the integration of AI into these tactics amplify the chances of deceiving individuals into divulging confidential information or engaging with harmful links?

Quishing: QR code phishing

Since the onset of the COVID-19 pandemic, there has been a sharp increase in the prevalence of "quishing" scams, a phishing technique that exploits QR codes. This rise coincided with businesses increasingly adopting QR code technology as a contactless alternative to physical documents. Once scanned, these QR codes can route users to malicious websites designed to harvest personal data, such as credit card information, or automatically download malware onto the device used to scan them.
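By way of illustration, the short Python sketch below shows one way a decoded QR payload could be screened before the link is ever opened. It is a minimal, hypothetical example only: it assumes the third-party pyzbar and Pillow packages are installed, and the allowlisted domain is a made-up placeholder, not a real service.

from urllib.parse import urlparse

from PIL import Image              # third-party Pillow package
from pyzbar.pyzbar import decode   # third-party pyzbar package (requires the zbar library)

TRUSTED_DOMAINS = {"parcel-example.com"}  # hypothetical allowlist for illustration only

def screen_qr_image(path: str) -> None:
    # Decode every QR symbol in the image and flag anything that is not a
    # plain HTTPS link to a domain on the allowlist.
    for symbol in decode(Image.open(path)):
        payload = symbol.data.decode("utf-8", "replace")
        parsed = urlparse(payload)
        if parsed.scheme == "https" and parsed.hostname in TRUSTED_DOMAINS:
            print(f"allowed:    {payload}")
        else:
            print(f"SUSPICIOUS: {payload}")

if __name__ == "__main__":
    screen_qr_image("scanned_code.png")  # placeholder file name

In practice, of course, most people scan QR codes with a phone camera that opens the link immediately, with no such check in between, which is precisely what makes quishing effective.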

In 2022, HP uncovered a sophisticated quishing scheme where individuals received emails masquerading as notifications from a parcel delivery service. These emails instructed recipients to make a payment through a QR code. HP's Q4 report from the same year revealed that such QR code-based phishing attacks were far more prevalent than previously recognized. The findings highlighted a growing trend among scammers to utilize QR codes as a means to mimic legitimate businesses, aiming to deceive individuals into surrendering their personal information.

Vishing: voice phishing

As AI advancements surge forward, the advent of voice-altering and voice-generating technologies presents a significant and emerging threat. The Federal Trade Commission, recognizing this danger, issued an alert in 2023 cautioning against the deceptive potential of AI-generated voice clones in telephone calls. This warning was not unfounded: in a notable case in early 2024, a finance worker in Hong Kong was duped into transferring £20 million to an overseas account by scammers wielding deepfake technology.

Vishing, or voice phishing, scams boast an alarmingly high success rate. They exploit the element of surprise, compelling victims to make hasty decisions under pressure. This immediacy, inherent in voice calls, contrasts starkly with email-based scams, where recipients can pause to question the legitimacy of the request. Scammers can also customize their approach and adapt in real-time to the victim's responses and emotional state over the phone, an advantage that email scams lack. This personalized manipulation is significantly more difficult to replicate through text-based phishing attempts.

VALL-E, a notable example of such technology, can mimic a person's voice, emotion, and unique speech patterns from a mere three-second sample of their speech. This capability, which similar models also offer, becomes especially potent in the hands of cybercriminals targeting high-profile individuals, such as company CEOs, for whom abundant audio and video material is available online for AI training purposes. As deepfake technology advances and becomes more accessible, the boundary between reality and artificial fabrication grows increasingly blurry, amplifying the potential for vishing attacks to deceive and manipulate with unprecedented effectiveness.

Smishing: SMS phishing

A 2023 study by NatWest revealed that 28% of UK residents noticed an uptick in scam activities compared to the previous year, with fraudulent SMS messages leading the charge as the most prevalent form of phishing. While many of these SMS scams are easily identifiable as frauds, the ones that manage to evade detection can inflict considerable harm.

One notable incident in 2022 involved a self-taught hacker impersonating an IT professional to secure an Uber employee's password. This seemingly straightforward smishing (SMS phishing) attack paved the way for the hacker to gain extensive access to Uber’s internal networks. While it's tempting to view such incidents as isolated cases, the evolution of Generative AI is likely to empower even novice cybercriminals to execute more complex phishing operations with minimal technical knowledge.

Adding to the concern, the National Cyber Security Centre (NCSC) issued a warning earlier this year about the potential of Generative AI to enhance the believability of scams. These AI tools are now being used to create fake "lure documents" that are free from the usual giveaways of phishing attempts, such as translation, spelling, or grammatical mistakes, thanks to the refinement capabilities of chatbots and accessible Generative AI platforms. This advancement underscores a growing challenge in distinguishing genuine communications from fraudulent ones, raising the stakes in the ongoing battle against cybercrime.

How can cybersecurity keep pace with Generative AI?

In March, Microsoft unveiled a report showing that 87% of UK organizations have become more susceptible to cyberattacks as AI tools grow more accessible, with 39% deemed to be at high risk. This alarming statistic highlights the urgent need for the UK to bolster its cyber defense mechanisms.

The advent of AI has significantly shifted the dynamics of cybersecurity, giving cybercriminals sophisticated new methods to exploit, such as machine learning algorithms designed to uncover and leverage software flaws for more targeted and potent cyberattacks. Nonetheless, Forrester research indicates that 90% of cyberattacks will still involve a human element, suggesting that traditional methods like phishing remain highly effective. This underscores the importance for businesses of not only fortifying their defenses against these familiar tactics but also staying up to date on how AI advancements may reshape them.

To mitigate common attack vectors, businesses should preempt phishing attempts by stopping malicious emails from reaching user inboxes in the first place. Implementing email authentication protocols such as SPF, DKIM, and DMARC can significantly reduce the chances of spoofed emails penetrating email defenses. Even with these measures in place, however, some phishing attempts will eventually breach cybersecurity barriers, making the cultivation of a robust security culture within organizations paramount. This involves educating employees to not only recognize threats but also respond to them correctly.
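To make that first line of defense concrete, the short Python sketch below checks whether a domain publishes SPF and DMARC records in DNS, which is one quick way to gauge how exposed a domain is to spoofing. It is a minimal illustration only: it assumes the third-party dnspython package is installed, uses example.com purely as a placeholder, and omits DKIM, which can only be queried when the sending selector is known.

import dns.resolver  # third-party dnspython package (pip install dnspython)

def get_txt_records(name: str) -> list[str]:
    # Return every TXT string published for a DNS name, or an empty list.
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode("utf-8", "replace") for rdata in answers]

def check_email_auth(domain: str) -> None:
    # SPF lives in a TXT record on the domain itself; DMARC under _dmarc.<domain>.
    spf = [r for r in get_txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in get_txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'found' if spf else 'missing'}, DMARC {'found' if dmarc else 'missing'}")
    for record in spf + dmarc:
        print("   ", record)

if __name__ == "__main__":
    check_email_auth("example.com")  # placeholder domain

Publishing the records is only the start: a DMARC policy of p=quarantine or p=reject tells receiving mail servers how to treat messages that fail authentication, and it is that enforcement which ultimately keeps spoofed emails out of inboxes.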

The 2023 Verizon Data Breach Investigations Report notes that one in three employees might interact with phishing links, and one in eight may disclose personal information when prompted. These statistics are a stark reminder of the continuous need for improvement in organizational cybersecurity practices and emphasize the critical role employees play in the cybersecurity ecosystem. Leveraging technology for detecting and neutralizing phishing threats is indispensable, but it cannot operate in a vacuum. As AI poses increasing challenges, fostering an informed and proactive workforce becomes crucial in intercepting the phishing attempts that elude digital safeguards.

We've featured the best business VPN.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Gerasim Hovannisyian is the Founder and CEO of EasyDMARC, a cloud-native B2B SaaS solving email security and deliverability problems.