Navigating the rise of DeepSeek: balancing AI innovation and security


The rapid advancement of AI tools has intensified global competition, particularly between the United States and China.

The release of DeepSeek’s flagship large language model (LLM), followed closely by Alibaba’s Qwen, has ignited debate within the tech industry. Now, with news that DeepSeek is fast-tracking its R2 model, pulling the release forward from May to as soon as possible, concerns around U.S. AI innovation, market stability, and national security are escalating.

As these developments unfold, they further underscore the global AI arms race, with companies and governments racing to establish dominance in AI-driven applications. The sudden emergence of low-cost, high-performance models has increased scrutiny over data policies, cost structures, and broader market implications.

Steve Povolny, Senior Director of Security Research & Competitive Intelligence at Exabeam.

Security concerns and regulatory scrutiny of DeepSeek

Beyond immediate market disruption, DeepSeek has raised serious security concerns. Recent research revealed that DeepSeek suffered a significant data breach, exposing over one million records and fueling fears about how AI models manage and protect user information.

This breach has amplified existing concerns about data security, particularly as AI models continue to expand their access to vast datasets. DeepSeek leverages open-source data from GitHub and Wikipedia as part of its training set. While these repositories provide vast information, they also introduce potential vulnerabilities related to misinformation, bias, and cybersecurity threats.

As a result, regulatory scrutiny of DeepSeek has intensified. Its operations have already been blocked in Italy, South Korea, and Taiwan, and a bipartisan bill has been introduced in the U.S. Congress to ban DeepSeek from government devices due to national security concerns.

Additionally, several states, including Texas, New York, and Virginia, responded by prohibiting the use of DeepSeek on government-issued devices and networks. These actions reflect the growing apprehension about foreign AI models, particularly regarding data governance and security risks.

While LLMs trained on vast data sources inevitably present some risk for misinformation and bias, these concerns do not represent a critical threat to AI development. Modern LLMs process terabytes of data, meaning any single data set, such as Wikipedia, is only a small fraction of the total input. Thus, while concerns about data accuracy are valid, they do not pose an existential threat to AI’s progression. Instead, they underscore the need for rigorous oversight and validation mechanisms to ensure responsible AI deployment.

The balance between AI innovation and security

The rise of DeepSeek and Qwen highlights the need for organizations to strike a balance between embracing AI innovation and ensuring security. While competition drives technological advancement, it also introduces significant risks that demand careful evaluation. Security leaders must adopt a “zero trust” posture by default, vetting AI models thoroughly before integrating them into their workflows. Transparency in model training, data sourcing, and governance structures should be a prerequisite for adoption.

Achieving this requires a proactive security strategy: shifting from reactive AI security measures to real-time risk monitoring, behavioral analytics, and robust governance frameworks that safeguard data integrity and compliance. Security leaders should implement a comprehensive approach that includes the following:

  • Evaluate AI model security & compliance: Organizations should conduct thorough security assessments to understand how AI models handle sensitive data, comply with regulatory requirements, and mitigate misinformation risks.
  • Deploy continuous AI threat monitoring: Implement behavioral analytics and anomaly detection to identify suspicious activity in AI-driven workflows, ensuring early detection of potential risks.
  • Strengthen data protection & access controls: Enforce Zero Trust principles, restricting access to AI-generated data while leveraging automated threat detection to mitigate potential security gaps.
  • Enhance incident response for AI threats: Organizations must update their incident response playbooks to include AI-specific risks, ensuring rapid response to data leaks, adversarial AI manipulation, and unauthorized model access.
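To make the continuous-monitoring point concrete: a minimal sketch of behavioral baselining, flagging accounts whose AI-tool usage deviates sharply from the workforce norm. This is an illustration only, not tied to any vendor product; the user names, counts, and threshold are hypothetical.

```python
from statistics import mean, stdev

def detect_anomalies(baseline, current, threshold=3.0):
    """Flag users whose current AI-query volume deviates more than
    `threshold` standard deviations from the baseline population.

    baseline: list of typical per-user daily query counts
    current:  dict mapping user -> today's query count
    """
    mu = mean(baseline)
    sigma = stdev(baseline)
    flagged = {}
    for user, count in current.items():
        z = (count - mu) / sigma if sigma else 0.0
        if abs(z) > threshold:
            flagged[user] = round(z, 2)
    return flagged

# Hypothetical typical daily query counts observed across the workforce
baseline = [40, 55, 48, 62, 50, 45, 58, 52, 47, 60]

# Hypothetical activity today: one account shows a sudden burst of bulk prompts
today = {"alice": 51, "bob": 49, "mallory": 400}

print(detect_anomalies(baseline, today))
```

Real deployments would layer richer signals (time of day, data volume, destination models) on top of per-user baselines, but even a simple statistical threshold like this surfaces the kind of outlier behavior the bullet points above describe.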

DeepSeek’s emergence should be viewed as both a challenge and an opportunity. While U.S. markets have been thrown into turmoil by the influx of Chinese models, this disruption is likely to drive technological innovation, enhanced security frameworks, and more robust AI policies.

Organizations that take a proactive approach can harness the benefits of AI while mitigating potential risks. By strengthening security protocols and governance measures, businesses can safely integrate AI into their operations without compromising data integrity or compliance. Ultimately, by aligning innovation with security, organizations can navigate the evolving AI landscape with confidence and control.


This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Steve Povolny is Senior Director of Security Research & Competitive Intelligence at Exabeam.
