US Navy bans use of DeepSeek "in any capacity" due to "potential security and ethical concerns"
Neither work-related nor personal use is allowed
- The US Navy has banned the use of new chatbot DeepSeek
- DeepSeek is a Chinese-owned AI model
- The chatbot has emerged as a ChatGPT competitor
New AI chatbot DeepSeek has caused a stir recently, disrupting the market after its open-source large language model appeared to severely undercut existing models on cost.
But DeepSeek is a Chinese firm, owned and operated by a hedge fund in Hangzhou, which has spooked US tech firms and government institutions alike. The US Navy has instructed all members to avoid using the technology "in any capacity" due to "potential security and ethical concerns associated with the model's origin and usage."
The move is reportedly part of the Department of the Navy’s Chief Information Officer’s generative AI policy, and email recipients were asked to “refrain from downloading, installing, or using the DeepSeek model.”
AI’s privacy problems
The privacy policy for DeepSeek would probably unsettle the privacy-conscious among us, given the chatbot apparently collects users' personal information and stores it on servers in China.
However, it's worth noting this is not specific to DeepSeek, and ChatGPT is also a privacy nightmare. Most of us have probably grown accustomed to the claims of tech companies harvesting our data, but that doesn’t mean we should forget it's happening - especially with big and familiar industry names.
But the privacy policy isn't the only concern, as DeepSeek suffered from its own success in the form of "large-scale malicious attacks" against the platform. The incident, most likely a Distributed Denial-of-Service (DDoS) attack, forced the platform to temporarily pause new signups.
"Open-source AI models like DeepSeek, while offering accessibility and innovation, are increasingly vulnerable to supply chain attacks triggered during large-scale cyberattacks," said Aditya Sood, VP of Security Engineering and AI Strategy at Aryaka.
“These attacks, where adversaries exploit the reliance on third-party dependencies, pre-trained models, or public repositories, can have severe consequences. Adversaries may tamper with pre-trained models by embedding malicious code, backdoors, or poisoned data, which can compromise downstream applications.”
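One common safeguard against the tampering Sood describes is verifying a downloaded model file against a checksum published by the maintainer before loading it. As a minimal sketch (the function names and digest value here are hypothetical, not from any specific AI framework):

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 8192) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large model files don't have to fit in memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path: str, expected_digest: str) -> bool:
    """Only treat the model file as trustworthy if its digest
    matches the one published by the maintainer."""
    return sha256_of_file(path) == expected_digest
```

A check like this catches a file that was swapped or corrupted in transit; it does not, of course, help if the published model itself was poisoned before release.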
Via CNBC
Ellen has been writing for almost four years, with a focus on post-COVID policy, whilst studying for a BA in Politics and International Relations at the University of Cardiff, followed by an MA in Political Communication. Before joining TechRadar Pro as a Junior Writer, she worked for Future Publishing's MVC content team, working with merchants and retailers to upload content.