Chinese researchers repurpose Meta's Llama model for military intelligence applications

An image of Meta's Llama 3
(Image credit: Meta)

  • Chinese researchers adapt Meta's Llama model for military intelligence use
  • ChatBIT showcases risks of open-source AI technology
  • Meta distances itself from unauthorized military applications of Llama

Meta’s Llama AI model is open source and freely available for use, but the company’s licensing terms clearly state the model is intended solely for non-military applications.

However, there have long been concerns about how open-source technology can be monitored to ensure it is not put to the wrong purposes, and recent reports appear to validate them: Chinese researchers with links to the People's Liberation Army (PLA) have reportedly created a military-focused AI model called ChatBIT using Llama.

The emergence of ChatBIT highlights the potential and challenges of open source technology in a world where access to advanced AI is increasingly viewed as a national security issue.

A Chinese AI model for military intelligence

A recent study by six Chinese researchers from three institutions, including two connected to the People's Liberation Army's Academy of Military Science (AMS), describes the development of ChatBIT, created using an early version of Meta’s Llama model.

By incorporating their own parameters into the Llama 2 13B large language model, the researchers aimed to produce a military-focused AI tool. Follow-up academic papers describe how ChatBIT was adapted to process military-specific dialogues and support operational decision-making, reportedly performing at around 90% of GPT-4's capability. However, it remains unclear how these performance figures were calculated, as no detailed testing procedures or field deployments have been disclosed.

Analysts familiar with Chinese AI and military research have reportedly reviewed these documents and supported the claims about ChatBIT’s development and functionality. They assert that ChatBIT’s reported performance metrics align with experimental AI applications but note that the lack of clear benchmarking methods or accessible datasets makes it challenging to confirm the claims.

Furthermore, an investigation by Reuters provides another layer of support, citing sources and analysts who have reviewed materials linking PLA-affiliated researchers to ChatBIT’s development. The investigation states that these documents and interviews reveal attempts by China’s military to repurpose Meta’s open-source model for intelligence and strategy tasks, making it the first publicized instance of a national military adapting Llama’s language model for defense purposes.

The use of open-source AI for military purposes has reignited the debate on the potential security risks associated with publicly available technology. Meta, like other tech companies, has licensed Llama with clear restrictions against its use in military applications. However, as with many open-source projects, enforcing such restrictions is practically impossible. Once the source code is available, it can be modified and repurposed, allowing foreign governments to adapt the technology to their specific needs. The case of ChatBIT is a stark example of this challenge, as Meta’s intentions are being bypassed by those with differing priorities.

This has led to renewed calls within the US for stricter export controls and further limitations on Chinese access to open-source and open-standard technologies such as RISC-V. These moves aim to prevent American technologies from supporting potentially adversarial military advancements. Lawmakers are also exploring ways to limit US investments in China's AI, semiconductor, and quantum computing sectors to curb the flow of expertise and resources that could fuel the growth of China's tech industry.

Despite the concerns surrounding ChatBIT, some experts question its effectiveness given the relatively limited data used in its development. The model is reportedly trained on around 100,000 military dialogue records, a small corpus compared with the vast datasets used to train state-of-the-art language models in the West. Analysts suggest this may restrict ChatBIT's ability to handle complex military tasks, especially when leading large language models are trained on trillions of tokens.

Meta responded to these reports by noting that the Llama 2 13B model used in ChatBIT's development is now an outdated version, with the company already working on Llama 4. Meta also distanced itself from the PLA, saying any such use of Llama is unauthorized. Molly Montgomery, Meta's director of public policy, said, "Any use of our models by the People's Liberation Army is unauthorized and contrary to our acceptable use policy."

Via Tom's Hardware

Efosa Udinmwen
Freelance Journalist

Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking. Efosa developed a keen interest in technology policy, specifically exploring the intersection of privacy, security, and politics. His research delves into how technological advancements influence regulatory frameworks and societal norms, particularly concerning data protection and cybersecurity. Upon joining TechRadar Pro, in addition to privacy and technology policy, he is also focused on B2B security products. Efosa can be contacted at this email: udinmwenefosa@gmail.com
