Meta purportedly trained its AI on more than 80TB of pirated content and then open-sourced Llama for the greater good
Meta employees warned against using pirated content
![Zuckerberg Meta AI](https://cdn.mos.cms.futurecdn.net/wVydSQNQJReMzxQNudy2w7-1200-80.jpg)
- Zuckerberg reportedly pushed for AI implementation despite employee objections
- Employees allegedly discussed ways to conceal how the company acquired its AI training data
- Court filings suggest Meta took steps to mask its AI training activities, though unsuccessfully
Meta is facing a class-action lawsuit alleging copyright infringement and unfair competition over the training of its AI model, Llama.
According to court documents released by vx-underground, Meta allegedly downloaded nearly 82TB of pirated books from shadow libraries such as Anna’s Archive, Z-Library, and LibGen to train its AI systems.
Internal discussions reveal that some employees raised ethical concerns as early as 2022, with one researcher explicitly stating, “I don’t think we should use pirated material,” while another said, “Using pirated material should be beyond our ethical threshold.”
Meta made efforts to avoid detection
Despite these concerns, Meta appears not only to have ploughed on but also to have taken steps to avoid detection. In April 2023, an employee warned against using corporate IP addresses to access pirated content, while another said that “torrenting from a corporate laptop doesn’t feel right,” adding a laughing emoji.
Meta employees also allegedly discussed ways to prevent the company’s infrastructure from being directly linked to the downloads, raising questions about whether Meta knowingly bypassed copyright law.
In January 2023, Meta CEO Mark Zuckerberg reportedly attended a meeting where he pushed for AI implementation at the company despite internal objections.
Meta isn't alone in facing legal challenges over AI training. OpenAI has been sued multiple times for allegedly using copyrighted books without permission, including a case filed by The New York Times in December 2023.
Nvidia is also under legal scrutiny for training its NeMo model on nearly 200,000 books, and a former employee disclosed that the company scraped over 426,000 hours of video daily for AI development.
And in case you missed it, OpenAI recently claimed that DeepSeek unlawfully obtained data from its models, highlighting the ongoing ethical and legal dilemmas surrounding AI training practices.
Via Tom's Hardware
Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking. Efosa developed a keen interest in technology policy, specifically exploring the intersection of privacy, security, and politics. His research delves into how technological advancements influence regulatory frameworks and societal norms, particularly concerning data protection and cybersecurity. Upon joining TechRadar Pro, in addition to privacy and technology policy, he is also focused on B2B security products. Efosa can be contacted at this email: udinmwenefosa@gmail.com