Why DeepSeek R1 could be right for your business, and why the hysteria around it is wrong
DeepSeek’s R1 model beat American tech giants at their own game, and here’s how

DeepSeek R1, released on January 20, 2025, is an open source large language model (LLM) on par with the capabilities of OpenAI's o1 model. You can scale it to run on your own hardware, or on the cloud infrastructure of your choice, today, and it won't cost you anything (well, maybe a GPU).
With this development, artificial intelligence (the futurist term adopted by companies such as OpenAI, Anthropic, Nvidia and Google, who want you to believe that LLMs can achieve more than they currently can, and probably ever will, and only through their input and infinite money pile) has been freely proliferated for all, sending these companies into freefall and damage control.
DeepSeek R1 in the press
Certain reporters on the tech industry have followed suit. Some suggest DeepSeek is a Chinese state psyop (untrue; it's a startup that came out of a hedge fund). Others conflate the R1 and V3 models: R1 is based on V3, and the latter powers the web/app version of DeepSeek, which you may have heard about in the context of it having to be jailbroken (bypassing its safeguards with enough prompt engineering) into referencing the 1989 Tiananmen Square massacre. Still others accuse DeepSeek of plagiarism because some cherry-picked output 'believes' that it's ChatGPT (we'll get onto this latter point in a minute).
More sinister are articles that have implied DeepSeek's models are dangerous in some way, or that are 'just asking questions' about whether it's 'safe' to use at all. News coverage has editorialised studies by security research firms as "worrying" or "concerning", hammering home the point with plausible deniability by using hero images of the Chinese flag, or a snarling man in a balaclava sat at a keyboard, looming in the background.
To be clear, yes, state actors are a thing, but you cannot see that a new tech thing has come out of China, jump to the conclusion that the Chinese state is behind it, and still expect to be taken seriously as a mammal, let alone a journalist.
These stories' headlines couch a "stupid" and plausibly "xenophobic" agenda (not my words, but those of tech journalist, Better Offline podcast host, and EZPR CEO Ed Zitron) via quotes from these studies. They allude to jailbreaking while also backlinking, for that sweet search engine optimization, to an article admitting that ChatGPT and other LLMs are susceptible to it too.
It's bewildering, then, to read headlines that falsely imply DeepSeek has stolen code or work from OpenAI, and it's important to refute this implication. It's readily available information, if you look under the mound of garbage, that DeepSeek trains its models using synthetic data: AI-generated output from other LLMs, like ChatGPT. An output 'believing' it's ChatGPT is an unfortunate hallucination, perhaps, but there's nothing wrong or unethical about this approach. Even Elon Musk has admitted that synthetic data is the way forward for AI training, and reams of it are already available in repositories online.
False accusations that DeepSeek has stolen code from OpenAI fall apart easily for another obvious reason: R1 is open source, so you (anyone) can go into the code and see that that's patently untrue. That's the whole point, and, by ignoring it, client journalists are muddying the waters around DeepSeek.
They're motivated to do this because, with the release of R1, 'artificial intelligence', America's next top tech bubble, has been freely proliferated. That's a direct threat to the business model of the sites they work for, and to the 'line go up' mentality shared by American politicians.
The AI industry (doesn’t exist)
Now that DeepSeek R1 is out there, you can't be sold an LLM, AI tools, or AI writers, because it's all freely available. There's no incentive to buy a $200 ChatGPT Pro subscription (which OpenAI still sells at a loss, by the way), or to race to buy a beefy Nvidia GPU, because the model scales to the hardware you have, and peer-to-peer networks can be leveraged to pool processing power for AI workloads.
AI is now for everyone, which is The One Thing They Didn't Want to Happen. OpenAI's CEO Sam Altman, for instance, wanted you buying into the narrative that pumping catastrophic amounts of money into training models, and building electricity-juicing, water-slurping data centers, was the only way to make AI happen, because that's how his company is supposed to make money (and it really, really doesn't; OpenAI lost $5 billion in 2024).
Altman has offered mealy-mouthed praise to DeepSeek for R1's efficiency, but, ever graceful in being shown to be wearing no clothes, also said that OpenAI will "obviously deliver much better models."
This claim is graceless in defeat and patently inaccurate: with R1, DeepSeek is thought to have achieved for around $6 million (£4.8m) what OpenAI spends tens of millions of dollars doing. The training costs of OpenAI's closest competitor to R1, OpenAI o1, are unclear, but the less capable GPT-4 cost in the region of $100 million to train.
Democratizing LLMs
There's now an alternative way through. DeepSeek's models are on par with OpenAI's: DeepSeek R1 is described on GitHub as being 'on par with OpenAI o1', the company's most advanced model (and is even better than it at reasoning in some cases), while V3 is thought to be on par with GPT-4.
It's great that a solid open source model now exists, and that it could be created efficiently and cheaply. Those facts make R1 easy to build on via platforms like Hugging Face, and AI's sudden decentralization bodes well for more use cases to be born out of AI. The DeepSeek-pooh-poohing posturing by AI companies doesn't hold up to scrutiny, and so neither does the press closing ranks, because plenty of bigger companies are taking advantage of R1 being open source: Nvidia is now hosting DeepSeek R1 as a NIM microservice, and, in late January 2025, Microsoft, currently OpenAI's largest investor, added distilled versions of DeepSeek R1 to the Azure AI Foundry.
The only company I reckon will come out on top is Nvidia, because it makes the hardware used by hyperscalers (the large-scale data centers you may have heard so much about) built by AI companies in their search for ever more infrastructure, so that they can keep throwing money at a solution without a problem. Nvidia lost a sixth of its value after the launch of DeepSeek R1 on January 20, but all signs point to demand for its top-end Blackwell GPUs remaining strong.
But I want to leave you with the important thing: should you be looking to implement AI in your business, with DeepSeek, these savings are passed on to you. DeepSeek R1 is reportedly up to 30 times cheaper to run than OpenAI's models, in part because you can run it on consumer-level hardware using a distilled version of the model.
That’s bad for big, subscription-driven, AI companies, and the outlets that prop them up, because you can’t argue with cost. The AI bubble isn’t profitable, and DeepSeek represents an existential threat to a business model that has yet to actually begin to function. That’s why you’re having this drivel sluiced into your eyeballs, constantly, and why I think it’s worth giving DeepSeek R1 a try if you have even a passing interest in AI implementation.
Luke Hughes holds the role of Staff Writer at TechRadar Pro, producing news, features and deals content across topics ranging from computing to cloud services, cybersecurity, data privacy and business software.