AI smartphone and laptop sales are said to be slowly dying – but is anyone surprised?

A person holding out their hand with a digital AI symbol.
(Image credit: Shutterstock / LookerStudio)

Almost every year, a report comes along telling us that something in the PC industry is dying, fading away, or that the days of some aspect of computer technology are numbered.

So, when I saw an article about Micron not selling enough memory chips for AI PCs and smartphones – prompting the company to downgrade its revenue forecasts for the coming quarters, and some folks to panic that ‘AI is dying’ – well, it didn't surprise me in the slightest.

This industry does love a bit of doom and gloom at times, but much of this errant noise comes down to how poorly modern-day AI is understood as a whole – certainly in the enthusiast sector.

Let me be clear here: AI isn't dying – we know that. Hell, all you have to do is look at how well Nvidia is doing to grasp just how wrong that assertion is. The thing is, out of all the numerous AI laptops, phones, and other gadgets out there – everything currently being marketed with the AI tagline (I go on another long rant about that here) – the vast bulk of AI processing doesn't actually happen on your tiny laptop. It just doesn't.

Even the best custom-built gaming PC right now couldn't run ChatGPT at even 10% of its full capacity – and that's assuming you could run it at all, given it's not an open-source program that anyone can just go and download.

Sadly, that kind of program requires far too much data and processing power to run fully on a local desktop. There are workarounds and alternative apps, but they generally pale in comparison to the likes of Gemini or GPT in both depth of knowledge and response times. Not exactly surprising, given you're trying to compete with multiple server blades operating in real time. I'm sorry, your RTX 4090 just ain't going to cut it, my friend.
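For the curious, here's roughly what one of those local workarounds looks like in practice: a minimal Python sketch using Hugging Face's transformers library with a deliberately tiny open model (the model choice and prompt here are purely illustrative assumptions). It runs happily on a consumer GPU or even a CPU, which is exactly why it can't hold a candle to a cloud-hosted frontier model:

```python
# A minimal, illustrative sketch of local text generation with a small
# open model. The model choice ("gpt2") and prompt are assumptions for
# demonstration only, not anything a cloud AI service actually ships.
import torch
from transformers import pipeline

# Use the GPU if one is available, otherwise fall back to the CPU
device = 0 if torch.cuda.is_available() else -1

generator = pipeline("text-generation", model="gpt2", device=device)

prompt = "The biggest limitation of running AI locally is"
result = generator(prompt, max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])
```

Swap in a larger open model and the quality improves, but so does the memory and compute bill, which is precisely the wall your desktop runs into.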

RTX 3080 GPU installed in a PC

(Image credit: SvedOliver / Shutterstock)

And that's another important point here – even looking at your custom PC, anyone who tells you that a CPU with a built-in NPU can outgun something like an aging RTX 3080 in AI workloads is pulling the wool over your eyes. Run something like UL's Procyon benchmark suite with its AI Computer Vision test, and you'll see a desktop RTX 4080 score around 700% to 800% higher than an Intel Core Ultra 9 185H-powered laptop. That's not a small margin, and that's giving the Intel chip the benefit of the doubt by not using Nvidia's TensorRT API, where the results swing even further in Team Green's favor.
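You don't need a commercial benchmark suite to see the shape of that gap, either. Here's a rough, illustrative sketch in Python with PyTorch that times the same small computer-vision-style workload on the CPU and then on the GPU. It's no substitute for Procyon (the layers and batch size are made up for illustration), but it tells the same basic story:

```python
# A back-of-the-envelope stand-in for an AI computer-vision benchmark:
# time an identical convolutional workload on the CPU and, if present,
# the GPU. Purely illustrative; not a substitute for UL's Procyon suite.
import time
import torch
import torch.nn as nn

model = nn.Sequential(                       # small CNN-style workload
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
)
batch = torch.randn(32, 3, 224, 224)         # a batch of 224x224 "images"

def time_inference(device: str, runs: int = 20) -> float:
    m, x = model.to(device), batch.to(device)
    with torch.no_grad():
        m(x)                                 # warm-up pass
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            m(x)
        if device == "cuda":
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs

print(f"CPU: {time_inference('cpu') * 1000:.1f} ms per batch")
if torch.cuda.is_available():
    print(f"GPU: {time_inference('cuda') * 1000:.1f} ms per batch")
```

On most machines with a discrete GPU, the second number comes out at a small fraction of the first, which is the same dynamic the Procyon results capture on a much grander scale.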

The thing is, the companies, tools, and techniques that are doing well in the AI ecosystem are already well-established. For one, if you have an RTX graphics card, the likelihood is you've already got plenty of performance to run rings around most modern-day 'AI' CPUs with an NPU built in. For another, pretty much every AI program worth running leans on server blades to deliver that performance – very little runs locally or without some form of hookup to the cloud.

Google has now pretty much rolled out Gemini to the bulk of its Android devices, and it'll be landing on its Nest speakers as well in the coming months (with a beta version technically already available, thanks to some fun Google Home Public Preview skullduggery). And to be clear, that's a four-year-old speaker at this point, not exactly cutting-edge tech.

An Nvidia RTX 4090 in its retail packaging

(Image credit: Future)

This is just the beginning

Many years back, I had a discussion with Roy Taylor, who at the time was AMD's Corporate Vice President of Media & Entertainment, specializing in VR and the advancements in that field.

My memory is a little hazy, but the long and short of the conversation was that as far as graphics card performance was concerned, to get a true-to-life experience in VR, with a high enough pixel density and sufficient frame rate to ensure a human couldn't tell the difference, we'd need GPUs capable of driving petaflops of performance. I think the exact figure was around the 90 PFLOPs mark (for reference, an RTX 4090 is still well over 100x less potent than that).

In my mind, local AI falls very much into the same camp. It's a realm of apps, utilities, and tools that isn't likely to ever inhabit your local gaming PC, but will instead reside solely on server blades and supercomputers. There's just no way an isolated computer system can compete – even if we froze AI development at its current state, it would take local hardware years to catch up in terms of overall performance. That's not necessarily a bad thing, or the end of the world either.

There is a silver lining for us off-the-grid folk, though, and it all hinges on the GPU manufacturers. AI programming, particularly machine learning, predominantly relies on parallel computing. That's something GPUs are wildly good at – far better than CPUs – and particularly Nvidia GPUs with their Tensor cores. It's the tech behind the DLSS and FSR upscaling we know and love, driving up frame rates without sacrificing in-game graphical fidelity.
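If you're curious how software actually taps those Tensor cores, the usual trick is simply running the heavy matrix math at lower precision. Here's a hedged little PyTorch sketch (the sizes are purely illustrative); on modern Nvidia cards the half-precision path is the one the Tensor cores accelerate, and the real-world speed-up varies hugely by GPU and workload:

```python
# A minimal sketch of how frameworks lean on Tensor cores: run the same
# matrix multiply in full FP32, then again under mixed precision, which
# is the Tensor-core-friendly path on modern Nvidia GPUs.
import torch

if torch.cuda.is_available():
    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")

    full = a @ b                                  # plain FP32 matmul

    with torch.autocast(device_type="cuda", dtype=torch.float16):
        mixed = a @ b                             # FP16 matmul via autocast

    print(full.dtype, mixed.dtype)                # torch.float32 vs torch.float16
```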

However, developing a GPU from the ground up takes time – a long time. For a brand-new architecture, we're talking several years. That means the RTX 40 series was likely in development around 2020/2021, at a guess, and similarly the RTX 50 series (whenever the next generation arrives, supposedly imminently) probably began life in 2022/2023, with different teams shuffling from task to task as and when they became available. All of that came before the thawing of the most recent AI winter and the arrival of ChatGPT.

What that tells us is that, unless Nvidia can radically pivot its designs on the fly, the RTX 50 series will most likely just build on Lovelace's (RTX 40 series) success – giving us even better AI performance, for sure. But it probably won't be until the RTX 60 series that we see AI capacity and performance supercharged in a way these GPUs haven't managed before. That may be the generation of graphics cards that makes localized LLMs a reality rather than a pipe dream.

Zak Storey
Freelance contributor

Zak is one of TechRadar's multi-faceted freelance tech journalists. He's written for an absolute plethora of tech publications over the years and has worked for TechRadar on and off since 2015. Most famously, Zak led Maximum PC as its Editor-in-Chief from 2020 through to the end of 2021, having worked his way up from Staff Writer. Zak currently writes for Maximum PC, TechRadar, PCGamesN, and Trusted Reviews. He also had a stint working as Corsair's Public Relations Specialist in the UK, which has given him a particularly good insight into the inner workings of larger companies in the industry. He left in 2023, coming back to journalism once more. When he's not building PCs, reviewing hardware, or gaming, you can often find Zak working at his local coffee shop as First Barista, or out in the Wye Valley shooting American Flat Bows.
