AMD Advancing AI 2024 event shows that the AI PC is making progress, but still needs time to cook
It's getting there
This week in San Francisco, AMD hosted its Advancing AI 2024 event for press, vendors, and industry analysts. The announcements centered on data center products like the new AMD EPYC and AMD Instinct chips, along with the new AMD Ryzen AI PRO 300-series processors for enterprise users.
But I was especially interested in the software on display. AI has been big on promises, but so far it's come up short on delivering software that would justify buying a new laptop with a dedicated NPU (neural processing unit) – the so-called AI PCs so many people have been talking about in 2024.
Much of what I saw was what we've all seen over the past year: image generation, chatbots, and video-conferencing tools like background replacement and blurring. These aren't exactly the killer apps that will turn the AI PC into a game changer.
But I also saw some interesting AI tools and demos that show we're starting to move beyond those basic use cases and into more creative territory, and it makes me think that there might be something to all this AI talk after all, even if it's still going to take some time for the technology to truly bear fruit.
Something different for a change
There were a few standout AI tools at the AMD Advancing AI 2024 event that caught my attention as I walked the demo floor after Dr. Lisa Su's morning keynote.
The first was a series of demos of intelligent agents in a 3D game environment, the kind of barebones 3D space that will be familiar to anyone who's ever used Unity, Unreal Engine, or similar game-development platforms.
In one instance, the demo video showed AI playing multiple instances of 3D Pong, which wouldn't be all that impressive on its own, except that the AI-controlled paddles were given only a single rule for the game: the ball couldn't hit the wall behind them.
That might not seem like a big deal, but remember that traditional computer programs have to define all sorts of rules you might take for granted in a scenario like this. For example, does an AI agent know that it's supposed to move the paddle to block the ball? Does it know that it's supposed to knock the ball into an opponent's wall to score points? Does it know how the physics of the ball ought to work? These are all things that normally need to be coded into a game so that its AI can play according to a set of rules. The AI agents controlling the paddles in the demo weren't given any of that, but they still learned the rules and played the game as it was meant to be played.
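AMD didn't say how these agents were built, but behavior like this is the hallmark of reinforcement learning, where an agent receives only a reward signal and discovers the rules through trial and error. Here's a minimal sketch of that idea – my own illustration, not AMD's code – using tabular Q-learning on a heavily simplified, discretized Pong, where the only rule encoded is a penalty when the ball reaches the back wall (the real demo almost certainly used something far more sophisticated, like deep RL inside a full 3D engine):

```python
import numpy as np

GRID = 10                  # discrete vertical positions for ball and paddle
ACTIONS = (-1, 0, 1)       # paddle moves: up, stay, down
ALPHA, GAMMA, EPS = 0.2, 0.9, 0.1

# Q[ball_y, steps_until_ball_reaches_wall, paddle_y, action]
Q = np.zeros((GRID, GRID, GRID, len(ACTIONS)))
rng = np.random.default_rng(0)

for episode in range(20000):
    ball_y, paddle_y = rng.integers(GRID), rng.integers(GRID)
    for t in range(GRID - 1, -1, -1):          # ball drifts toward the back wall
        state = (ball_y, t, paddle_y)
        a = rng.integers(3) if rng.random() < EPS else int(np.argmax(Q[state]))
        paddle_y = int(np.clip(paddle_y + ACTIONS[a], 0, GRID - 1))
        ball_y = int(np.clip(ball_y + rng.integers(-1, 2), 0, GRID - 1))
        if t == 0:
            # The ONLY rule the agent is ever given: a penalty if the ball
            # reaches the wall unblocked. "Block the ball" appears nowhere.
            reward = -1.0 if ball_y != paddle_y else 0.0
            Q[state][a] += ALPHA * (reward - Q[state][a])
        else:
            next_state = (ball_y, t - 1, paddle_y)
            Q[state][a] += ALPHA * (GAMMA * np.max(Q[next_state]) - Q[state][a])
```

After enough episodes, the greedy policy moves the paddle to track the ball even though blocking behavior was never written into the code – the same trick, scaled up, that made the demo's paddles look like they understood the game.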
Another demo in the same environment involved a game of tag between four humanoid agents in a room with obstacles, with three agents having to hunt down and 'tag' the fourth. None of the agents were given the rules at the start, nor did they have any programmed markers for how to navigate around the various obstacles. Let loose into the environment, the three red agents quickly identified the blue agent and began chasing it down, while the blue agent did its best to evade its pursuers.
There were similar demos showing agents dropped into an environment and learning the rules of the space as they went. Of course, the demos only showed the agents ultimately succeeding at their tasks, so I didn't get to see how they were trained, what their training data was, or how long it took them to get things right. Still, as far as AI use cases go, having more intelligent AI agents in games would be quite a development.
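The tag demo fits the same mold: nothing about chasing, fleeing, or pathing around obstacles has to be hand-coded – only a reward tied to the tag event itself. As a rough sketch of what that environment's reward structure might look like (again, my own reconstruction under assumptions, since AMD didn't show the training setup), the entire 'rulebook' can fit in a few lines:

```python
import numpy as np

SIZE = 8
rng = np.random.default_rng(1)
obstacles = {(3, 3), (3, 4), (5, 2)}           # walls the agents must learn around

def step(positions, moves):
    """positions[0] is the evader; positions[1:] are the three taggers."""
    rewards = np.zeros(len(positions))
    for i, (pos, mv) in enumerate(zip(positions, moves)):
        nxt = (min(max(pos[0] + mv[0], 0), SIZE - 1),
               min(max(pos[1] + mv[1], 0), SIZE - 1))
        if nxt not in obstacles:               # bumping a wall just fails silently
            positions[i] = nxt
    tagged = any(positions[i] == positions[0] for i in range(1, len(positions)))
    # The whole rulebook: +1 to the taggers on a tag, a small per-step
    # survival bonus to the evader. Chasing and evading are never specified.
    if tagged:
        rewards[1:] = 1.0
    else:
        rewards[0] = 0.01
    return positions, rewards, tagged

# Random rollout just to exercise the environment; in training, RL policies
# would replace the random moves and learn from the reward stream alone.
positions = [(0, 0), (7, 7), (7, 0), (0, 7)]
for _ in range(50):
    moves = [tuple(rng.integers(-1, 2, 2)) for _ in positions]
    positions, rewards, done = step(positions, moves)
    if done:
        break
```

Everything else – the red agents converging on the blue one, the blue one ducking behind obstacles – would have to emerge from training against that signal.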
Other use cases I saw included Autodesk 3ds Max 2025 with a tyDiffusion plug-in that could generate a rudimentary 3D scene from a text prompt. The rendered output wasn't going to win any design awards, but I can absolutely see engineers and designers treating something like this as a rough draft to develop into something detailed and professional.
The demo that was most interesting to me, however, was one that featured local video playback that could be controlled (i.e., paused) using only facial movements. For those with disabilities, this kind of application of AI could be revolutionary. Again, it was a simple test of a concept, but it's much more compelling than using yet another image generator to make bizarre memes or videos for social media.
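The demo's implementation wasn't disclosed, but you can get a feel for the concept with surprisingly little code. The sketch below is my own stand-in – it uses face presence rather than the finer-grained facial movements the demo recognized, and the video filename is hypothetical – pausing a local video whenever OpenCV's stock face detector loses sight of the viewer:

```python
import cv2

# OpenCV ships Haar cascade files with the package; this loads the stock
# frontal-face detector rather than any NPU-accelerated model.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
video = cv2.VideoCapture("movie.mp4")          # hypothetical local file
webcam = cv2.VideoCapture(0)

while True:
    ok, cam_frame = webcam.read()
    if not ok:
        break
    gray = cv2.cvtColor(cam_frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.3, 5)
    if len(faces) > 0:                         # face in view -> keep playing
        ok, frame = video.read()
        if not ok:
            break
        cv2.imshow("player", frame)
    if cv2.waitKey(30) & 0xFF == ord("q"):     # no face -> frame isn't advanced
        break

video.release()
webcam.release()
cv2.destroyAllWindows()
```

A production version would swap the presence check for a model that classifies specific facial gestures, ideally running on the NPU, but the control loop would look much the same.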
Generating media is still the fallback for consumer AI
There were a few other interesting demos featuring AI-powered tools, but the most developed tools on display were image generators, data aggregators, and the like. One mature app I saw did a better job of presenting a sports news summary than Google Gemini, but creating a multimedia flipboard of content produced somewhere else is only useful if there's original content to aggregate – and these kinds of AI search summarizers are not a viable long-term application.
Summaries of the work of human writers, journalists, photographers, and the like cut off critical revenue to the very creators who produce the content AI tools like Gemini need to function. The fear is that these applications will put a lot of creators out of work, reducing the quantity – and quality – of the source material the applications draw on for their responses, ultimately degrading their results to the point where people won't trust them or continue to use them.
This kind of AI product is ultimately a dead end, but right now it's the most visually compelling example of how AI can and does operate, so companies will keep rushing to build these products to ride the AI wave. Fortunately, I also saw enough examples of how AI can do something different – something better – that will, in time, make AI and the AI PCs that power it something worth investing in.
Soon, but not just yet.