Arc graphics are exactly what gaming laptops need in 2022
Analysis: let's be honest, gaming laptops got boring
For as long as gaming laptops have existed, only two companies have made GPUs for them. Nvidia and AMD have had an iron grip on the market for years, and now Intel Arc mobile graphics processors are finally here. I don't know if an Arc 7 GPU is going to be faster than an RTX 3080 Ti – it probably won't be – but with Intel at the helm, I at least know it's going to result in a good experience.
To be clear, while Intel says laptops with its Arc 3 graphics are available now, I haven't seen one in person yet, so all I have to go on is the information Intel has provided. I'm not exactly in the business of trusting internal benchmarks, and you shouldn't be either.
But let's be honest: while AMD and Nvidia both make mobile graphics solutions, the best gaming laptops on the market pair Intel processors with Nvidia graphics. AMD's Navi GPUs have started to chip away at Team Green's dominance, but Intel can hit a lot harder – and it's about more than raw frame rates.
A stable platform
Intel has had a rocky few years as it tried to catch up with AMD's Zen architecture. But even when Intel was furthest behind on raw performance, it still excelled where it really matters – especially for laptops – in reliability. When you're looking at charts and numbers, it's easy to forget what actually using something is like, and Intel processors have never really had the kind of adjustment period that AMD processors tend to go through.
It seems like every time a new AMD chipset comes out on desktop, there are critical bugs that Team Red has to jump on after release. For instance, shortly after the launch of its Ryzen 5000-series processors, there were widespread reports of USB problems, where devices would simply stop responding, according to Tom's Hardware.
Intel generally doesn't have the same kinds of problems with its new platforms. And while Intel is admittedly new to the whole discrete graphics thing, the company has proven that it puts a lot of value on the user experience. So even if performance isn't quite there with this first generation of Arc graphics, it will at least likely result in a user-friendly product. Maybe that's why Intel prioritized laptop GPUs instead of trying to take on the RTX 3080 right away.
An actual competitor for DLSS
It's impossible to overstate the impact DLSS has had on PC gaming since it debuted with the RTX 2080 in 2018. While it wasn't as exciting at first, it's become a critical technology for AAA game developers that want people to actually be able to play their games on affordable hardware.
And while AMD has come up with a competing technology in FSR, or FidelityFX Super Resolution, it just doesn't match DLSS's visual quality. It does, however, have the advantage of working on just about any GPU.
What keeps DLSS out of FSR's reach is that Nvidia's tech uses the Tensor cores in its RTX graphics cards to run a deep learning reconstruction model: the game renders at a lower resolution on the regular CUDA cores, then the Tensor cores upscale the image to your display's native resolution. Because this is a hardware-accelerated, finely tuned approach, it's hard to even tell the difference between native resolution and DLSS on the Quality preset.
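To make that division of labor concrete, here's a minimal, purely illustrative Python sketch of the render-then-reconstruct flow. The function names are made up, and the nearest-neighbor "upscaler" is just a stand-in to keep the snippet runnable – in real DLSS, that step is a trained neural network running on the Tensor cores.

```python
import numpy as np

def render_frame(width, height):
    """Stand-in for the game's renderer; in a real game this is the
    expensive part and runs on the shader (CUDA) cores."""
    return np.random.rand(height, width, 3)  # fake RGB frame

def upscale(frame, scale):
    """Stand-in for the reconstruction step. In DLSS this is a trained
    neural network running on the Tensor cores; nearest-neighbor
    repetition is used here only to keep the sketch runnable."""
    return frame.repeat(scale, axis=0).repeat(scale, axis=1)

# Shade far fewer pixels (1080p), then reconstruct to native 4K.
low_res = render_frame(1920, 1080)
native = upscale(low_res, 2)
print(low_res.shape, "->", native.shape)  # (1080, 1920, 3) -> (2160, 3840, 3)
```

The whole trick is that the expensive step (shading pixels) happens at the lower resolution, while the cheap-on-dedicated-hardware step (reconstruction) fills in the rest.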
But XeSS could be a legitimate alternative to DLSS with a similar visual quality – because it's taking a similar approach. Let's, uh, break that down really quick.
For example, the Intel Arc A370M, one of the first GPUs the company is launching, comes equipped with 8 Xe cores, and Intel has thankfully published the layout of each of them. Each Xe core comes with 16 Xe Vector Engines (XVE) and 16 Xe Matrix Engines (XMX). The XVE units basically serve the same function as CUDA cores in Nvidia GPUs, while the XMX engines are specially designed for AI workloads and can churn through that kind of math much faster than the standard vector units.
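For a rough sense of scale, here's a tiny back-of-the-envelope tally in Python using only the counts Intel has published for the A370M – it says nothing about clocks or real-world throughput, which Intel hasn't fully detailed.

```python
# Back-of-the-envelope tally from the layout Intel has published for the A370M.
xe_cores = 8          # Xe cores on the Arc A370M
xve_per_core = 16     # Xe Vector Engines per core (general shading work)
xmx_per_core = 16     # Xe Matrix Engines per core (AI work like XeSS)

total_xve = xe_cores * xve_per_core
total_xmx = xe_cores * xmx_per_core

print(f"A370M: {total_xve} vector engines, {total_xmx} matrix engines")
# -> A370M: 128 vector engines, 128 matrix engines
```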
I won't go too deep into why, because I'm not an engineer, but this layout is very similar to Nvidia's Ampere architecture, and should be just as efficient for upscaling workloads – at least on paper.
XeSS won't actually be available until later this year, but I can't wait to get my hands on it to see how it performs, and more importantly, how games look when the technology is turned on.
Because let's face it: the performance gains from Nvidia's DLSS and AMD's FSR are pretty similar – to the point where we got the same frame rate in Cyberpunk 2077 when switching between them on the new RTX 3090 Ti – but the image quality is so much better with Nvidia's tech.
The technology is definitely there for Intel, too, and it looks like XeSS is going to be just as important for PC gamers as the other upscaling technologies. But it's also important to keep in mind that it took a while for Nvidia to get DLSS looking as good as it does now. I remember when the tech first became available in Battlefield V and Metro Exodus, and it has come a long way since. It'd be fantastic if Intel could avoid those growing pains, but there will probably be some issues along the way.
But because it's an Intel technology, it will more than likely just work, and you probably won't have to mess with it too much to get it going.
Jackie Thomas is the Hardware and Buying Guides Editor at IGN. Previously, she was TechRadar's US computing editor. She is fat, queer and extremely online. Computers are the devil, but she just happens to be a satanist. If you need to know anything about computing components, PC gaming or the best laptop on the market, don't be afraid to drop her a line on Twitter or through email.