'It is basically DLSS. That's the way graphics ought to be': ... (2026)
Perhaps to the surprise of no one, Nvidia's time at CES 2026 was all about one thing: AI. That said, PC gaming wasn't entirely ignored, as DLSS 4.5 was ninja-launched with the promise of '4K 240 Hz path traced gaming'. However, DLSS is still AI-based, and in a Q&A session with members of the press, CEO Jensen Huang made it clear that artificial intelligence isn't just for improving performance; it's how graphics need to be done in the future.
This much we already know, as Nvidia banged its neural rendering drum starting at last year's CES and then throughout 2025, and it wasn't the only graphics company to do so. Microsoft announced the addition of cooperative vectors to Direct3D, which is pretty much required to implement neural rendering in games, and AMD's FSR Redstone is as AI-based as anything from Intel and Nvidia.
So, when PC World's Adam Patrick Murray asked Huang, "Is the RTX 5090 the fastest GPU that gamers will ever see in traditional rasterization? And what does an AI gaming GPU look like in the future?", it wasn't surprising that Nvidia's co-founder avoided the first question entirely and skipped straight to the topic of AI.
"I think that the answer is hard to predict. Maybe another way of saying it is that the future is neural rendering. It is basically DLSS. That’s the way graphics ought to be."
He then expanded with some examples of what he meant by this: "I would expect that the ability for us to generate imagery of almost any style from photo realism, extreme photo realism, basically a photograph interacting with you at 500 frames a second, all the way to cartoon shading, if you like."
The keyword here is generate. If one wishes to be pedantic, all graphics are generated, either through rasterization or neural networks. It's all just a massive heap of mathematics, broken down into logic operations on GPUs, crunching through endless streams of binary values. But there is one important difference with neural rendering, and it's that it requires far less input data to generate the same graphical output as rasterization.
Fire up the original Crysis from 2007, and all those beautiful visuals are generated from lists of vertices, piles of texture maps, and a veritable mountain of resources that are created during the process of rendering (e.g. depth buffers, G-buffers, render targets, and so on). That's still the case almost 20 years on, and the size and quantity of those resources are now truly massive.
Source: PC Gamer