There is an alternative 2025 where you get the Nvidia RTX 5090 of your dreams. That’s a timeline where Nvidia has busted Apple’s grip on TSMC’s most advanced process nodes, managed to negotiate an unprecedented deal on silicon production, and worked some magic to deliver the same sort of generational rendering performance increases we’ve become used to since the RTX prefix was born.
And it’s a 2025 where Nvidia hasn’t slapped a $400 price hike on the most powerful of its new RTX Blackwell graphics cards.
But in this timeline, the RTX 5090 is an ultra-enthusiast graphics card that is begging us to be more realistic. Which, I will freely admit, sounds kinda odd coming from what has always been an OTT card. But, in the real world, a GB202 GPU running on a more advanced, smaller process node, with far more CUDA cores, would have cost a whole lot more than the $1,999 the green team is asking for this new card. And it would still maybe only get you another 10–20% higher performance for the money—I mean, how much better is TSMC’s 3 nm node than its 4 nm one, really?
The RTX 5090 is a new kind of graphics card, however, in terms of ethos if not in silicon. It’s a GPU designed for a new future of AI processing, and I don’t just mean it’s really good at generating pictures of astronauts riding horses above the surface of the moon: AI processing is built into its core design and that’s how you get a gaming performance boost that is almost unprecedented in modern PC graphics, even when the core at its heart hasn’t changed that much.
Nvidia RTX 5090: The verdict
(Image credit: Future)
The nexus point between hardware and software is where the RTX 5090 thrives.
The new RTX Blackwell GPU is… fine. Okay, that’s a bit mean; the GB202 chip inside the RTX 5090 is better than fine, it’s the most powerful graphics core you can jam into a gaming PC. I’m maybe just finding it a little tough not to think of it as an RTX 4090 Ti or Ada Titan. Apart from hooking up the Tensor Cores to the shaders, via a new Microsoft API, and a new flip metering doohickey in the display engine, it largely feels like Ada on steroids.
The software suite backing it up, however, is a frickin’ marvel. Multi Frame Generation is giving me ultra smooth gaming performance, and will continue to do so in an impressively large number of games from day one.
The nexus point between hardware and software is where the RTX 5090 thrives. When everything’s running like it should I’m being treated to an unparalleled level of both image fidelity and frame rates.
It’s when you look at the stark contrast between a game such as Cyberpunk 2077 running at 4K native on the peak RT Overdrive settings, and then with the DLSS and 4x Multi Frame Gen bells and whistles enabled, that it becomes hard to argue with Nvidia’s focus on AI modeling over what it is now rather disdainfully calling “brute force rendering”.
Sure, the 30% gen-on-gen 4K rendering performance increase looks kinda disappointing when we’ve been treated to a 50% bump from Turing to Ampere and then a frankly ludicrous 80% hike from Ampere to Ada. And, if Nvidia had purely been relying on DLSS upscaling alone to gild its gaming numbers, I’d have been looking at the vanguard of the RTX 50-series with a wrinkled nose and a raised eyebrow at its $2K sticker price.
But the actual gaming performance I’m seeing out of this card in the MFG test builds—and with the DLSS Override functionality on live, retail versions of games—is kinda making me a convert to this new AI world in which we live. I’m sitting a little easier with the idea of 15 out of 16 pixels in my games being generated by AI algorithms when I’m playing Alan Wake 2 at max 4K settings at just north of 180 fps, Cyberpunk 2077’s Overdrive settings at 215 fps, and Dragon Age: The Veilguard at more than 300 fps.
Call it frame smoothing, fake frames, whatever: it works from a gaming experience perspective. And it’s not some laggy mess full of weird graphical artifacts mangled together in order to hit those ludicrous frame rates, either. Admittedly, there are times when you can notice a glitch caused by either Frame Gen or the new DLSS Transformer model, but nothing so game- or immersion-breaking that I’ve wanted to disable either feature and flip back to gaming at 30 fps or at lower settings.
There are also absolutely cases where the DLSS version looks better than native res, and times when those extra ‘fake frames’ are as convincing as any other to the naked eye. Honestly, you’re going to have to be really looking for problems from what I’ve seen so far. And I’ve been watching side-by-side videos while they export, where you can literally watch them move one frame at a time; the frame gen options stand up incredibly well even under such close scrutiny.
(Image credit: Future)
If that noggin-boggling performance were only available in the few games I’ve tested it with at launch then, again, I would be more parsimonious with my praise. But Nvidia is promising the Nvidia App will offer the DLSS Override feature for 75 games and apps, turning standard Frame Gen over to the new frame multiplier. And you still don’t need to log in to the app to be able to flip the Multi Frame Generation switch.
And, rando PlayStation port aside, most of the games you’re going to want to play over the next 12 months—especially the most feature-rich and demanding ones—will more than likely include Nvidia’s full DLSS feature-set. Unless other deals take precedence… ahem… Starfield.
I will say the switch to the transformer model for DLSS hasn’t been the game-changer I was expecting from the demos I witnessed at CES, but it’s at the very least often better than the standard convolutional neural network in terms of image quality. It’s just that it adds some oddities of its own to the mix and doesn’t completely rid us of Ray Reconstruction’s ghosting.
Don’t get me wrong, more base-level rendering grunt would always be welcome, but getting to these sorts of fps numbers with pure rendering power alone is going to take a lot of process node shrinks, more transistors than there are stars in the sky, and a long, long time. Oh, and it would probably cost a ton of cash, too.
Though even a little more raster power would push those AI augmented numbers up even further, and that’s something which will certainly be in my mind as I put the rest of the RTX 50-series through its paces. I, for one, am a little concerned about the RTX 5070 despite those claims of RTX 4090 performance for $549.
The RTX 5090, though, is as good as it gets right now, and is going to be as good as the RTX Blackwell generation gets… until Nvidia decides it wants to use the full GB202 chip. Yields on TSMC’s mature 4N node are surely pretty good this far down the line, eh?
(Image credit: Future)
Literally impossible to beat with any other hardware on the planet.
And, oh, is it ever pretty. With all their comic girth, the RTX 3090 and RTX 4090 are just stupid-looking cards; I’m always taken aback whenever I pull one out of its box to stick in a PC. Being able to come back to the dual-slot comfort zone is testament to the over-engineering Nvidia has done with the Founders Edition, even if both the RTX 5090 cards I’ve tested have been some of the squealiest, coil-whiniest GPUs I’ve encountered in recent history. But your mileage and your PSU may vary; they certainly don’t sound great with the test rig’s Seasonic power supply.
Despite being something of a loss-leader for Nvidia, this RTX 5090 Founders Edition is also likely to be as cheap as an RTX 5090 gets at retail over the next year. With every other AIB version sure to be bigger, and most of them more expensive, the Founders Edition is the card you should covet. And the one you will be disappointed about when you almost inevitably miss out on what will surely be slim inventory numbers.
The GPU at its heart might not be super exciting, but the potential of all the neural rendering gubbins Nvidia is laying the groundwork for with this generation could change that given time. Right now, however, it feels more like an extension of Ada, with the outstanding AI-augmented performance really symptomatic of where the technology currently stands.
Still, when it comes to the raw gaming experience of using this svelte new RTX 5090 graphics card, it’s literally impossible to beat with any other hardware on the planet.
Nvidia RTX 5090: The RTX Blackwell architecture
(Image credit: Nvidia)
To a layman, not a huge amount seems to have changed from the Ada architecture to the GB202 Blackwell GPU. As I’ve said, on the surface it feels very much like an extension of the Ada Lovelace design, though that is potentially because Blackwell is sitting on the same custom TSMC 4N node, so in terms of core counts and physical transistor space there isn’t a lot of literal wiggle room for Nvidia.
There are 21% more transistors in the GB202 versus the AD102, and a commensurate 21% increase in die size. Compare that with the move from the RTX 3090 to the RTX 4090: in the switch from Samsung’s 8 nm node to this same 4N process, Ada’s top chip gave us 170% more transistors alongside a 3% die shrink.
There are still 128 CUDA cores per streaming multiprocessor (SM), so the 170 SMs of the GB202 deliver 21,760 shaders. Though in a genuine change from Ada, each of those can be configured to handle both integer and floating point calculations. Gone are the dedicated FP32 units of old.
Though, interestingly, this isn’t the full top-tier Blackwell GPU. The RTX 5090 has lopped off one full graphics processing cluster, leaving around 2800 CUDA cores on the cutting room floor. I guess that leaves room for a Super, Ti, or an RTX Blackwell Titan down the line if Nvidia deems it necessary.
You are getting the full complement of L2 cache, however, with nearly 100 MB available to the GPU. You’re also getting 32 GB of fast GDDR7 memory on a proper 512-bit memory bus, and that means a ton more memory bandwidth—78% more than the RTX 4090 could offer.
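If you want to sanity-check those numbers, here’s a quick back-of-the-envelope sketch. The transistor counts and memory data rates are my own assumed public-spec figures (roughly 92.2 billion transistors for GB202, 76.3 billion for AD102, 28 Gbps GDDR7 and 21 Gbps GDDR6X), not values taken from Nvidia’s reviewer materials:

```python
# Back-of-the-envelope check on the GB202 numbers quoted above.
# Transistor counts and memory speeds are assumed public-spec figures.

gb202_transistors = 92.2e9   # assumed
ad102_transistors = 76.3e9   # assumed
print(f"Transistor uplift: {gb202_transistors / ad102_transistors - 1:.0%}")  # ~21%

sms, cores_per_sm = 170, 128
print(f"CUDA cores: {sms * cores_per_sm}")   # 21,760 shaders

# Memory bandwidth (GB/s) = bus width in bytes * effective data rate in Gbps
rtx5090_bw = 512 / 8 * 28    # GDDR7 at an assumed 28 Gbps -> 1,792 GB/s
rtx4090_bw = 384 / 8 * 21    # GDDR6X at an assumed 21 Gbps -> 1,008 GB/s
print(f"Bandwidth uplift: {rtx5090_bw / rtx4090_bw - 1:.0%}")  # ~78%
```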
Neural Shaders
There are deeper, arguably more fundamental changes that Nvidia has made with this generation, however. Those programmable shaders have finally been given direct access to the Tensor Cores, and that allows for what the green team is calling Neural Shaders.
Previously the Tensor Cores could only be accessed using CUDA, but in collaboration with Microsoft, Nvidia has helped create the new Cooperative Vectors API, which allows any shader—whether pixel or ray tracing—to access the matrix calculating cores in both DX12 and Vulkan. This is going to allow developers to bring a bunch of interesting new AI-powered features directly into their games.
And it means AI is deeply embedded into the rendering pipeline. Which is why we do have a new slice of silicon in the Blackwell chips to help with this additional potential workload. The AI Management Processor, or AMP, is there to help schedule both generative AI and AI augmented game graphics, ensuring they can all be processed concurrently in good order.
(Image credit: Nvidia)
It’s that Cooperative Vectors API which will allow for features such as neural texture compression, which is touted to deliver 7x savings in VRAM usage—ostensibly part of Nvidia’s dedicated push to ensure 8 GB video cards still have a place in the future. But it also paves the way for RTX Neural Radiance Cache (to enhance lighting via inferred global illumination), and RTX Neural Materials, RTX Neural Skin, and RTX Neural Faces, which all promise to leverage the power of AI models to get us ever closer to photorealism. Or at least close to the sort of image quality you’ll see in offline-rendered films and TV.
The new 4th Gen RT Cores aren’t to be left out, and come with a couple of new units dedicated to improving ray tracing. Part of that push is something called Mega Geometry, which massively increases the amount of geometry possible within a scene. It reminds me a whole lot of when tessellation was first introduced—the moment you turn off the textures and get down to the mesh layer in the Zorah demo, which showcases the tech, you’re suddenly hit by what an unfeasible level of geometry is possible in a real-time scene.
This feature has largely been designed for devs on Unreal Engine 5 utilising Nanite, and allows them to ray trace their geometry at full fidelity. Nvidia has put so much stock in Mega Geometry that it has designed the new RT Cores specifically for it.
DLSS 4 and Multi Frame Generation
(Image credit: Nvidia)
The final hardware piece of the RTX Blackwell puzzle to be dropped into the new GPU is Flip Metering. The new enhanced display engine has twice the pixel processing capability, and has been designed to take the load away from the CPU when it comes to queueing frames up for the display. The Flip Metering feature is there to enable Multi Frame Generation to function smoothly—displaying all those extra frames in between the rendered ones with even pacing is vital to stop it feeling “lumpy”. That’s not my phrase, that’s a technical term from Nvidia’s Mr. DLSS, Bryan Catanzaro, and he should know.
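To illustrate why that even pacing matters, here’s a quick timing sketch (purely my own back-of-the-envelope arithmetic, not Nvidia’s actual scheduling logic), assuming a hypothetical 60 fps rendered base multiplied up to 240 fps with 4x MFG:

```python
# Toy illustration of the frame-pacing problem Flip Metering is meant to solve.
# The numbers are illustrative assumptions, not measured figures.

rendered_fps = 60            # frames the GPU actually renders per second
mfg_factor = 4               # 4x Multi Frame Generation: 1 rendered + 3 generated
displayed_fps = rendered_fps * mfg_factor

render_interval_ms = 1000 / rendered_fps     # ~16.7 ms between rendered frames
present_interval_ms = 1000 / displayed_fps   # ~4.2 ms target between displayed frames

print(f"Render every {render_interval_ms:.1f} ms, present every {present_interval_ms:.1f} ms")
# Even pacing means each of the three generated frames has to land ~4.2 ms apart,
# rather than bunching up just after the rendered frame and reading as "lumpy".
```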
In terms of the feature set, DLSS itself has had a potentially big upgrade, too. Previously DLSS used a convolutional neural network (CNN) as its base model, which is an image-focused architecture and made sense for a task like upscaling. But it’s no longer the cutting edge of AI, so DLSS 4 has switched over to the transformer architecture you will be familiar with if you’ve used ChatGPT—the GPT bit stands for generative pre-trained transformer.
It’s more efficient than the CNN, and that has allowed Nvidia to throw more computation at DLSS 4—though I’ve not really seen much in the way of a performance difference between the two models in action.
Primarily it seems the transformer model was brought in to help Ray Reconstruction rid itself of the smearing and ghosting it suffers from, though it’s also there for upscaling, too. Nvidia, however, is currently calling that a beta. Given my up-and-down experience with the transformer model in my testing, I can now understand why. It does feel very much like a v1.0, with some strange artifacts introduced for all the ones it helps remove.
I’ve saved the best new feature for last: Multi Frame Generation. I was already impressed with the original version of the feature introduced with the RTX 40-series, but it has been hugely upgraded for the RTX 50-series and is arguably the thing which will impress people the most while we wait for those neural shading features to actually get used in a released game.
It’s also the thing which will really sell the RTX 50-series. We are still talking essentially about interpolation, no matter how much Jen-Hsun wants to talk about his GPU seeing four frames into the future. The GPU will render two frames and then squeeze up to three extra frames in between.
Using a set of new AI models, it no longer needs dedicated optical flow hardware (potentially good news for RTX 30-series gamers), and it is able to perform the frame generation function 40% faster and with a 30% reduction in its VRAM footprint. The flip metering system then means the GPU’s display engine queues up each frame, pacing them evenly, so you get a smooth final experience.
The 5th Gen Tensor Cores have more horsepower to deal with the load, and the AMP gets involved, too, in order to keep all the necessary AI processing around both DLSS and Frame Generation, and whatever else they get up to in the pipeline, running smoothly.
Nvidia RTX 5090: The performance
(Image credit: Future)
The raw performance of the RTX 5090 is relatively impressive. As I’ve mentioned earlier, I’m seeing around a 30% improvement in 4K gaming frame rates over the RTX 4090, which isn’t bad gen-on-gen. We have been spoiled by the RTX 30- and 40-series cards, however, and that does make this bump seem a little less exciting.
The main increase is all at that top 4K resolution, because below that the beefy GB202 GPU does start to get bottlenecked by the processor. And that’s despite us rocking the AMD Ryzen 7 9800X3D in our test rig—I’ve tossed the RTX 5090 into my own rig with a Ryzen 9 7950X in it and the performance certainly drops.
And in games where the CPU is regularly the bottleneck, even at 4K, the performance delta between the top Ada and Blackwell GPUs is negligible. In Homeworld 3 the 4K performance increase is just under 9%; even worse, at 1080p the RTX 5090 actually takes a retrograde step and drops 7% in comparison.
This is a graphics card built for DLSS, and as such if you hit 4K DLSS Quality settings you’re actually rendering at 1440p.
Where the GPU is the star, however, the extra 4K frame rates are matched by an overall increase in power usage. This thing will drain your PSU; I measured the card pulling down nearly 640 W at peak during our extended Metro Exodus benchmark. The performance increase is commensurate, though, so performance per watt at 4K remains about the same as the RTX 4090.
But yes, it does start to fall down when you drop to 1440p and certainly 1080p. If you were hoping to smash 500 fps at 1080p with this card, we might have to have a little chat. It will still draw a ton of power at the lower resolutions, too, which means its performance per watt metrics drop by 15%.
You might say that’s not such a biggie considering you’ll be looking to play your games at 4K with such a beast of a GPU, but this is a graphics card built for DLSS, and as such if you hit 4K DLSS Quality settings you’re actually rendering at 1440p. That 30% 4K uplift figure is then kinda moot unless you’re sticking to native rendering alone.
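For reference, that 1440p figure falls out of DLSS Quality’s usual render scale. A minimal sketch, assuming the standard ~0.67 per-axis scale factor for the Quality preset (my assumption of the common default, not something verified per game):

```python
# Internal render resolution at a 4K output with DLSS Quality,
# assuming the usual ~0.667 per-axis scale factor.
out_w, out_h = 3840, 2160
quality_scale = 2 / 3
render_w, render_h = round(out_w * quality_scale), round(out_h * quality_scale)
print(render_w, render_h)  # 2560 1440 -> the GPU is really rendering at 1440p
```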
And sticking to native rendering is something you absolutely shouldn’t do, because Multi Frame Generation is a game-changer in the most literal sense. The performance difference going from native, or even DLSS Quality, to 4x MFG is stark. With Alan Wake 2 now hitting 183 fps, with a 102 fps 1% low, it’s a glorious gaming experience. Everything in the graphics settings can be pushed to maximum and it’ll still fly.
More importantly, the latency is only marginally higher than with just DLSS settings—the work Nvidia has done to pull that down with Multi Frame Generation is a marvel. As is the Flip Metering frame pacing. This is what allows the frames to come out in a smooth cadence, and makes it feel like you’re really getting that high-end performance.
Cyberpunk 2077 exhibits the same huge increase in performance, and is even more responsive than Alan Wake 2, with just 43 ms latency when I’ve got 4x Multi Frame Generation on the go.
And even though Dragon Age: The Veilguard is pretty performant at 4K native, I’ll happily take a 289% increase in perceived frame rate, especially when the actual PC latency in that game barely moves the needle: it’s 28 ms at 4K native and 32 ms with DLSS Quality and 4x MFG.
Another benefit of the DLSS and MFG combo is that it pulls down the power and thermal excesses of the RTX 5090. I’ve noticed around a 50 W drop in power consumption with MFG in action, and that means the temps go down, and the GPU clock speed goes up.
Still, the overall combination of high power, high performance, and a new, thinner chassis means that the GPU temperature is noticeably higher than on the RTX 4090 Founders Edition. Running through our 4K native Metro Exodus torture test, the RTX 5090 Founders Edition averages 71 °C, with the occasional 77 °C peak. That’s a fair chunk higher than the top Ada, though obviously that’s with a far thinner chassis.
For me, I’d take that extra little bit of heat for the pleasure of its smaller footprint. What I will say, however, is that I did experience a lot of coil whine on our PC Gamer test rig. So much so that Nvidia shipped me a second card to check whether there was an issue with my original GPU. Having now tested in my home rig, with a 1600 W EVGA PSU, it seems like the issue arose because of how the Seasonic Prime TX 1600 W works with the RTX 5090, because in my PC the card doesn’t have the same constantly pitching whine I experienced on our test rig.
The RTX 5090 being a beastly GPU, I’ve also taken note of what it can offer creatives as well as gamers. Obviously with Nvidia’s AI leanings the thing can smash through a generative AI workload, as highlighted by the way it blows past the RTX 4090 in the UL Procyon image benchmark.
Though the AI index score from the PugetBench for DaVinci Resolve test shows that it’s not all AI plain sailing for the RTX 5090. GenAI is one thing, but DaVinci Resolve’s use of its neural smarts highlights only a 2.5% increase over the big Ada GPU.
Blender, though, matches the Procyon test, offering over a 43% increase in raw rendering grunt. I’m confident the extra memory bandwidth and larger VRAM pool are helping out here.
Nvidia RTX 5090: The analysis
(Image credit: Future)
The RTX 4090’s 80% performance bump is living in recent memory, rent-free in the minds of gamers.
When is a game frame a real frame? This is the question you might find yourself asking when you hear talk of 15 out of 16 pixels being generated by AI in a modern game. With only a small amount of traditional rendering actually making it onto your display, what counts as a true frame? I mean, it’s all just ones and zeros in the end.
So, does it really matter? For all that you might wish to talk about Multi Frame Generation as fake frames and just frame-smoothing rather than boosting performance, the end result is essentially the same: more frames output onto your screen every second. I do understand that if we could use a GPU’s pure rendering chops to hit the same frame rates it would look better, but my experience of the Blackwell-only feature is that oftentimes it’s really hard to see any difference.
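As a rough sanity check on where that 15-out-of-16 figure comes from, here’s the arithmetic, assuming DLSS Performance mode’s 0.5 per-axis render scale combined with 4x Multi Frame Generation (my assumption of how the claim is arrived at, rather than a quoted breakdown):

```python
# Where the "15 out of 16 pixels" figure roughly comes from.
# Assumes DLSS Performance mode (0.5 scale per axis) and 4x Multi Frame Generation.

dlss_scale = 0.5                            # per-axis render scale in Performance mode
rendered_pixel_fraction = dlss_scale ** 2   # 1/4 of output pixels rendered per frame
rendered_frame_fraction = 1 / 4             # 1 rendered frame per 4 displayed with 4x MFG

traditionally_rendered = rendered_pixel_fraction * rendered_frame_fraction
print(f"{traditionally_rendered:.4f}")      # 0.0625 -> 1 in 16 pixels; the other 15 are AI
```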
Nvidia suggests that it would take too long, and be too expensive, to create a GPU capable of delivering the performance MFG achieves, and certainly it would be impossible on this production node without somehow making GPU chiplets a thing. It would be a tall order even just to match the performance increase the RTX 4090 offered over the RTX 3090 in straight rendering.
But that’s the thing: the RTX 4090’s 80% performance bump is living in recent memory, rent-free in the minds of gamers. Not that that sort of increase is, or should necessarily be, expected, but it shows it’s not completely beyond the realms of possibility. It’s just that TSMC’s 2N process isn’t even being used by Apple this year, and I don’t think anyone would wait another year or so for a new Nvidia series of GPUs.
Though just think what a die shrink and another couple of years’ maturity for DLSS, Multi Frame Gen, and neural rendering in general might mean for the RTX 60-series. AMD, be afraid, be very afraid. Or, y’know, make multiple GPU compute chiplets a thing in a consumer graphics card. Simple things, obvs.
Still, if the input latency had been an issue then MFG would have been a total non-starter and the RTX Blackwell generation of graphics cards would have felt a lot less significant with its rendering performance increase alone. At least at launch. The future-gazing features look exciting, but it’s far too early to tell just how impactful they’re going to be until developers start delivering the games that utilise the full suite of neural shading features.
(Image credit: Future)
It would certainly have been a lot tougher for Nvidia to slap a $2,000 price tag onto the RTX 5090 and get away with it without Multi Frame Generation. With MFG it can legitimately claim to deliver performance twice that of an RTX 4090; without it, a sole 30% 4K performance bump wouldn’t have been enough to justify a 25% increase in pricing.
What I will say in Nvidia’s defence on this is that the RTX 4090 has been retailing for around the $2,000 mark for most of its existence, so the real-world price delta is a lot smaller, at least compared to the RTX 5090’s MSRP. How many retail cards we’ll actually see selling for this $1,999 MSRP, and for how long, is tough to say, however. It’s entirely likely the RTX 5090’s effective selling price will end up closer to the $2,500 or even $3,000 mark once the AIBs are in sole charge of sales as the Founders Edition stock runs dry.
I can see why Nvidia went with the RTX 5090 first as the standard-bearer for Multi Frame Generation. The top-end card is going to benefit far more from the feature than cards lower down the stack, which have less upfront rendering power to call on. Sure, Nvidia claims the RTX 5070 can hit RTX 4090 performance with MFG, but I’m going to want to see that in a few more games before I can get on board with the claim.
The issue with frame generation has always been that you need a pretty high level of performance to start with, or it ends up being too laggy and essentially a bit of a mess. The most demanding games may still be a struggle for the RTX 5070 even with MFG, but I guess we’ll find out soon enough come February’s launch.
Until then, I’ll just have to sit back and bask in the glorious performance Nvidia’s AI chops are bringing in alongside the RTX 5090.