Guys please do not judge image quality based on offscreen camera footage. Crushed blacks and blowout highlights are caused by the limited dynamic range of the camera used. Image quality is different in person.
You think they'd know this....
who are you talking to ?
Yeah, I'm super excited for when they can actually do proper hands on testing and get us clean footage.
@@Gregoses Lil guy named OP
I’m sure they know this
Whatever technology they come up with, I just want to see better temporal stability.
Flickering shadows/reflections/GI etc sucks as sudden changes to contrast will naturally draw your eye away from where you're supposed to be focusing and is distracting.
It will come in time, problem is people have a hard time letting go of raster performance, so if Nvidia goes all in on RT and Denoising then they're gonna get trashed by all the reviewers saying 0% improved.
Really improved with DLSS 4.
literally could not care less about geometry density at this point, or even slightly better shadows or slightly better lighting.
Temporal artifacting is the single largest issue with modern image resolve, and it is incredibly frustrating.
The current solution seems to be to just use 4K to make the artifacts smaller and less noticeable.
The DLSS transformer model is better but still has major issues; ray reconstruction and path tracing introduce many more temporal issues, though.
@@fawneight7108 improved, but still present and noticeable.
That improvement came from changing from CNN to a transformer model.
I unfortunately don't think there's much more they can do with the tech to make it better.
@ no, you're not even close!
It would be great to see a greater focus on realtime physics as well. We already had almost photorealistic walls, but water in games just looks bad (at least any trying to be realistic).
I agree. I think Chaos was a step in the wrong direction and hasn't delivered on its promises at all; on the contrary, it has been less stable and more expensive for everything on the physics thread (hits, overlaps, sweeps), not only simulated checks. I miss PhysX and APEX destruction.
Just use Nvidia PhysX again, it was the best.
Fluid sim is by far the most complex and taxing physics sim you can ask for. What FluidFlux does in real-time is already unbelievable.
No one cares about gameplay because they can't sell that; they want you to play PS1 games with "realistic graphics" at the same latency as a PS1 game.
I think the water in Hellblade looked remarkable, and the water interaction physics of Indiana Jones were really well done too. But it does seem to be the first thing to get cut when a game is going over budget on rendering time (we saw how RDR2 had pretty basic water on consoles but could be scaled up to a pretty fantastic level of physics simulation on PC) or even development time. The best-looking water we have now is leaps and bounds above what we had even half a decade ago, but the median water quality in AAA games has not moved drastically for at least the past decade.
Incredible stuff! :)
I didn't know transparencies were tough to trace against. These demos have really showcased how reducing other bottlenecks can drastically improve existing path-traced lighting!
That's because when a ray hits the leaves of a tree, and the leaves are actually just 2D textures with a lot of "empty" areas where alpha is zero, you have to do a texture lookup at the point the ray is hitting just to know whether the ray actually hit anything, which wastes a lot of memory bandwidth.
@@cayo3351 I wonder if you could pack transparencies into a separate texture and use a modified MIP-chain to reduce the search space.
@@glitchvid I don't see why not. That's just an acceleration structure, which is of course a critical part of tracing 3D geometry too. It could necessitate two alpha textures, though - since you'd still likely want an unmodified mip chain for alpha testing the rasterized triangles.
@TechArtAlex yeah, could pack it in the RG channels of a BC4 texture.
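The cost this thread is describing, and the conservative-mip workaround floated in the replies, can be sketched in a few lines. This is a toy CPU model, not real shader code; all names here are hypothetical, and the max-alpha mip stands in for the "modified MIP chain" idea above:

```python
# Toy sketch of alpha-tested foliage during ray traversal: every ray that
# hits a leaf card must sample the alpha texture to learn whether it
# actually hit anything. A conservative max-alpha mip lets fully empty
# regions reject rays with a single coarse lookup instead.

def build_max_mip(alpha, block=4):
    """Downsample an alpha grid by taking the MAX of each block x block tile,
    so a 0 in the coarse mip guarantees the whole tile is transparent."""
    h, w = len(alpha), len(alpha[0])
    return [[max(alpha[y][x]
                 for y in range(by, min(by + block, h))
                 for x in range(bx, min(bx + block, w)))
             for bx in range(0, w, block)]
            for by in range(0, h, block)]

def any_hit(alpha, max_mip, px, py, block=4, threshold=0.5):
    """Return True if the ray's hit point on the leaf card is opaque."""
    # Cheap conservative test first: if the tile's max alpha is below the
    # cutoff, no texel in it can pass, so skip the full-res fetch entirely.
    if max_mip[py // block][px // block] < threshold:
        return False
    # Otherwise pay for the full-resolution texture lookup.
    return alpha[py][px] >= threshold

# 8x8 leaf card: an opaque blob in one corner, empty everywhere else.
alpha = [[1.0 if (x < 4 and y < 4) else 0.0 for x in range(8)] for y in range(8)]
mip = build_max_mip(alpha)            # 2x2 coarse grid of per-tile maxima
print(any_hit(alpha, mip, 1, 1))      # ray lands on the opaque blob
print(any_hit(alpha, mip, 6, 6))      # empty tile, rejected at the coarse mip
```

As the reply notes, you would likely still keep the unmodified mip chain around for rasterized alpha testing; the max mip exists purely to shrink the ray-traversal search space.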
I feel like you could do the leaves' shadows with screen-space shadows, honestly. It would look 90% as good and run 100x faster. Instead, the leaves in this demo don't have any shadows for some reason. I don't like how Nvidia intentionally misrepresents how good raster can look by just not implementing obvious features in their demos.
This
The slider they were moving showed leaf shadows vs. without... look closer. Screen-space shadows are fine, but path-traced is so much better; something about real-time flowing shadows takes immersion up to 100.
@@jeremysumpter8939 I mean if it's so much better, why not just show it. 2:17 is not an honest comparison imo.
@@jeremysumpter8939 Only because they *chose* not to put complete shadows on the left...
This might be a dumb question, but wouldn't super detailed geo lead to simpler/smaller textures? Normal maps and roughness maps would not be necessary. Also, many of the small details which force albedo to use higher resolutions (for example, cracks and details on a brick wall) could be replaced by actual geometry.
“The future” is interesting but will take a while to come to DXR or VKRT, and then to show up in games. However my first thought with this is that NVN2 probably has already gotten a lot of these upgrades since most of them are Ampere compatible. Efficient BVH update streaming could really change the equation for a machine that would conventionally have to do very little RT due to the CPU/memory bottleneck. Since it’s entirely Nvidia and Nintendo controlled, there is no reason features like that can’t already be part of NVN2 (or coming soon) and could be leveraged in ways that help a portable punch above its weight, rather than just making for pretty Blackwell “future” demos.
Absolutely incredible detail even on YouTube
Reflections are cool and all, but how much time ray aggregation takes is killing it for me.
When will we get decent physics like the old PhysX?
In the PS7 era
Games might look decent (not much improvements in the last decade just much higher system requirements) but most games are so extremely static. Props are glued to the tables, tables are glued to the floor. Barely any physics interaction in most games. Graphics improved a little but overall we're regressing.
@ Maybe
That's awesome, keep pushing things forward!
Under the Nanite mesh there is a low/mid-poly proxy mesh, used for RT shadows and reflections. Now the RT cores use new hardware acceleration to quickly find clusters of the Nanite mesh, and then the triangle.
What other kind of effects might we expect? I guess one obvious example would be with fences (you know, metal, gridded type metal fences which are entirely geometry).
It would be super to see some frosted glass or stained glass, something like that.
Damn, I thought my OLED had a dead pixel watching this video. Thanks for the heart attack lol
What's next? Nvidia Mega Planets, featuring full-sized planets rendered in real-time?
No Ratchet, it's not time for your next game to be that advanced yet.
@@DBTHEPLUG Aw, come on! I want to explore the galaxy to find some bolts!
So... Star Citizen?
This is already a thing
@ Oh
I'm so curious whether this is actually gonna be a big thing, or if it's gonna turn out there are huge flaws, like in RTX frame gen, that don't get shown in the marketing.
Alex, doesn't the latest UE5 already allow Nanite for dynamic objects, thereby letting devs use geometry for foliage instead of alpha transparency?
If so, I'm not sure why RTX Mega Geometry doing geometry for foliage would be a new thing.
So ultimately, the main thing about RTX Mega Geometry seems to be that it's Nanite but for the BVH, allowing for high-accuracy RT shadows and lighting. But I'm quite surprised Epic wasn't already working on that, as that would have been the next logical step for Nanite.
Yes, Nanite foliage was demoed during UnrealFest.
Me: "I can't wait to grab a 5080 to get some path tracing in minesweeper and 3d pinball space cadet."
Why not 5090
Never noticed Alex's scar before. It looks very cool imo
it's cuz of DLSS4 & Neural Textures XD
(no seriously, did he ever explain how he got it?)
Where can I buy one?
wtf
Man these new Nvidia GPUs
RTAleX ON
Nice and all, but what does it matter compared to frame rate?
Getting id's MegaTexture vibes from this, if anyone remembers that.
Cool cool. How bout when the camera moves in the scene?
Can you show us real-time graphics tech from ANY other company than Nvidia and Epic please??
Alex cutting his own hair still
I'm in my late 20s and my mom still cuts my hair (it looks like shit)
@@Danuxsy lmao, get the same trim as the dude: a buzz cut with a shape-up. Such a masculine-looking hairstyle, you can't go wrong with it. There are many masculine-looking hairstyles that might not suit a particular man, but this isn't one of them.
This looks fantastic but is it going to be possible even on a next gen console if they're released in the next 4 years?
Obviously not, as this is all Nvidia-exclusive tech. Consoles are AMD. And with more and more studios switching to UE, this is all very terrible news for console players.
None of this is coming to consoles, not even next gen, since it's pretty much confirmed they will be using AMD chips.
Alan Wake 2 will be the first game to support RTX Mega Geometry. Looking forward to seeing it in action!
Well, with consoles being huge sellers, I'm sure AMD will have their nice but half-assed variant, which will probably suffice fine lol
There's a possibility if AMD, Sony and Microsoft work together to mimic Nvidia's tech innovations.
Filmed camera footage, YouTube compression, and I still see the ghosting and Lumen bubbling.
It's stunning, but when will it be added to games?
Imagine when they can do this in VR.
I have no idea what this guys talking about but seems very cool
I just want games to not look blurry
1:36 come on, as if we haven't had transparent-texture shadows for many years. They're doing this on purpose.
A transparent texture is usually a flat plane; it can't fully represent each individual leaf and its curvature.
@alexanderdouble we've had shadows for each individual leaf since Crysis
@Mart-E12 with RT?
@ no, with traditional pixelated shadows
@@Mart-E12 I forgot to specify: this technique is solely for ray tracing. Before, if I'm not mistaken, to do shadows for each individual leaf you had to create a BVH for every single one; Mega Geometry builds one big BVH for all objects in the scene (as shown in the Nvidia diagram on their site).
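The difference the comment above describes can be shown with rough node-count arithmetic. This is a back-of-envelope sketch under my own assumptions (a binary BVH over T primitives has about 2T - 1 nodes; the numbers are illustrative, not Nvidia's):

```python
# Rough sketch of why one shared BVH beats a separate BVH per leaf card:
# per-leaf BVHs multiply node counts, while sharing one BVH across all
# instances costs only one lightweight instance node per leaf in the
# top-level structure.

def bvh_nodes(triangles):
    """A binary BVH over T primitives has about 2*T - 1 nodes."""
    return 2 * triangles - 1

def per_leaf_total(num_leaves, tris_per_leaf):
    # One full BVH built and stored for every single leaf card.
    return num_leaves * bvh_nodes(tris_per_leaf)

def shared_total(num_leaves, tris_per_leaf):
    # One BVH over the leaf geometry, plus one instance node per leaf.
    return bvh_nodes(tris_per_leaf) + num_leaves

leaves, tris = 100_000, 64
print(per_leaf_total(leaves, tris))  # 12,700,000 nodes
print(shared_total(leaves, tris))    # 100,127 nodes
```

The same logic applies to build time: updating one shared structure per frame is far cheaper than rebuilding a hundred thousand tiny ones.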
With these graphics, the 5090 will have 40 ms latency at 100 frames, so you will move your mouse and it will feel like 30 fps but show 100.
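The arithmetic behind this kind of claim is simple to sketch. This is my own back-of-envelope model, not measured data: it assumes only every Nth displayed frame samples fresh input under Nx frame generation, and ignores render-queue and display latency:

```python
# Back-of-envelope: with 4x multi-frame generation, only every 4th
# displayed frame is rendered from fresh input, so input responsiveness
# tracks the rendered rate, not the displayed one.

def rendered_fps(displayed_fps, gen_factor):
    return displayed_fps / gen_factor

def input_frame_time_ms(displayed_fps, gen_factor):
    """Time between frames that actually sample new input."""
    return 1000.0 / rendered_fps(displayed_fps, gen_factor)

disp, factor = 100, 4          # "shows 100" with 4x frame generation
print(rendered_fps(disp, factor))        # 25.0 rendered fps
print(input_frame_time_ms(disp, factor)) # 40.0 ms between input samples
```

So a displayed 100 fps with 4x generation samples input at roughly 25 fps cadence, which is where the "feels like 30 fps" impression comes from.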
Nvidia is too far ahead....how will AMD/Intel ever catch up?
It's about scalability; know-how doesn't matter that much, at least if you don't target early adopters. AMD/Intel/consoles target normal people first. Shit, I'm living a pretty good life in a westernized high-GDP country, and myself and most people I know won't invest 1000 bucks in a GPU; they still buy consoles. It's just a bunch of people really into this stuff getting high-end GPUs, RGB fans, etc.
Is it possible to use ReSTIR PT in Unreal Engine 5?
Maybe, if you use the Nvidia version of UE5: the RTX Branch of Unreal Engine (NvRTX).
Guys can someone explain what mega geometry is? I glazed over at all the big words
It's: ChatGPT
They had a video about that a few days ago. Honestly, it's not easy to explain simply, but it makes RTX run better because it saves the GPU from doing a lot of work it previously had to do. Watch this video:
RTX Mega Geometry Is A Big Deal... But What Does It Actually Do?
This Mega Geometry likely won't be in games until at least the 6000 series, which makes the feature almost useless right now.
Amazing video, thank you Digital Foundry.
4 minutes in and no one has even explained what Mega Geometry is. Waste of time.
No thank you. New GPUs are going to sell way above MSRP due to limited supply and scalpers. Adding more complex geometry to an already horribly built engine that relies on upscaling and FG that folks can't afford to run anyway is a big no thank you.
What is the difference between Mega Geometry and Nanite, though? Is it the path tracing version of Nanite, since Nanite does not work in path tracing mode?
Less focus on image quality and more focus on real performance please. We all know UE5 has issues.
a reason to get a 5080
All that slop and games running at 20fps without any fake frames and dlssmear
Well, this shouldn't be in any games for a while. Also, you can turn off any of those features anyway, so it's up to devs to include/remove features they don't use.
You sound intelligent.
Not
@@DemiSupremi well, they are going to add it to Alan Wake 2 soon, so there is that.
In short, MEGA GEOMETRY is an Nvidia feature that hides extremely high levels of "invisible geometry", no different from how they did out-of-sight hair rendering in FF7 on unrendered invisible cows when HairWorks was introduced, to artificially handicap competitor GPUs.
Is/was Alex into fencing?
The first thought that comes to mind is how much video memory is this going to need?!
Neither of you looking at the camera really is some serial killer behavior.
Not knowing the hardware requirements, memory requirements, CPU requirements and so forth really just makes this a demo. Judging from where we are at the moment, getting ray/path tracing performance up on true midrange cards should be a very high priority IMO. This looks like another upsell feature that I'm not too thrilled about unless it is actually usable in realistic usage scenarios.
Shadows this dark make the foliage look like a clay model. The earlier version wasn't accurate, but its inaccuracy gave a blend of subsurface scattering on foliage...
You're looking at off-camera footage. You can't judge shadows whatsoever, especially their gray balance. You should know this if you're a Digital Foundry watcher.
@@bonster101 you do have a point here, and I'm happy to be proven wrong. I checked another source which gives a better view of it. Kudos.
I detest unreal engine with every fiber in my body. It's destroying gaming. It gives developers a pass to release unoptimized games and to hire cheap disposable labor.
@@juanrubio6132 It makes games demand insane performance for minimal visual uplift.
Don't buy unoptimized games.
"Problem" solved.
That is true, and with multi-frame-gen I fear it will only get worse. Nvidia came up with DLSS to make ray tracing a thing, but devs use it as an excuse to be lazy.
@DBTHEPLUG practically all modern games are made in that dogcrap engine. F-ng Metal Gear Solid Delta. KONAMI has the Fox Engine and decided to make that game in Unreal. That engine is like the plague. My God.
@@juanrubio6132 Microsoft owns id Software and the id Tech engine, and uses that UE trash for the next Halo.
Nice gameplay
What is the point of all this technology if it cannot run at proper frame rates, even on the highest-end hardware, without relying on fake frame generation tricks?
If you think the future is just hardware getting bigger and faster in some linear fashion and things only being done with "real" frames you're going to be in for a rude awakening in the upcoming years.
If fake frames can eventually look and feel just as good if not better than "real" frames is there a problem?
@@toomuchstarbucks6787 They can keep doing that and they can keep hiring and firing people at a moment's notice for products that release buggy, behind schedule, over budget, and therefore struggle to make a profit.
I'm going to continue playing older titles or titles from smaller studios. This large studio model, it doesn't work. Look how hard everyone has to work, for a game to look marginally better than a game from years before. It's ruining the industry.
@@toomuchstarbucks6787 Frame Generation as it stands will *never* be what you're saying. The DLSS upscaling AI has gotten extremely good and seems to be getting even better with DLSS 4. Multi-Frame Gen is a horrific addition and is only serving Jensen's ability to get on stage and say "big number go up!"
Upscaling is fine so long as it's not used as a crutch by mediocre developers. It is mathematically impossible for Frame Gen to be as good as raw performance.
Switching to AI algorithms doing the rendering (which I think is what you're trying to say, correct me if I'm wrong) is *not* frame generation.
@@toomuchstarbucks6787 My favorite part of the 5 series release is PC gamers complaining about "fake" performance.
I would kill to play a game with all the bell and whistles turned on, running at TWO HUNDRED AND FIFTY FRAMES.
Yet here we are. PC gamers are some of the strangest creatures on the planet. The future is bright, but there is not a world possible without PC gamers complaining about something.
@@toomuchstarbucks6787 There wouldn’t be a problem if it were a reality. Software-based frame generation inherently has limitations and compromises, and this has been the case since the very beginning of software-based solutions. Hardware still has the potential to grow in many different ways, and we haven’t yet reached the end of hardware-based performance growth. There is still much more that can be achieved with advancements in material science. However, if the goals are derailed and software-based shortcuts become the main focus, the future doesn’t look very promising for gaming and computing, in my opinion.
Imagine if they got their current tech working before all this nonsense
Who tf cares?
How about we go back to decently optimised games that prioritise gameplay, not gimmicks like RT that require $1500 GPUs with upscaling and fake frames? That'd be nice.
Trueee
RT can be used on £300 cards; it's not an issue of games using the latest tech, it's an issue of how the tech is implemented. Optimised games can use RT, e.g. Doom Eternal, and even CP77 has good RT performance.
That's right, we want real geometry with its polygons, not all this fake geometry with its... more polygons.
This would be a good idea, not this rich man's tech junk that 0.4% of PC gamers play.
@@OliM9595 last time I tried it on my RTX 3060 Ti, it was useless. A 90-series-only feature.
I'm starting to feel that Digital Foundry's opinions, and the channel in general, are somewhat toxic for the gaming community. They hype up everything related to Nvidia and Unreal Engine, even though these two companies can be seen as harmful to gamers. It's in Nvidia's interest that games remain unoptimized so they require DLSS, thereby running best on Nvidia hardware and driving sales. Unreal Engine is a convenient partner, introducing power-hungry features like Nanite instead of offering genuine optimization tools. Digital Foundry never addresses this, and it's ultimately bad for gamers.
I believe we already know enough about the new RTX lineup, and talking about it so much before the release is becoming boring. Let's wait to see how they actually perform in real-world conditions. All the AI features would be a nice addition if the raw performance increases by over 50% across all models, at the same price as the previous generation. DLSS is clever, but it doesn't work for all types of software. Greetings to all the simulation and VR enthusiasts.
What are you talking about? This is a tech demo, and an exciting one. DF regularly criticises Unreal and gives credit to other engines with great results. Maybe you missed all the Indiana Jones content, which is not Unreal-based…
You should remove your tinfoil hat, my brother; there are no conspiracies. Unreal Engine is just garbage, and people would buy Nvidia cards no matter what, optimized or unoptimized. And yes, I agree DF are complete Nvidia shills, but there is no conspiracy.
@@Acer113 He just watched too much of the Threat Interactive channel xD It will pass
This industry and the film industry have been pushing software updates past the limits of computing the ENTIRE TIME. This isn't some new conspiracy against consumers. It's progress.
Jesus, when I was in college, rendering a single frame like this took a whole day. A short film of just a few minutes took weeks or months.
@@Acer113 he's just using brokie logic and pretending it's a fight for gamers 😂
Am I the only one who wants a GTX series comeback? I still don't care much for RTX even though I have a 3080.
huh? how would that even happen
Yeah, it's called RTX off
Might as well go back to non unified shaders era and no FP32.
@yourlocalhuman3526 idk
@drunknmasta90 Have you heard of something in your GPU called RT cores? Those take away performance, space and power, and add heat, for no additional raw performance.
Look at all the boiling lol. No thanks.
I see Alex, I hit DISLIKE!
DF's ogre at the right side of the screen, a sight to behold.
How about they fix the performance issues with UE5...
How about ... NO
You do realise this is stuff done by Nvidia, not Epic, who are too busy making Fortnite skins to fix UE.