Unique and informative 4000 series video, thanks.
nice comparison bro
I’m excited for DLSS 3.0. I’m all for faster performance, lower temps, and better energy efficiency. I hope the new Warzone gets DLSS 3.0.
No matter how smart, AI will still be guessing what's between the pixels compared to the actual pixels in between.
You have no idea how smart AI is going to get. What we have seen is only the very beginning and things are happening fast.
AI can do more than just guess between the pixels. It can learn what the objects are and how they will behave, and generate pixels accordingly. For example, a human hand has 5 fingers; a well-trained AI knows it should generate 5 fingers (and not 6 or 4) even if the actual nearby or in-between pixels don't clearly show it.
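To make the contrast concrete, here is a minimal NumPy sketch of classic interpolation, which can only ever average the pixels that are already there. The learned-model call in the trailing comment is purely hypothetical (`trained_model.predict` is an illustrative name, not a real API):

```python
import numpy as np

def bilinear_upscale(img: np.ndarray, factor: int) -> np.ndarray:
    """Classic interpolation: every new pixel is a weighted average of its
    four nearest neighbours, so it can only 'guess between the pixels'."""
    h, w = img.shape
    ys = np.linspace(0.0, h - 1.0, h * factor)
    xs = np.linspace(0.0, w - 1.0, w * factor)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.minimum(y0 + 1, h - 1)
    x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical blend weights
    wx = (xs - x0)[None, :]   # horizontal blend weights
    return ((1 - wy) * (1 - wx) * img[np.ix_(y0, x0)]
            + (1 - wy) * wx * img[np.ix_(y0, x1)]
            + wy * (1 - wx) * img[np.ix_(y1, x0)]
            + wy * wx * img[np.ix_(y1, x1)])

# A learned upscaler replaces these fixed averaging weights with a network
# trained on huge amounts of real images, so it can synthesize structure
# (five fingers, sharp text edges) that no weighted average of the input
# pixels could produce:
# upscaled = trained_model.predict(low_res)  # hypothetical call, illustration only
```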
Your pixels on screen are an estimation of how an object looks as well, because they're fixed points on a grid. Even worse, objects can move between adjacent pixels, so subpixel data is used to approximate how that pixel would actually look on a "larger pixel grid" (Temporal Anti-Aliasing/Reconstruction).
If you actually think about it, games are nothing short of one huge approximation in general...
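For anyone curious what that "larger pixel grid" approximation looks like in practice, here is a minimal sketch of the temporal accumulation at the core of TAA/temporal reconstruction, assuming each frame is rendered with a small subpixel camera jitter. The names and the 0.9 blend weight are illustrative, not taken from any particular engine:

```python
import numpy as np

HISTORY_WEIGHT = 0.9  # fraction of the accumulated past kept each frame (assumed value)

def taa_accumulate(history: np.ndarray, current: np.ndarray) -> np.ndarray:
    """Blend the jittered current frame into a running history buffer.
    Because the camera is offset by a tiny subpixel jitter every frame,
    each pixel converges toward the average of many sample positions,
    approximating a finer grid than the physical one."""
    return HISTORY_WEIGHT * history + (1.0 - HISTORY_WEIGHT) * current

# Real TAA additionally reprojects `history` along per-pixel motion vectors
# and clamps it against the current frame's neighbourhood to limit ghosting.
```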
@Mark Jacobs I fully understand this, but just because you have a first approximation does not mean it will always get better by adding another one on top. Not in the general case.
I can understand that some games can benefit substantially from this, but that is because the geometries are generally larger or have features that the AI is heavily trained on. In MSFS, and in VR in particular, the rendered picture is cluttered with huge amounts of tiny objects. Sure, close to large buildings or overlooking a mountain it generally looks quite OK, but that is not where the problems are.
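To make the tiny-objects point concrete, here is a toy NumPy example (my illustration, not from the comment): once a feature is thinner than the low-resolution sample spacing, naive downsampling can drop it entirely, and no second approximation can reconstruct it from the surviving pixels alone.

```python
import numpy as np

signal = np.zeros(16)
signal[5] = 1.0        # a one-pixel-wide feature (antenna, power line, runway light)

low_res = signal[::2]  # naive 2x downsample: keep only the even-indexed samples
print(low_res)         # all zeros; the feature at index 5 never got sampled

# Any upscaler fed `low_res`, learned or not, has no trace of the feature
# left to work with; the second approximation cannot undo what the first lost.
```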
You got a good eye bro, you should work for Digital Foundry.
No one knows why glare appears when using DLSS; it's most noticeable in Cyberpunk. What is it connected with? The technology is very cool!