I'm on a Computerphile Marathon today, love watching this guy
Worldaviation 4K check out Robert Miles, he also has his own channel. Easy to binge
Mike Pound is a boss!
@@julius4858 thanks
666 upvotes, can't upvote it anymore.
Watching this for the first time at Christmas 2021 and a lot has changed since this was filmed. DLSS is the real deal and a major performance enhancer now.
“Maybe a game comes out and has a huge demand on your GPU”. Cyberpunk 2077 made this line age very well. The game relies heavily on DLSS
true
facts
@Matt M does that mean it would potentially work for retro games as well? PS2, Dreamcast or GameCube?
Don't tell that to the console players, they are still going bald from hatred 🤣
@@FuZZbaLLbee most emulators use the CPU to emulate the graphics, at least the best ones. You would need an emulator that does a translation layer to a normal API (like the Switch emulator Yuzu, which calls Nvidia APIs) and apply a DLSS ReShade filter that doesn't exist yet as far as I know. So it's totally possible, but it doesn't make sense for that generation of gaming, plus developers need Nvidia's permission to use the API.
That being said, you could run the emulator on a PC, use GameStream to remote-play to a 2019 Shield TV and use its DLSS game upscale option. I really don't think it will work better than SMAA from most of those emulators, but there is only one way to know.
"Or your graphics card is just amazing - mine is not."
Just casually reminding you, Mike, of the multiple GTX Titans you've got in the video about password cracking 😀
University equipment which he accesses through the uni server...
@@hypnoticlizard9693 which typically when updated and replaced will be sold to university staff at a huge discount. :)
@@Great.Milenko At which point it's pretty much useless for anyone to game on.
@@DasAntiNaziBroetchen depends on the university... I have access to lots of 10-series Nvidia GPUs and there's a 2080 Ti they're looking at getting rid of. For the most part, since they're not buying it with their own money, a college can simply put in a requisition for a fair few new GPUs and get several of the most expensive models every generation. It depends what they need them for, of course, but, say, a university that has a well-equipped computer lab and server setup will want the latest and greatest hardware no matter the cost.
well, those didn't have tensor cores yet
I click on basically every video with Dr. Pound in it. Been on a binge recently, and I love listening to him explain stuff.
I'd love to see a followup video on this looking at DLSS 2.0/2.1.
When looking at games where it's successfully implemented, it's clear that it actually exceeds what most people ever expected from it.
I see Computerphile's Peter Parker and I click like. Very nice vid, bud!
LOL. At first I thought... his name's not Peter... but after a few minutes it sunk in. He looks like Tobey Maguire
The new second generation DLSS is actually pretty impressive. It can improve framerate quite a lot with almost negligable visual quality.
True!
> with almost negligable visual quality.
hehehe.
Just experienced this in Death Stranding, 85fps DLSS off, 105fps with it on. (1440p, on a RTX 3060Ti)
I am using DLSS 2.0 on Cyberpunk and am pretty impressed with the results. I was skeptical going in, but it really does work. Be amazing to see how far this progresses in another year.
It's brilliant on Cyberpunk, the difference between me gaming at 1440p and 1080p.
That all too subtle Crysis reference! :D
That was the best part of the video :)
Ironically not that hard to run these days and uses older AA tech :p
Although funnily enough there are still parts that dip to 30-40 frames even today, just because Crytek was betting on single-core CPU performance getting faster than it did. Instead CPUs shifted to multi-core designs, and Crysis 1's engine isn't well suited to that. Those sections also got stripped out of the console releases of Crysis 1 for similar reasons.
@The Monarch I agree overall, however I would point out that some modern games feature equally or more advanced graphics and physics; the difference is that they fail to incorporate any of this into the actual gameplay experience. For example, the way destruction physics and object collision are handled is regularly on par with or beyond what Crysis did many years ago, but you don't really notice it or aren't compelled by it, because the game designers don't want to design core gameplay mechanics around the fact that the environment is accurately simulated and largely destructible, so they choose to mostly disable it except for cosmetic purposes.
What Crysis did exceptionally well was translate all the amazing technical advances of the engine into actual gameplay. It mattered that the physics were great because, in addition to being able to literally blow stuff apart with grenades, you also had a nanosuit that gave you super strength, allowing you to punch almost any object into orbit (well... at least if you increased your strength value a bit with the in-game console), giving you a real sense of freedom and empowerment as a player, because you were allowed to realistically engage these virtual environments on your own terms and with your own methods, in a way that didn't make you feel boxed into a specific, predetermined approach the developers wanted you to take.
It mattered that vast amounts of foliage and terrain could be rendered in great detail with high draw distance and realistic lighting, because that's what it takes to recreate the experience of stealthily hunting your prey through a dense jungle like you're the Predator from Predator. Developers always talk about immersion as though it is achieved by graphics and other technicalities alone, but what matters most in my opinion is making the player feel like the choices they make are ones they came up with themselves, instead of having to pick them out of a small, arbitrarily constrained set of predetermined paths.
Crysis was the perfect techdemo/sandbox/game-hybrid.
This channel is really underappreciated for the amazing content they post
0:28 I guess he didn't mean "lower framerate" but "lower resolution".
yes they need a subtitle/caption to fix that
RCT never suffered from low framerate
@@MikeDawson1 if only youtube had a tool which would allow you to show text on top of videos after you uploaded them.....
Alexander Mitchell annotations are gone
Guess he's gone down the physicist route where time=space.
The nice thing about DLSS is that in the 2.0 version it has evolved so much that it is now a generic supersampling algorithm; it no longer depends on game-specific training.
DLSS 2.0 is amazing
I am pretty sure it does.
Devs have to develop the game to work with DLSS 2.0
And the list of games that do work is quite small.
@@brickbastardly +
We tried using this at my mapping company. It's performed ok.
Source?
I love this guy and the cameraman. World class questions.
Besides Tom Scott's appearances, I think Mike is my favorite Computerphile guest.
One of my favourite speaker / hosts on Computerphile. Found this super interesting. Thanks for the video!
Meanwhile I experience like 90% of my media intake at 720p on a slightly dirty 5 inch phone screen...
I'm sorry you are poor
Dude, you need DLSS now!!!
480p for me unless it needs it. If I leave it at 1080, my internet cuts out and I go over my data limit.
Well, you picked an iPhone yourself :) you could have gone for a much better screen for the same price
@@random_birch_forest Never said it was an iphone, not sure why you'd assume that?
And being poor doesn't have that much to do with it either, I suppose I could buy a slightly bigger phone but it's still a small screen. I have a laptop, I just rarely watch anything on it because it isn't as convenient.
This guy is James Grime of Computerphile ;)
I might not know your reference but I do get the feel of it...
Stein Codes james grime is the host of numberphile
Not sweaty enough
If you have been, thanks for watching.
Search SingingBanana channel if you want to find more of his content
It is amazing how far DLSS has come since this video was published. I hope University of Nottingham got this professor an RTX graphics card to study. I know they were making jokes about it but honestly would be a good research project. DLSS is really neat!
Another great video from this channel. Regarding his answer on whether DLSS "works" (he said sometimes it does and sometimes it doesn't, or something like that): to be more specific, it works incredibly well for still shots and low movement, and the faster the camera moves the less effective it becomes. So it's pretty nice to have in general. You will only catch the ugliness after a quick head spin, but then it all quickly melts away back to nice and beautiful.
0:18 The game is a big game with lots of shi_ .. with lots of effects...lol
Edit: Thaaanks for the likes! Have a great day everyone!
Hahaa, think he was going to say shaders. Still though, might not lol
Huh. While watching i thought he was about to say "shaders" and reconsidered, since not everyone may know what a shader is.
omg :D didnt noticed at first
Censoring the word "shaders" on this channel? Don't think so.
I'm pretty sure he was going to say "shaders" and then realized that he was going to have to explain what those were.
Brilliant explanation.
(And despite only briefly covering AntiAliasing, it's still the best explanation i've ever heard of that too!)
0:30 I think he meant to say “lower resolution”
at 0:28 I think he means run at a lower resolution (lower than 4K), which will give you more fps, then recreate the image with deep learning to look like 4K
Would really love to see an actual programming language or any subject tutorial from Dr.Mike Pound. love the way he conveys knowledge, so easy to understand.
This is not only topical, but also incredibly interesting. There is a lot of talk about DLSS these days, but very few explanations on how it's done. Now that you have brought it up, I'd love to hear more about antialiasing techniques in 3D applications. These have advanced by leaps and bounds in the past years, especially with the introduction of temporal AA.
I love that this is filmed like The Office, with constant dramatic zooms in and out, really adds to the impact of the video.
Another way to explain this:
When super-sampling, the value for a 'created' pixel will always be an interpolation of the pixel it was created from and its surroundings (i.e. a gradient).
What DLSS does is take a machine-learning algorithm and apply it to figure out the formula for that gradient for each pixel, based on what is happening on the screen. The network is supposed to figure out what the hard and soft edges in a frame are, so pixels can be interpolated without creating aliasing issues.
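To make the "plain interpolation" baseline concrete, here's a toy sketch of my own (not anything from the video or from Nvidia) of bilinear upscaling, where every created pixel can only ever be a blend of its neighbours, so no new detail can appear:

```python
import numpy as np

def bilinear_upscale(img, factor=2):
    """Plain bilinear upscaling: every 'created' pixel is just a weighted
    average (a gradient) of the four nearest source pixels."""
    h, w = img.shape[:2]
    out_h, out_w = h * factor, w * factor
    out = np.zeros((out_h, out_w) + img.shape[2:], dtype=np.float32)
    for y in range(out_h):
        for x in range(out_w):
            # Position of the new pixel in source-image coordinates
            sy, sx = y / factor, x / factor
            y0, x0 = int(sy), int(sx)
            y1, x1 = min(y0 + 1, h - 1), min(x0 + 1, w - 1)
            wy, wx = sy - y0, sx - x0
            # Blend the four neighbours; this is exactly the limitation
            # a learned upscaler tries to get around
            out[y, x] = ((1 - wy) * (1 - wx) * img[y0, x0] +
                         (1 - wy) * wx       * img[y0, x1] +
                         wy       * (1 - wx) * img[y1, x0] +
                         wy       * wx       * img[y1, x1])
    return out
```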
the best computer science channel on youtube ! !
I remember the day I upgraded from my Rendition Verite V2200 OpenGL card to my first 3dfx Voodoo 2 12 MB in the summer of 1998 (or maybe it was 1999). I was so impressed by 1024×768 with no anti-aliasing at >30 FPS, and 1280×1024 at 20 FPS, that I thought the whole anti-aliasing debate was dead in the water, despite liking it at 320×240 previously on the Verite, where my framerate was too slow at 640×480.
I was playing Quake, Quake 2, ZDoom, Half-Life and Unreal at the time.
Best computerphile guest is back !
What about Tom Scott? 🤔
I love how at 9:18 they show side-by-sides of Battlefield 5, and both look exactly the same.
The point was to show that with DLSS the game runs at higher FPS, as opposed to TAA, which uses more GPU resources and is 15 fps slower than DLSS while producing the same graphics.
mike pound is one of my favourite computerphile ..... presenters? guests? hosts? lecturers? ........... yknow..... that thing.
Really helpful video. Explained the technicalities very well. Went through loads of articles that just didn't explain the detail well at all before stumbling onto this video
30 mins ago I was watching a* algorithm, and now I am watching DLSS...
Thank you computerphile very cool
I go from like 45 to 75+ fps simply by turning DLSS on. It's incredible
Almost 2024... probably time for an update on this.
DLSS has come a long way, and is now a staple/requirement for many games.
What I'd like to see is partial super-sampling. What if in an FPS for example we could render the center of the screen at true 4K, an outer perimeter at upscaled 1080p, and maybe the very edges/corners at upscaled 720p. That would make a lot of sense since it matches how our own eyes work, we mostly care about motion at the edges of our vision, and we can only focus on things we're looking directly at.
Nice idea!
What if I want to look at the side of the screen? You can't know where the player will be looking.
That’s called Nvidia Multi-Rate Shading and it’s already in games like Shadow Warrior 2
@@IceMetalPunk How about eye tracking? Render what you are looking at in highest detail.
@@DJPsyq Boggles me why more games don't use this tech. Mix this with dynamic resolution and you have a winner for consistent framerates and decently crisp image quality.
DLSS now, in 2021, is absolutely fantastic. DLSS on Quality Mode in Metro Exodus nearly DOUBLES my FPS when rendering at 4k!
You still might want to use DLSS, even if you can run the game just fine, simply because it does a better job at anti-aliasing than other methods.
It also has to work at way more than 60 fps; modern gaming monitors go as high as 360 Hz, and these upscaling methods have to be able to keep up with that.
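To put numbers on that, here's a quick back-of-the-envelope calculation (just illustrative) of the per-frame time budget the whole pipeline, upscaling pass included, has to fit into:

```python
# Per-frame time budget at common refresh rates: whatever an upscaling
# pass costs has to fit inside this window, alongside the actual rendering.
for hz in (60, 144, 240, 360):
    print(f"{hz:3d} Hz -> {1000 / hz:5.2f} ms per frame")
# 60 Hz -> 16.67 ms, 144 Hz -> 6.94 ms, 240 Hz -> 4.17 ms, 360 Hz -> 2.78 ms
```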
I love that this was 3 years ago! Implies that most devices already do this!
1:43, my sarcasm meter went through the roof
Spent quite a bit of time at 9:19 trying to find the difference, only to remember I'm sitting here learning about upscaling 1080p to 4k with anti-alias on my 768p monitor.
I look forward to your further 'research'
"enhance!" General Naird
I would never have guessed that one day Computerphile would allude to the "But can it run Crysis?" jokes. Godspeed to you!
Crazy stuff... I kinda thought real time ray tracing was supposed to eliminate some of that stuff, but apparently not. Another great video guys, thanks!
knew about both of those terms before, could have never guessed this is what it meant
I just got a RTX 3060. And I’m amazed by DLSS. Thank you for the explanation!
Five minutes into this vid and it's already interesting. I wanted to know what supersampling actually does,
exactly, and this man explains it in a super understandable way, for me as a "normal" IT guy.
And then they released DLSS 2.0 changing everything up completely.
I see the influence of marketing on the choice of terminology. At first I was skeptical that a 4K screen would have only 4 times the number of pixels as a 1080p screen. This is not a naive guess. I worked for two decades in image processing and our cutting edge film recorders in 1990 were billed as 4k, because they were 4096 pixels wide. 4K screens are, indeed just 4 times the number of pixels as 1080p. This is for two reasons. Firstly, because 1080p is named after the vertical dimension, where the horizontal dimension is 1920. Secondly, the horizontal dimension of 4K is only 3840 pixels. So, if they had continued the naming convention, 4K would be 2160p, and 4K has just twice the dimensions each way as 1080p, but doesn't 4K sound so much more impressive a jump up from 1080p?
You are correct...
1920x1080=2073600
2073600x4=8294400
3840x2160=8294400
@Computerphile Dunno if it was pointed out before but he misspoke at 0:28: he said "run it at a lower frame rate" but actually means "run it at a lower resolution".
DLSS 2.0+ is made of two primary components. The "AI" upscaling talked about in this video is the second, while the first is just a temporal upscaling ("supersampling") method similar to TAA, and it's that first part which does the vast majority of the beneficial work. The "AI" component helps the apparent speed of the temporal resolve but doesn't really add much detail. For this reason, depending on the implementation, the developers may also choose to include an additional sharpening pass.
Speaking of sharpening passes...
AMD / ATI currently doesn't have anything which competes with this at all. The FidelityFX upscaling is just their Contrast-Adaptive Sharpening (CAS) made resolution-independent with a basic upscaler (bicubic or similar), and CAS itself is just an enhanced LumaSharpen. While certainly a useful and flexible way of improving the apparent fidelity of the image, it does not compete with the temporal sampling solution seen in DLSS.
There's a whole conversation to be had on how games are made and optimized, and how Nvidia is pushing the technology of the market in their favor.
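For anyone curious what "contrast-adaptive" sharpening means in practice, here's a rough single-channel (8-bit grayscale) sketch of the idea; this is my own approximation, not AMD's actual CAS code: sharpen flat areas more and strong edges less, so you don't get ringing.

```python
import numpy as np
from scipy.ndimage import uniform_filter, maximum_filter, minimum_filter

def contrast_adaptive_sharpen(img, strength=0.5):
    """Rough illustration of contrast-adaptive sharpening: boost detail
    where local contrast is low, back off where it is already high."""
    img = img.astype(np.float32)
    blur = uniform_filter(img, size=3)          # local average (3x3 box)
    local_max = maximum_filter(img, size=3)
    local_min = minimum_filter(img, size=3)
    contrast = (local_max - local_min) / 255.0  # 0 = flat area, 1 = hard edge
    amount = strength * (1.0 - contrast)        # sharpen less on strong edges
    sharpened = img + amount * (img - blur)     # unsharp-mask style boost
    return np.clip(sharpened, 0, 255).astype(np.uint8)
```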
Being an MLE, I'm learning variational auto-encoders myself; fascinating how an AE can compress so much info into just a couple of latent variables. I think VAEs and GANs are catching all the attention with their ability to re-create realistic data (images, video, text) :) nice talk Mike.
Speaking of Computerphile and games, where the heck is Miegakure? I'm dying and I'm worried I won't get a chance to play it; I only have a few decades left. Please do an update video to pressure him to finish it.
6:12 who added "tentacles" in the subtitles?
He's talking about the network doing both morphic antialiasing and supersampling at the same time. While that would be a big improvement in speed, the gains we're seeing so far are so small that I think nvidia isn't even attempting the antialiasing part and they're taking an already anti-aliased image as the input.
A part 2 is definitely needed now that DLSS 2.0 is here.
Interesting note. I initially thought 4K was 16 times bigger than 1080p, thinking it was four times bigger vertically and horizontally. Turns out I was wrong. 4K refers to the horizontal resolution, whereas 1080p refers to the vertical resolution. They changed naming conventions to make it seem like you're getting more improvement in resolution than you actually are.
4K sounds way more marketable. It's much better than saying the alternative.
2160p doesn't sound as sexy I suppose.
First six minutes felt like an endless loop describing the same problem four or five times over again. Made me feel like it’s 4K instead of 1080p 😜
Exactly, I love Computerphile videos but this one was 7 minutes of just setting up the problem 5 times with the same words, and then the explanation is ok, but not really deep.
This is a great channel.
Great video, never quite understood anti-aliasing techniques until this.
Man I wouldn't mind joining this Uni just to learn from this man
I think the nice thing about doing something like this on a game is that the network can be trained not only on "games" in general but on the particular game it is expected to work with.
And I think it could also take advantage of other parameters that a game can offer that, say, a TV show can't. For instance, you could theoretically get your 1080p aliased render to also render a 4th channel in addition to the RGB; maybe the 4th channel is a rendered wireframe, or maybe it's depth, or maybe it's a value that represents a particular quality the object has. Then you train the network on that 4-vector data... I would imagine the output would be significantly higher quality with access to the extra data a game can provide.
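As a hypothetical sketch of what that could look like (shapes and the choice of depth as the extra channel are made up; this isn't how DLSS is actually fed), in PyTorch the only structural change is that the first layer of the upscaler accepts four input channels instead of three:

```python
import torch
import torch.nn as nn

# Hypothetical: the game provides a depth buffer as a 4th input channel next to RGB.
rgb   = torch.rand(1, 3, 270, 480)   # low-res aliased render, (N, C, H, W); small crop for the example
depth = torch.rand(1, 1, 270, 480)   # per-pixel depth from the engine
x = torch.cat([rgb, depth], dim=1)   # (1, 4, 270, 480)

# First layer of a toy upscaling network simply takes 4 channels instead of 3;
# everything after it is unchanged.
first_layer = nn.Conv2d(in_channels=4, out_channels=64, kernel_size=3, padding=1)
features = first_layer(x)
print(features.shape)                # torch.Size([1, 64, 270, 480])
```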
I wonder if the thing he misspoke about could also be a potential solution. He accidentally said "lower framerate" instead of "lower resolution", but now I'm wondering if you could run at a lower framerate and just ai interpolate frames
It would be great to see a new video about it: how Nvidia generalized the network so you don't have to train for a specific game, with much better quality, basically only needing a motion vector pass.
5:20 Correction: Multi Sample Anti-aliasing (MSAA) does use multiple samples per pixel but ONLY for pixels which have an edge going through them, so pixels in the middle of a polygon aren't affected. It is highly unlikely that all pixels have edges running through them.
Full Screen Anti-aliasing(FSAA), aka Super Sampling Anti-aliasing (SSAA) aka render scale, does it for ALL pixels on the screen like he describes however.
Slight correction.
Multisampling calculates shader/texture once per pixel and stores result to all subsamples in pixel which are occluded by the polygon.
Color, Z/Stencil buffers have all subsamples and every one is updated each time pixel is written.
This is the reason why only edges are affected and it still handles cases like intersecting polygons correctly.
If MSAA would only affect edges of the pixel it wouldn't handle intersecting polygons correctly. (Like the 16xAA Parhelia used.)
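A much-simplified software sketch of that behaviour (purely illustrative, nothing like how a GPU actually implements it): the colour is shaded once for the pixel, written only to the subsamples the triangle covers and wins the per-subsample depth test on, and the final resolve averages the subsamples.

```python
import numpy as np

SUBSAMPLES = 4  # 4x MSAA for one pixel

def rasterize_pixel(color_samples, depth_samples, tri_color, tri_depths, coverage):
    """color_samples: (4, 3), depth_samples: (4,) for one pixel.
    tri_color: colour shaded ONCE for the pixel; tri_depths: depth at each subsample;
    coverage: which subsamples the triangle covers."""
    for s in range(SUBSAMPLES):
        if coverage[s] and tri_depths[s] < depth_samples[s]:
            color_samples[s] = tri_color       # same shaded colour for every covered subsample
            depth_samples[s] = tri_depths[s]   # per-subsample depth handles intersecting polygons

def resolve_pixel(color_samples):
    """Final MSAA resolve: average the subsamples into the displayed pixel."""
    return color_samples.mean(axis=0)

# Example: a red triangle covering 3 of the 4 subsamples of a black background pixel
colors = np.zeros((SUBSAMPLES, 3))
depths = np.full(SUBSAMPLES, np.inf)
rasterize_pixel(colors, depths, np.array([1.0, 0.0, 0.0]),
                np.array([0.5, 0.5, 0.5, 0.5]), np.array([True, True, True, False]))
print(resolve_pixel(colors))  # [0.75, 0, 0] -> a partially covered edge pixel
```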
Its already so good!
Fantastic content today! Thank you for the thorough explanations
please sponsor this man a machine, a gpu and a bunch of games! :-D
don't forget its for science!
It's probably more of a lack of time thing. He sounded like he used to game.
@C S That's just silly, how about a dedicated gaming rig with an i9-9900K, an RTX 2080 Ti and a bunch of RTX ON games
@@UmVtCg 9900K is doable, 2080 Ti is doable. RTX games... not so much.
His institution has the money... I'm sure if he asks, they will do it.
Reminds me of an article I read the other week showing around 100 images of faces which had been created by deep-learning AI. It's astounding the level this technology is at now; from looking at those images there is no way you could tell that the people had never existed and that they were created by AI. Can't wait to see how far this can go.
God..! I love this channel..!!
this dude knows everything
A few additional points to mention... DLSS competes with other things that use the tensor cores only, like real time ray-tracing, in terms of performance cost. It doesn't impact the normal (non-RTX) load on the card by the game, unless the card's thermal solution is unable to keep up with regular cores, CUDA cores, and tensor cores all being loaded, which would cause thermal throttling. If you lower render resolution to 1080p and DLSS to 4K, without ray-tracing, you get the full performance benefit of lower render resolution, with no performance cost from DLSS (apart from the static frame time). If you use ray tracing and DLSS, then DLSS only impacts the performance of the ray tracing features.
Also worth mentioning how other technologies like GSync frame doubling also improve the framerate on top of all of this.
just noticed your channel ..... WOW !!! simply WOW !!! thank you so much for the interesting lesson😃😃 !!
this guy pretty much described the Cyberpunk 2077 launch
0:29 Pardon. Did you mean resolution?
Haha that Crysis bump, love it!
I like the way things are going
BRO, great video. Thanks
"I love motion blur"
HERETIC
Me too
@@filiphedman4392 Shun the nonbeliever!!
SHUNNNNN
It makes my head hurt
HEATHEN!
Motion blur is the work of the devil. Much like fwd bmws.
I would definitely recommend Digital Foundry's videos on DLSS 2.0 after watching this video. Some great results on games like Control and Death Stranding
I'm curious to know what they are training the network with specifically. It sounds like pixel data, but I can't imagine, even game specific, how a neural net could upsample. The variation in frame composition seems like you would get a lot of artifacts or noisy behaviors. The network doesn't presumably know if you are looking at a car burning or an open sky, for instance, which wouldn't remotely upsample the same.
I could listen to this guy speak for hours. Reminds me of Gavin.
Dr. Mike - I love motion blur
Linus wants to know your location ☠️
Two of the best examples of DLSS are Rise of the Tomb Raider and Shadow of the Tomb Raider. I have never seen DLSS look so good.
0:27 Don’t you mean “resolution” not “framerate”?
Dlss 2.3 and dlss in general is amazing tech !!!
It occurs to me that there's no reason that, if you're rendering this way, the NN input has to look anything like a fully rendered 4k bitmap with exactly 3 colour channels and nothing else, it only has to be able to provide the information needed to produce one. It can be whatever combination of layers you want at whatever resolutions you want, with all sorts of possibilities for what they could be.
Say you calculate: a 1 ray/pixel raytrace layer at 1/2 resolution; an unlit rasterized layer at the target resolution for detailed texture info; and maybe some kind of edgefinding and z-buffer layers at 2-4x target resolution. Whatever is found to work best as a tradeoff.
This might be why DLSS 2.0 is so much better, thinking about it.
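To illustrate the kind of input assembly being described (purely hypothetical; the layer names, resolutions and resampling choices are made up), everything just gets resampled onto the target grid and concatenated into one multi-channel tensor for the network:

```python
import torch
import torch.nn.functional as F

H, W = 216, 384                                # target resolution (kept small for the example)

raytrace = torch.rand(1, 3, H // 2, W // 2)    # 1 ray/pixel trace at half resolution
unlit    = torch.rand(1, 3, H, W)              # unlit rasterized pass at the target resolution
edges_z  = torch.rand(1, 2, H * 2, W * 2)      # edge-finding + z-buffer layers at 2x resolution

stacked = torch.cat([
    F.interpolate(raytrace, size=(H, W), mode="bilinear", align_corners=False),  # upsample to target
    unlit,
    F.interpolate(edges_z, size=(H, W), mode="area"),                            # average down to target
], dim=1)

print(stacked.shape)  # torch.Size([1, 8, 216, 384]) -> whatever mix works best as a trade-off
```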
7:54 _unlocking your face with your phone_
That's... deep bruh. You hit the tiktokers and facebookers hard, maybe instagrammers too, who knows. Big slap on their brain ;-)
I see Dr. Pound, I click :)
As of February 2019, the implementations of DLSS on the market (in games including Battlefield V and Final Fantasy XV) are terrible, and provide a worse experience than running without DLSS enabled. For analysis and comparisons, search for the following videos:
"Battlefield V DLSS Tests: Trying to Find the Upside, ft. 2060 & 2080 Ti" by Gamers Nexus
"Battlefield V DLSS Tested, The Biggest RTX Fail Of Them All" by Hardware Unboxed
Just get more vram and throw normal AA at stuff, when AMD sort out the terrible drivers the R7 can do that no problem :)
DLSS will always produce a worse result than running without it. The question is whether the result is better than running at a lower resolution.
@@matsv201 Take a look at the second suggested videos. There is a direct comparison. I won't spoil it for you, but it is not what you expected.
@@BlueTJLP
The second suggested video will not be the same for me and for you. Because it depends on what you watched before.
matsv201 he means the second video he lists
Would love an update to this video covering how DLSS 2.0 does it differently to DLSS 1.0
Please provide subtitles
Cyberprank 2077 would not be playable with RTX without DLSS
I think it would be best to have DLSS settings for a short list of general visual styles, and then each game would just tell the card the appropriate one. Like having cartoon and photorealistic options, so it knows whether to make things smooth gradients with sharp edges, or noisy and detailed (and if video compression for, say, Netflix could also get those two options, that would be great). When you zoom in to the level of dozens of pixels, there aren't really that many major ways to vary how a game looks. The DLSS might be able to figure out the appropriate action itself, but it would probably run faster if it had two or maybe a few modes, so it has to think less about context for each operation.
I’m a simple man. I see Mike Pound, I click
What an exciting but dystopian future when we will be able to unlock our faces with our phones
Mike Pound!!!! I love this guy