My old gaming monitor was 1440P 144hz (TN panel) but for some reason it had a lot of input latency at 60hz, so having around 60fps would feel really bad on top of no variable refresh rate and the low fps. Upgraded last year to one with really low input latency at all refresh rates and it's so much more tolerable when I can't run a game at high frame rates. I almost don't mind it now
@@SiisKolkytEuroo I think they're confused and are meaning to talk about pixel response times, as that's primarily the issue that non oled panels face with VRR. Most VRR panels SUCK at lower refresh rates and cause it to be blurry. I think this is partially a reason why a lot of people who go for a high refresh rate monitors then think 60 fps is unplayable after. Sites like rtings offer reviews of pixel response times over the range of refresh rates, max, 120hz,and 60hz. This is the reason I simply cannot wait for proper OLED panels to high the monitor market. I'm assuming QD OLED with the way the market is trending.
@SiisKolkytEuroo I guess you guys have never read an in-depth monitor review like on Rtings Input latency from just the monitor receiving a signal and displaying a change can be measured on its own. That shouldn't be very high as there's additional latency added by the game engine and your refresh rate. For some reason, some monitors behave badly at lower refresh rates, adding additional latency that wasn't there at higher rates. My current MSI MAG274QRF-QD was measured as having 3.8ms of input latency at 165hz, and 9ms at 60hz. This is considered very good. My old monitor would add something like 20-25ms at 60hz, instead of the 9ms on the MSI monitor. You can definitely feel that when combined with the time for frames to be rendered and additional latency caused by the game (some games like RDR2 add a lot). Thankfully, most monitors don't have this issue and a few in-depth review sites measure input latency at multiple refresh rates as well as with VRR so you can know before buying that it'll be a problem or not.
@@2kliksphilip I've thought that an interesting thing to try would be to separate the screen into different elements, and then update each of them at different framerates - a low fps is hard to notice when the screen isn't moving very much. so lower the output fps intentionally to parts of the scene that don't move quickly, to free up processing power for the parts that are moving much faster, and could therefore benefit from a higher fps. (eg. in a game like doom, assign lower output fps to the gun model, hud, and stationary objects, but a high fps to the background and edges of the screen. or if you are moving slowly, for example exploring in minecraft, most of the background can also be rendered at a low fps, and have the edges of the screen go up.) a big issue with rendering the same object at multiple framerates is obviously tearing. this is why it would need to be split up by element. Another issue with this would be recording, you wouldn't want this to be something that gets recorded as it would look silly when going frame by frame, but there are other things where this is also true and we use them anyway. I'm also not entirely sure
I find the input latency these options have is a big turnoff for me; as you said, it is very noticeable, and bogs down the gameplay. That said, I do like where this technology is heading.
I actually don't like where this is heading. Don't get me wrong, this technology is absolutely impressive. But to be so greedy for smooth visual fidelity that you are willingly accepting generationally increasing visual artefacts might eventually end in a decreased gameplay quality, because even though you might not perceive the loss of quality consciously, the overall experience is going to be "kinda off".
@@finn6612 Well when DLSS 1 came out everyone said it sucked and not to use it so there is a chance In a few years they get the latency down to an unnoticeable amount.
@@DevNug DLSS 1 was literally being compared with smart sharpening filters like contrast adaptive sharpen back in the day and was LOSING those comparisons. Now it's something that's so good it's a no brainer to turn on at higher resolutions. You're right, things improve very quickly.
I think this would be most useful for really old games like okami that have their physics tied to their frame rate and could use the frame rate boost even if artificially.
True, though unfortunately it relies on in-engine motion vectors that don't seem like something you could really just mod in/inject like you can with FSR 1.0.
Thanks so much for pointing this out. I have the exact same problem with DLSS3 image interpolation. If you already have high frame rates you don't need even more to make it even smoother. What matters most (at least for me) is the input lag!
Agreed. This tech is I think useful if the specific game is already have low input latency thanks to its well optimized engine, isn't some competitive game where reaction is super important, has Nvidia Reflex support, and is CPU bottlenecked to 40-60 fps so through DLSS 3 you at least get a smoother image at around 100 - 120 fps.
Your reviews, opinions are the most important for me in the whole internet. Why? Because you do it from an ACTUAL gamer perspective, with focus on the feel instead of technicalities.
I finished A Plague Tale Requiem on my 4090 today at 4K120. I used DLSS3 with DLAA OR DLSS2 at quality with no frame generation swapping back and forth through out the game. I played the game on a dualsense controller. Using a controller the increased input lag of DLAA + DLSS3 vs DLSS2 quality wasn't noticeable for me, the difference in input lag according to nvidia's overlay was around 20ms between them. The biggest issue I had using DLSS3 was going over 120 FPS and getting screen tearing. DLSS3 was extremely impressive at certain points in the game where I was CPU or engine limited (my CPU is a 12700k at 5.4ghz 2 cores, the rest of the p cores at 5.3ghz with 5200mhz DDR5). This mainly happened in cities or a couple of times when the game seemed to only want to run at 60ish fps leaving CPU and GPU very under utilized. I didn't notice any visual glitches at all with DLSS3, although this game has a lot of walk and talk sections where the HUD is hidden so is probably fairly ideal in that regard. Most of the game DLSS3 was over kill, although if they add ray tracing in and or on the mid range or low end 40 series cards it would be very useful - at least to me. Overall I was quite impressed with it and it doesn't look like your TVs fake interpolated frames (soap opera effect) at all - it looks like real higher FPS to me in this game, but really wish they have a reliable way for it to function with frame rate caps and or vsync without destroying input lag even more. I couldn't get a FPS cap or vsync to work at all in this game with DLSS3, even forced through control panel etc. I 100% agree with Philip's assessment that it'll be worse than useless in some situations like competitive FPS etc and awesome in others like slower paced single player. I think for games like A Plague Tale it was a good fit, at least to me.
@@rdmz135 I really tried to notice a difference when using the dualsense and I couldn't while trying to nitpick. I suspect if I was playing something like Cyberpunk with keyboard and mouse I'd notice but dunno if I'd care enough to not use it. I'll find out when they release the expansion for it :)
Spot on here , if you are on a 120hz display you will have issues going forward as frame generation with dlss will put you over your refresh rate and you get massive stutter/tearing issues to the point it was unplayable for me I was able to disable dlss and increase resolution scale to 115 and then frame generation was really good as long as I didn't hit the 120fps mark but at that point I much preferred just using dlss quality as it felt a bit more responsive and there were less visual glitches with foilage and I could enable vsync and get a much smoother experience
Dude you are the best UA-cam er. When you beat dirt rally 2 and revealed only at the end that you ripped it on keyboard is still a highlight for me lol.
The way DLSS 3 currently works will make that very difficult though. Perhaps when Nvidia finds a way to generate good enough frames to insert between existing ones without depending on 2 frames (I'm not sure if that's even feasible for the forseeable future, but whatever, I'd love to be proven wrong in that aspect).
that would be horrible though, the only reason dlss3 works rn is because u r constantly switching between real frame and fake frame, so every other frame is a proper real frame, this is only possible when doubling the framerate (lets say 30 to 60, or 60 to 120), but if you are trying to turn 50 to 60, than the real frames just won't add up, 50 is not dividable by 60, wich means that the overwhealming majority of frames shown would be fake frames, because of that any artifact will be way bigger
@@Deliveredmean42 the latency problem is unfixable, from the nature of how DLSS 3 works, its generating a new frame based on 2 frames, the one before and the one after, even if the algorythm is perfect, and ads no latency at all, just from how it gets the information itll have to wait 33ms to turn 60fps to 120fps, while real 120fps would only have 8ms One solution could be to not relly on the frame after, just on the frame before, wich could allow you to reduce it from 33 to 16, at that point it would always be better than no DLSS 3, since u don't loose latency, u just gain smothness, but it still wont help when compared to real 120fps
Do NOT hit the Vsync limit or use any FPS caps while using DLSS3 frame generation. It will raise the input lag and it also introduce stuttering with a FPS cap. You need to stay below your maximum refreshrate only by increasing the graphics settings. It can get tricky when your display is only 120Hz max. I think DLSS3 is a feature for graphic whores, who like to play single player games. For example Spiderman at 5760*3240 via DL-DSR + DLSS2 quality + DLSS3 frame generation and all other settings maxed out. Looks absolutely insane and plays wonderfully without any obvious artifacts or bad input lag on an 120Hz OLED. Average base framerate while swinging near above the streets is roughly 45-65FPS and 65-90FPS with frame generation enabled in this game.
Does DLDSR come with any additional input latency? Don't remember if DF took that into account when testing that feature with DLSS, but in conjunction with frame generation the input latency could be even worse IF that were to be the case.
@@joos3D DLDSR is just AI-Downsampling and I don't notice any increase of input lag at all vs native resolution at the same FPS. With DLDSR enabled your base FPS will be lower than native resolution of course, which could be not enough for frame generation to run well.
so cool seeing this tech being a thing.. I think it's possible to separate controls and attach it to the generated image.. but can't wrap my head around it either. When everything is dependent on fps, I feel like the more complicated the game, the harder it would be to implement it.
Good video! I guess one of the points I was waiting to be brought up was the effectiveness of DLSS 3 when generating frames for low FPS games, like 30 for example. One of the things Digital Foundry found was that there's often not enough information in 30 fps to interpolate well in various scenarios, but it doesn't seem like that was your experience.
Ever since timewarp was added by oculus I have always wondered why it can't be applied to mouse movement glad someone else had the same idea. I think it's totally doable, extrapolation and interpolation should be combined just like vr does. extrapolation can actually give you lower lag then native rendering because you can just shift the image at the last possible moment when you wouldn't have time to re-render a whole frame the traditional way. at lower framerate the difference in lag between timewarp rotation and normal rendering will be dramatic, also mousefeel will be consistent at any framerate! and you won't get stutter. it's just win win win. It seems crazy to me that this hasn't been explored yet. you could have the mouse feel of 1000fps at any framerate!.
This is a great take on the technology. I could definitely see myself using DLSS3 in any game I would use a controller in, where I would never notice the additional input lag.
@@campersruincod6134 You notice it way less on a controller is my point. Joysticks are inherently less precise, require a certain amount of movement to activate (which can change depending on the age of the controller), and offer slower crosshair movement in general, with a different accelerating curves in every game. Mouse movement is basically instant and 1:1 with your crosshair. It's considerably easier to notice input delay on a mouse than a controller, especially considering many console games run at 30fps, which already doubles or triples the perceived input delay, depending on what FPS you are used to on PC.
Great video 3klicks, I always appreciate your in-depth analysis of everything. I miss your map analysis videos of CSGO, scrutinising all the changes was interesting
The input latency and the tearing would probably mean I'd never use DLSS3 as I'm quite sensitive to the feel of the mouse, competitive title or not. Thankfully if you have a 4090 everything's going to run well anyway
Tearing is a non issue with G-Sync below the max refresh (until they add framerate cup compatibility) and the latency is in line with what you get at 60fps or even higher depending on a lot of factors, especially if the game don't support Reflex; but it's a thing one have to try on their machine to see if the tradeoff of a bit of additional latency for a massive improve with stroboscopic stepping and motion clarity is worth which is game, performance, machine dependent and subjective
Framerate cap also works with G-Sync. Not ingame, but in the nvidia settings and you need to set a cap that is power than your Monitors refreshrate. For example 120 Fps cap in a 144Hz Monitor.
Fun fact: you can technically upload & play 120FPS video to UA-cam, you just have to convert it to half speed before uploading, then play it back in 2x speed on UA-cam.
@@2kliksphilip Perhaps something changed or perhaps it doesn't work on mobile? I just tested to make sure by recording a slow motion vid on my phone while playing back a 60fps YT video at 2X speed on my PC, switching from 480p (so 30fps native, 60fps effective) to 1080p60 (120fps effective) during it. It definitely is playing back at a proper 120 FPS.
I am actually hyped on how Nvidia wants to top this in the next generations. Last year I upgraded from a 980ti which I used for 6 years to a 3080 to use it for another 5 to 6 years. I can only dream how sick my next upgrades jump will be if they already improved this damn much.
Given how much they are complaining about Moore's law, IDK what cards will be like in 6 years. The improvement this generation was nuts, but the cost hike was too. But with Intel Arc continuing on as planned, so possibly 3 major players in the GPU space going forward (and we've all seen what Intel can do when pushed by competition), the future isn't looking so bleak. We're reaching peak pixel density on monitors. No one needs 8k. 4k with supersampling is visually identical at reasonable sitting distances, though the ability to get up close and se more detail is nice. So in 6 years, we might be sitting at endgame visual technology. 6 years ago we had the GTX 1080, which did all 1080p titles at 60FPS+. Now we've got all 4K titles at 120FPS pretty much. 8K 120 RT in 6 years??? Just gotta sell a kidney to afford it!
I think this has a lot of potential on consoles where they're targeting 120fps at high resolutions. It would need to be an option obviously to disable if you wanted but it could also help with new releases a few years down the line and getting 60fps then.
You are exactly right about taking input data and reducing the latency. Since they are using a machine learning step, additional data can be incorporated with the proper training. DLSS 2 was improved this way by adding more metadata like motion vectors. It may require some level of mapping for that particular game into a standard input dataset like defining what control moves the view and character or jumping etc. This way a future frame can be created based on the latest sampling of input, and potentially a huge increase in the amount of generated frames too if the tensor cores have performance headroom. The game logic/physics would also need to run at this higher rate to actually benefit from the finer control. This might be a while (DLSS 4/FSR 4), but in general deep learning solves problems like these pretty well, when there is diverse non-cohesive data, if built right.
You're correct. One simple prove is that people all watching game play on 60 fps at most at youtube or twitch, nobody would say it feels lagging or not smooth. Frame rate is all about the input responsiveness.
Most GPU work is usually queued the display of frame-to-frame is entirely dependent on whether the GPU itself is ready to display it (and as devs we don't really have full control over this); DLSS is just adding another unit of work to that queue and it can be beneficial to latency if you aren't already "fast" so to speak. Framerate and input latency aren't exactly glued to the hip, input is buffered into the games update-cycle and draw-calls usually operate on an entirely different loop. Can read up on this by searching "game update loop vs game draw loop" and more often than not for "good" engines they'll have these running on different threads (which can then spawn more threads, etc.). A popular design for our era of games usually involves batched multithreading, this design usually speeds things up from a processing perspective but we have to wait for vertical sync (or for the GPU to effectively "push" the frame and the monitor to display it). So we queue that work up and process as quickly as we can on the CPU side; there are likely many cases where players are looking at frames that are 1-3 frames behind too or pieces of said frame may be utilizing work cached from a previous frame. There is CPU work being done for a lot of reasons too, not all of it is strictly game logic but just synchronizing data & preparing it to be sent off to the GPU (which isn't always available to be sync'd too). I am not 100% familiar with the DLSS workflow, but if it's using previous frame data it'll almost always mean input delays but that "might" be okay because for a 60 FPS game the input can be sync'd as quickly as 16.33ms~ so if you can boost frames to 120 FPS and it adds +4ms to input you have a new input latency of 12.33~ms (which is an overall net improvement). It's less desirable if said boost isn't sufficient though, say it's 80 FPS instead... then you are looking at 16.5ms which means the game will generally "feel" off. This is already very speculative of how each engine does it's input reads. Inputs are usually evented from the OS, then the game logic will keep a buffer of the most-up-to-date inputs collected from OS events and then compare the previous frame buffer with the newest frame buffer and store that information for the update-loop to read & process. Meaning usually more often than not, 500 FPS is basically "peak" input latency; after that is hit your input hardware usually isn't quick enough to pick up any changes (let alone the OS + Engine), I would guess realistically minimum latency is achieved around 300 FPS with human beings being able to react effectively around ~240 FPS (our lowest reflexive reaction time is like 8ms; in a perfect world this would be about 125 FPS, but because of buffering and such our targets need to be higher). Hope this helps to answer some questions; just a hobby game-dev.
I think I know why DLSS 3 includes HUD elements for its frame generation. If I recall correctly from the DLSS 3.0 presentation, they specifically said that the frame generation is made possible in real time by using the final framebuffer instead of 3D data. They only use the motion vectors for masking, but the AI for the interpolation only uses the final image. This is why it can not make a difference whats on the screen at the time. At this point I don't think its possible to come around this issue, since adding new information to the process (like Z buffer or other 3D data) would make it more precise but also slower, defeating the purpose of the interpolation.
THANK YOU for this video. I do not care at all what it looks like when you slow the footage down. I want to know what it FEELS like as a gamer. This video is perfect, and hits the nail on the head of what gamers actually care about: the feel and general experience of the new tech.
I saw your comment about separating HUD elements from the game world, this is already a thing in most Engines like Unity/UE. More things than you might think are separated using a priority system. Afaik, the HUD is rendered first, then rougher objects like walls, roads, houses etc, then translucent objects, and lastly refractive objects, mostly based on the type of material the object uses. And so the cleanest solution to DLSS3 interpolating the HUD would be integrating the frame generation into the game engine itself rather than having a universal frame generator for every single game.
I played through RDR2 with DLSS on and it was pretty good. The biggest issue I noticed was when there were a lot of smaller "high contrast" areas, for example grass on slightly snowy ground. When you moved the camera it blurred a bit and when it stopped immediately went back into high contrast so that was notable. Otherwise it really didn't show that much. From a few other games I did notice some of the smaller shortcomings. In "Resident Evil 8" smaller lighting details like lit candles vanished on a greater distance. In "Supraland: 6 inches under" some glass panes have a grid pattern on them, that became noticeably rougher when you stood just a bit away than on full resolution. I think DLSS is a pretty good tech and sure hope they manage to iron out the flaws with it some more in the future. I'm no programmer so I have no idea if that is possible but it may help with preserving some details better if certain items in the environment could be tagged in the engine to render as a higher rez overlay and then put that on top of the rendered frame.
@@2kliksphilip The problem is that they have to rely on clunky and intricate solutions to get desired performance and appearance out of consoles, solutions that work well on a specific case and can fell apart at the slightest change, high chances if ported to PC
Good video on graphical topics, as usual for you Philip. You always manage to explain stuff like this in a down to earth way, so I like coming to you for news and developments a lot. Say... Speaking of news and developments, you've mentioned Cyberpunk a lot in these videos lately. Do you think you'd ever do an update video on what you think of the game after it's been patched now? I'd love to know your thoughts on the world and story, whether you think after patching it can be viewed as a good game. Recently played through it myself and I really quite liked the game.
@@2kliksphilip Thanks for your reply! All valid dislikes for sure, although I will say I think the game directly critiques the whole "Die to become a legend and have everyone know your name!" stuff. I think the death of Jackie is supposed to say, "Fly too close to the sun like Icarus, and you're gonna get burnt and make your friends sad when you fall out of the sky." They can all be edgy legends or whatever, but fact is they're gonna end up burnt and eventually friendless. The best ending, The Star, where you leave with the nomads, is precisely the best ending because the characters finally disengage with that toxic culture of sacrifice in the name of glory and fame. Can't comment on the gameplay loop, I don't have the best context for what makes a good one, so it's hard to differentiate and say what's good or bad. I could see things getting boring if you grinded them out, yeah. But I did like how the game wasn't super harsh on building your character, one of the few games where you can allocate skill points and not be super duper punished for placing them in the wrong spots, which suited me well because I'm kinda dumb and bad at planning, so it was nice to have something be on my more casual level. Anyways I could type out a bunch of my feelings, but maybe when Phantom Liberty drops in 2023, you could revisit it and make a more detailed video on how you feel about the game compared to launch and go deep into how it struggles in some areas and why it's just not to your personal tastes? Maybe when that DLC is out, you'll have an oppurtunity to chime in with how you think they still have a core flaw within the gameplay regardless of silly DLCs and patches. Always really liked that type of content from you since you're a lot more down to earth than a lot of youtubers out there who get caught up in the hype and in the hate. Just a fun idea for a relatively unnecessary but possibly quite interesting video.
@@2kliksphilip Honestly? Fair enough. I also can admit that even as a big time enjoyer of the game who sunk in 94 hours and would totally drop another 100 over the next year, some fans can really just excuse any issues and endlessly hype up the game. I think I can enjoy the game because I utterly avoided any hype, I don't follow Reeves or any other celebrities, I haven't played the Witcher 3. Going into it with no biases and no hype underneath my belt really set me up so I didn't have expectations that the game could fail to meet. Not wanting to wait to the end to have everything come to fruition is true, and I do feel like they should have given us a choice to be the only not edgy person around, to really have V be this level-headed person in a city of hormones. The endings can kinda piss you off, too. They are all sad and also quite sudden, real whiplash. I didn't feel I had a lot of choice. I think that's real bad for a game that touted a lot of choice and possibility, but good if you want to justify it as "That's how a Cyberpunk world works, you don't truly have a choice and everything sucks." They were still emotionally impactful to me, but honestly? In order to receive resolution for the characters and actually feel satisfied, I've had to go and read fan works which provide a less ambiguous ending and much more elaboration on the feelings and thoughts of the characters. Isn't that a bit goofy? I think a lot of CDPR's work here is getting done by fans. Still, I'd overall say I feel better off for having played the game. That's likely just due that it's suited a lot to me as a person. I'm young and still have a bit of that edgy spirit in me, that constant urge to make some meaning with your life. The main story fell a bit flat with me, but the side quests with the romantic characters were very impactful to me, made me realize I quite want a relationship with a partner who can really act as a support, act as someone I can count on, a partner who will also attempt to romance me and put in the effort to be reciprocal and caring. I don't think Cyberpunk is the only game that could have shown that to me, but I'm still glad to have experienced those quests. Sorry for the paragraphs of text btw! I get a little excited and have a lot to say. :)
@@2kliksphilip Hmm, well I have a theory. From what I know of you, you seem like a person who has surrounded yourself with people you can depend on, like good friends who you've been friends with for years, a partner you own a house with, parents you share interests and connections with. I think it was impactful to me and not you because I don't have a lot of people I can depend on, I live a pretty isolated life and the people I do interact with have let me down a lot, proved themselves as people I can't really rely on. I think the game then is impactful to me, because I experience something out of my norm when playing: people really going out of their way to care and help. That's a bit bleak, I know I know, but I like that ability to put myself in the shoes of this character who accumulates this massive network of friends that are really great people. On the gameplay, I think one thing that turned me off was the FOV, felt kinda sickening at times and I can't easily explain why, I felt like I had very little peripheral vision even at 100 or 110 FOV. Also, while I think there are some good things about the weapons, they very clearly could have been explored so much deeper and made so much more interesting. Like you do, I think I might end up forgetting that I've ever used certain Iconic weapons, many just aren't as interesting to me, for example they don't come nearly as close to things like the Daedric artifacts in Skyrim in terms of memorability. And Daedric Artifacts aren't even that cool, most of them suck! Good graphics. I have a 4770 and a RX 580. Run at like 37 avg fps on Low 1080p but damn, game's eye candy. Maybe some day I'll be playing at at 8k, like you.
I was hyped for the DLSS3 mainly because I game on a strobed display, and having a proper strobed experience on a 240/280Hz panel demands that you have an FPS equally as high as your refresh rate, which is asking a lot from my CPU in quite a lot of games where I'd be lucky to even get 144+.
Very interesting video. I had been putting some armchair theory towards figuring out what to think about DLSS 3. But there's no replacement for hands-on experience :) I think I would tend to just avoid it. I 'always' disable motion blur (and depth of field, and v-sync).
Finally someone who explains the issues I feared would be pretty important, but everyone kept dismissing. DLSS 3 is great for 4K 120hz gaming but yeah, I think a 4080/4070 will be powerful enough to not need frame generation. Maybe SpiderMan and movie games look better but everything else...
Timewarp for VR I believe actually comes at a bit of an extra cost. It's used for Wireless VR headsets to mitigate the massive latency wireless introduces for VR. I believe it actually renders at a higher resolution outside of what you set it to. At 1080p, it might render like 1200p, but NOT show you the extra edge space it's rendering. The extra edge space is reserved to pan across when moving your head. Calculated in headset, individually from what's going on in game. Hence the perceived latency reduction. The problem is that this won't reduce all other input. Gunshots and movement will respond slower, and so will jumping, or other interactions. It only compensates for head movement, and in Cyberpunk it'll only compensate for the player camera movement at like an extra 5-10% performance cost for the extra pixels rendered that are out of bounds 90% of the time. You can actually bug out Timewarp my moving your head really fast in an erratic fashion. It'll show black lines on the side of your VR screen where it's panning to but nothing is rendered if you move too fast. This would be difficult to mitigate on a monitor and much more noticeable, unless AI just filled in the gaps.
I found that when using higher sensitivity’s u could feel the latency more in fps games, and with a slower more csgo styled sensitivity that most pros use made the games feel way better with DLSS 3 and slow sensitivity. Thanks for this video ❤️❤️👍 I am having lots of fun doing almost the same stuff your doing right now, hopefully we will get this to gaming consoles in the future by xbox and sony.
As you already said, it depends on a lot of things and especially the game itself. In my opinion dlss3 is almost a no brainer in games that are like Cyberpunk, RDR2 or other singleplayer adventure games because the input lag you get is simply not a big enough deal while in CS or Overwatch or any other comp game you obviously don't want it on since it's that precious input latency + you most likely already have your monitor's refresh rate fps or higher to begin with
Thanks for your analysis! I came to the same conclusion as you and I didn't even use it yet. I do have a bit of experience in FPS games, dealing with how poorly my 3950X microstutters in many games, plus plenty of interpolation and video techniques through editing and video encoding. I was presuming that the visual artifacts were going to be a non issue (since VFX + video editing is all about focusing the viewer's eye), and that the input latency was going to be game dependent. I wouldn't exactly call FS2020 a game where those extra 20-40 milliseconds matter, as I'd imagine that getting 50+ FPS in LAX or JFK would be much better than getting 25 FPS.
a VERY important thing to remember with the DLSS artifacts when you're actually playing is how the human brain processes visual information. your brain fills in the gaps subconsciously, so if something is a little off your brain's basically going to auto-correct it anyway. that is, until you slow down and look at the frame on its own.
Holy shit The idea of using async timewarp to reduce mouslag is Pure Genius. How has noone thought about this before!! Grab mouse input **right** before vblank, and "distort" the screen to match the mouse movement missed since start of frame! hooooly shit.
Best thing is there are so many blind people on both sides hating or loving a tech they never tried themselves. People just need to learn that a new feature is always good to have. You don't have to turn it on. It doesn't affect the person not using it. I don't know why those people hating a tech they never tried want a certain tech never developed. Also same goes for the lovers. It isn't just positive. Every game that has it you have to try it for yourself and then decide to either use the feature or not. Don't know what the problem is with so many people.
Love your take, its always "you dont need that gpu if you have a 1080p monitor" that or "you dont need that good of a processor for your gpu "that. What if I want to say get a very high end system for 1080p, to run my 240hz - or even 360hz monitor? Smoothness, responsiveness and general feel is the most important factor for me and its why I cant stand the games with floaty, imprecise aim mechanics.
nothing better than 250+ fps, stable, no hicks, no latency. that's why i love the last or old generation of games. they run smoothly. while i appreciate new Game Engines and what they do, i really miss the simple Quakish engines that just run instantly. on a 240hz+ monitor, turning around ingame, it will move the whole picture without any lags or hicks. experienced players will even notice mouse input lag on certain games or engines. "Fighter pilots have been tested and can identify the type of plane in an image with just one frame at 255 fps. Noticing a flash of light can go into the 1000 fps territory." I'm not a fighter pilot, but i want an engine/graphics card to offer me the fewest latencies and the most frames possible. like in real life. fancy graphics ? not necessarily needed in competitive plays.
Game developers could maybe focus more on general engine latency in future titles to compensate for these features. Digital Foundry tested latency a few years ago and shared the data in DF Direct Weekly #82 (at 47:25). Latency varies greatly between titles, some do a way better job than others.
It would be nice to see objective data like a input latency tool attached to the monitor, or a high speed camera to give us some numbers here instead of it based off feel. I trust your judgement, but feel can be off sometimes and Id like to see its actual impact and how its changed with reflex off/on. I feel the same with very high framerates. I'm more concerned with mouse aim feeling snappy and instant as a result of high frames. The game looking smoother in motion is nice, but its really all about that input delay feeling better. Nvidia is on point though with their tech and this is version 1 of this new feature. They are very aware of response times with reflex, I have faith they will improve on this to the same degree DLSS1 was improved on with DLSS2.
Knowledgeable and skilled sources done some tests and the latency with DLSS 3 frame generation is close or can even be lower than without DLSS 3 and Reflex off and is worth to remind that AMD don't support it (nor have something comparable) and many games also are missing it. The problem of comparing what you experience at high framerate is that along the lower latency you also get reduced stroboscopic stepping and improved motion clarity which I guess will affect your perception of responsiveness, DLSS 3 is a tech that need to be tried in first person.
you can really see the mouse input and game frames being separated when using (in my case) the quest 2 with a pc, when the games stutters, you can still look around the frozen frames
Using DLSS 3 right now on my 4080 on Plague Tale, Witcher 3 and Portal RTX. With Nvidia Reflex on you really have to force yourself to feel the difference at most times when compared to DLSS 3 off and without Reflex. Yes, Reflex can improve non-DLSS 3 responsiveness even more than native and if compared to that the latency can be ever so slightly more pronounced but TBH it really isn't as big an issue as people are making it out to be and you will feel that if you just start playing a game for a few mins with DLSS 3 and Reflex on and experience it yourself. Also, the crazy boost to frames makes it even less noticeable and worth it. So I did a test with the Nvidia Frame View tool and here are actual numbers: Game: A Plague Tale Innocence @ Ultra + DLAA Res: 1440p Case 1 - No DLSS 3 / No Reflex FPS = 100 to 103 PC Latency = 30 to 33 ms Case 2 - No DLSS 3 / Reflex ON + Boost FPS = 99 to 101 PC Latency = 27 to 30 ms Case 3 - DLSS 3 / Reflex On (can't be changed to Boost for now) FPS = 160 to 163 PC Latency = 37 to 40 ms I tested this out in another area too, max latency difference is the same (at around 10 ms) but the FPS boost is insane (60+ fps). I even tested it without Reflex and without DLSS 3 and saw that sometimes the game has the same latency as it gets with DLSS 3 and Reflex ON so there's no difference if you compare those two cases. Now, I'll be completely honest and say that I personally CAN feel the difference of 10 ms if I concentrate on the mouse movement really hard and keep testing it out with DLSS 3 on and off a few times in succession, but I am so hard pressed to actually feel any real difference during gameplay that it's really irrelevant. I have a 144 Hz screen and I actually find the game so much nicer to play with a higher frame rate than worry about the 10 ms added latency which I can't really feel anyway due to the smoothness of the visuals.
Quick note on 2:15. Racing games are in the same spot as CSGO where you need a lot of fps for several reasons. One is obviously input lag and response time. Another is that the force feedback of the wheel needs to get data from the game so low fps will give you worse ffb. In other words, simracing needs way more than 60fps in a competitive field.
Just tested it in FS2020 with target frame rate 60 and it's working pretty well! The latency becomes really bad (comparable with my TV with its own frame interpolation enabled), but it's still playable and overall a much smoother experience than the 45 - 55 fps I normally get due the main thread being saturated. Note that I had to force v-sync in the NVidia control panel (as Digital Foundry found out) and also disable RivaTuner's frame limiter (which in my case caused 1 frame to drop each second, resulting in slight, but noticeable stutter).
Exactly my thoughts. Dlss3 would make sense for consoles or situations in general where we want the best possible graphics and can't get more than 30 fps. Nvidia should make it work with vsync and target a specific frame rate like vr games do with double buffered vsync.
It would makes sense for them to eventually introduce adaptive interpolation, which kicks in when you have low fps for long enough. Maybe it can already see the "real" framerate, the only thing left would be to calculate some rolling average based on it. Doubt they'll spend time on that though, as super-high-fps benchmarks is what sells their cards
There are just too few games out with DLSS3 rn to test it properly. The issue I also have is, that most of these games have a fixed character camera, when I'm then moving my mouse it only needs to pull the info from the depth buffer and basically only needs to work out what gets priority in taking up more screen space since it not like updating 5 out of every 6 frames generated, I think the overhead is pretty minimal. We's need a game where if you have a fixed character camera, and object in the distance needs to raise from a very flat position while it's raising to like 45° angle then flip to the side and rotate continuously and also change the distance to the main character drasticly back and forth. Only thing I can imagine doing that in a game and with this dimensions would if you had an arcade-type yet fighter game...
DLSS 3 seems like something that will allow for 4k 120+ fps gaming in games like A Plague Tale Requiem. At least on a card as insane as the 4090. But it will also allow future games that look even better, and that are even more intensive than Requiem to get solid frame rates.
I personally think DLSS3 frame generation is great for games like Flight Sim where they are heavily CPU bound and doesn't require a high level of quick accuracy. to me its a game changer in that instance. It may differ in the use case of the sim though, people who race in the sim or low level flying may feel different.
The way DLSS works for now (interpolating between 2 already gpu-generated frames) make irrelevant having the AI generated frame taking the user inputs into account, since the next frame, the the input of the user from "back then" has already generated and will be used next. For it to work kind of like VR reprojection, you would need for the frame generation to work base on information from the last displayed frame (without knowing when the next real frame will be there).
DLSS 3 reminds me of DLSS 1, it's interesting tech, it's promising, it could one day be really awesome, but currently it's only good for some people, on some games, and in some cases, like how DLSS 1 was only used for 4k and made the image noticeably worse for a good increase to fps, was slightly better than just lowering the resolution, but for most people was pointless
I think the best usecase is in Games like MSFS 2020 where its not even the GPU that limits the FPS and the Gameplay is slow. DLSS3 boosts the FPS to a level where even in VR you can have stable 90+ fps to feed the Headset.
Regarding latency, this makes me wonder if games could have an option to enable DLSS3 only in non-latency-critical situations (driving, cutscenes, …) and disable it for shooting segments of a game. Of course, resolution or other graphics settings would have to be turned down automatically to allow for higher framerates during shooting segments, as well as having a fast enough CPU to avoid bottlenecks. I think it's still something worth trying :)
Now that's an interesting idea. I can see myself actually using frame generation in games like Cyberpunk if I could toggle it with a single key press and/or have it automatically toggle based on certain conditions like you suggest.
I wish we could use dlss 3 as a way to fix frame drops, like if you have let's say almost consistent 100 fps, but sometimes drop to 80 or so, dlss could be applied to some frames to bring it back up for a more consistent experience and in cases where there's frame drops anyways, slight changes in responsiveness should matter less
I disagree, this is where responsiveness matters most! Moments of low framerate outside of just poorly optimized games often coincide with times when alot is going on at most, like when alot of enemies are on screen, or you're in the middle of the pack in a racing game. You don't want your input to suddenly change in those moments.
@@robertewans5313 I mean the responsiveness would drop there anyways, in the case of dlss 3 it would just drop responsiveness instead of visual clarity when you have frame drops, assuming you use reflex of course
That would just make it worse, you will suddenly have spikes in input latancy and that will feel like microstutters. If you are stable around 120 fps and it suddenly drops to 60 and up again, what you notice is the input latency (micro stutter) and not that the image looks less smooth. So adding DLSS 3 to this will just make it worse
Async time warp is easier on VR because algorithm knows exact position of camera that your head is at from tracking sensors, which the headset itself feeds to the game engine. On traditional games it is more difficult because game engine needs to process the position of camera based on your inputs and feed it to the DLSS. It is more difficult but totally doable. Also foveated upscaling is another option. You can combine DLSS with foveated rendering for PC and VR What i really like about foveated upscaling concept is that it will make artifacts - which are already hard to notice - practically invisible
Yep, VR use sensor data and the quality of the generated frame is way worse than DLSS 3, the reason they gone for interpolation instead of extrapolation like in VR headsets is quality
@@Stef3m FPS matters little when it hurts games responsiveness. Advantage of extrapolation is that is increases the responsiveness at least the feeling of responsiveness
@@erenbalatkan5945 The lower the latency the better but I wouldn't use "hurts" to describe the effect of the latency added by DLSS 3 frame generation, in VR the screen is mounted on your head and camera control connected with head movement so latency and not dropping frame is fundamental or it generate motion sickness but the 90fps requirement is also for stroboscopic stepping and motion trails which are also way more noticeable in VR and that tech like DLSS 3 frame generation fix. There is a reason if I mentione quality, async time warp frame quality looks crap but in VR the high degradation is worth if allows to not drop frames, on desktop is a different story
I love these videos and thank you so much for helping inform us!!! Would it work to upload a short 4k 144hz video file online so we can download it and play it offline to see the effects more clearly? Cus yt's framerate locking is annoying
My first thought regarding DLSS 3 was that it's most likely only relevant in scenarios when native/DLSS 2 (at a reasonable quality level) only gets you around 40-60fps in games played with a controller or where you don't use precise mouse aiming. Mouse aim would get annoying due to the input lag and you don't really need it in general when you already have more than 60fps.
*If a game is built from the ground up with DLSS 3 in mind,* the developers can render the UI in a separate pass so that DLSS 3 never even has to deal with it. I can only expect every new game with DLSS3 will do exactly that, and in game where the UI is part of the world, like with Dead Space, there **shouldn't** be any problem at all. We'll see!
Regardless I still think frame generation is best suited for adding additional frames to already high framerates (and really should have another name). If you have a 240 Hz screen and you get perhaps 100-120 FPS but the motion clarity is better at 180 FPS upwards then for that case frame generation could be a real boon - even with 4090-like GPU performance you're not going to hit near 240 Hz in every scenario, plus you could always supersample the image instead of upscaling - and latency already would be low enough for the additional latency of frame generation to not matter.
Thank you! It's crazy the amount of people who dismiss the improvement in motion clarity of higher frame rates/refresh rates like 240Hz and beyond. It's massive.
@@bungleleaders6823 The people who dismiss this are the ones enabling motion blur in games🤣 artificial motion blur can make things look smoother by making stroboscopic effects less noticeable but using it causes eye tracked motion to be unnaturally blurry. The only fix to have clear motion with no stroboscopic effects at the same time is ultra high refreshrates + framerates. which interpolation can help us achieve. It's basic stuff but people still have terrible misunderstandings about it.
@@brett20000000009 I could see something in the future that uses a camera to track the eyes position and only selectively blur relative motions. But that would require very low latency operation so we would still need ultra high frame rates/refresh rates. Btw, thank you for this comment, it's a breath of fresh air to see someone who understands the stroboscopic stepping artifact!
@@bungleleaders6823 np! good to see more people getting it. eyetracking hardware assisted per object motion blur could make alot of sense for vr which already needs eyetracking. alot of people only focus on persistence blur but stroboscopic effects are equally important to motion. I think a compromise is using per object motion blur and only blurring objects that are moving way faster then a human could follow, or if the motion is very brief. some sort of algorithm could make a decent estimate and it would be much better then just fixed motion blur. The way camera shutter blur works doesn't really work well with how humans watch video on a display imo. it's only accurate for the cameras perspective and no one watches video with their eyes locked to the center of the screen.
Scientifically one should target only non competitive games and around final 120 fps from around native 70 fps. This is the correct ratio because Lcd monitors start to clear motion from ghosting and blurring after the 100hz mark. Even the Back Light Strobing doesn't work under that because of brightness flickering. Watch images at RTINGS. Also those fps are enough for less artifacts. So you overall trade 5% interpolation artifacts for smoother fps and crisper image with 70% less motion artifacts.
i guess DLSS3 will be strong in 3-4 Generations, and Strong in Slow games like MMO's or Singleplayer Games. The RTX4000 will be in few years a RTX6000 with Downsides (Inputlag) but still playable. Imagine the 1080ti with DLSS2 and DLSS3, it would be a Monster, even today. 1080p/1440p
I think the delay on mouse movements "can" sometimes be even good. Games that are slow paced and want to give the player some handicaps like condemned criminal origins was one game where I felt it was done well. There are many games where this can be somewhat fine (especially single player games), but this weird mouse smoothing is for some odd reason always added in places you would not expect it. Hardware mouse acceleration in windows, OSU!, Starcraft Broodwar, overwatch 2... I can definitely understand why a CS Go player would hate it.
If we didn't have more in depth creators like you something like dlss3 could've destroyed competition without real performance gain, because most people would just see the increase in fps and think its better while its not that simple in most cases, in the future it will be hard to make new technologies like that understandable for the majority of consumers, and if consumers buy without the real knowledge of the performance impact of these things it becomes more of a marketing competition than a performance one and nobody likes that.
Couldn't agree more, at first I thought i was missing on with rtx 4000 series but Im not really the person that would like the imput lag difference. I guess I'll buy amd form now on.
I'm playing the new A Plague Tale and with my 4090 already gets a nice high frame rate just using DLSS 2 on ultra settings using 4K. However there are a few times where the framerate will just tank into the 60s which are very noticeable and jolting for me. With DLSS 3 enabled pushing my framerate well into the triple digits, these same, rare, and recreatable instances where framerate tanks, go nowhere near double digits therefore gameplay is uninterrupted and not jolting. So for me the extra latency is acceptable and welcomed at least in this title.
yeh the input latency is a huge turn off for me on any game. bugs me more than any other setting. Seems crazy to me that dlss3 is only on 4000series cards when it doesn't actually need it right now lol.
I have a 6600xt and a 4k tv. Sometimes I hook up my pc to the tv and the 6600xt cant get more than 30-40 fps in 90% of the games I play at 4k high settings and I limit the refresh to 30hz and use the Tv's truemotion feature or whatever it's name is to get the 60fps feeling. It works amazing, yes you can feel the latency, but it's very smooth and now that it's integated directly into the gpu, with actual good tech to take advantage of it I'm pretty hyped to try it out.
Exactly ALL of my thoughts on this tech in one video, wild. Input latency is paramount for me even in singleplayer games
I’d understand in Half Life but I dont see why it is so important in non shooter games
@@henrik3775 sluggishly moving your character isn't fun no matter what genre it is
Input latency has always really bugged me. I personally hate it worse than lower framerate.
My old gaming monitor was 1440P 144hz (TN panel) but for some reason it had a lot of input latency at 60hz, so having around 60fps would feel really bad on top of no variable refresh rate and the low fps. Upgraded last year to one with really low input latency at all refresh rates and it's so much more tolerable when I can't run a game at high frame rates. I almost don't mind it now
@@kendokaaa what do you mean by monitor with low input latency? A monitor doesn't even have any inputs? How do you shop for such a monitor?
@@SiisKolkytEuroo an HDMI or displayport is an input, and the time to process and display the actual image is the input latency
@@SiisKolkytEuroo I think they're confused and are meaning to talk about pixel response times, as that's primarily the issue that non oled panels face with VRR. Most VRR panels SUCK at lower refresh rates and cause it to be blurry. I think this is partially a reason why a lot of people who go for a high refresh rate monitors then think 60 fps is unplayable after. Sites like rtings offer reviews of pixel response times over the range of refresh rates, max, 120hz,and 60hz.
This is the reason I simply cannot wait for proper OLED panels to high the monitor market. I'm assuming QD OLED with the way the market is trending.
@SiisKolkytEuroo I guess you guys have never read an in-depth monitor review like on Rtings
Input latency from just the monitor receiving a signal and displaying a change can be measured on its own. That shouldn't be very high as there's additional latency added by the game engine and your refresh rate. For some reason, some monitors behave badly at lower refresh rates, adding additional latency that wasn't there at higher rates. My current MSI MAG274QRF-QD was measured as having 3.8ms of input latency at 165hz, and 9ms at 60hz. This is considered very good.
My old monitor would add something like 20-25ms at 60hz, instead of the 9ms on the MSI monitor. You can definitely feel that when combined with the time for frames to be rendered and additional latency caused by the game (some games like RDR2 add a lot).
Thankfully, most monitors don't have this issue and a few in-depth review sites measure input latency at multiple refresh rates as well as with VRR so you can know before buying that it'll be a problem or not.
Oh man the amount effort you put in making two 15 minute(ish) videos a day on such topics is mind blowing, well done Phillip !
Keep it up 👍
@@2kliksphilip 3kliksphilip better
@@2kliksphilip I've thought that an interesting thing to try would be to separate the screen into different elements, and then update each of them at different framerates - a low fps is hard to notice when the screen isn't moving very much. so lower the output fps intentionally to parts of the scene that don't move quickly, to free up processing power for the parts that are moving much faster, and could therefore benefit from a higher fps. (eg. in a game like doom, assign lower output fps to the gun model, hud, and stationary objects, but a high fps to the background and edges of the screen. or if you are moving slowly, for example exploring in minecraft, most of the background can also be rendered at a low fps, and have the edges of the screen go up.)
a big issue with rendering the same object at multiple framerates is obviously tearing. this is why it would need to be split up by element. Another issue with this would be recording, you wouldn't want this to be something that gets recorded as it would look silly when going frame by frame, but there are other things where this is also true and we use them anyway. I'm also not entirely sure
thank you for this video phillip
I find the input latency these options have is a big turnoff for me; as you said, it is very noticeable, and bogs down the gameplay. That said, I do like where this technology is heading.
I actually don't like where this is heading. Don't get me wrong, this technology is absolutely impressive. But to be so greedy for smooth visual fidelity that you are willingly accepting generationally increasing visual artefacts might eventually end in a decreased gameplay quality, because even though you might not perceive the loss of quality consciously, the overall experience is going to be "kinda off".
@@finn6612 Well when DLSS 1 came out everyone said it sucked and not to use it so there is a chance In a few years they get the latency down to an unnoticeable amount.
@@finn6612 it will improve. I’d rather be able to use it now instead of them waiting until the product is perfect.
@@DevNug DLSS 1 was literally being compared with smart sharpening filters like contrast adaptive sharpen back in the day and was LOSING those comparisons. Now it's something that's so good it's a no brainer to turn on at higher resolutions.
You're right, things improve very quickly.
@@DevNug latency will not be improved much. 30fps using DLSS 3.0+ will never feel like 60fps.
I think this would be most useful for really old games like okami that have their physics tied to their frame rate and could use the frame rate boost even if artificially.
This is perfect for old games
True, though unfortunately it relies on in-engine motion vectors that don't seem like something you could really just mod in/inject like you can with FSR 1.0.
@@Leap623 no, you could mod it in, it would just have to be game specific mods that take a lot of effort
@@theneonbop like FSR 2.0 in Red Dead redemption 2
If you take that much effort to mod the game you could remove the artificial engine cap as well. It might be even simpler.
Thanks so much for pointing this out. I have the exact same problem with DLSS3 image interpolation. If you already have high frame rates you don't need even more to make it even smoother. What matters most (at least for me) is the input lag!
Agreed. This tech is I think useful if the specific game is already have low input latency thanks to its well optimized engine, isn't some competitive game where reaction is super important, has Nvidia Reflex support, and is CPU bottlenecked to 40-60 fps so through DLSS 3 you at least get a smoother image at around 100 - 120 fps.
@@weaverquest it's not useful at all why would you use dlss3 on 4090 why not just use dlss2 and if you use dlss3 on lower end then it scks
Your reviews, opinions are the most important for me in the whole internet. Why? Because you do it from an ACTUAL gamer perspective, with focus on the feel instead of technicalities.
I finished A Plague Tale Requiem on my 4090 today at 4K120. I used DLSS3 with DLAA OR DLSS2 at quality with no frame generation swapping back and forth through out the game. I played the game on a dualsense controller. Using a controller the increased input lag of DLAA + DLSS3 vs DLSS2 quality wasn't noticeable for me, the difference in input lag according to nvidia's overlay was around 20ms between them.
The biggest issue I had using DLSS3 was going over 120 FPS and getting screen tearing. DLSS3 was extremely impressive at certain points in the game where I was CPU or engine limited (my CPU is a 12700k at 5.4ghz 2 cores, the rest of the p cores at 5.3ghz with 5200mhz DDR5). This mainly happened in cities or a couple of times when the game seemed to only want to run at 60ish fps leaving CPU and GPU very under utilized. I didn't notice any visual glitches at all with DLSS3, although this game has a lot of walk and talk sections where the HUD is hidden so is probably fairly ideal in that regard.
Most of the game DLSS3 was over kill, although if they add ray tracing in and or on the mid range or low end 40 series cards it would be very useful - at least to me.
Overall I was quite impressed with it and it doesn't look like your TVs fake interpolated frames (soap opera effect) at all - it looks like real higher FPS to me in this game, but really wish they have a reliable way for it to function with frame rate caps and or vsync without destroying input lag even more. I couldn't get a FPS cap or vsync to work at all in this game with DLSS3, even forced through control panel etc.
I 100% agree with Philip's assessment that it'll be worse than useless in some situations like competitive FPS etc and awesome in others like slower paced single player. I think for games like A Plague Tale it was a good fit, at least to me.
Dlss 3 is perfect if you use a controller
@@rdmz135 I really tried to notice a difference when using the dualsense and I couldn't while trying to nitpick. I suspect if I was playing something like Cyberpunk with keyboard and mouse I'd notice but dunno if I'd care enough to not use it. I'll find out when they release the expansion for it :)
Spot on here , if you are on a 120hz display you will have issues going forward as frame generation with dlss will put you over your refresh rate and you get massive stutter/tearing issues to the point it was unplayable for me
I was able to disable dlss and increase resolution scale to 115 and then frame generation was really good as long as I didn't hit the 120fps mark but at that point I much preferred just using dlss quality as it felt a bit more responsive and there were less visual glitches with foilage and I could enable vsync and get a much smoother experience
Dude you are the best UA-cam er. When you beat dirt rally 2 and revealed only at the end that you ripped it on keyboard is still a highlight for me lol.
what I like to see is a way to set DLSS3 to a target FPS. If you dont get the targeted FPS there will be generated frames to smooth out the image.
The way DLSS 3 currently works will make that very difficult though. Perhaps when Nvidia finds a way to generate good enough frames to insert between existing ones without depending on 2 frames (I'm not sure if that's even feasible for the forseeable future, but whatever, I'd love to be proven wrong in that aspect).
that would be horrible though, the only reason dlss3 works rn is because u r constantly switching between real frame and fake frame, so every other frame is a proper real frame, this is only possible when doubling the framerate (lets say 30 to 60, or 60 to 120), but if you are trying to turn 50 to 60, than the real frames just won't add up, 50 is not dividable by 60, wich means that the overwhealming majority of frames shown would be fake frames, because of that any artifact will be way bigger
Interesting idea. I imagine it as showing the player the predicted frame until the resendered frame is coming from the GPU.
If they figure out the latency problem.
@@Deliveredmean42 the latency problem is unfixable, from the nature of how DLSS 3 works, its generating a new frame based on 2 frames, the one before and the one after, even if the algorythm is perfect, and ads no latency at all, just from how it gets the information itll have to wait 33ms to turn 60fps to 120fps, while real 120fps would only have 8ms
One solution could be to not relly on the frame after, just on the frame before, wich could allow you to reduce it from 33 to 16, at that point it would always be better than no DLSS 3, since u don't loose latency, u just gain smothness, but it still wont help when compared to real 120fps
Do NOT hit the Vsync limit or use any FPS caps while using DLSS3 frame generation. It will raise the input lag and it also introduce stuttering with a FPS cap. You need to stay below your maximum refreshrate only by increasing the graphics settings. It can get tricky when your display is only 120Hz max. I think DLSS3 is a feature for graphic whores, who like to play single player games. For example Spiderman at 5760*3240 via DL-DSR + DLSS2 quality + DLSS3 frame generation and all other settings maxed out. Looks absolutely insane and plays wonderfully without any obvious artifacts or bad input lag on an 120Hz OLED. Average base framerate while swinging near above the streets is roughly 45-65FPS and 65-90FPS with frame generation enabled in this game.
Does DLDSR come with any additional input latency? Don't remember if DF took that into account when testing that feature with DLSS, but in conjunction with frame generation the input latency could be even worse IF that were to be the case.
@@joos3D DLDSR is just AI-Downsampling and I don't notice any increase of input lag at all vs native resolution at the same FPS. With DLDSR enabled your base FPS will be lower than native resolution of course, which could be not enough for frame generation to run well.
DLDS is the shit except it way oversharpens things, and no it doesn’t add input latency as far as I’m aware
@@brkbtjunkie I've never tried it but isn't there a sharpness/smoothness slider?
@@joos3D There is a slider in the driver settings of course. Slider at 0 is maximum sharpness. I personally have set it to 60 and it looks perfect.
The cyberpunk example at around 8:30 really highlights the usefulness at lower framerates
Yea, Digital Foundry demonstrated you can use it to move from 80fps RT off to 110fps RT Psycho... that's quite something
Love how many videos you create recenetly. Keep it up man it's really great!
Issues with 2D elements like HUD, menus, overlays, etc, have been an issue going back to the very first AA methods. It'll get sorted eventually.
so cool seeing this tech being a thing..
I think it's possible to separate controls and attach it to the generated image.. but can't wrap my head around it either. When everything is dependent on fps, I feel like the more complicated the game, the harder it would be to implement it.
Good video!
I guess one of the points I was waiting to be brought up was the effectiveness of DLSS 3 when generating frames for low FPS games, like 30 for example.
One of the things Digital Foundry found was that there's often not enough information in 30 fps to interpolate well in various scenarios, but it doesn't seem like that was your experience.
Yeah, unless there's a noticeable difference in artifacts generating frames from 30 fps compared to 40 like in the video.
@@byrgenwerth2097 I guess I'd have to see for myself. Looks like this might be a really subjective thing, and UA-cam can only do so much to convey it.
In the DF analysis they actually said that it works well from 40fps on games like Cyberpunk 2077 so they actually agree with him
Ever since timewarp was added by oculus I have always wondered why it can't be applied to mouse movement glad someone else had the same idea. I think it's totally doable, extrapolation and interpolation should be combined just like vr does. extrapolation can actually give you lower lag then native rendering because you can just shift the image at the last possible moment when you wouldn't have time to re-render a whole frame the traditional way. at lower framerate the difference in lag between timewarp rotation and normal rendering will be dramatic, also mousefeel will be consistent at any framerate! and you won't get stutter. it's just win win win.
It seems crazy to me that this hasn't been explored yet. you could have the mouse feel of 1000fps at any framerate!.
Great video :) I really wanted to hear about the input lag feeling, You can always tell when the mouse is not 1 to 1 with inputs
This is a great take on the technology. I could definitely see myself using DLSS3 in any game I would use a controller in, where I would never notice the additional input lag.
Except you can notice input delay even when using a controller. I play a lot of CoD and can notice 10ms add input delay.
@@campersruincod6134 You notice it way less on a controller is my point. Joysticks are inherently less precise, require a certain amount of movement to activate (which can change depending on the age of the controller), and offer slower crosshair movement in general, with a different accelerating curves in every game. Mouse movement is basically instant and 1:1 with your crosshair. It's considerably easier to notice input delay on a mouse than a controller, especially considering many console games run at 30fps, which already doubles or triples the perceived input delay, depending on what FPS you are used to on PC.
@@OsaculnenolajO fair enough, I agree.
Great video 3klicks, I always appreciate your in-depth analysis of everything. I miss your map analysis videos of CSGO, scrutinising all the changes was interesting
My mind kept going to Steam Deck during this vid. Glad to see you came to the same conclusion -- it would be the most ideal use case without a doubt!
The input latency and the tearing would probably mean I'd never use DLSS3 as I'm quite sensitive to the feel of the mouse, competitive title or not. Thankfully if you have a 4090 everything's going to run well anyway
Except ray tracing 4k ultra cyberbug, you need dlss even with 4090
Tearing is a non issue with G-Sync below the max refresh (until they add framerate cup compatibility) and the latency is in line with what you get at 60fps or even higher depending on a lot of factors, especially if the game don't support Reflex; but it's a thing one have to try on their machine to see if the tradeoff of a bit of additional latency for a massive improve with stroboscopic stepping and motion clarity is worth which is game, performance, machine dependent and subjective
@@spookyskellyskeleton609 cyberclunk
Framerate cap also works with G-Sync. Not ingame, but in the nvidia settings and you need to set a cap that is power than your Monitors refreshrate. For example 120 Fps cap in a 144Hz Monitor.
@@Dragonblood401 And that avoid the spike of latency with DLSS 3?
Fun fact: you can technically upload & play 120FPS video to UA-cam, you just have to convert it to half speed before uploading, then play it back in 2x speed on UA-cam.
@@2kliksphilip Perhaps something changed or perhaps it doesn't work on mobile? I just tested to make sure by recording a slow motion vid on my phone while playing back a 60fps YT video at 2X speed on my PC, switching from 480p (so 30fps native, 60fps effective) to 1080p60 (120fps effective) during it. It definitely is playing back at a proper 120 FPS.
I am actually hyped on how Nvidia wants to top this in the next generations.
Last year I upgraded from a 980ti which I used for 6 years to a 3080 to use it for another 5 to 6 years. I can only dream how sick my next upgrades jump will be if they already improved this damn much.
Given how much they are complaining about Moore's law, IDK what cards will be like in 6 years. The improvement this generation was nuts, but the cost hike was too. But with Intel Arc continuing on as planned, so possibly 3 major players in the GPU space going forward (and we've all seen what Intel can do when pushed by competition), the future isn't looking so bleak.
We're reaching peak pixel density on monitors. No one needs 8k. 4k with supersampling is visually identical at reasonable sitting distances, though the ability to get up close and se more detail is nice.
So in 6 years, we might be sitting at endgame visual technology. 6 years ago we had the GTX 1080, which did all 1080p titles at 60FPS+. Now we've got all 4K titles at 120FPS pretty much. 8K 120 RT in 6 years??? Just gotta sell a kidney to afford it!
Maybe some proper pricing for a change
I think this has a lot of potential on consoles where they're targeting 120fps at high resolutions. It would need to be an option obviously to disable if you wanted but it could also help with new releases a few years down the line and getting 60fps then.
You are exactly right about taking input data and reducing the latency. Since they are using a machine learning step, additional data can be incorporated with the proper training. DLSS 2 was improved this way by adding more metadata like motion vectors. It may require some level of mapping for that particular game into a standard input dataset like defining what control moves the view and character or jumping etc. This way a future frame can be created based on the latest sampling of input, and potentially a huge increase in the amount of generated frames too if the tensor cores have performance headroom. The game logic/physics would also need to run at this higher rate to actually benefit from the finer control. This might be a while (DLSS 4/FSR 4), but in general deep learning solves problems like these pretty well, when there is diverse non-cohesive data, if built right.
Really honest review. And it's not often happens on UA-cam lol. Thanks!
dude when that music kicks in in your videos my dopamine rises UP
You're correct. One simple prove is that people all watching game play on 60 fps at most at youtube or twitch, nobody would say it feels lagging or not smooth. Frame rate is all about the input responsiveness.
Most GPU work is usually queued the display of frame-to-frame is entirely dependent on whether the GPU itself is ready to display it (and as devs we don't really have full control over this); DLSS is just adding another unit of work to that queue and it can be beneficial to latency if you aren't already "fast" so to speak.
Framerate and input latency aren't exactly glued to the hip, input is buffered into the games update-cycle and draw-calls usually operate on an entirely different loop.
Can read up on this by searching "game update loop vs game draw loop" and more often than not for "good" engines they'll have these running on different threads (which can then spawn more threads, etc.).
A popular design for our era of games usually involves batched multithreading, this design usually speeds things up from a processing perspective but we have to wait for vertical sync (or for the GPU to effectively "push" the frame and the monitor to display it).
So we queue that work up and process as quickly as we can on the CPU side; there are likely many cases where players are looking at frames that are 1-3 frames behind too or pieces of said frame may be utilizing work cached from a previous frame.
There is CPU work being done for a lot of reasons too, not all of it is strictly game logic but just synchronizing data & preparing it to be sent off to the GPU (which isn't always available to be sync'd too). I am not 100% familiar with the DLSS workflow, but if it's using previous frame data it'll almost always mean input delays but that "might" be okay because for a 60 FPS game the input can be sync'd as quickly as 16.33ms~ so if you can boost frames to 120 FPS and it adds +4ms to input you have a new input latency of 12.33~ms (which is an overall net improvement). It's less desirable if said boost isn't sufficient though, say it's 80 FPS instead... then you are looking at 16.5ms which means the game will generally "feel" off. This is already very speculative of how each engine does it's input reads.
Inputs are usually evented from the OS, then the game logic will keep a buffer of the most-up-to-date inputs collected from OS events and then compare the previous frame buffer with the newest frame buffer and store that information for the update-loop to read & process.
Meaning usually more often than not, 500 FPS is basically "peak" input latency; after that is hit your input hardware usually isn't quick enough to pick up any changes (let alone the OS + Engine), I would guess realistically minimum latency is achieved around 300 FPS with human beings being able to react effectively around ~240 FPS (our lowest reflexive reaction time is like 8ms; in a perfect world this would be about 125 FPS, but because of buffering and such our targets need to be higher).
Hope this helps to answer some questions; just a hobby game-dev.
I think I know why DLSS 3 includes HUD elements for its frame generation.
If I recall correctly from the DLSS 3.0 presentation, they specifically said that the frame generation is made possible in real time by using the final framebuffer instead of 3D data.
They only use the motion vectors for masking, but the AI for the interpolation only uses the final image.
This is why it can not make a difference whats on the screen at the time.
At this point I don't think its possible to come around this issue, since adding new information to the process (like Z buffer or other 3D data) would make it more precise but also slower, defeating the purpose of the interpolation.
lol if they fail at that, then they can pack their things and go home
UI should be rendered separately, just like with resolution upscaling method.
THANK YOU for this video. I do not care at all what it looks like when you slow the footage down. I want to know what it FEELS like as a gamer. This video is perfect, and hits the nail on the head of what gamers actually care about: the feel and general experience of the new tech.
I saw your comment about separating HUD elements from the game world, this is already a thing in most Engines like Unity/UE.
More things than you might think are separated using a priority system.
Afaik, the HUD is rendered first, then rougher objects like walls, roads, houses etc, then translucent objects, and lastly refractive objects, mostly based on the type of material the object uses.
And so the cleanest solution to DLSS3 interpolating the HUD would be integrating the frame generation into the game engine itself rather than having a universal frame generator for every single game.
I played through RDR2 with DLSS on and it was pretty good. The biggest issue I noticed was when there were a lot of smaller "high contrast" areas, for example grass on slightly snowy ground. When you moved the camera it blurred a bit and when it stopped immediately went back into high contrast so that was notable. Otherwise it really didn't show that much.
From a few other games I did notice some of the smaller shortcomings.
In "Resident Evil 8" smaller lighting details like lit candles vanished on a greater distance.
In "Supraland: 6 inches under" some glass panes have a grid pattern on them, that became noticeably rougher when you stood just a bit away than on full resolution.
I think DLSS is a pretty good tech and sure hope they manage to iron out the flaws with it some more in the future. I'm no programmer so I have no idea if that is possible but it may help with preserving some details better if certain items in the environment could be tagged in the engine to render as a higher rez overlay and then put that on top of the rendered frame.
@@2kliksphilip The problem is that they have to rely on clunky and intricate solutions to get desired performance and appearance out of consoles, solutions that work well on a specific case and can fell apart at the slightest change, high chances if ported to PC
Good video on graphical topics, as usual for you Philip. You always manage to explain stuff like this in a down to earth way, so I like coming to you for news and developments a lot.
Say... Speaking of news and developments, you've mentioned Cyberpunk a lot in these videos lately. Do you think you'd ever do an update video on what you think of the game after it's been patched now? I'd love to know your thoughts on the world and story, whether you think after patching it can be viewed as a good game. Recently played through it myself and I really quite liked the game.
@@2kliksphilip Thanks for your reply! All valid dislikes for sure, although I will say I think the game directly critiques the whole "Die to become a legend and have everyone know your name!" stuff. I think the death of Jackie is supposed to say, "Fly too close to the sun like Icarus, and you're gonna get burnt and make your friends sad when you fall out of the sky." They can all be edgy legends or whatever, but fact is they're gonna end up burnt and eventually friendless. The best ending, The Star, where you leave with the nomads, is precisely the best ending because the characters finally disengage with that toxic culture of sacrifice in the name of glory and fame.
Can't comment on the gameplay loop, I don't have the best context for what makes a good one, so it's hard to differentiate and say what's good or bad. I could see things getting boring if you grinded them out, yeah. But I did like how the game wasn't super harsh on building your character, one of the few games where you can allocate skill points and not be super duper punished for placing them in the wrong spots, which suited me well because I'm kinda dumb and bad at planning, so it was nice to have something be on my more casual level.
Anyways I could type out a bunch of my feelings, but maybe when Phantom Liberty drops in 2023, you could revisit it and make a more detailed video on how you feel about the game compared to launch and go deep into how it struggles in some areas and why it's just not to your personal tastes? Maybe when that DLC is out, you'll have an oppurtunity to chime in with how you think they still have a core flaw within the gameplay regardless of silly DLCs and patches. Always really liked that type of content from you since you're a lot more down to earth than a lot of youtubers out there who get caught up in the hype and in the hate. Just a fun idea for a relatively unnecessary but possibly quite interesting video.
@@2kliksphilip Honestly? Fair enough. I also can admit that even as a big time enjoyer of the game who sunk in 94 hours and would totally drop another 100 over the next year, some fans can really just excuse any issues and endlessly hype up the game. I think I can enjoy the game because I utterly avoided any hype, I don't follow Reeves or any other celebrities, I haven't played the Witcher 3. Going into it with no biases and no hype underneath my belt really set me up so I didn't have expectations that the game could fail to meet.
Not wanting to wait to the end to have everything come to fruition is true, and I do feel like they should have given us a choice to be the only not edgy person around, to really have V be this level-headed person in a city of hormones. The endings can kinda piss you off, too. They are all sad and also quite sudden, real whiplash. I didn't feel I had a lot of choice. I think that's real bad for a game that touted a lot of choice and possibility, but good if you want to justify it as "That's how a Cyberpunk world works, you don't truly have a choice and everything sucks."
They were still emotionally impactful to me, but honestly? In order to receive resolution for the characters and actually feel satisfied, I've had to go and read fan works which provide a less ambiguous ending and much more elaboration on the feelings and thoughts of the characters. Isn't that a bit goofy? I think a lot of CDPR's work here is getting done by fans.
Still, I'd overall say I feel better off for having played the game. That's likely just due that it's suited a lot to me as a person. I'm young and still have a bit of that edgy spirit in me, that constant urge to make some meaning with your life. The main story fell a bit flat with me, but the side quests with the romantic characters were very impactful to me, made me realize I quite want a relationship with a partner who can really act as a support, act as someone I can count on, a partner who will also attempt to romance me and put in the effort to be reciprocal and caring. I don't think Cyberpunk is the only game that could have shown that to me, but I'm still glad to have experienced those quests.
Sorry for the paragraphs of text btw! I get a little excited and have a lot to say. :)
@@2kliksphilip Hmm, well I have a theory. From what I know of you, you seem like a person who has surrounded yourself with people you can depend on, like good friends who you've been friends with for years, a partner you own a house with, parents you share interests and connections with.
I think it was impactful to me and not you because I don't have a lot of people I can depend on, I live a pretty isolated life and the people I do interact with have let me down a lot, proved themselves as people I can't really rely on. I think the game then is impactful to me, because I experience something out of my norm when playing: people really going out of their way to care and help. That's a bit bleak, I know I know, but I like that ability to put myself in the shoes of this character who accumulates this massive network of friends that are really great people.
On the gameplay, I think one thing that turned me off was the FOV, felt kinda sickening at times and I can't easily explain why, I felt like I had very little peripheral vision even at 100 or 110 FOV. Also, while I think there are some good things about the weapons, they very clearly could have been explored so much deeper and made so much more interesting. Like you do, I think I might end up forgetting that I've ever used certain Iconic weapons, many just aren't as interesting to me, for example they don't come nearly as close to things like the Daedric artifacts in Skyrim in terms of memorability. And Daedric Artifacts aren't even that cool, most of them suck!
Good graphics. I have a 4770 and a RX 580. Run at like 37 avg fps on Low 1080p but damn, game's eye candy. Maybe some day I'll be playing at at 8k, like you.
I was hyped for the DLSS3 mainly because I game on a strobed display, and having a proper strobed experience on a 240/280Hz panel demands that you have an FPS equally as high as your refresh rate, which is asking a lot from my CPU in quite a lot of games where I'd be lucky to even get 144+.
Just here to say that Requiem is my favorite game of the last couple of years. Needed to get it off my chest.
Nice video btw.
2:13 can hear crofty yelling THROUGH GOES HAMILTON
Very interesting video.
I had been putting some armchair theory towards figuring out what to think about DLSS 3. But there's no replacement for hands-on experience :)
I think I would tend to just avoid it. I 'always' disable motion blur (and depth of field, and v-sync).
Finally someone who explains the issues I feared would be pretty important, but everyone kept dismissing. DLSS 3 is great for 4K 120hz gaming but yeah, I think a 4080/4070 will be powerful enough to not need frame generation.
Maybe SpiderMan and movie games look better but everything else...
This is the best attempt at a fair, real world based review on DLSS 3
Timewarp for VR I believe actually comes at a bit of an extra cost. It's used for Wireless VR headsets to mitigate the massive latency wireless introduces for VR. I believe it actually renders at a higher resolution outside of what you set it to. At 1080p, it might render like 1200p, but NOT show you the extra edge space it's rendering. The extra edge space is reserved to pan across when moving your head. Calculated in headset, individually from what's going on in game. Hence the perceived latency reduction. The problem is that this won't reduce all other input. Gunshots and movement will respond slower, and so will jumping, or other interactions. It only compensates for head movement, and in Cyberpunk it'll only compensate for the player camera movement at like an extra 5-10% performance cost for the extra pixels rendered that are out of bounds 90% of the time. You can actually bug out Timewarp my moving your head really fast in an erratic fashion. It'll show black lines on the side of your VR screen where it's panning to but nothing is rendered if you move too fast. This would be difficult to mitigate on a monitor and much more noticeable, unless AI just filled in the gaps.
YES! THANK YOU. You are so god damn right on this one!
I found that when using higher sensitivity’s u could feel the latency more in fps games, and with a slower more csgo styled sensitivity that most pros use made the games feel way better with DLSS 3 and slow sensitivity. Thanks for this video ❤️❤️👍
I am having lots of fun doing almost the same stuff your doing right now, hopefully we will get this to gaming consoles in the future by xbox and sony.
As you already said, it depends on a lot of things and especially the game itself. In my opinion dlss3 is almost a no brainer in games that are like Cyberpunk, RDR2 or other singleplayer adventure games because the input lag you get is simply not a big enough deal while in CS or Overwatch or any other comp game you obviously don't want it on since it's that precious input latency + you most likely already have your monitor's refresh rate fps or higher to begin with
Thanks for your analysis! I came to the same conclusion as you and I didn't even use it yet. I do have a bit of experience in FPS games, dealing with how poorly my 3950X microstutters in many games, plus plenty of interpolation and video techniques through editing and video encoding. I was presuming that the visual artifacts were going to be a non issue (since VFX + video editing is all about focusing the viewer's eye), and that the input latency was going to be game dependent. I wouldn't exactly call FS2020 a game where those extra 20-40 milliseconds matter, as I'd imagine that getting 50+ FPS in LAX or JFK would be much better than getting 25 FPS.
Very helpful analysis
Great video, this is good analysis
a VERY important thing to remember with the DLSS artifacts when you're actually playing is how the human brain processes visual information. your brain fills in the gaps subconsciously, so if something is a little off your brain's basically going to auto-correct it anyway. that is, until you slow down and look at the frame on its own.
Holy shit
The idea of using async timewarp to reduce mouslag is Pure Genius. How has noone thought about this before!!
Grab mouse input **right** before vblank, and "distort" the screen to match the mouse movement missed since start of frame!
hooooly shit.
Alright, that was quite insightful
Best thing is there are so many blind people on both sides hating or loving a tech they never tried themselves. People just need to learn that a new feature is always good to have. You don't have to turn it on. It doesn't affect the person not using it. I don't know why those people hating a tech they never tried want a certain tech never developed. Also same goes for the lovers. It isn't just positive. Every game that has it you have to try it for yourself and then decide to either use the feature or not. Don't know what the problem is with so many people.
Love your take, its always "you dont need that gpu if you have a 1080p monitor" that or "you dont need that good of a processor for your gpu "that. What if I want to say get a very high end system for 1080p, to run my 240hz - or even 360hz monitor? Smoothness, responsiveness and general feel is the most important factor for me and its why I cant stand the games with floaty, imprecise aim mechanics.
nothing better than 250+ fps, stable, no hicks, no latency. that's why i love the last or old generation of games. they run smoothly. while i appreciate new Game Engines and what they do, i really miss the simple Quakish engines that just run instantly. on a 240hz+ monitor, turning around ingame, it will move the whole picture without any lags or hicks. experienced players will even notice mouse input lag on certain games or engines.
"Fighter pilots have been tested and can identify the type of plane in an image with just one frame at 255 fps. Noticing a flash of light can go into the 1000 fps territory."
I'm not a fighter pilot, but i want an engine/graphics card to offer me the fewest latencies and the most frames possible. like in real life. fancy graphics ? not necessarily needed in competitive plays.
Game developers could maybe focus more on general engine latency in future titles to compensate for these features. Digital Foundry tested latency a few years ago and shared the data in DF Direct Weekly #82 (at 47:25). Latency varies greatly between titles, some do a way better job than others.
Perhaps this may be incredible useful for Animation or CGI in the future at least!
It would be nice to see objective data like a input latency tool attached to the monitor, or a high speed camera to give us some numbers here instead of it based off feel. I trust your judgement, but feel can be off sometimes and Id like to see its actual impact and how its changed with reflex off/on.
I feel the same with very high framerates. I'm more concerned with mouse aim feeling snappy and instant as a result of high frames. The game looking smoother in motion is nice, but its really all about that input delay feeling better.
Nvidia is on point though with their tech and this is version 1 of this new feature. They are very aware of response times with reflex, I have faith they will improve on this to the same degree DLSS1 was improved on with DLSS2.
Knowledgeable and skilled sources done some tests and the latency with DLSS 3 frame generation is close or can even be lower than without DLSS 3 and Reflex off and is worth to remind that AMD don't support it (nor have something comparable) and many games also are missing it.
The problem of comparing what you experience at high framerate is that along the lower latency you also get reduced stroboscopic stepping and improved motion clarity which I guess will affect your perception of responsiveness, DLSS 3 is a tech that need to be tried in first person.
DLSS 3.0 con a 4050 of sorts willl be huge
you can really see the mouse input and game frames being separated when using (in my case) the quest 2 with a pc, when the games stutters, you can still look around the frozen frames
"Mouse movement being smoothed" is where I turn back. No amount of frames is worth giving up input latency in my opinion.
It's interesting how view on frame generation has changed since more people started using lossless scaling
Using DLSS 3 right now on my 4080 on Plague Tale, Witcher 3 and Portal RTX. With Nvidia Reflex on you really have to force yourself to feel the difference at most times when compared to DLSS 3 off and without Reflex. Yes, Reflex can improve non-DLSS 3 responsiveness even more than native and if compared to that the latency can be ever so slightly more pronounced but TBH it really isn't as big an issue as people are making it out to be and you will feel that if you just start playing a game for a few mins with DLSS 3 and Reflex on and experience it yourself. Also, the crazy boost to frames makes it even less noticeable and worth it.
So I did a test with the Nvidia Frame View tool and here are actual numbers:
Game: A Plague Tale Innocence @ Ultra + DLAA
Res: 1440p
Case 1 - No DLSS 3 / No Reflex
FPS = 100 to 103
PC Latency = 30 to 33 ms
Case 2 - No DLSS 3 / Reflex ON + Boost
FPS = 99 to 101
PC Latency = 27 to 30 ms
Case 3 - DLSS 3 / Reflex On (can't be changed to Boost for now)
FPS = 160 to 163
PC Latency = 37 to 40 ms
I tested this out in another area too, max latency difference is the same (at around 10 ms) but the FPS boost is insane (60+ fps). I even tested it without Reflex and without DLSS 3 and saw that sometimes the game has the same latency as it gets with DLSS 3 and Reflex ON so there's no difference if you compare those two cases. Now, I'll be completely honest and say that I personally CAN feel the difference of 10 ms if I concentrate on the mouse movement really hard and keep testing it out with DLSS 3 on and off a few times in succession, but I am so hard pressed to actually feel any real difference during gameplay that it's really irrelevant. I have a 144 Hz screen and I actually find the game so much nicer to play with a higher frame rate than worry about the 10 ms added latency which I can't really feel anyway due to the smoothness of the visuals.
Me with a GT 1030: ah yes very helpful
Quick note on 2:15. Racing games are in the same spot as CSGO where you need a lot of fps for several reasons. One is obviously input lag and response time. Another is that the force feedback of the wheel needs to get data from the game so low fps will give you worse ffb. In other words, simracing needs way more than 60fps in a competitive field.
I wonder if you can replicate the feel of 144fps video if you slow down the video to 60fps and let the viewers increase the playback speed
Just tested it in FS2020 with target frame rate 60 and it's working pretty well! The latency becomes really bad (comparable with my TV with its own frame interpolation enabled), but it's still playable and overall a much smoother experience than the 45 - 55 fps I normally get due the main thread being saturated. Note that I had to force v-sync in the NVidia control panel (as Digital Foundry found out) and also disable RivaTuner's frame limiter (which in my case caused 1 frame to drop each second, resulting in slight, but noticeable stutter).
gamer's nexus is the hardware guru review reporter,
and this is the software, end user experience reporter reviewer.
I will love DLSS 3.0 Getting 25fps Upto 50fps and then also Using My Fresync/Gsync Display to make it "smoother"
Great and very informative video
Exactly my thoughts. Dlss3 would make sense for consoles or situations in general where we want the best possible graphics and can't get more than 30 fps. Nvidia should make it work with vsync and target a specific frame rate like vr games do with double buffered vsync.
It would makes sense for them to eventually introduce adaptive interpolation, which kicks in when you have low fps for long enough. Maybe it can already see the "real" framerate, the only thing left would be to calculate some rolling average based on it.
Doubt they'll spend time on that though, as super-high-fps benchmarks is what sells their cards
There are just too few games out with DLSS3 rn to test it properly. The issue I also have is, that most of these games have a fixed character camera, when I'm then moving my mouse it only needs to pull the info from the depth buffer and basically only needs to work out what gets priority in taking up more screen space since it not like updating 5 out of every 6 frames generated, I think the overhead is pretty minimal.
We's need a game where if you have a fixed character camera, and object in the distance needs to raise from a very flat position while it's raising to like 45° angle then flip to the side and rotate continuously and also change the distance to the main character drasticly back and forth. Only thing I can imagine doing that in a game and with this dimensions would if you had an arcade-type yet fighter game...
DLSS 3 seems like something that will allow for 4k 120+ fps gaming in games like A Plague Tale Requiem. At least on a card as insane as the 4090. But it will also allow future games that look even better, and that are even more intensive than Requiem to get solid frame rates.
I personally think DLSS3 frame generation is great for games like Flight Sim where they are heavily CPU bound and doesn't require a high level of quick accuracy. to me its a game changer in that instance. It may differ in the use case of the sim though, people who race in the sim or low level flying may feel different.
The way DLSS works for now (interpolating between 2 already gpu-generated frames) make irrelevant having the AI generated frame taking the user inputs into account, since the next frame, the the input of the user from "back then" has already generated and will be used next.
For it to work kind of like VR reprojection, you would need for the frame generation to work base on information from the last displayed frame (without knowing when the next real frame will be there).
DLSS 3 reminds me of DLSS 1, it's interesting tech, it's promising, it could one day be really awesome, but currently it's only good for some people, on some games, and in some cases, like how DLSS 1 was only used for 4k and made the image noticeably worse for a good increase to fps, was slightly better than just lowering the resolution, but for most people was pointless
I love your "tech" focused content, and the steam deck example is great and sounds like hacks for better battery life
I think the best usecase is in Games like MSFS 2020 where its not even the GPU that limits the FPS and the Gameplay is slow. DLSS3 boosts the FPS to a level where even in VR you can have stable 90+ fps to feed the Headset.
9:37 Whoa, Untitled Goose Game 2 looks g o o d!
Regarding latency, this makes me wonder if games could have an option to enable DLSS3 only in non-latency-critical situations (driving, cutscenes, …) and disable it for shooting segments of a game. Of course, resolution or other graphics settings would have to be turned down automatically to allow for higher framerates during shooting segments, as well as having a fast enough CPU to avoid bottlenecks. I think it's still something worth trying :)
Now that's an interesting idea. I can see myself actually using frame generation in games like Cyberpunk if I could toggle it with a single key press and/or have it automatically toggle based on certain conditions like you suggest.
This is messy. I want technologies like dlss frame generation to just work and don't have to think about those kind of weird rules, ughh
I wish we could use dlss 3 as a way to fix frame drops, like if you have let's say almost consistent 100 fps, but sometimes drop to 80 or so, dlss could be applied to some frames to bring it back up for a more consistent experience and in cases where there's frame drops anyways, slight changes in responsiveness should matter less
I disagree, this is where responsiveness matters most! Moments of low framerate outside of just poorly optimized games often coincide with times when alot is going on at most, like when alot of enemies are on screen, or you're in the middle of the pack in a racing game. You don't want your input to suddenly change in those moments.
@@robertewans5313 I mean the responsiveness would drop there anyways, in the case of dlss 3 it would just drop responsiveness instead of visual clarity when you have frame drops, assuming you use reflex of course
That would just make it worse, you will suddenly have spikes in input latancy and that will feel like microstutters. If you are stable around 120 fps and it suddenly drops to 60 and up again, what you notice is the input latency (micro stutter) and not that the image looks less smooth. So adding DLSS 3 to this will just make it worse
DLSS3 and tech like it is super interesting to me. The definition of a "frame" is changing before our eyes.
Async time warp is easier on VR because algorithm knows exact position of camera that your head is at from tracking sensors, which the headset itself feeds to the game engine. On traditional games it is more difficult because game engine needs to process the position of camera based on your inputs and feed it to the DLSS.
It is more difficult but totally doable. Also foveated upscaling is another option. You can combine DLSS with foveated rendering for PC and VR
What i really like about foveated upscaling concept is that it will make artifacts - which are already hard to notice - practically invisible
Yep, VR use sensor data and the quality of the generated frame is way worse than DLSS 3, the reason they gone for interpolation instead of extrapolation like in VR headsets is quality
@@Stef3m FPS matters little when it hurts games responsiveness. Advantage of extrapolation is that is increases the responsiveness at least the feeling of responsiveness
@@erenbalatkan5945 The lower the latency the better but I wouldn't use "hurts" to describe the effect of the latency added by DLSS 3 frame generation, in VR the screen is mounted on your head and camera control connected with head movement so latency and not dropping frame is fundamental or it generate motion sickness but the 90fps requirement is also for stroboscopic stepping and motion trails which are also way more noticeable in VR and that tech like DLSS 3 frame generation fix.
There is a reason if I mentione quality, async time warp frame quality looks crap but in VR the high degradation is worth if allows to not drop frames, on desktop is a different story
I love these videos and thank you so much for helping inform us!!!
Would it work to upload a short 4k 144hz video file online so we can download it and play it offline to see the effects more clearly? Cus yt's framerate locking is annoying
Great music again
The thing that this tech seems most useful for is playing games that are at a locked framerate. This implemented in an emulator would be pretty good.
My first thought regarding DLSS 3 was that it's most likely only relevant in scenarios when native/DLSS 2 (at a reasonable quality level) only gets you around 40-60fps in games played with a controller or where you don't use precise mouse aiming.
Mouse aim would get annoying due to the input lag and you don't really need it in general when you already have more than 60fps.
*If a game is built from the ground up with DLSS 3 in mind,* the developers can render the UI in a separate pass so that DLSS 3 never even has to deal with it.
I can only expect every new game with DLSS3 will do exactly that, and in game where the UI is part of the world, like with Dead Space, there **shouldn't** be any problem at all.
We'll see!
Regardless I still think frame generation is best suited for adding additional frames to already high framerates (and really should have another name). If you have a 240 Hz screen and you get perhaps 100-120 FPS but the motion clarity is better at 180 FPS upwards then for that case frame generation could be a real boon - even with 4090-like GPU performance you're not going to hit near 240 Hz in every scenario, plus you could always supersample the image instead of upscaling - and latency already would be low enough for the additional latency of frame generation to not matter.
Thank you! It's crazy the amount of people who dismiss the improvement in motion clarity of higher frame rates/refresh rates like 240Hz and beyond. It's massive.
@@bungleleaders6823 The people who dismiss this are the ones enabling motion blur in games🤣 artificial motion blur can make things look smoother by making stroboscopic effects less noticeable but using it causes eye tracked motion to be unnaturally blurry. The only fix to have clear motion with no stroboscopic effects at the same time is ultra high refreshrates + framerates. which interpolation can help us achieve.
It's basic stuff but people still have terrible misunderstandings about it.
@@brett20000000009 I could see something in the future that uses a camera to track the eyes position and only selectively blur relative motions. But that would require very low latency operation so we would still need ultra high frame rates/refresh rates. Btw, thank you for this comment, it's a breath of fresh air to see someone who understands the stroboscopic stepping artifact!
@@bungleleaders6823 np! good to see more people getting it. eyetracking hardware assisted per object motion blur could make alot of sense for vr which already needs eyetracking. alot of people only focus on persistence blur but stroboscopic effects are equally important to motion. I think a compromise is using per object motion blur and only blurring objects that are moving way faster then a human could follow, or if the motion is very brief. some sort of algorithm could make a decent estimate and it would be much better then just fixed motion blur.
The way camera shutter blur works doesn't really work well with how humans watch video on a display imo. it's only accurate for the cameras perspective and no one watches video with their eyes locked to the center of the screen.
Scientifically one should target only non competitive games and around final 120 fps from around native 70 fps. This is the correct ratio because Lcd monitors start to clear motion from ghosting and blurring after the 100hz mark. Even the Back Light Strobing doesn't work under that because of brightness flickering. Watch images at RTINGS. Also those fps are enough for less artifacts. So you overall trade 5% interpolation artifacts for smoother fps and crisper image with 70% less motion artifacts.
i guess DLSS3 will be strong in 3-4 Generations, and Strong in Slow games like MMO's or Singleplayer Games. The RTX4000 will be in few years a RTX6000 with Downsides (Inputlag) but still playable. Imagine the 1080ti with DLSS2 and DLSS3, it would be a Monster, even today. 1080p/1440p
I think the delay on mouse movements "can" sometimes be even good.
Games that are slow paced and want to give the player some handicaps like condemned criminal origins was one game where I felt it was done well.
There are many games where this can be somewhat fine (especially single player games), but this weird mouse smoothing is for some odd reason always added in places you would not expect it.
Hardware mouse acceleration in windows, OSU!, Starcraft Broodwar, overwatch 2...
I can definitely understand why a CS Go player would hate it.
If we didn't have more in depth creators like you something like dlss3 could've destroyed competition without real performance gain, because most people would just see the increase in fps and think its better while its not that simple in most cases, in the future it will be hard to make new technologies like that understandable for the majority of consumers, and if consumers buy without the real knowledge of the performance impact of these things it becomes more of a marketing competition than a performance one and nobody likes that.
perfect video.
Couldn't agree more, at first I thought i was missing on with rtx 4000 series but Im not really the person that would like the imput lag difference. I guess I'll buy amd form now on.
I'm playing the new A Plague Tale and with my 4090 already gets a nice high frame rate just using DLSS 2 on ultra settings using 4K. However there are a few times where the framerate will just tank into the 60s which are very noticeable and jolting for me. With DLSS 3 enabled pushing my framerate well into the triple digits, these same, rare, and recreatable instances where framerate tanks, go nowhere near double digits therefore gameplay is uninterrupted and not jolting. So for me the extra latency is acceptable and welcomed at least in this title.
yeh the input latency is a huge turn off for me on any game. bugs me more than any other setting. Seems crazy to me that dlss3 is only on 4000series cards when it doesn't actually need it right now lol.
DLSS reduces power consumption, that's a pretty good reason and vr warping creates its own artifacts at least on Quest 2
I have a 6600xt and a 4k tv. Sometimes I hook up my pc to the tv and the 6600xt cant get more than 30-40 fps in 90% of the games I play at 4k high settings and I limit the refresh to 30hz and use the Tv's truemotion feature or whatever it's name is to get the 60fps feeling. It works amazing, yes you can feel the latency, but it's very smooth and now that it's integated directly into the gpu, with actual good tech to take advantage of it I'm pretty hyped to try it out.
Yep, DLSS 3 offers reduced latency and way far higher quality that TV smoothing tech