Let me know what you think. Any feedback about the quality of the video is welcome.
I am new to YT, so likes and subscribes help a lot, and be sure to check out my other videos as well!
Thank you!
Fake frames can be good only if you are playing some sort of cinematic game, where gameplay is not the main point. In active games you will suffer - high input latency and image artifacts are not a good thing. You will be at a big disadvantage in CS2, Valorant, etc...
But why use fake frame gen in those games when you can play on a potato setup and get high FPS anyway?
@@edjonhadley1 I'm not sure how you can achieve good FPS in CS2 on a potato setup.
Yes, and that's the whole point: Valorant, CS2, and those multiplayer competitive games don't need "fake frames" because their graphics are simpler and less demanding. This technology is meant for cinematic AAA game experiences that push visuals to the limit.
@@brianywea RTX 4090 - 227 FPS (4K top settings)... You are braindead. Confirmed.
Nvidia reflex 2 disagrees
The video is good, and the narration is great, but the point is missed. I mean, nobody doubts that those AI frames make the game look better, but feeling-wise it's unplayable (I'm still waiting on frame warp to make my final decision about frame gen). The other major problem we're seeing come with the practice of using DLSS/FSR is that game developers are leaving game optimisation behind, and it genuinely bugs me, because when I go back to games made 8 to 10 years earlier, they genuinely look better and feel amazing to play on new hardware. So it becomes weird to me that when hardware becomes 10 times better, the games feel worse. I mean, I'm not supposed to get below 60 fps on an RTX 3070/4070 on medium to high (a 3080/4080 should be high to ultimate, and a 3090/4090 should play on ultimate at 60 fps), and yet any title that comes out today is barely playable, and I have to set all my settings to low and turn on DLSS to hit the 60 fps mark, which makes no sense, because I can't see the added beauty of the game as everything becomes blurry due to the low settings, plus ghosting from temporal sampling methods.
Honestly, I don’t really understand the nostalgia we have for older games from a technical standpoint. The PS3 and PS4 era was arguably one of the worst-optimized periods in gaming history. If we look at some old Digital Foundry benchmarks, it becomes clear just how poorly many games ran. So, when you say games are unoptimized, relative to what era are we comparing?
I’m not sure you truly believe older games look better. We could go down the list of techniques developed over the past decade, such as massive improvements in lighting, parallax mapping, draw distances, LOD (Level of Detail) pop-in, and more. Some games are indeed well-stylized, and their aesthetics can hold up over time, but if this discussion is about mimesis (the imitation of real life), it’s evident that we’ve made significant progress in visual fidelity.
I also think developers should focus on optimizing with AI upscaling in mind because native rendering is essentially obsolete. PlayStation has nailed this with the creation of PSSR and its integration of FSR. We’re seeing people successfully inject these technologies into older games, and all future games will likely adopt them. Why wouldn’t developers optimize for a feature that’s becoming the standard, replacing an approach that’s clearly been abandoned?
nshittia's fake frames are NOT better! they introduce substantial input lag which considerably deteriorates the fluidity & responsiveness of the players' experience... don't fall for the marketing hype! buy an AMD gpu like the 7900 XT or XTX with massively more raw power for a greater value & a better overall experience... btw RTX is trash & massively overhyped... 1/2 to 1/3 of the frames for what?? definitely not worth it, just go Radeon
Welp, DLSS 4 with the new transformer upscaling model and MFG should be here in a few weeks. So we'll see. Might upgrade my 4090.
@@Soulbreakergx But the delay that comes with frame gen is still there, so even if you make it look better it will still be useless in any fast-paced games (unless you're complete trash at video games, so it won't affect you).
@VOYSTAN Quite the opposite. If you are good enough at gaming, the added input lag doesn't make that much of a difference. The only time where input lag added by frame generation is noticeably worse is in competitive multiplayer. If you play mainly single-player, especially with a controller, you get used to the input lag very fast, and playing any type of sp game is quite feasible.
I think he already knew about that, bro. Most anyone that looks into this already knows it's not a single-player problem, it's a multiplayer problem that even the best skills can't fix for competitive players. But again, pro players just turn everything to low anyway to get the lowest latency, and buy the most expensive CPUs to run high frame rates. @@_MrGameplay_
Oh no Lisa Su spotted 😂
Rasterized rendering being dismissed as brute force or a dead end is a massive oversimplification. Sure, chasing photorealism with rasterization can be challenging, but ask yourself: who genuinely prioritizes absolute photorealism in gaming? Take Black Myth: Wukong, for example. It’s visually stunning, no doubt, but there’s such an overload of detail and visual noise that it detracts from the experience. Combine that with motion blur artifacts from temporal techniques like TAA, DLAA, or DLSS, and the result is an overwhelming and sometimes exhausting gaming experience. I’ve heard from multiple friends that Wukong left them more fatigued than any recent game.
As for Frame Generation, latency remains a significant issue. It introduces delay, and when paired with a low base framerate, the problem compounds. Reflex 2 with frame warping might mitigate this, but I’m skeptical about its viability at low framerates. Most demos so far have been in high-FPS esports scenarios. At 30fps, for instance, FrameGen effectively doubles the frametime delay (66.67ms input-to-display latency), which feels far from ideal for any fast-paced or precise gameplay.
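A rough sketch of that latency arithmetic (a simplified illustration only: it assumes interpolation-style frame gen that holds back one real frame, and it ignores display, OS, and game-engine latency):

```python
# Simplified model of interpolation-based frame generation latency.
# Assumption: the generator buffers one extra real frame before it can
# interpolate, so input-to-display latency is roughly two real frametimes.

def frametime_ms(fps: float) -> float:
    """Milliseconds per real rendered frame."""
    return 1000.0 / fps

def framegen_latency_ms(base_fps: float) -> float:
    """Approximate input-to-display latency with one real frame held back."""
    return 2 * frametime_ms(base_fps)

for base in (30, 60, 120):
    print(f"{base:>3} fps base: {frametime_ms(base):5.1f} ms frametime, "
          f"~{framegen_latency_ms(base):5.1f} ms with frame gen buffering")
# 30 fps base -> ~66.7 ms, matching the figure quoted above.
```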
Yeah, Black Myth: Wukong had a lot of problems. It's actually one of the few games on console to use frame generation, and its implementation of it is rather lackluster. I actually have a video talking about misuses of frame gen, especially when you use it at lower FPS, which isn't intended.
As for the future of the game, I am hoping the new DLSS 4 will be able to better clear up some of those artifacts. But we will see.
I've been messing around with Lossless Scaling 3.0 in the past few days having no prior personal experience with frame gen technology. I've tried it on Dark Souls Remastered, Grounded, and Valheim. I have to say that I'm impressed.
People complaining about input lag are far more focused on the number than the actual experience it provides. Sure, empirical testing shows that, yes, frame generation undoubtedly introduces more input latency. It's simply true. However, from my personal experience swapping frame generation on and off while playing games, I could honestly hardly notice the difference, and once I stopped focusing on it and just played the game, I forgot about it entirely.
Same goes for artifacting (most of the time). The biggest struggle area for frame gen is with UI elements, which fairly frequently distort or shimmer, but in the actual game world I've only had a few moments in all three games I've tried where there were clearly visible artifacts, even with 3X frame gen mode enabled. These typically occur near the edges of the screen though, where my vision is typically less focused. Once again, the majority of the time while I'm just focused on gaming, it isn't really an issue.
And, from what I've heard, Nvidia's frame gen tech is only going to be better than Lossless Scaling, which I would hope, given the massive amount of money I'm sure Nvidia has dumped into developing it. I don't doubt that it works better on some games than others, and that I've probably just purely by chance picked a good selection of games to try LS3 with.
Frame generation will never outright replace rasterization. In competitive titles like CS, League, Dota, Marvel Rivals, etc., raw input latency will always be king, and even if the tangible impact of that latency is low, every little advantage matters in those scenarios, so frame gen will never really be viable there. But for basically any other game, it certainly has benefits. Capping my games at 60FPS native resolution, then turning on 2-3X frame gen, has offered me a more consistently smooth experience and better overall frametimes than running without (the arithmetic is sketched after this comment). In games where I may get up to 90-100FPS with no frame gen but have occasional dips down to 70 or 60FPS, I no longer have those problems since the game is hard capped to 60 anyway, and frame gen fills the gaps. Sure, if I'm playing a less demanding game that I can consistently natively push 120+ FPS on with no problem, I won't be using frame generation. But with any more intensive titles, I'm going to be using it.
I feel like the issue here isn't really frame generation, but rather the way Nvidia is trying to sell it. It, like every other AI thing to come out in the past few years, is a tool. It isn't here to fundamentally change the way we do things. It is simply another thing to put on our belt such that if the situation calls for it, you can enable frame gen and have a smoother experience, albeit with the caveats of slightly higher input latency plus some artifacting. It isn't an end-all-be-all. It isn't going to outright replace traditional rendering methods. But it is a useful tool, and I really wish the conversation would treat it as such instead of it just devolving into your typical "AI BAD" or "AI IS LITERALLY THE FUTURE" argument.
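The frame-cap arithmetic from the comment above, as a tiny sketch (illustrative numbers only; assumes interpolation-style frame gen where displayed fps = base fps x multiplier):

```python
# Displayed frame rate and frame pacing when the base frame rate is capped
# and frame generation fills the gaps. Numbers are illustrative.

def framegen_output(base_fps: int, multiplier: int) -> tuple[int, float]:
    """Return (displayed fps, displayed frametime in ms) for a capped base rate."""
    out_fps = base_fps * multiplier
    return out_fps, 1000.0 / out_fps

for mult in (2, 3):
    fps, ft = framegen_output(60, mult)
    print(f"60 fps cap x{mult} frame gen -> {fps} fps shown (~{ft:.1f} ms per shown frame)")
# A hard 60 fps cap keeps real frametimes even, so the generated output stays
# consistent instead of swinging between 60 and 100 real fps.
```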
Lossless is inherently worse than the other frame generation solutions, so one can easily expect DLSS to be better.
Granted, Lossless is designed in a way where it just works and doesn't need to be injected into the game like DLSS does. This approach is super helpful in a lot of ways. For example, a lot of ReShade setups (custom post-process solutions you can inject into your games) don't react well to frame generation, but the way Lossless is set up has no problem with it.
m.ua-cam.com/video/69k7ZXLK1to/v-deo.html&pp=ygUZbG9zc2xlc3MgZnJhbWUgZ2VuZXJhdGlvbg%3D%3D
Yeah, the 5070 being a 4090 was a crazy oversell. AI is a tool; however, it will fundamentally change how we do things. I think if you look at all the things down the pipeline, you see a radical difference in how games are going to be made in the future.
Or you could buy a 1080ti and some narcotics and experience the same thing.
That would be a great time
How do you even play if 90% is fake, so you see an enemy popping up much later than you thought? It doesn't even stop at a person popping up; it could be a tree or a window or anything that wasn't natively rendered before. The AI can't predict something it can't see, and if it tries, it will artifact the heck out of it. Not even talking about how upscaling is still not perfect.
28 fps to 240 is obscene, you're not playing a video game, it's just a movie with some interactions here and there.
We pay for smoothness, responsiveness and a relaxed gaming experience. We do NOT pay for FPS. I get it: FPS was once an indicator of what we wanted - it isn't anymore, after NVIDIA redefined what FPS means and fooled us with their "marketing". AI isn't there yet.
I mean, kinda. Most people are playing with DLSS or FSR at this point. I would say some of this stuff is already here.
Honestly, you're partially right. I don't believe that Fake frames are better than traditionally rendered ones* but I do think for newer AAA games (even some smaller AA games) native rendering will die with image-reconstruction techniques such as DLSS. That is no longer native rendering if you have a lower input res than your output res, since now an AI based algorithm/model will fill in the missing pixels. Frame generation still has a long way to go, but if Nvidia or someone masters async frame reprojection (including async space warp) which resolves the "perceived" latency of generated frames, then yeah. It could very well replace most traditionally rendered frames.
Wrong from the beginning: higher resolutions don't affect FPS exponentially in traditional raster gaming. For example, 4K is exactly 4x the pixel count of 1080p, but in reality it does not perform 4x worse. And no amount of Reflex 2 will eliminate the visual artifacts that come with multi frame generation. If all this were such easy and free performance, more people would be using frame gen, which they are not, even though it's already in a lot of games. I pretty much never use it because the latency is unbearable and the visual artifacts are not worth it, and it also comes with a performance hit to real FPS - and that's not even multi frame gen.
This is a good point. I shouldn't have stated that it gets more difficult exponentially. I think some visual artifacting will be something we have to deal with in this era. But you know, traditional rendering has its visual artifacts too.
As for latency, we'll see how much it can be improved and cut down, but the sky is obviously the limit when it comes to this AI stuff; at the moment it doesn't seem to have hit any type of wall. All those massive multi-trillion-dollar innovations and improvements get trickled down into gaming. We can only expect these things to get massively better in the coming 1-2 years.
maybe the developers should OPTIMIZE their crappy games too
More time optimizing means less time making games. I want the devs making the game, tbh.
Native rendering is not over, simply because of the insane power draw that happens in the background. You may not care about the electricity bill at the end of the month, but I care. Fake frames and supersampling all draw 20-30% more power in my testing with a 6700 XT + 5600X. I could happily enjoy a game for more hours with less power than play for less time with more power consumption. And no one seems to talk about power draw in their narratives.
Ignoring that Nvidia's cards are getting more and more power efficient.
Why would something like AI upscaling, which takes a lower resolution and upscales it to a higher one, take more power than the native equivalent?
@@Soulbreakergx I may be seeing this with AMD specifically; I want someone else to try it and check the power draw figures.
@@Neoprenesiren Efficiency is not the issue; if you cram in more processing units, things will become efficient. What I was suggesting is that these were my findings, and obviously AMD does not have dedicated hardware for the job, not as much as Nvidia, so see for yourself. You may be wasting power in the background by simply adding more workload for not that big of a gain in visuals.
Nah, frame gen is super gross; the disconnect between what my eyes are seeing and the inputs I'm making makes games feel horrible.
Only light upscaling is any good from these AI technologies.
I think a lot of things are going to get better.
native better
Imo a game with 2x frame gen @200fps would feel the same as a game natively running at 100 fps. Basically less responsiveness.
I mean, I think the point is you're not getting those frame rates with those graphical settings. If there were a good way to get all the shiny features at those frame rates, that would be cool, but it's just not the reality. Now, though, you can at least get the motion fluidity.
This is acceptable only if we get a new advancement in gaming graphics, something out of the ordinary that would actually benefit from the fake frames for the game to be playable.
We will
ua-cam.com/video/5KRxyvdjpVU/v-deo.htmlsi=ISyXMO-7u_dF9aSb
I suspect most of the backlash is due to hardware perf/dollar no longer scaling like it used to, while requirements for newer games just keep going up. In addition, there are obvious teething issues with DLSS, frame gen, ray tracing, and all the neural rendering tech.
Rasterized gaming is extremely well optimized and has been perfected for decades with all kinds of resource-intensive hacks, whereas RT and neural rendering in real-time computer graphics are still in their infancy. I can't wait to see where the NVIDIA DLSS transformer models and ray tracing are just 5 years from now. Fingers crossed Witcher IV will give us a sneak peek into what the future of game graphics will be like.
Yeah, I am excited for the future as well. So many amazing things are coming down the line.
@ Me too. This stuff is going to be insane. Also check out the deep dives that just went up. Cool stuff: lots of details on DLSS 4, how the Blackwell architecture works, and more.
Highly recommend the Techpowerup article
I hate playing with DLSS etc. The quality is horrible and it makes me feel like I need glasses. I'd rather have an optimized game with less "realistic" graphics than a game with "realistic" graphics that requires DLSS to run half decently.
What resolution are you playing at, and what frame rate are you getting natively?
@@Soulbreakergx 1440p with a 165Hz refresh rate. My FPS varies in games; for example, in RDR2, I get around 60-ish FPS (without DLSS). The game looks much sharper without artifacts, even if I play it at 1080p. Yes, I know the DLSS version in that game is outdated, but this applies to all the games with DLSS I’ve played.
Even in War Thunder, which uses DLSS v3.7.20, I still would prefer putting the game at its absolute lowest settings to avoid the blurry mess that DLSS creates. Without DLSS, I get around 200-ish FPS on relatively high settings, so there’s no real need for me to use it.
Hunt: Showdown had a huge update last year that essentially forced me to use DLSS to get it to run well, and now I feel like a blind mole rat whenever I play that game...
I've played RDR2 at 4K (so a little higher than what you're using) with an updated DLSS DLL, and I dunno, it wasn't that blurry for me. Granted, I use some ReShade and other things to enhance it, and DLAA is likely the better choice. Is there a sharpness slider for RDR2?
People who are calling FG fake frames haven't even used it 😂
I use it, and the smoothness gain really is like native FPS.
Yeah, there's always the question of what hardware people are using and whether that hardware can actually utilize the tools they're discussing in a valuable way.
I'm hoping the transformer model of DLSS does a better job at upscaling at 1080p; a solid 1080p DLSS Performance mode would help so many lower-end graphics cards.
All the post-processing (DLSS, FG, etc.) is done from real rendered frames. If you can render real frames under 5ms you are fine, and it can be beneficial in some cases; otherwise you are screwed... it is simple.
60-80ms latency is what you will have. It's unplayable. It's like you are playing at 12-20 fps in terms of how fast you can spot your enemy on screen... And the worst part is trying to lock onto a target with such latency.
Stuff like reflex 2 might address that problem
@@okolenmi7511 Imagine you play a game on a 4K 240Hz display. The game is running at 30 fps (33ms) and you want 144Hz (7ms). So you use DLSS to render at 1080p and upscale; the game is now running at 120 fps (8ms, plus 1-2ms for DLSS). That's nice, but at the cost of image quality. Now you want the full 240, so you add frame gen x1 (+1-2ms). Now you have 4K 240 at ~12ms, so your mouse latency will feel like a 90-100Hz monitor while you see 240 fps motion clarity, at the cost of image quality (it will look like you're playing on a console, give or take :). In the end it's up to you what you want and what you are willing to sacrifice.
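The same back-of-the-envelope numbers as the comment above, in code form (the 1-2ms DLSS and frame-gen overheads are the commenter's rough estimates, not measured values):

```python
# Rough walkthrough: 4K native at 30 fps; DLSS upscaling from 1080p lifts the
# real frame rate to ~120 fps; frame generation then shows 240 fps.
# Overhead figures are assumptions taken from the comment, not benchmarks.

def ms(fps: float) -> float:
    return 1000.0 / fps

native_4k_ms = ms(30)        # ~33.3 ms per real frame at 4K native
dlss_ms = ms(120) + 1.5      # ~8.3 ms render at 1080p + assumed DLSS cost
framegen_ms = dlss_ms + 1.5  # assumed frame-gen overhead on each real frame
displayed_fps = 240          # 120 real fps shown as 240 via frame generation

print(f"4K native:     ~{native_4k_ms:.1f} ms per frame (30 fps)")
print(f"DLSS upscaled: ~{dlss_ms:.1f} ms per real frame (~120 fps)")
print(f"+ frame gen:   ~{framegen_ms:.1f} ms felt latency, {displayed_fps} fps shown")
# ~10-12 ms of felt latency is in the ballpark of a 90-100 Hz monitor, which is
# the trade-off described above.
```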
@@Soulbreakergx No it can't; Reflex is for sync: "Reflex Low Latency reduces PC latency through precise synchronization of rendering across GPU and CPU." It has nothing to do with rendering times, i.e. the latency between (real) frames.
Except it doesn't work in VR, which needs raw rasterizing computational power rather than DLSS 4 MFG.
You can use AI upscaling in VR. It's a really great use case since VR is really tough to run.
I think for something like Half-Life: Alyx you have to inject FSR3 into it. Dunno about frame generation, I haven't tried it in VR.
Fake frames are good if they can play god and predict the future... at high mouse DPI in twitchy, competitive shooter-type games, if you're talking about 26 fps (yuck) fake-framed to 240+, then at the 20+ms of actual frametime at 26 fps your mouse could literally be centimeters from where you're aiming... can fake frames play god and see that far? Lolololol
Edit: Oh yeah, asynchronous time warping, lolol. I guess the drivers are going to magically inject themselves into the runtime game code and modify game physics or anything that is not caused directly by the player, nice... or better yet, they can magically force the game to load objects in, since, you know, at frame 1 the game hasn't loaded the object, only at frame 2, so the driver will fake a frame and modify it according to the latest user input update. But alas, you can't modify what is not there... so maybe Nvidia has this figured out and will inject itself into the game code to force, I don't know, a car into frame? Nice
Yeah, frame generation isn't great for anything below 50-ish fps. I think the dream is to take a game running at 30 and make it 60, but the results are horrible atm. I talk about Black Myth: Wukong's attempt at doing this on the console here.
ua-cam.com/video/9AWBkKGHzR8/v-deo.html
Asynchronous time warping, yeah, Nvidia is going for it:
ua-cam.com/video/zpDxo2m6Sko/v-deo.htmlsi=MdvfnAxPDpeXAfDq
2kliksphilip has a great video on it as well.
ua-cam.com/video/f8piCZz0p-Y/v-deo.html
This doesn't matter, because these GPUs get insane frames in competitive FPS games. CS is probably 600+ frames on a 5060? I still use a 1070 on OW and get 240fps lmao
There's a reason pro players disable visual clutter like textures, ray traced lighting, etc.
@@gunnarowens Oh, absolutely, this doesn't matter for competitive eSports-type games, but do you know of any games that require you to be quite precise in your inputs and how your character moves, and in its placement relative to your target/enemy? All souls-like games should fall into this category; what about Monster Hunter-type games? What if those have ray tracing and it drags the frame rate down? Oh, turn off ray tracing? Then what's the point? Nvidia even demoes it and encourages developers and users to use it (yeah, yeah, use it as a point to sell GPUs).
@@MikahRiveria I guarantee you people will be able to do 1hp no-hit runs even with frame gen on (because they were doing this when it was 30-60fps). Game devs can increase tolerances for things like parries by 20ms if people absolutely need 4K RT to enjoy a game.
99% of people actually playing the game would prefer 240 fps rather than 60 with 20ms less latency; not everyone is doing crazy competitive things in single-player games. A parry window in Elden Ring is 200+ ms, and you mentioned "predicting the future", but that's literally what you do in these souls games. It's not like an attack comes out of nowhere; you memorize it from mechanics that could be one or two seconds before.
@@gunnarowens The parry window is 200ms? Oh sure, wonder why that is. Maybe the OS, drivers, mouse, gamepad (wireless especially), and monitor all have 0 latency, no? So maybe, just maybe, after all that latency is added you're within the 200ms window, but what happens when you add 30ms on top of that due to frame gen? Whoops, over 200ms. "99% of users prefer 240fps rather than a 20ms decrease in latency": wow, such a proud, arrogant statement. So why don't e-sports players use frame gen to push frame rates higher? Oh right, because you absolutely can feel the latency... Sure, game devs can add another 30ms offset to help with frame gen, or 100ms extra for easy mode, 50ms extra for medium, etc., but that is a band-aid for the frame gen latency issue.
A human predicting the future in this game is latency-free because you perform it in your head. "It's not like an attack comes out of nowhere" lolol, I'm pretty sure memorizing attack patterns is significantly easier than predicting your future input. Don't tell me you sit the same, move the same, hands solid like steel, etc.? Lol
Now, after frame gen, let's add another processing step to the pipeline to predict the future, shall we? See, I'm sure it costs 0 latency...
Edit: "99% of people actually playing the game would prefer 240 fps rather than 60 with 20ms less latency": this statement is weird, because assuming all other latency is the same and users use latency-reducing tech like Reflex, a 20ms drop in latency brings your frame rate up to over 200fps, since 60 fps is ~16.7ms per frame... wut? Of course people would prefer that over frame gen lmao
Fake frames feel so bad, man. I can't play with it, and I'm pretty resistant to low frame rates; in the past when I had a bad PC I used to play at 20~40 fps, so it's not a problem for me to play at 30~60 fps. But I like having higher frame rates, so I tested frame gen, and it feels horrible. Why did I test it if I'm fine with 20? Simple: frame drops. Nowadays you don't get a smooth 20fps, you get 5~20~40 frame drops every single time.
Upscaling is good, don't get me wrong, I like upscaling, but fake frames? No!
Frame generation isn't intended for lower frame rates; I actually have a video on that misuse. If you have bad frame rates on a lower-end PC, AI upscaling is the thing you want to go for in order to improve performance.
You miss the entire point of people's issue with frame generation. DLSS and TAA look like ass. That is why people hate where technology is going. I'd rather keep my lower FPS and have better graphics than a muddy mess.
Do you think playing a maxed-out Wukong at 27 fps is better than playing it at 240+?
With this gamedev road chosen, we will encounter games that are gonna run at 30 to 60 fps with ALL of it on (DLSS, frame generation, and whatever other silly stuff Nvidia invents).
Yup, the only thing this REALLY does (in the vast majority of cases) is make developers optimize less; where's the need if 70% of the market is expected to use DLSS to aid performance? Sad times.
Likely not; frame generation doesn't really work too well at lower frame rates. I think the future is a decoupling of user input and frame rate, mediated by AI, while the frame rate matches whatever your screen's refresh rate is.
I mean why would you want them to focus on optimization if there is an actual good solution? Don't you want them to focus on actually making the game?
More time optimizing equals less game.
@@Soulbreakergx But it's not an actual 'good' solution though; it just lowers the bar for what the 'default' performance of a game should be. Check the system requirements on games: they state that the settings used to determine the requirements were achieved with DLSS or FSR.
I think you're naive if you think the vast majority of developers will put those saved resources to good use; the majority of them are beholden to public shareholders and upper management. In my mind it just saves the amount you need to invest into a game for you to 'finish' developing it and release it.
@ Default performance is always going to improve; that's just how tech progresses. The bar will continue to rise. DLSS offers a solution for lower-end cards instead of simply saying, "You can't run this game." Optimization has its limits; at some point, you're trading off visual features.
So, the question becomes: what trade-offs are you willing to make to gain extra performance at the very minimum requirements level? Alternatively, why not enable DLSS and maintain better visuals? (For anything else, you can always turn down the settings.)
Is your voice AI generated? It's very start-stoppy, with inconsistent breaks.
Sorry about that, working on it. It's my real voice.
@@Soulbreakergx I see, man. I don't mean it offensively, but as a viewer it is a bit jarring. I hope you improve in all aspects as you wish. Also, nice video.
@mattss7x thank you 🙏
this is the future
Nvidia is always a step ahead. That's why they're worth trillions of dollars and Jensen is on top of the world. He earned the black leather jacket and his rockstar status in Silicon Valley.
Ignorance is bliss...
Great movie, bad game! End of story.
Nvidia CEO's secret YT account. I'd also add that the graphics of games since the introduction of AI are worse.
Lol I wish. Where's that Nvidia money 😂
Its just AI bullshit.
Can you elaborate? Is stuff like motion fluidity not important, or other things, like the DLSS performance improvement, not valuable? Thanks for the input 🙏
@@Soulbreakergx I think the problem here is understanding how a good technology is developed. A good technology that really has a positive impact is, 99% of the time, built from first principles. In practical terms, that means the foundations of the new technology must rest on things not derived from simple and cheap deduction (or inference, in the case of generative AI), but on principles based on physics and the sciences. The approach of multi frame generation and upscalers is instead a "cheap" way to try to solve a problem that they created with the premature introduction of a premature technology (ray tracing). They "try to solve", not "solve". There are some real applications for generative AI, but in my opinion there is an abuse of this technology; they want to put it in every product. In movies CG is useful, but not when used without a real purpose. Often a good practical effect gives a better result; it's just less cheap (often more to think through than to pay for). So we need to identify what the real benefits of this Nvidia technology are. From my perspective, DLSS is terrific when used to run new games on old hardware. The pushing of this technology as the new and only way, however, is only marketing, because their business is to sell chips. Obviously Intel and AMD need to follow because, like it or not, Nvidia is in control of the market at the moment. Like many technologies, when the hype settles, gen AI will be used only where it is really necessary and useful.
Conclusion: DLSS IS NOT a performance improvement; it's like putting you in a slow car, asking you to open the window, and blowing fast air in your face, pretending you're going fast. You can do that, but does it make sense?
DLSS is multiple parts.
DLSS as an upscaler is massively performant.
Frame generation is about motion fluidity and smoothing out animations.
Now if you take the whole thing as a package, it's massively more performant than anything else due to the upscaling, and it's better than other upscaling solutions in both quality and performance.
Just buy AMD, Nvidia tricks you.
Buy Intel 😂
@@Soulbreakergx 🤣😂
no
Yes lol 🤣
This sucks
Omg....