Jedi Survivor has abysmal optimization that still isn't fixed. Doesn't matter if you have an i9 and 4090, there will always be stuttering in the Koboh village area. Really put me off the game to be honest.
My first thought was that this feels much more like a software issue than a hardware issue. It doesn't matter how expensive your hardware is if the software doesn't utilize it efficiently. Why are developers making games that don't run correctly on top-of-the-line current hardware? If we aren't supposed to run at 'Epic' settings, make it an opt-in toggle to enable 'Shenanigans' level settings or whatever. Blaming the hardware just seems silly at this point. Edit: I want to make it clear that I'm happy that games are slightly future-proof. But either the settings aren't used correctly or the gamer culture of using the highest settings is making things seem ridiculous. Developers shouldn't be designing their games with these kinds of bottlenecks, in my very unexpert opinion.
@@Tiasung I go beyond 1% lows, I just target frametimes. If I run 8-player mode with bots in Smash Ultimate on Switch emulation, the frametime graph is perfectly flat at 60fps and 16.6ms without stutters while the main CPU thread is bottlenecked at 100%. After 12th gen Alder Lake, there's no reason to upgrade since IPC stagnated. Overclocking RAM or adding more cache is artificially pushing fundamentally flawed chips (now that Arrow Lake will remove the Ring Bus, expect a bigger problem). Also, a friend has a Ryzen 9 5900X, stutters are non-existent and the Windows experience is smoother than on Ryzen 7 chips.
As a game developer myself I would argue that we currently have insane levels of CPU performance. It boils down to poor game optimization. On the PC side most gamers are literally brute forcing their way to acceptable performance levels, even though the core problem lies within the "optimization" efforts during development. The question is, what happens when you buy "the best of the best" and STILL can't get good frame delivery? The best strategy is actually to call out the companies, letting them know that 30fps is not an acceptable performance target on today's hardware (including current-gen consoles)...
Yep. Way too much of the consumer market seems bound and determined to point anywhere but at the developers and publishers with regard to these issues. I've been yelled at by people on forums for making the statement that adjusting simple game mechanics (things like changing what time of day event triggers fire on, for example) should not take literal months to change a few values, should be EXCEEDINGLY simple and easy, and that the only way it takes that long is if the underlying code for the whole structure is haphazard. Even when I explicitly don't blame the devs, but instead their time crunch, I still get people jumping down my throat claiming it's more complicated than it is. It may be more complicated, but it really shouldn't be. I would be shocked if most of the current western AAA publishers had a single team among them that could build something like a blackjack game without massive performance bloat from poor code. That tends to be what happens when you push your team to work with pre-established code structures instead of letting them make new ones better suited to the task, and that's before touching on the push from publishers to replace long-term workers with per-project contracted temps.
@@jtnachos16 Yeah, I mean, I can go anywhere from being super efficient at coding to basically "taking forever" depending on what the underlying architecture, structure and documentation look like. In many cases, deadlines, code crunch etc. result in developers coding fast but not necessarily adhering to core architectural principles or forward-thinking structure. Further down the line other developers will build features on top of that already-shaky foundation. That's when things go south in terms of optimization, because once you want to optimize the foundation at a later stage, everything built on top of it will just come crashing down... I am not a master coder myself, but I often get praised for creating efficient solutions. And it's really not about writing the most efficient algorithms all the time, but rather about creating good foundations and only running code when it is absolutely necessary. Another good practice is to question how often certain features need to be polled, and at what accuracy level; add those things into your workflow and you end up with code that is fairly optimized. A perfect example is Cyberpunk on the Steam Deck. CPU performance is great when walking around, but once you start running, you can tell that a lot of "interactive systems" are being triggered, hammering the CPU: things like the crowd interaction system, resulting in a lot of ray casting and dynamic animation blending depending on player movement. A typical example of code that isn't actually running all of the time. Personally I wish I could disable those systems on the Steam Deck and instead just slide past NPCs without any added interactivity. It would look less immersive, but if we are being honest with ourselves, the NPCs are just window dressing; they are not part of the core gameplay loop and I'd much rather take the performance/battery life. :)
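A minimal sketch of the "only run it when needed, and at a reduced polling rate" idea described above, assuming a hypothetical per-frame update loop; the class name, tick interval and conditions are invented purely for illustration:

```python
# Hypothetical sketch: run an expensive system at a reduced tick rate,
# and only when its results can actually be seen by the player.
class CrowdInteraction:
    TICK_INTERVAL = 0.25  # seconds between updates; a tunable, made-up value

    def __init__(self):
        self._accumulator = 0.0

    def update(self, dt, player_is_moving, npcs_nearby):
        # Skip the work entirely when it can't matter this frame.
        if not player_is_moving or not npcs_nearby:
            return
        self._accumulator += dt
        if self._accumulator < self.TICK_INTERVAL:
            return
        self._accumulator = 0.0
        self.expensive_raycasts_and_blending()

    def expensive_raycasts_and_blending(self):
        pass  # placeholder for the costly per-NPC raycasts/animation blending
```

Called once per frame with the frame's delta time, this caps the expensive work at roughly four updates per second instead of once per frame.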
Mate, thanks a lot for your comment, we need more people like you. Today we see how disastrous the situation is. For example, the UE5 engine is a total mess. Every game (mostly) has traversal stutters, and it has become normal nowadays. Mostly we are playing half-baked products. Look at Silent Hill 2 Remake: an amazing game, but with a crippling amount of stutters.
The problem is that CPUs have evolved significantly in terms of multi-threaded performance, but not single-threaded performance. If you compare Raptor Lake to a Core 2 Duo E8000, IPC has only about doubled in 15 years. We have high core counts and much higher clock speeds, but the architectures themselves are not that amazing. And games still rely mostly on single-threaded performance. A 6C/12T CPU from the newest generation always does better than any CPU from the previous generation (putting aside X3D chips). Developers always say it's hard to parallelize things in gaming, and Unreal Engine is definitely the biggest culprit in this regard. And considering everyone is moving to UE5, it's not a good situation.
The problem with these situations is that there's nothing you can do about a CPU bottleneck other than getting faster RAM or replacing the CPU. Turning down CPU-intensive settings rarely helps; in a lot of cases, if you lower, say, the draw distance, the CPU still can't deliver the extra fps anyway.
@@laitinlok1 That, and buying Intel (Ring Bus) with usually higher IPC and more cores, OR a Ryzen 9 7900X and beyond. I have a 12700K with DDR5-5200 and I easily get 80fps in Hogsmeade in Hogwarts Legacy. The 7800X3D is not that fast, it's still 8000 points in 3DMark, while my 12700K reaches 10000 points. I've always noticed that X3D chips look nice on the averages and max fps but the frametimes are kinda bad (?).
@@saricubra2867 Vulkan layers only improve performance in very specific and limited ways and are not able to fix a game that is overly broken. 3DMark heavily favors cores, much more than games do, so it's not a great data point. The 7800X3D does great with frametimes, often even better than something like a 12900K. Smaller cache and the growing ring bus latency are actually sources of frametime issues in their own right. Intel does not have an IPC lead; they have been trying to make up for it with higher clocks. I have a 7800X3D and also get excellent performance in Hogsmeade. Almost everything you said is either misleading or outright wrong, and I would encourage you to be more objective and thorough in your research in the future.
No amount of CPU processing power or GPU processing power will ever be able to overcome bad software. Hardware is software reliant and is essentially complex sand castles without software to tell it what to do.
@@roklaca3138 PCMR is a meme for children and basement dwellers. Your average pc gamer doesn't even know what's in his pc let alone knowing or caring about bottlenecks or software issues as long as it makes the game go.
@@Owen-np3wf But they feel the performance drop on lesser CPUs, you cannot deny that. No one can prove to me you need an $800 CPU to get high frames... the Doom series proved that.
@@roklaca3138 My 12700K easily pulls above 80fps in Hogwarts Legacy's Hogsmeade. With 12 cores and 20 threads, a monolithic Ring Bus and high IPC, where's the stuttering?
The big problem is that when you have to optimize for the lowest common denominator, hardware better than that tends to suffer. It shouldn't, since better hardware should run the game better, but depending on how the optimization is done it can. Now, how did people on PC get past this before? They just had hardware so far above the lowest that it didn't matter. I do think always staying above 60fps is a good target to aim for, as you have to draw the line somewhere. Yes, some consumers will have monitors able to do more, but most, knowingly or unknowingly, will have a 60Hz monitor, or won't have set it in settings to run above 60Hz even if it can. So making the game look worse so a relatively few people can run it at 120fps is not a good model. Having it run at 60fps in action scenes on the majority of target systems is the model to go with. Going by the Steam September hardware survey, most only have a 1080p monitor, at 55%, with 1440p at 20%; which do you think will be the target, knowing that a 1440p monitor can just run 1080p with most people not noticing? The majority will target 1080p at 60fps, as it is still the most common by a lot. Do you have that? As you are here, most likely not, and neither do I, but we are the minority, not the majority, and publishers want to sell as many copies as possible, so they optimize for the majority of systems.
@@yumri4 The reality is that the vast majority of PC gamers are on hardware at about the RTX 3060/RX 6600 XT level with mid-range CPUs. That's what developers should optimize games for, or else we will just keep playing older games.
@@bjarnis That has been the case for, like, forever. The 60-class cards have sold the most. I guess devs expect Nvidia to offer increasingly faster 60-class cards, and instead we got stagnation.
@@konstantinlozev2272 What devs don't understand is that most of us don't care about "pretty graphics"; gameplay and performance are what matter. Native 1080p is still king and it doesn't matter how much "DLSS, FSR and frame gen" they advertise, we just want a crystal clear image with no artifacts when running around.
You can't optimize the RT API as a game dev. You work with what you have, and it's garbage. Turn on RT and expect stutter, or MUCH lower lows, where to not notice the stutter you need a locked fps near those lows. A better CPU will give you better lows to work from.
People like you have brainworms. You guys are always in the comments... screaming that everything is unoptimized while knowing nothing about game development. Y'all expect Cyberpunk at 4K with full PT to run on an AMD dual core and a GTX 1070... it doesn't work like that.
*"CPU Usage" confuses people...* Many people think that if a CPU shows "50%" usage you can't be CPU bottlenecked, because they simply don't understand enough about computers. You always go by "GPU usage" because, generally speaking, GPU code is extremely parallel, so close to 100% usage, to oversimplify, means the GPU is fully saturated... and if it shows 50% (at maximum GPU frequency) then you're using about HALF of its potential and are thus bottlenecked by the CPU (assuming no software FPS cap is causing this). So... if you had a game with NO multithreaded code that could only run on one core at a time, you could only use (1/x)*100% of the CPU. So if it was a 4c/4t CPU, in this scenario you could only use 25% of the CPU yet could still be bottlenecking the graphics card. *TLDR* GPU usage near 100% means a GPU bottleneck. GPU usage below 100% means a CPU bottleneck. CPU usage near 100% means a CPU bottleneck. CPU usage below 100% can't tell you where the bottleneck is.
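As a rough illustration of that TLDR, here is a toy classifier; the 95% threshold, the per-thread readings and the function itself are assumptions for the sake of the example, not a standard, and real tools like PresentMon give much better signals:

```python
def classify_bottleneck(gpu_usage, per_thread_cpu_usage, fps_cap_active=False):
    """Toy heuristic based on the rule of thumb above (all values in percent)."""
    if fps_cap_active:
        return "capped: neither component needs to be saturated"
    if gpu_usage >= 95:
        return "GPU bound"
    # GPU is waiting on something; a single maxed CPU thread is the usual suspect,
    # even though *overall* CPU usage can look low.
    if max(per_thread_cpu_usage) >= 95:
        return "CPU bound (likely one saturated thread)"
    return "not GPU bound; could be CPU, RAM latency, I/O, or an engine limit"

print(classify_bottleneck(55, [98, 40, 35, 20, 15, 10, 8, 5]))  # -> CPU bound
```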
That last one, CPU usage below 100%, usually means it's a latency (usually memory) bottleneck somewhere in the chain. That's why cranking memory latency lower makes the fps number go up despite bandwidth staying the same. That's also why the X3D V-Cache AMD chips are "the best gaming CPUs": they have enough L3 cache to lower average memory latency by a meaningful amount.
@@LiveType Shaders. A lot of the more modern games have to compile shaders. That's why we see sudden judders as he's playing the game. It's not really a good take on CPU usage here.
@@ArdaSReal I'm perfectly aware that there's more to it than that if you want to get nerdy with frame pacing and dispatch calls. But the net result is about the same. PresentMon is a useful tool for developers as it allows them to see exactly what is happening, and when, during the entire frame rendering. But it doesn't help me much to know that the majority of the waiting happens between the geometry upload and the shader code dispatch.
I'm not saying it isn't sad, but it can be somewhat explained by game devs targeting 30fps on the PS5. That means a CPU with double the gaming performance will only get you to around 60fps.
@JayzBeerz And yet the fastest gaming CPU is an 8-core... It's not the core count that matters. Game engines only have so much parallelization that can be done. Certainly having enough cores is important, but 8 cores is very much enough. Typically either the main thread or the render thread is the one holding everything else back.
@@JayzBeerz Do you even know what you're talking about, mate? Dividing the game loop between more threads is just not possible most of the time. You can even see that the CPU usage during parts where there is a CPU bottleneck is barely hitting 50%; half of the cores are doing nothing. The 7800X3D is the best gaming CPU there is right now because it has lots of cache, which actually improves performance by making the CPU not need to fetch data from RAM as often, because it can keep more of it in its own cache, which is 40+ times faster than fetching from RAM. You can even see the 7900X3D is not as fast as the 7800X3D, simply because the 7900X3D has only six 3D V-Cache enabled cores, while the 7800X3D has all of them, which is eight.
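A rough way to see why a bigger L3 helps is the standard average-memory-access-time formula; the latency numbers below are made up purely for illustration (real figures vary a lot by chip and workload):

```python
def amat(l3_hit_ns, l3_miss_rate, dram_penalty_ns):
    """Average memory access time = hit time + miss rate * miss penalty."""
    return l3_hit_ns + l3_miss_rate * dram_penalty_ns

# Hypothetical numbers: a bigger cache is slightly slower to hit,
# but misses to DRAM far less often, so the average latency still drops.
small_cache = amat(l3_hit_ns=10, l3_miss_rate=0.20, dram_penalty_ns=70)  # 24.0 ns
big_cache   = amat(l3_hit_ns=12, l3_miss_rate=0.08, dram_penalty_ns=70)  # 17.6 ns
print(small_cache, big_cache)
```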
@@Gornius My 13900K is faster in Hogwarts Legacy. It is OC'd and the DDR5 has tight timings at 7200 MT/s CL32. So there's more to performance than just more cache. Some programs prefer core speed, and some games can't fit all of their data into the 7800X3D's cache and have to go fetch it from RAM. In those scenarios it's easy to see a 13th/14th gen CPU be faster with its better DDR5 support.
@@Gornius Aren't half the cores doing nothing because the CPU is parking those cores because they don't have access to the strap on cache? And wasn't the lack of handoff to the right cores a huge bottleneck for a while because CPU cache should be invisible to all software and AMD just dropped the ball on having the OS use the right cores?
There's nothing really happening in this scene to justify the 7800X3D dropping to 60fps. CPU bottlenecks can happen when many characters are spawned, many objects are calculating physics, many AIs are calculating paths, etc. But this game area is literally empty, and it looks like it's doing heavy, unnecessary calculations all the time on the main/game thread, which affects fps and smoothness so much. It's really a game problem here, not the CPU. You might see drops to 90 fps, for example, in this game on a CPU from 2034, and that wouldn't mean we need a better CPU to be released... I presume in this particular scene all the interiors and characters inside buildings are spawned at the same time with zero optimization applied to them.
I don't like moving the issue to hardware. The devs designed these games with current and past hardware in mind and shouldn't expect consumers to need future hardware to make their game run correctly. I just find this to be completely ridiculous. Why are we, the consumers, moving the goalposts for a game to run well 2, 3, 4+ years after release? They DESIGNED THIS FOR CONSOLES and it doesn't run well on $3000 rigs. I'm just flabbergasted and feel like I'm an old man yelling at clouds with how this seems to be a perpetual and immovable issue. When the game was in development, the hardware they had was a generation or two behind what's available when it actually releases, so HOW does it not run well on release-day new hardware?
We don't?? I've heard that the CPU usage percentage is actually not meaningful, because it's overall usage across all cores. Games are (probably) using 99% of a few cores and not actually utilizing all of them. Maybe engines don't work like that? So if this is correct, we actually do need faster CPUs.
11:58 I don't think you should be disappointed with AMD or Intel, because they can't just magically release a 7GHz cpu then 8, 9, 10GHz and so on. It's the game developers that you should be disappointed with, for designing a game that can't go beyond 60fps on the best gaming CPU on the market.
It's not about general disappointment. It's just sad that a new CPU generation brings almost no performance improvements to the table. Zen 5 and Arrow Lake are mainly about power efficiency and that's not enough.
It might also be game engines and device drivers. Would be interesting to see which graphics settings most affect CPU usage. (Edit: also operating system stuff like the scheduler...)
Until Sony and Microsoft release consoles that are able to run these AAA CPU-bottlenecked games smoothly at a cap of 60fps or higher, I think we will always see CPUs bottlenecking our systems.
@@gerooq *"Until Sony and Microsoft release consoles that are able to run these AAA CPU-bottlenecked games smoothly at a cap of 60fps or higher, I think we will always see CPUs bottlenecking our systems"* That's incorrect. A 4770K from 2013 is at least 25% more powerful than an Xbox Series X. A modern CPU is multiples faster than a 4770K. The problem is: 1. Ever since the Great Consolization of 2008, AAA engines/games are coded for console HW. Console architecture is fundamentally different from PC architecture --- yes, even the PS5 and Xbox Series X/S. Consoles use a shared memory architecture and further have _much_ less RAM than a mid-range gaming PC, to say nothing of the top end. Series X: 13GB (shared between CPU & GPU --- 3GB is reserved for system processes). Mid-range PC: 28GB (16GB RAM + 12GB VRAM). Top-end PC: 56GB+ (32GB+ RAM + 24GB VRAM). Disk/SSD streaming and DirectStorage are designed to compensate for a lack of RAM. PCs don't need this. A PC needs games that are coded to properly use the large amounts of _separate_ RAM and VRAM. Talking about optimization is a moot point, because the games aren't even properly _coded_ to begin with. 2. As many others have touched upon, modern studios are very poorly managed. So now not even the _console_ code is properly written, to say nothing of optimization. Until companies begin hiring based upon merit again instead of "social justice" agendas, this ugly situation will continue to fester.
What this also shows is game developers need to change how they're making games because there's no way that these games should be putting this sort of strain on the fastest current gaming CPU.
I'm sorry, I'm not disagreeing with your testing and results but this speaks more to poor game optimization than just a CPU bottleneck. Some games are just designed poorly and will stutter no matter how powerful the hardware is. I've never been a fan of just throwing more money or more power at a problem. I remember the Nvidia Fermi days of GPUs.
*"There will always be some sort of bottleneck somewhere."* That's not true. If you use VSync, as you should, you will rarely have either the CPU or GPU maxed. The goal is to have a stable, non-fluctuating framerate --- the highest stable, non-fluctuating framerate you can achieve given your HW and game settings. You don't ever want either your CPU or GPU to be maxed out (i.e. bottlenecking). That would mean there is no headroom, and thus there cannot be a stable framerate.
So basically, to mitigate the incompetence of today's game developers at optimizing performance, we should upgrade even a beast like the 7800X3D? This is crazy. I'm going to play older games instead if this is the trend. EDIT: typos
I don't know why you wouldn't want to play older games as a matter of course! I play games all the way back to the 2D Adventure games of the '90s and '80s, and further back to the console/arcade games and Text Adventures of the '80s and '70s. But the AAA games from ~1998 to 2011 are some of the best video games ever made. _Especially_ the AAA PC games from ~2003 to 2007. The reason I say this is that unlike some older games which are beloved by many, but do not stand the test of time due to gameplay and controls that were in a state of experimentation, these games are older graphically but absolutely hold up to modern scrutiny, and in many ways are _better_ than modern games --- atmosphere, characterization, and most importantly gameplay.
There's no valid reason these games don't get a higher framerate with such a good CPU. No huge number of physics items being calculated, no enormous armies that need individual calculation like in Total War, yet the framerate can only go up to 60-ish or 100-ish FPS. That's absurd.
With shoddy optimization becoming more frequent, and many open world games just crushing CPUs (especially UE games), CPU is becoming just as vital as GPU for high end gaming
@@JoaoBatista-yq4ml Yeah. I'm on a 5800X3D still and it's mostly great, but I'm running into more scenarios where it gets stomped by terrible optimization or overly ambitious scope. When I upgrade, it will be to the best X3D chip available (or the Intel equivalent).
You don't have to buy a badly optimized game though. If people aren't buying the game, then they'll have to optimize better. More games are pushing CPUs harder now, but you shouldn't feel pressured to shell out money for a new CPU because of an unoptimized game. You'll only be encouraging devs to keep making games that way.
I once heard Tech from the channel Teach Deal explain things simply for people. It was something like this: your CPU is creating/generating all the structures you see in games (all floors, walls, objects and characters); the CPU gives all of them their structure, and your GPU is just painting everything to make it look nice. So, if your GPU is NOT at 100% usage, it means you are bottlenecked by something else in your system (CPU/RAM/SSD/software); usually it's your CPU that can't generate structures fast enough, and your GPU is just waiting for them to get "painted"...
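That analogy loosely matches how a frame loop is split between the two; this is a conceptual sketch only, with invented class and method names, not any real engine's API:

```python
# Conceptual sketch: the CPU decides what exists and records draw calls,
# the GPU "paints" whatever the CPU hands over.
class World:
    def run_gameplay_logic(self):
        pass  # input, AI, physics, animation (CPU work)

    def build_draw_calls(self):
        return ["draw floor", "draw wall", "draw character"]  # cull/sort/record (CPU work)

class GpuQueue:
    def submit(self, draw_calls):
        print(f"GPU rendering {len(draw_calls)} draw calls")  # the "painting" step

def game_frame(world, gpu_queue):
    world.run_gameplay_logic()
    draw_calls = world.build_draw_calls()
    gpu_queue.submit(draw_calls)
    # If the CPU steps above take longer than the GPU needs to render,
    # the GPU sits idle and its reported usage drops below 100%.

game_frame(World(), GpuQueue())
```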
I agree with your points but I also think this is heavily modern developers' fault for having horrible optimization. Jedi Survivor doesn't look bad by any means, but there is no reason the strongest CPU and GPU combo on the market cannot run the game at max settings at 1440p at the bare minimum of a steady 100+ fps. Time and time again we see these current gen games releasing with piss poor optimization and it's getting annoying to the point that I don't even feel like playing those games until a gen later for better performance. Hell, I barely got into Cyberpunk and GTA V so I could run them at 4K (without RT) with high FPS on my new build.
100% this. This is an optimization issue much more so than it should be a CPU issue. But I fear that this is the new normal, shoddy and horrible optimization is here to stay because they will just shrug their shoulders and say "use upscaling" or "use frame gen" or "get a faster computer" instead of paying a team of devs hundreds of thousands of dollars to optimize the code.
Yeah man. Nvidia and AMD introduced all this new tech that is shifting the work from all-GPU to more of a 50/50 load. CPUs never needed this in the past and your performance was more dependent on the GPU. Now they have lessened the load on the GPU, and that will allow them to hold back on GPU development. This is my theory. I have not seen/heard this anywhere else. If I am wrong, I am wrong.
It's because a lot of modern devs are inexperienced because studios don't want to pay experienced devs what they're worth, and they don't want to pay to train the newer devs either, so as a result, we get rushed, unoptimized slop that barely runs on the fastest hardware available. Then we get to wait for 40 patches to roll through before the game runs like it should've upon release, meanwhile some games never truly get fixed.
"No reason the strongest CPU GPU combo cannot run it max settings at 1440p 100+ fps" based on what exactly? Do you have some technical explanation or are you just pulling numbers out of nowhere based on how "good" you think the game looks? I find it weird that people who have zero experience in game development will make comments like this with such confidence.... PC gaming always has games that push past what even the current best hardware can run maxed out, it isn't a new thing....
It is very game dependent, too. In Red Dead Online I went from 100 FPS at 4K with my 7900 XT and a 7700, to 101 FPS when I swapped the 7700 out for a 7800X3D. Literally 1% with the exact same settings. That's it.
Not all games will yield the same benefits. Many older games often don't even use all of your cores. It's not uncommon for many games from the early and mid 8th gen to really only use 4 cores
RDR 2 in general is very optimized on the CPU front, even more than GTA V, so your results don't surprise me at all especially if you play at 4k. I think it's the least CPU intensive open world game I've ever played (also the fact that it can run on an old ass mechanical hard drive with zero issues is still a miracle for me).
I feel bad for you doing that... I've ended up that way in the past, where I upgraded a part only to be at the same fps. It sucks so bad; CPU and GPU are never gonna be equal... it looks like the GPU will always have the CPU bottlenecking it on high-end gear. We always seem to have problems with our PC gear...
I remember an older video you did about bottlenecking... In it you were also talking about resolutions, and it blew my mind, because I totally hadn't thought about how, since we're rendering at lower resolutions, we're actually gaming at those lower resolutions, so CPU bottlenecking would definitely be a thing. Thank you for being an American hero!
I was one of those who said it doesn’t matter at 4k and I’m really glad someone finally made a clear video about this. I see it now. But what’s confusing me is that I don’t see CPU intensive things happening in these games you demo’d. They aren’t even simulation games. Makes me think developers are used to cutting corners on CPU optimization and that this is fixable with better coding.
Thumbs up for choosing not to be ignorant. I don't know the showcased games very well, but especially in RPGs a lot of CPU-intensive work can be done in the background without you really seeing it. For example, the logic of NPCs can be very elaborate: regularly checking various variables like the player's level and various skill levels, how they relate to the variables of every single NPC, and how they are supposed to react, like aggro range and other behaviour. Also stuff like what the NPC is doing: is it just running down a scripted path, or is there a realistic simulation of what the NPC is doing, like going to and from work depending on the time of day and things like that? NPCs all have some algorithm that makes them choose their path and avoid obstacles, including the player and other NPCs. Additionally, a lot of graphics-related stuff can also be demanding on the CPU. Then you have various game assets constantly being loaded in the background, for example scripted events that trigger as soon as you cross a certain point or perform a specific action. There are tons of other things that can happen in the background without you seeing anything on screen. Without knowing the source code of a specific game, it's really impossible to tell what the exact reason for frametime spikes is.
The game is only using 1 or 2 CPU threads at the highest clocks due to the way the software had to be written! Not many games are good at multi-threading 😊
Thanks for this video. Even most enthusiasts aren't educated when it comes to the importance of CPU performance in gaming; it's really tiresome to argue against ignorance.
God forbid these devs optimize their games. Much better to make i9/Ryzen 9 CPU the minimum requirement!
The games are optimized for the PS4/PS5, which is why they run fine on lower-end PCs as well; demanding that every game should be capable of maxing out any CPU and/or GPU is hilarious. "Cinematic" 30FPS is what devs are after, and you are trying to run this 30FPS game at 300FPS.
@@terrylaze6247 The Devs can take their "cinematic" performance targets and shove it. Games are for the players not the devs. If you design a game with no intent to maximize your player's experience then you shouldn't be a game dev.
Good video man. I always wondered if a better CPU would still cause me to get these types of stutters in these EXACT types of situations, and it's actually really hard to find footage of it, especially in 4K.
Easy: there is always something bottlenecking every system, that's an undeniable fact. The way to play games on a PC is to lock the frames where the system is comfortable working, relax and enjoy.
You should create a poll one day if any of your audience actually plays the game you showcase, because every time you pull up a "modern" game to showcase some point about hardware limitations, it is something I would never dream of playing. These days I play mostly retro and indie games because modern AAA feels like it's moving backwards, but maybe that's just me.
If you play retro and indie games you usually don't need any good hardware. There is no point in benchmarking a game that runs smoothly on 20-year-old hardware.
@stonythewoke9921 Not really true. Take Pools for example, which is a very small indie game, but it has a focus on realistic rendering so it gives my 4090 a run for its money. Or Minecraft with shaders and other mods. Or any of the nvidia mods like Portal RTX.
@stonythewoke9921 No I'm saying from my perspective it seems artificial to pick out these games to make a point just because they're new. It's a fallacy to assume if someone buys the best hardware it's to play the latest AAA titles. And without a poll there's no telling what his audience is actually playing. Kind of like how compared to the general public, a much higher percentage of people who watch this channel use AMD.
@@seeibe there is nothing artificial, arbitrary or anything about the games he picked. he simply picked games that prove the comments about cpus being fast enough wrong. if you think playing popular games is not a realistic scenario, I can't help you.
Try without RT as well; I'm also starting to think that RT is hitting some weird bottlenecks that are not necessarily CPU related. Just think about it for a second: if RT is to blame for most of your issues, then arguably it's the GPU's fault, since it's the component responsible for RT. Is it offloading too much onto the CPU? Yeah, maybe, but how do we know the RT cores aren't the part that's bottlenecking the system, and that what you are seeing is rasterised performance left on the table? We've kind of had this issue all the way since the 20 series: Nvidia could be processing RT inefficiently versus rasterised, or the RT could be badly implemented in the game, and everyone would blame the CPU and/or game anyway. One reason why I stay away from RT like the plague, since I get spiky performance almost every time I enable full RT in any game, though there are some that run better than others.
RT, I think, adds to the CPU bottlenecking faster... it puts extra load on the CPU, so it stands to reason. However, one day games won't allow you to turn ray tracing off... there'll be no off switch for a lot of settings, and then will come more problems!!! The days of smooth PC gaming are either far behind us or far ahead in a future we haven't met yet, but at present it does suck...
Just think about it: why does the video card have RT cores if the processor is the one that bears the load? Shouldn't the video card do this? Another stone thrown in Nvidia's direction.
@@robot_0121 It should, but it comes back to the devs, because they can actually code it so that RT is primarily done on either the GPU or (wrongfully) the CPU. It's supposed to be done mainly on the GPU, though.
I love your point, but I think it also speaks to reviewers needing to add more "realistic in-world" gaming settings in their reviews, and not only the "in-lab" settings that maximize the performance differences. I understand that is better for actually getting the numbers right, but it doesn't speak to real-world scenarios, which is really what people should be expecting.
I built a PC from used parts last year and got a 3060 with 12GB, for photo editing and a little gaming. I was really surprised to see how crazy bottlenecked that system is. I can play Battlefield 1 smoother in 4K than 1080p :D. I want to upgrade to a cheap Ryzen at some point so I can use my 3060's potential, but as I'm not gaming a lot it's not a high priority.
A lot of people are totally missing the point in the comments, unfortunately. Daniel was trying to demonstrate that a CPU bottleneck with a high-end GPU at high resolution can still happen; people think he is defending badly optimized games.
@@Blafard666 We get that but he is doing so using the small % of bad optimized games and the hardware a very small% of gamers have. It is kinda stupid as it makes no sense to use what 1% of PC gamers use to show PC gamers look these bad games can even make the best hardware bottleneck. What is the actual point? Everybody who played these games knows this and it will do the same thing to the 5090 and 9800x3d because it is not a harware problem.
@@jmangames5562 The demonstration wasn't about the games. Again: it's about the persistent myth that a high-end GPU + high resolution can't generate a CPU bottleneck. For this demonstration, coupling the best GPU out there with the best CPU is the smartest combo. What percentage of gamers is using that hardware is irrelevant.
Speaking as a developer, the entire idea of "CPU vs GPU bottleneck" is not a good one to use for analysis. It's a perspective that tries to blame one piece of hardware for the overall performance of the software. That's a useful perspective when you're trying to decide what parts to spend more money on, when you have a budget and play particular games. But that's it! That's as useful as it is to think that way.
Daniel, we desperately need your take on PCIe 4.0 x8 cards on PCIe 3.0 motherboards. Most budget gamers are only going to be upgrading to RTX xx60 / RX x600 tier GPUs, and will still be using their old PCIe 3.0 mobos. So they'd be stuck with PCIe 4.0 x8 bottlenecks from this current gen to, most likely, the next gen of GPUs. We need to know how big the performance reduction is, especially when they run out of VRAM, because most of them are 8GB cards.
It's not a problem at all. It's PCIe x4 GPUs that show a significant decrease in performance. I run my 6600 on a B450 board and my performance is the same as what all the benchmarks say. At most it's like 2-5% slower.
@@virtual7789 But for the very little performance it offers, you'd need that 2-5% difference, don't you think? I'm running a 4070 (PCIe 4.0 x16) on a B450M (PCIe 3.0 x16) board, so I don't need to worry about it. But still, being aware of this would clearly help out a lot of budget gamers. That's why we need Daniel's take.
@@clem9808 2-5% performance shouldn’t affect your purchasing decision. Also, why not just buy the used high end cards from previous gen instead of settling for the trash Nvidia and amd offer in the low end? Based on the leaks the generational gains aren’t gonna be there because the cards are so cut down. Just buy a rx 6800 for $250 or a 5700xt for $150. The used market is amazing right now
Point taken. A very helpful video. Gaming non-competitively and not requiring DLSS at 4K resolution, I tend to see no such effects after I learned to turn it off to take pressure off my CPU 😂. Turning off ray tracing helps too; to me it's an upsell with mostly diminishing returns in most games I play. There are trends to get us to buy the latest and greatest driven by such tech. That's ray tracing.
I have noticed that the lower your GPU usage is, the more frames you can squeeze out of frame generation. You can get close to 80% more frames if you are CPU bottlenecked vs 30% more frames if you're GPU bound. So if you're not that sensitive to latency, just turn on frame gen and everything will run at 120. I agree real 120 would be better, but ¯\_(ツ)_/¯
Yes, that's already well known about FG, which is another sidestep around actual CPU optimization. FG is designed to bypass the CPU, so the bigger the CPU bottleneck, the bigger the gain from FG. Doesn't help people on 60Hz TVs or monitors that regularly drop below 60 🤷🏾
In the past, it was often said that games only utilized 4 CPU cores. Given that your CPU has 8 cores, but the game only uses 4, that's why Afterburner shows around 50% CPU usage. Instead of solely blaming the CPU, why not criticize the game for not taking advantage of all the available cores?
I am not Daniel's particular demographic of viewer. I watch him occasionally, and I find his benchmark stuff useful, but I feel like his demographic is people who are newer to the scene. That said, he's right. Another way to know this is that these GPUs are far more powerful than their CPU counterparts and have been for a long time. That's why many much older CPUs didn't even bottleneck Ada when it came out and still don't. GPUs are progressing far quicker. I feel like that's what made X3D so exciting. Also, competitive gamers want every single bit of fps they can get at 1080p, and it doesn't matter that 200 is a lot or 300 is a lot, they want THE MOST they can get. If he throws in the next tier down in CPU, his fps will drop, and that's a perfect indicator that he is CPU limited. For most people this will not matter, but for the future of GPUs, when 70-class cards are as powerful as a 4090, we need to understand this and be advocating for CPUs that can handle it. That means better flagship CPUs and the trickle-down it will inevitably cause. Why buy a 5090 if the performance is the same as a 5080? It encourages Nvidia and AMD to give us worse GPUs to try and artificially keep a larger performance gap between the different GPU tiers.
Those who say CPU performance is secondary don't appreciate triple-digit frame rates even in single-player games. These comments will always exist. You can't save everyone.
The problem is that for most games the solution is "just brute force it" to get around poor optimization. That worked until games began to use the newer features of PC GPUs; people could just throw better hardware at the problem to get the desired results, but now optimization is needed. Now that we have games pushing the limits of what GPUs can do, the "the CPU doesn't need to be optimized for" way of thinking isn't working so well.
@@I_Jakob_I Do you understand that it's useless to push the brute force of the hardware if we keep letting software developers get away with shitty optimization? It's a zero-sum game, a capitalistic and braindead game. It's pathetic. CPUs nowadays are super powerful.
You're absolutely right. I am using triple 1440p @ 240Hz monitors and the FOV affects the CPU... A LOT!!! There are so many games I can't play with very good frames, and my overall CPU usage is much higher than most users'. I've had shit thrown at me for such a long time, and now you're explaining so well the stuff I've been explaining on Reddit... great job Daniel!
That must be a very immersive setup and certainly demanding. I hope you really have some fun with that. It would be interesting if the FOV was affecting the CPU. I play at higher FOV in games even on a single monitor, as I want immersion in the scene. I couldn't compare it directly though, as you are pushing triple the resolution too by utilizing more screens. That is a lot of pixels you are pushing, which also works the GPU a lot. You are effectively talking about a higher resolution; you're running something equivalent to 6K (2K x3). My usage of FOV is not as demanding, but also not as nice. I'm sure there is confusion, as FOV may refer to how wide an angle the camera covers. It can even refer to how far the camera sits from a third-person character you control. In those cases the resolution is the same. You are increasing the FOV, but you are also changing the amount of rendering needed with multiple monitors. You may or may not have changed the original camera; sometimes I think the game just adds another two cameras. That has got to be nice, having more vision. I think it would only be outdone by a good VR experience. I haven't ever tried VR, well, not at home. Odd it wasn't understood on Reddit, perhaps because of effective resolution. I'd stick with pointing out that the same basic rule applies: if you are getting less than 95% GPU utilization, the CPU is generally the bottleneck. (IMHO it may actually be a RAM, VRAM, or bandwidth issue, as that is quite the demanding scenario and outside what I budget for gaming. As you probably know, the CPU is sensitive to RAM speed, and the GPU is sensitive to VRAM and bandwidth issues in some scenarios.)
That's because of the core count with SMT. The game would have to use all 16 threads fully to get to 100%, and games usually can't do that. So they end up bottlenecked by single-threaded performance.
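A quick illustration of why a fully saturated game thread can hide behind a modest overall-usage number on an 8-core/16-thread part; the per-thread values here are invented for the example:

```python
# Hypothetical per-thread utilization on an 8c/16t CPU (percent).
threads = [100, 70, 45, 30, 20, 15, 10, 5] + [5] * 8

overall = sum(threads) / len(threads)
print(f"overall CPU usage: {overall:.0f}%")  # ~21%, yet the game thread is pegged
print(f"busiest thread: {max(threads)}%")    # 100% -> the real limiter
```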
@@yavnrh Shitty optimization. Battlefield 2042 had similar problems, but right now it's one of the best optimized games ever. It uses all 16 threads of my 5800X3D and it can run at over 300fps. A game with 128 players, destruction, weather effects, big open maps and vehicles has better parallel CPU utilization than a single-player game. Unreal Engine 5 is in general a stuttery mess.
Poorly optimized and coded games are not a CPU problem, they're a game problem. This also has a lot to do with ray tracing, which again was introduced far too early for the hardware.
No one said that the hardware is really the problem. Poorly optimized games are the problem, you are right on that, but having a more powerful CPU would make it easier to run poorly optimized games. Otherwise you wouldn't see a difference going from an i3 to an i7, or from a non-X3D chip to an X3D one.
Really good video. I hate how all the benchmark videos compare CPUs at 1080p mid settings with a 4090. A much more relatable benchmark would be whether there's an impact in these poorly optimized games at 4K. Well done.
They also explained they had to do this video because of that persistent myth that a high-end GPU + high resolution means no need for a good CPU, cause "impossib' to CPU bottleneck boyz".
Only 0.1% of gamers actually buy the top-range GPUs; AMD is actually trying to make money. Nvidia gets away with it because they have the much bigger market share.
Doesn't make much sense, since AMD is also making the CPUs. Unless they're limited in how well their CPUs can run. Talking on a per-core level, Intel has gone the same route by going wide with multiple cores, which is great for heavily threaded applications. A few games are; most are still reliant on a single or just a couple of main threads. Meaning 12 cores, 24 cores, 1024 cores, it doesn't matter if the 2-3 necessary cores are hamstrung. Single-core performance almost always matters (very few games are lightweight multithreaded loads, and the truly parallel workloads typically run off GPU acceleration anyway). AMD is sticking to mid-range because on the GPU front they can't compete. They briefly mentioned they could if they pushed the power (they didn't say by how much). Shoving in more power to boost clocks for diminishing returns shows they're up against a wall. Yes, Nvidia uses more power, but it's not power consumption alone stretching the 4090 well beyond anything AMD has right now. They know they can't catch up for now, so they're sticking with what they can do. It's not speculation; AMD has actually come out and said as much. They did try to say they could compete if they pushed more power, but 'pics or it didn't happen'.
Daniel Owen: this game, Star Wars Jedi: Survivor, is NEVER GOING TO BE WELL OPTIMIZED. Why? Because it's using an iteration of Unreal Engine 4 released in 2013 and developed through the years 2008-2013, which was the era of Win XP and its DX 9.0c. That engine was then, for 2013, translated to the DX 11 API, which was the current Win 7 API back then. DX 11 is heavily based upon DX 9.0c and has one fundamental flaw: it cannot efficiently utilize more than 1-2 cores in parallel. So in practice it needs a core-cut-down i9 with a 2c/4t setup running at 6.0+ GHz to utilize its speed. That's why measuring any kind of CPU/GPU bottlenecking in this kind of game is incorrect: it won't show proper and true utilization, due to the abundance of cores that aren't utilized and the lack of the very high single-core frequency that 2013-era UE4 needs. The only thing here that is correct is the usage of an RTX 4090, because Nvidia uses an older principle of their architecture design here and currently, and it is very LINEARLY utilized, which such old graphics engines like.
UE4 has supported DX12 for years now, and many other games running on even older versions with DX11 don't have those issues... but you are right, they won't optimize it because the sales cycle is over; now that they've made millions, they are not going to spend resources that won't generate more sales anyway...
@@jagildown Unreal Engine 4 doesn't support the DX 12 API natively... it partially supports the DX 11 API... and it translates through the DX 11 API to DX 12 software-wise, a.k.a. emulated.
@@lflyr6287 So what?... Why don't other games using older UE4 versions have those issues, then? The real issue is elsewhere; it's not that they can't do it, it's that they don't want to do it.
Hey Daniel, you'll never read this but: Can you do some testing regarding the CPU cost of Ray Tracing and Path Tracing enabled? Almost every reviewer and gamer ignores the CPU cost of enabling RTX. But according to Digital Foundry's Cyberpunk testing the 5800x can drop to mid-40s with Ray Tracing Ultra, and the 12600k performs 43% faster under those conditions.
It was commonly said only a few years ago that AMD CPUs didn't perform as well in games with RT on. Not sure if that still holds true, but no one talks about it anymore.
If I had a 4090 I wouldn't enable DLSS at 1440p. I get the analogy, but I run a 4070 Ti Super and a 165Hz 1440p monitor, and I hit 165 FPS at High to Ultra settings without DLSS and ray tracing enabled in most games. My CPU is an i7-12700, which is considered not optimal for gaming. Poorly optimized games are the real problem.
@@FastGPU Please provide video proof of 1440p native @ 165fps with High-Ultra + RT (wherever possible) in: Wukong, Space Marine 2, SW: Outlaws, Starfield, Remnant II, Hellblade 2, Silent Hill 2 Remake, FF XVI, Ghost of Tsushima, GoW Ragnarok, Horizon Forbidden West, CP2077, Alan Wake II. I'll wait :) Why do you guys feel the need to lie so blatantly? You know people can fact-check your BS and just look at benchmarks/reviews, right? Most games my sweet behind :)
@@parowOOz Good point. A lot of stupidity here in the comments. The discussion was technical, not about optimization, but about the fact that Intel, AMD and Nvidia are not giving us better products with the new gen, yet they keep the same prices or worse. Also, the concept that you cannot get bottlenecked at 4K is just stupid.
@@sausages932 How is this misinformation? You pay $2k for a GPU, or even for that stupidly priced RTX 4070 Ti, and then don't enable RT? The point of the video was that you sometimes need a better CPU, and Arrow Lake + Zen 5 are not delivering that this generation, yet I did not hear them say "we'll lower the prices"; on the contrary, they raise them.
I hate that game graphics are starting to get out of hand and are being designed only for top-of-the-line hardware (and not even that, in cases like this one), especially when ultra 1080p gaming is becoming more affordable than ever (I think?) with cards like the RX 6600.
If you wanna come at the argument from that point of view, sure, current hardware isn't fast enough to make sure games never get bottlenecked. That's not because the hardware isn't fast enough, that's because there's effectively no upper limit on how poorly optimized code can get.
@@BlackJesus8463 What a braindead take. Should I bake a perfect lemon pie before I set about criticizing someone who shat on a plate, put a candle in it and called it dessert?
And the main blame falls on Nvidia. Thanks to DLSS, they buried game optimization. And above all, games are not meant to be played on ultra details... Ultra details cause quite a bottleneck. It has happened to me many times that on ultra I got 80FPS with the GPU at 70-80%, and on high it jumped to 150FPS with the GPU at 99%. A beautiful example is Crysis Remastered. There you set the water quality to ultra and you immediately have a bottleneck and 50fps worse performance. I don't understand why God of War Ragnarok can run absolutely perfectly, and the game looks like it's on Unreal Engine 5, and then games like Star Wars come out...
@@BlackJesus8463 10700K + Nvidia FG = ~90 fps in MFS 2020 New York with a smooth frame graph. 7800X3D without FG = ~60 fps in MFS 2020 New York with spikes. Hardware is just nothing without software.
So-called "frame generation" ** is not a solution. It's a cruddy workaround. ** "Frame generation" is a ridiculous term, because that's exactly what every game does --- generate frames.
It's an RT problem. On my 7950X3D, Jedi Survivor hits a ceiling at ~120FPS if I disable RT. I can set everything lower and it stays at 120FPS. Cranking RT on slices it basically in half. Same with Hogwarts, btw. I don't have Spider-Man, so I can't confirm there... but I've seen this enough to know what's up. TL;DR: Nvidia pushed a crappy way of doing RT on the world and now we are stuck with it.
Outside of the trend of games being generally unoptimized because "frame gen fixes all", it genuinely looks like you've got a serious leech on your system, unless you're capturing footage and pulling game capture at the same time while gaming. Over 20GB of system RAM usage with 10-14GB of VRAM usage consistently across multiple titles would make me start looking at what is sucking off the teat while I'm trying to cram frames.
I feel like you either read my mind or saw my posts :D My god, I'm losing my mind when people say they are always GPU "bottlenecked" with some older CPU + top-tier GPU. This is so easy to find out if people would monitor their AAA titles for even a minute.
Tech literacy seems to be dire among the PC gaming userbase now. I don't even think they know more about PCs than the average console gamer at this point. So many people can't even handle an options menu to tune their own performance, they just open it, leave it as is or adjust the general preset and call it a day. Understanding deeper things is out of the question. The worst part is thanks to social media they repeat wrong information to each other a lot.
@@PowellCat745 These were AMD users. I just think people try to tell themselves that they made a right choice, and what they have is perfect. When someone says anything that breaks their illusion, they go into defense mode. I just trust data.
@@albert2006xp This seems to be the issue. It's weird when other PC users can't understand even simple concepts. We have more information available, but people don't even bother finding out things.
Clearly the 7800x3d can't keep up with a RTX 4090 in the games you demonstrated. My question is, how far down the GPU stack you have to go to be GPU bound, in those same games? Thanks for your insights.
If you enable RT in Jedi Survivor or Hogwarts you'll always struggle to get much above 60FPS. When I disable RT on my 7950X3D I get just above 120FPS in Jedi Survivor. But to answer your question: He was at ~70% GPU utilization in the first example. A 3090Ti is about 70% the performance of a 4090, so that would about max out. A 4080 is about 80% the performance of a 4090 so that would still be slightly under utilized.
8K 60 with some upscaling is a reasonable way to maximize your setup with some of these games. You could also turn off RT and run native. (Yes, RT is heavy on the CPU.)
@@christophermullins7163 I'm not sure if you got that I was joking... Anyway, the deal with 8K screens is that it's stupid. In fact, I cannot think of a SINGLE benefit for an 8K screen. You physically cannot resolve more pixel density than 4K provides unless you're sitting unreasonably close. And if a game benefits from rendering at a higher resolution than 4K you can do that in a 4K monitor (i.e. NVidia DSR to render at higher than native then downscale to fit)... but... UPSCALING has a processing cost. So you'll get a lower FPS if you render at 4K then upscale to 8K than you would rendering at 4K for a 4K screen (using DLAA)... So... There's just no scenario where an 8K screen makes sense. Quite the opposite. That doesn't mean we won't get 8K screens so we need to upgrade. Of course it'll go that way.
@@photonboy999 Do you play on a 4K screen? I have several, and in every normal scenario I play in I can most definitely see the difference between 4K and 8K. I see jaggies in every game. "8K has no benefits" is indicative of copying what you hear and having no clue about the reality of the situation. Respectfully of course. 😛
@@christophermullins7163 I clearly said 8K screens serve no benefit I can see. I did NOT say rendering at 8K served no purpose. It can, and that's why I discussed DSR. Do you have an 8K screen? If so, have you compared rendering at 8K on an 8K screen to rendering at 8K on a 4K screen? Unless you have an 8K screen to test then you can't dispute what I've said.
Faster memory would also help. While I agree that a lot of people do not understand bottlenecks, I feel like a lot of this is less of a CPU issue and more of a poor optimization issue. However, that isn't likely to change, and it definitely highlights that we're not at peak CPU performance given how lazy many of these developers are.
Except this was all with RT. RT absolutely hammers the CPU. The FPS ceiling in Jedi Survivor is ~130FPS on the 7800X3D and ~120FPS on the 14900K as soon as you disable RT. Remember that Nvidia solely designed and forced how RT is done on everyone. They don't give a fuck if that standard tanks CPU performance. They just wanted a new gimmick that took up as little space as possible on the GPU die so they could tell gamers they needed an AI feature (DLSS).
@@andersjjensen UE4 is pretty inefficient with how RT is done. The engine is known for being notoriously heavy on the CPU with ray tracing. That's why Hogwarts Legacy and Jedi Survivor get such a big impact on the CPU when ray tracing is enabled. Plenty of other games do not have such heavy CPU usage with RT on such as Cyberpunk, Guardians of the Galaxy, Minecraft RTX, Metro Exodus(Both versions), and others. Quit blaming Nvidia and ray tracing as the reason for the inefficient CPU usage with RT on when it's not even present outside of UE4 games, UE5 is by default pretty inefficient with the CPU even without the ray tracing done via Lumen or other traditional RT solutions. Also the standard, DXR, isn't just supported by Nvidia, it's supported by AMD and Intel as well who also tend to follow these examples.
That's what he explained right at the beginning: GPUs always benefit from getting wider due to the nature of what they are processing, while CPUs don't benefit from getting wider, because most things the CPU has to process don't have that much data to chew through; all of that was moved to the GPU years and years ago.
Yet when the 5090 comes out, people will be rushing to get it, ignoring that CPU bottleneck, and then wondering why the fps is no better with RT on, lmao. I'm not upgrading from my 4090; forget this, I'm just gonna go a few years only upgrading my CPU to try and even it out.
I've got a good analogy for why parallel processing doesn't always make sense for gaming workloads: if you have to calculate the result of "1+1+1+1+1+1+1+1" in your head, would it be faster to find 4 other people, tell each of them to calculate "1+1", ask each of them for their result, then tell two of those people to each calculate "2+2", ask them for their results, and then calculate "4+4" in your head to get the answer? Or would it be faster to do the "1+1+1+1+1+1+1+1" calculation yourself? The answer is that communicating the tasks to the other people and then fetching their results takes a lot longer than doing everything by yourself. That's why not everything in a game workload is parallelized, and you usually have one main thread that does the most important state-dependent calculations by itself. Stuff that can easily be parallelized, like rendering the frame for your monitor to display, is actually highly parallelized and done by the GPU, which utilizes thousands of cores. But calculating the game logic, including e.g. what effect your inputs have on the state of the game, cannot be effectively parallelized in the same way. So you end up with one main thread that fully utilizes the core it is assigned to at the moment, while a few support threads run on other cores but usually don't max out the computing potential of those cores. Parallelization in game logic can not only slow everything down substantially, it can also cause mistakes in the game logic that manifest as things like duping of items, game crashes and so on. So claiming that a game is badly optimized or the programmers are lazy only because it fully utilizes one core while 4 other cores are only used slightly isn't really a valid argument. At most it shows that you don't really know how these things work.
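A tiny experiment in the spirit of that analogy; the exact timings are machine-dependent, but on any normal system the parallel version loses badly for a task this small, because handing out the work and collecting the results costs far more than the work itself:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def add_pair(pair):
    return pair[0] + pair[1]  # one "helper" adding 1+1

if __name__ == "__main__":
    ones = [1] * 8

    t0 = time.perf_counter()
    total_serial = sum(ones)                          # "do it in your own head"
    t1 = time.perf_counter()

    with ProcessPoolExecutor(max_workers=4) as pool:  # "ask four other people"
        partial = list(pool.map(add_pair, [(1, 1)] * 4))  # four results of 2
        total_parallel = sum(partial)                      # combine: 2+2+2+2
    t2 = time.perf_counter()

    print(total_serial, t1 - t0)      # 8, microseconds
    print(total_parallel, t2 - t1)    # 8, vastly slower (pool setup + IPC overhead)
```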
You are completely missing the point of the video. There are going to be badly optimised games that are CPU bottlenecked, and the only way to deal with that is better CPUs. If you compare this CPU with lower-performance CPUs in these badly optimised games at these high resolutions, the better CPU will do better; therefore the point of the video is that better CPUs will run better even at high resolutions. So the people who say "yOu dOnT nEEd a hIGh eNd CpU fOr hIgH rEsolutIOn" are talking nonsense. Can't explain it more simply than that. Kindergarten basic logic, over.
@@thomasantipleb8512 You are accepting game developers releasing badly optimized games as a fact of life. The sane thing to do is to not buy a game until it is proven to have at least decent optimization. Honestly, there are so many good games nowadays that you can skip all the terribly optimized ones and still have more games to play than there is time to play them, especially if you have other things to do in life besides playing games.
This video is so misleading and does a disservice to gamers. What people are saying about CPU bottlenecks is generally true: in most games at 1440p or higher you will be GPU bottlenecked, and CPU improvements will have marginal impact. Of course there are going to be outliers, but most games aren't poorly optimized like the examples you showed.
I wish that were true, but many studios are switching over to the mess that is Unreal, e.g. the new Silent Hill. For the way it looks, it runs very poorly.
Misleading how? What he's trying to say is that some people are under the impression that you can offload from CPU to GPU by increasing resolution and thus get better frames. He's just trying to say that is not how it works.
In some types of games there could be more optimization to use more cores. For example, Cities: Skylines 2 can use up to 64 cores, and that might very well be a soft Windows limit. I believe the bulk of the CPU usage there is pathfinding for the agents. If so, many strategy games could use multithreading much more aggressively. That's only a small fraction of games, but it would be a good start.
Try this without RT. Nvidia pushed an absolutely stupid standard on everyone. It costs an enormous amount of CPU power to calculate the BVHs... and that should have been done on the GPU (which, you know, is pretty good at geometry). The problem is that it would require dedicated silicon like ROPs and TMUs and Nvidia didn't want to take space from their precious compute... which is what the professionals pay big dollars for. So now CPUs have to handle that on top of doing the character animations, physics and building the scene geometry before dispatching it to the GPU.
Blaming RT & Nvidia is cope. Cyberpunk 2077 already proved you can have very advanced RT and have the game scale CPU performance beautifully. The only people at fault for the crummy game performance are the devs who made the game.
@@atiradeon6320 Cyberpunk also tanks massively on the CPU when you enable RT. It's not nearly as egregious as the UE5 games (which are particularly bad due to Nanite constantly recalculating the BVHs because polygon levels change all the time). Hybrid rendering just categorically sucks. It's not until you get to full scene path tracing that you get out of that pickle. And no, being on Nvidia's ass is not "cope". I've watched the motherfuckers for over 30 years now. I could write an entire book about how they've consistently tried to lock competition out.
@@crestofhonor2349 I have a 7950X3D. Both Jedi Survivor and Hogwarts Legacy drop to 60-75FPS when I enable the highest RT at 720p native while my GPU is under 50%. If I disable it I get 120-130FPS and my GPU is still under 50% (obviously, RT is GPU heavy).
@atiradeon6320 I only ran the cyberpunk benchmark a few times, never played it but we were turning RT on and off and I could not really tell much of a difference. Nothing as good as the difference between 60fps and 100fps with it off.
While technically true that some games are "CPU limited" in terms of single-threaded performance, it's also true that even modern games that use a couple of threads are still limited by single-threaded performance. This is likely not something that we'll see huge leaps from one generation to the next on. As you stated, GPU scaling is simpler because making GPUs wider makes games run faster on them. NVIDIA could release an RTX 5099 that's double everything the RTX 5090 is and it'd be more or less twice as fast (and expensive, and consume 800-1000 watts or whatever, but it'd be possible). So I'd put the blame mostly on game developers and current game engines in use. Also, if you really wanted the fastest possible gaming CPU, you'd see a very slight increase in performance with a 7950X3D with the non-3D CCD disabled, as the 3D CCD can turbo slightly higher than the 7800X3D's CCD. We're also holding a what, $400 CPU up to a $1500+ GPU, right? I just don't think CPU manufacturers super selectively binning their CPUs to get a 10% faster version and then selling that for $1000+ would sit very well with consumers. So yeah, even if we got a +15% uplift in gaming performance for CPUs this year, it would just shift the point of where the "bottleneck" is occurring slightly further back, but nowhere near enough to keep up with the upcoming RTX 5090 in these scenarios you've shown, even though that +15% improvement in CPU gaming performance would be very impressive generation-on-generation.
I don't know, Daniel, you're always gonna be bottlenecked by something... Your point is well taken with regards to the online discourse about "CPU performance doesn't matter, duh", but the more devs rely on upscaling and FG to hit their fps figures instead of optimizing the underlying engine, the more glaringly obvious the need for CPU performance for everything else that makes a game will become. Love your videos, keep up the good work. Cheers
I'm fed up with people on the internet saying "it works OK for me", or "it runs smooth as butter", or "no stutter here". Well, here we have it: games where you throw more than 2000 dollars of components at them and you still get a subpar experience. It's like higher-end parts are meant to brute-force through badly optimized games instead of delivering super next-gen graphics. What a waste of money, energy and people's time. Thank you, Daniel, for always exposing such data.
Actually, I want to make the point that this doesn't always mean a CPU bottleneck, unless you see one or more threads hitting 85-90% utilization. The point I am trying to make is that the game engine or the game itself may just be very bad, which makes neither the CPU nor the GPU the culprit. For example, if I write code to wait for 2ms, the CPU will just wait for 2ms doing nothing (or something else), and no matter what, 100 years down the line with the best CPU ever made, it still has to wait 2ms because that's the command it was given. So if I don't see any cores being utilized much and the GPU is also underutilized, then I start suspecting the optimization of the game, or a game engine that doesn't fit the game's scenes well.
Bad optimization literally makes the CPU or GPU a bottleneck when it doesn’t need to be. The reason why it’s bottlenecking doesn’t really matter when there’s nothing you can do to make a game perform better.
Wow. It would be very interesting to test how memory optimization and some CPU tweaks affect your system with that top CPU & GPU combo. I have been dipping my toes into memory tweaking/overclocking and CPU undervolting/overclocking since, well, forever at this point, but in my current system the GPU is inadequate to scale significantly with changes on the CPU side of things. In your system any improvement would be immediately obvious.
Just a note: the main thing that improved performance in Jedi Survivor was the removal of the Denuvo DRM, which shows us that all games running Denuvo have the potential for greater performance, if only the developers would remove it from their games.
RT cripples CPUs, as you showed with Jedi Survivor. Other games are also CPU dependent, like MMOs, but it's a dying genre for kids, so they probably have just never experienced it. Heck, I think SWTOR would bottleneck a PS4-level GPU due to being DX9 and single-core dependent. Add to all this that Intel is a desperate company at the moment. Never underestimate bots and astroturfing; it's rampant on all social media, it's cheap, and the FTC rarely fines the companies for it.
@@ArdaSReal Nah, it has a huge impact on the CPU. Which is why Nvidia made the key feature of the 4000 series "frame generation": it doesn't need much CPU, so Nvidia could sell the 4000 series as "better graphics and more fps".
Thank you for the video. Unfortunately there are many reasons why a CPU becomes the bottleneck, often poor optimization and incomplete utilization of available resources, including individual cores. It would be very useful, for example, to monitor the % usage of each logical core instead of the whole CPU, to show which core saturates and creates the bottleneck for the GPU.
To echo Daniel's point about CPU bottlenecks: I have a 7800X3D + 4070 Ti at 3440x1440, playing Spider-Man Remastered (a 2022 game) on max graphics settings with regular ray tracing, and I'm getting 100% GPU usage and 70% CPU usage. People forget ray tracing costs a lot of CPU, and the higher the resolution, the more work the CPU does to handle ray tracing.
When I got a new 40 series I was shocked at how CPU heavy ray tracing is. Stark difference. What I find odd is there aren't many videos doing CPU comparison WITH RT ON specifically.
@@christophermullins7163 I think that's because an average PC user doesn't have a GPU powerful enough to run RT on every game, so to cater to an average user, they test games without RT
For those saying "it's not the CPU, it's bad optimisation": you are completely missing the point of the video. There are going to be badly optimised games that are CPU bottlenecked, and the only way to deal with that is better CPUs. If you compare this CPU (7800X3D) with lower-performance CPUs, in these badly optimised games (which unfortunately exist and will continue to exist) and at these high resolutions, the better CPU will do better; therefore the point of the video is that better CPUs will run better even at high resolutions. So the people who say "yOu dOnT nEEd a hIGh eNd CpU fOr hIgH rEsolutIOn" are talking nonsense. Can't explain it more simply than that. Kindergarten basic logic, over.
This video only proves what I've always thought: I will never enable RT in any game, since the difference is mostly minimal and it's really costly on your PC. Raster performance is what you mostly want, and it pains me that devs are baking RT right into games' engines and settings.
RT is a far better solution for lighting. We can't stay on rasterization forever, especially since rasterized lighting can also take a lot of time and authoring work, given it's just an approximation. There are even times when rasterization is slower than ray tracing.
It's not a minimal difference. I don't know why, but I played The Callisto Protocol today; it was free on Epic some time ago but I never found the time to play it. Anyway, playing it without ray tracing is a different experience: it looks like an older game, not up to modern standards, until I turn ray tracing on. I don't mind ray tracing in a single-player campaign even if it comes with some stutter hiccups; when I paid for the GPU I intended to use it to its limit. Also, maybe it's not such a bad thing that the best gaming CPU is falling behind GPU performance. It may force software developers to improve game engines so we get a free fps boost instead of upgrading hardware for minimal gains.
Some games use it well, but yes, in most titles I don't see much difference in looks, while the difference in fps is very obvious. I'm sure in the future it will be standard, though, in AAA games at least.
This is only true if you have a low-quality monitor. I agree with you if you're not using an OLED. I have both, and ray tracing does look way better on a properly tuned OLED.
I play at 1440p/PSVR2 using a 4070 paired with a 7600 & 32GB DDR5, and I'm GPU limited 99% of the time. I don't really need a faster CPU until a couple of years from now. You're not really making the case for CPU bottlenecking when you're using a 4090, which about 0.01% of gamers have.
It's pretty funny to see that at 1440p the 4090 uses 400 watts in the pause menu of Jedi Survivor but only 300 watts in the actual game.
That stupid pause-menu power draw is what's bothered me the most since I got my 4080 Super. Terrible idle consumption in general, though.
Said it once, saying it again: if you are buying trash games like Jedi Survivor, you are part of the problem. Standards, people, standards.
@@George-um2vc True, speak with your money; that's your only real power as a consumer.
@@tysage1473 Needless to say, I refunded Jedi Survivor after 20 minutes of owning it. Sadly, most people accepted the bad performance and praised Respawn.
Leaving the game unpaused is cheaper than pausing the game
That performance at those specs is a crime
@@dalobstah I heard Boba Fett is already heading for the guys who are responsible for this FPS mess...
Some ps2 games looked better than this
Well, that's what it is. There's nothing we can do about it: either play other games or stay away from UE5.
When the 1% lows happen to average just above 30fps you know exactly what they optimized for.
Unfortunately they usually get away with it because most people don't look at 1% lows, just averages.
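As a quick illustration of why that matters, here's a minimal sketch (my own, with made-up numbers, using one common definition of "1% lows") showing how a single long frame barely moves the average but wrecks the lows:

```python
# Sketch: why averages hide stutter. Frame times are in milliseconds;
# the numbers are invented for illustration, not measured from any game.
def fps_stats(frametimes_ms):
    avg_fps = 1000 * len(frametimes_ms) / sum(frametimes_ms)
    # "1% lows" here: average FPS of the slowest 1% of frames
    # (definitions vary between tools).
    worst = sorted(frametimes_ms, reverse=True)
    n = max(1, len(worst) // 100)
    one_percent_low = 1000 * n / sum(worst[:n])
    return avg_fps, one_percent_low

# 99 smooth frames at ~8.3 ms (120 fps) plus one 50 ms hitch
frames = [8.3] * 99 + [50.0]
avg, low = fps_stats(frames)
print(f"average: {avg:.0f} fps, 1% low: {low:.0f} fps")  # ~115 fps avg, 20 fps low
```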
Exactly right.
I don't think that has anything to do with it. It's about the optimization for PC, not consoles...
nothing?
You're right on the money.
Mate, thanks a lot for your comment, we need more people like you.
Today we see how disastrous the situation is.
For example, the UE5 engine is a total mess. Almost every game on it has traversal stutters, and that has become normal nowadays. Mostly we are playing half-baked products.
Look at the Silent Hill 2 remake: an amazing game, but with a crippling amount of stutter.
The problem is that CPUs have evolved significantly in terms of multi-threaded performance, but not single-threaded performance. If you compare Raptor Lake to Core 2 Duo E8000, the IPC has just about doubled, in 15 years. We have high core counts and much higher clock speeds, but the architectures themselves are not that amazing.
And games still rely mostly on single-threaded performance. A 6C/12T CPU from the newest generation always does better than any CPU from the previous generation (putting aside X3D chips). Developers always say it's hard to parallelize things in gaming, and Unreal Engine is definitely the biggest culprit in this regard. And considering everyone is moving to UE5, it's not a good situation.
I have been saying this for years. You can turn down graphics settings and resolution. You can't turn down bad optimization.
The problem with these situations is there's nothing you can do to help a CPU bottleneck other than getting faster RAM or replacing the CPU. Turning down CPU-intensive settings rarely helps; in a lot of cases, even if you lower, say, the draw distance, the CPU still can't deliver the extra fps it would theoretically gain anyway.
@@mttrashcan-bg1ro Exactly
DXVK and VKD3D
@@laitinlok1 That, and buying Intel (Ring Bus) with usually higher IPC and more cores, OR a Ryzen 9 7900X and beyond.
I have a 12700K, and with DDR5-5200 I easily get 80fps in Hogsmeade in Hogwarts Legacy.
The 7800X3D is not that fast, it's still 8000 points on 3D Mark, my 12700K reaches 10000 points.
I always noticed that X3D chips look nice on the averages and max fps but the frametimes are kinda bad (?).
@@saricubra2867 Vulkan layers only improve performance in very specific and limited ways and can't fix a game that is thoroughly broken. 3DMark heavily favors cores, much more than games do, so it's not a great data point. The 7800X3D does great with frametimes, often even better than something like a 12900K. A smaller cache and the growing ring bus latency are actually sources of frametime issues in their own right. Intel does not have an IPC lead; they have been trying to make up for it with higher clocks. I have a 7800X3D and also get excellent performance in Hogsmeade.
Almost everything you said is either misleading or outright wrong, and I would encourage you to be more objective and thorough in your research in the future.
No amount of CPU processing power or GPU processing power will ever be able to overcome bad software. Hardware is software reliant and is essentially complex sand castles without software to tell it what to do.
Tell that to the snobbish PCMR crowd.
@@roklaca3138 PCMR is a meme for children and basement dwellers. Your average pc gamer doesn't even know what's in his pc let alone knowing or caring about bottlenecks or software issues as long as it makes the game go.
@@Owen-np3wf But they feel the performance drop on lesser CPUs, you cannot deny that. No one can prove to me that you need an $800 CPU to get high frames... the Doom series proved that.
@@roklaca3138 My 12700K can easily pull above 80fps in Hogwarts Legacy's Hogsmeade.
With 12 cores and 20 threads, a monolithic Ring Bus and high IPC, what stuttering?
Exactly
Solution: don't play broken games.
Maybe devs would learn how to optimise games.
🤝my man
The big problem is that when you have to optimize for the lowest common denominator, hardware better than that tends to suffer. It shouldn't, since better hardware should run the game better, but depending on how the optimization is done, it can. How did people on PC get past this before? They simply had hardware so far above the minimum that it didn't matter. I do think "always above 60fps" is a good target, because you have to draw the line somewhere. Yes, some consumers will have monitors able to do more, but most, knowingly or unknowingly, have a 60Hz monitor, or haven't set theirs above 60Hz in settings even if it can go higher. So making the game look worse just so a relatively few people can run it at 120fps is not a good model; having it hold 60fps in action scenes on the majority of target systems is. Going by the Steam September hardware survey, most people have a 1080p monitor (at 55%), with 1440p at 20%. Which do you think will be the target, knowing that a 1440p monitor can just run 1080p with most people not noticing? They will build for 1080p at 60fps, as that's still the most common setup by a lot. Do you have that? As you are here, most likely not, and nor do I, but we are the minority, and publishers want to sell as many copies as possible, so they optimize for the majority of systems.
@@yumri4 The reality is that the vast majority of PC gamers are on hardware around the RTX 3060/RX 6600 XT level with mid-range CPUs. That's what developers should optimize games for, or else we will just keep playing older games.
@@bjarnis That has been the case, like, forever. The 60-class cards have always sold the most.
I guess devs expect Nvidia to offer increasingly faster 60-class cards.
And instead we got stagnation.
@@konstantinlozev2272 What devs don't understand is that most of us don't care about "pretty graphics"; gameplay and performance are what matter. Native 1080p is still king, and it doesn't matter how much "DLSS, FSR and frame gen" they advertise, we just want a crystal-clear image with no artifacts when running around.
Shame on you guys for expecting devs to optimize their games. Just upgrade your CPU!
alright todd settle down
You can't optimize the RT API as a game dev. You work with what you have, and it's garbage. Turn on RT and expect stutter, or MUCH lower lows, where to not notice the stutter you need an fps lock near those lows. A better CPU just gives you better lows to work from.
Mfs be running a 40s series card on a potato cpu
@sagebladeg like he said in the vid, literally using the fastest cpu available
People like you have brainworms. You guys are always in the comments screaming that everything is unoptimized while knowing nothing about game development. Y'all expect Cyberpunk at 4K with full PT to run on an AMD dual core and a GTX 1070... it doesn't work like that.
*"CPU Usage" confuses people...*
Many people think that if a CPU shows "50%" usage you can't be CPU bottlenecked, because they simply don't understand enough about how computers work. You always go by "GPU usage" because, generally speaking, GPU code can be extremely parallel, so close to 100% usage (to oversimplify) means the GPU is fully saturated... and if it shows 50% (at maximum GPU frequency) then you're using about HALF of its potential and are thus bottlenecked by the CPU (assuming no software FPS cap is causing it).
So... if you had a game with no multi-threaded code, so it could only run on one core at a time, you could only ever use 100%/x of the CPU. If it were a 4c/4t CPU, you could only use 25% of the CPU in this scenario, yet could still be bottlenecking the graphics card. (A quick way to check per-core usage is sketched after the TLDR below.)
*TLDR*
GPU USAGE near 100% means a GPU bottleneck.
GPU Usage below 100% means a CPU bottleneck.
CPU USAGE near 100% means a CPU bottleneck.
CPU Usage below 100% can't tell you where the bottleneck is.
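If you want to see this on your own machine, here's a rough sketch using the third-party psutil package (my own example, not anything from the video) that prints per-core usage so a pegged main thread isn't hidden inside the overall number:

```python
# Sketch: aggregate CPU% can hide a maxed-out core. Requires the third-party
# psutil package (pip install psutil); run it in the background while a game
# is running to take a one-second sample.
import psutil

per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # one value per logical core
overall = sum(per_core) / len(per_core)

print(f"overall: {overall:.0f}%")
for i, pct in enumerate(per_core):
    flag = "  <-- possible main-thread bottleneck" if pct > 90 else ""
    print(f"core {i:2d}: {pct:5.1f}%{flag}")
```

On a 16-thread CPU, one core pinned at ~100% with the rest idle still shows up as only ~6-10% "overall", which is exactly the confusion described above.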
Very enlightening, thanks.
It could also be a memory bottle neck but that's not typically too much of an issue. It's not just the CPU and GPU contributing to the performance
If only Bernoulli's principle applied to processing power...
The last one, with CPU usage below 100%, means it's a latency (usually memory) bottleneck somewhere in the chain. That's why cranking memory latency lower makes the fps number go up despite bandwidth staying the same, and it's why the X3D V-Cache AMD chips are "the best gaming CPUs": they have enough L3 cache to lower average memory latency by a meaningful amount.
@@LiveType Shaders. A lot of the more modern games have to compile shaders. That's why we see sudden judders as he's playing the game. It's not really a good take on CPU usage here.
Intel's PresentMon utility and its "GPU Wait" metric are a great way to show how much time the GPU is spending waiting for the CPU.
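For anyone who wants to poke at a capture themselves, here's a hedged sketch that averages that metric out of a PresentMon CSV log; the column names vary between PresentMon versions, so treat "GPUWait" as an assumption and check your own CSV header:

```python
# Sketch: summarizing GPU wait time from a PresentMon capture CSV.
# Assumption: the CSV has a per-frame "GPUWait" column in milliseconds
# (newer PresentMon builds; older ones name columns differently).
import csv
import sys

def summarize(path, column="GPUWait"):
    waits = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            value = row.get(column)
            if value:
                waits.append(float(value))
    return sum(waits) / len(waits) if waits else None

if __name__ == "__main__":
    avg_wait = summarize(sys.argv[1])
    if avg_wait is None:
        print("No GPUWait column found -- check the CSV header for your PresentMon version.")
    else:
        print(f"average GPU wait: {avg_wait:.2f} ms/frame (GPU idle, waiting on the CPU)")
```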
If the GPU is only 60% utilized I don't need a tool to tell me that it's waiting 40% of the time....
@@andersjjensen- That’s not how it works.
@@andersjjensenthats just not what that means lol
@@ArdaSReal I'm perfectly aware that there's more to it than that if you want to get nerdy with frame pacing and dispatch calls. But the net result is about the same. PresentMon is a useful tool for developers, as it lets them see exactly what is happening and when during the entire frame render. But it doesn't help me much to know that the majority of the waiting happens between the geometry upload and the shader code dispatch.
The data, graphs and charts in the Special K OSD and widgets are great too.
a modern game being bottlenecked to 60 fps in some areas by a 7800x3d is just sad
I'm not saying it isn't sad, but it can be somewhat explained by game devs that target 30fps on PS5. That means a CPU with double the gaming performance will only be around 60fps.
@JayzBeerz and yet the fastest gaming cpu is an 8 core.....
It's not the core count that matters... Game engines only have so much parallelization that can be done. Certainly having enough cores is important, but 8 cores is very much enough.
Typically either the main thread or the render thread is the one holding everything else back.
@@JayzBeerz Do you even know what you're talking about, mate? Dividing the game loop between more threads is just not possible most of the time. You can even see that CPU usage during the CPU-bottlenecked parts is barely hitting 50%: half of the cores are doing nothing. The 7800X3D is the best gaming CPU there is right now because it has lots of cache, which actually improves performance by letting the CPU avoid fetching data from RAM as often; it can keep more of it in its own cache, which is 40+ times faster than fetching from RAM.
You can even see that the 7900X3D is not as fast as the 7800X3D, simply because the 7900X3D has only 6 cores with 3D V-Cache, while the 7800X3D has all 8.
@@Gornius My 13900K is faster in Hogwarts Legacy. It is OC'd and the DDR5 has tight timings at 7200 MT/s CL32. So there's more to performance than just more cache. Some programs prefer core speed, and some games can't fit all of their data into the 7800X3D's cache and have to fetch it from RAM. In those scenarios it's easy to see a 13th/14th gen CPU be faster with its better DDR5 support.
@@Gornius Aren't half the cores doing nothing because the CPU is parking the cores that don't have access to the stacked cache? And wasn't the lack of handoff to the right cores a huge bottleneck for a while, because CPU cache is supposed to be invisible to software and AMD dropped the ball on getting the OS to use the right cores?
There's nothing really happening in this scene to justify a 7800X3D dropping to 60fps.
CPU bottlenecks can happen when many characters are spawned, or many objects are calculating physics, or lots of AI is calculating paths, etc.
But this area of the game is literally empty, and it looks like it's doing heavy, unnecessary calculations all the time on the main/game thread, which hurts fps and smoothness so much.
It's really a game problem here, not the CPU.
You might still see drops to 90fps in this game on a CPU from 2034, for example; that wouldn't mean we need a better CPU to be released...
I presume in this particular scene all the interiors and characters inside buildings are spawned at the same time, with zero optimization applied to them.
I don't like moving the issue to hardware. The devs designed these games with current and past hardware in mind and shouldn't expect consumers to need future hardware to make their game run correctly. I just find this completely ridiculous. Why are we, the consumers, moving the goalposts so that a game runs well 2, 3, 4+ years after release? They DESIGNED THIS FOR CONSOLES and it doesn't run well on $3000 rigs.
I'm just flabbergasted, and I feel like an old man yelling at clouds with how this seems to be a perpetual and immovable issue. When the game was in development, the hardware they had was a generation or two behind what's out when it actually releases, so HOW does it not run well on release-day new hardware?
But it's OK for hardware to stagnate, at the same or worse prices? Devs cannot do the same thing? Let's put the blame on everyone.
We DON'T need faster cpus. We need game developers who aren't lazy and so profit driven
We don't?? I've heard that the CPU usage percentage is actually misleading because it's averaged across all cores. Games are (probably) using 99% of a few cores and not actually utilizing all of them. Maybe engines don't work like that? But if this is correct, we actually do need faster CPUs.
It's greed and Leftism.
11:58 I don't think you should be disappointed with AMD or Intel, because they can't just magically release a 7GHz cpu then 8, 9, 10GHz and so on. It's the game developers that you should be disappointed with, for designing a game that can't go beyond 60fps on the best gaming CPU on the market.
It's not about general disappointment. It's just sad that a new cpu generation brings almost no performance improvements to the table. Zen 5 and arrow lake are mainly about power efficiency and that's not enough.
It might also be game engines and device drivers. Would be interesting to see which graphics settings most affect CPU usage. (Edit: also operating system stuff like the scheduler...)
Until Sony and Microsoft release consoles that are able to run these AAA CPU-bottlenecked games smoothly at a cap of 60fps or higher, I think we will always see CPUs bottlenecking our systems.
This "sweet spot" being really low am5 systems is the biggest problem
@@gerooq *"Until Sony and Microsoft release consoles that are able to run these AAA CPU-bottlenecked games smoothly at a cap of 60fps or higher, I think we will always see CPUs bottlenecking our systems"*
That's incorrect. A 4770K from 2013 is at least 25% more powerful than an Xbox Series X. A modern CPU is multiples faster than a 4770K. The problem is:
1. Ever since the Great Consolization of 2008, AAA engines/games have been coded for console HW. Console architecture is fundamentally different from PC architecture --- yes, even the PS5 and Xbox Series X/S. Consoles use a shared memory architecture and further have _much_ less RAM than a mid-range gaming PC, to say nothing of the top end.
Series X: 13GB (Shared between CPU & GPU --- 3GB is reserved for system processes)
Mid-range PC: 28GB (16GB RAM + 12GB VRAM)
Top-end PC: 56GB+ (32GB+ RAM + 24GB VRAM)
Disk/SSD streaming and DirectStorage are designed to compensate for a lack of RAM. PCs don't need this. A PC needs games that are coded to properly use the large amounts of _separate_ RAM and VRAM. Talking about optimization is a moot point, because the games aren't even properly _coded_ to begin with.
2. As many others have touched upon, modern studios are very poorly managed. So now, not even the _console_ code is properly written, to say nothing of optimization. Until companies begin hiring based upon merit again instead of "social justice" agendas this ugly situation will continue to fester.
What this also shows is game developers need to change how they're making games because there's no way that these games should be putting this sort of strain on the fastest current gaming CPU.
I'm sorry, I'm not disagreeing with your testing and results but this speaks more to poor game optimization than just a CPU bottleneck. Some games are just designed poorly and will stutter no matter how powerful the hardware is. I've never been a fan of just throwing more money or more power at a problem. I remember the Nvidia Fermi days of GPUs.
I remember playing Fallout 4 at 12% CPU and 40% GPU, sub-60fps. These games are definitely not designed well.
Exactly
I agree, but games are going to continue getting worse when it comes to optimization.
The thing is, a CPU isn't just built for games; there are large parts of a CPU that just won't be used.
It's only going to get worse next year and in the near future. I'm always buying the best CPU available when I build a new PC now, I don't care.
There will always be some sort of bottleneck somewhere. Doesn’t have to be CPU or GPU, but there will be one
Yeah but if you’re gaming you usually want that to be your GPU.
@@OptimalMayhem They should've had you make the game.
*"There will always be some sort of bottleneck somewhere."*
That's not true. If you use VSync, as you should, you will rarely have either CPU or GPU maxed. The goal is to have a stable, non-fluctuating framerate --- the highest stable, non-fluctuating framerate you can achieve given your HW and game settings.
You don't ever want either your CPU or GPU to be maxed out (i.e. bottlenecking). That means there is no headroom, and thus there cannot be a stable framerate.
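The "cap to the highest rate you can hold" idea is simple enough to sketch. This is a conceptual software frame limiter, not any engine's actual code (VSync achieves something similar by syncing to the display), and simulate()/render() are hypothetical placeholders:

```python
# Conceptual sketch of a frame limiter: cap to a rate your hardware can hold
# so neither the CPU nor the GPU runs flat out.
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # seconds per frame

def simulate(dt):
    pass  # placeholder for CPU-side game logic

def render():
    pass  # placeholder for submitting the frame to the GPU

def run(frames=600):
    next_deadline = time.perf_counter()
    for _ in range(frames):
        simulate(FRAME_BUDGET)
        render()
        next_deadline += FRAME_BUDGET
        remaining = next_deadline - time.perf_counter()
        if remaining > 0:
            # Finished early: sleep instead of racing ahead, leaving headroom
            # on both CPU and GPU.
            time.sleep(remaining)
        else:
            # Missed the deadline: we are the bottleneck; resync so one slow
            # frame doesn't force a catch-up burst.
            next_deadline = time.perf_counter()

if __name__ == "__main__":
    run()
```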
So basically, to mitigate the incompetence of today's game developers at optimizing their games, we should have to upgrade even a beast like the 7800X3D? This is crazy. I'm gonna play older games instead if this is the trend.
EDIT: typos
I don't know why you wouldn't want to play older games as a matter of course! I play games all the way back to the 2D Adventure games of the '90s and '80s, and further back to the console/arcade games and Text Adventures of the '80s and '70s.
But the AAA games from ~1998 to 2011 are some of the best video games ever made. _Especially_ the AAA PC games from ~2003 to 2007.
The reason I say this is that unlike some older games which are beloved by many, but do not stand the test of time due to gameplay and controls that were in a state of experimentation, these games are older graphically but absolutely hold up to modern scrutiny, and in many ways are _better_ than modern games --- atmosphere, characterization, and most importantly gameplay.
There's no valid reason these games don't get a higher framerate with such a good CPU. No huge number of physics objects being calculated, no enormous armies needing individual calculation like in Total War, yet the framerate only goes up to 60-ish or 100-ish FPS. That's absurd.
With shoddy optimization becoming more frequent, and many open world games just crushing CPUs (especially UE games), CPU is becoming just as vital as GPU for high end gaming
I'm wondering if from now on I will have to get a much beefier CPU "just in case", even though 99% of games will run just fine on a cheaper one.
@@JoaoBatista-yq4ml Yeah. I'm still on a 5800X3D and it's mostly great, but I'm running into more scenarios where it gets stomped by terrible optimization or overly ambitious scope. When I upgrade, it will be to the best X3D chip available (or the Intel equivalent).
@@JoaoBatista-yq4ml Ye, it seems the majority of the culprits are AAA games (no surprise) running on UE5 and sometimes UE4.
We can say we are still lucky CPUs are not getting insanely high price increases each generation like GPU's (or Nvidia specifically)
You don't have to buy a badly optimized game though. If people aren't buying the game then they'll have to optimize them better. More games are pushing CPUs harder now but you shouldn't feel pressured to shell money to buy an new cpu for an unoptimized game. You'll only be encouraging devs to keep making games that way.
Once I heard Tech from the channel Tech Deals explain this in a way that's easy for people. It was something like this: your CPU is creating/generating all the structures you see in games — all the floors, walls, objects and characters. The CPU gives them all their structure, and your GPU just paints everything to make it look nice. So if your GPU is NOT at 100% usage, it means you are bottlenecked by something else in your system (CPU/RAM/SSD/software); usually it's your CPU that can't generate structures fast enough, and your GPU is just waiting for them so it can "paint" them...
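A back-of-the-envelope version of that analogy (made-up numbers, and it ignores the pipelining real engines do between frames): whichever stage is slower sets the frame rate, and the GPU's reported usage is roughly how much of each frame it spent "painting":

```python
# Toy model of the "CPU builds, GPU paints" analogy. Timings are invented
# stand-ins, not measurements from any real game, and frame pipelining is ignored.
cpu_build_ms = 12.0   # hypothetical CPU time to prepare one frame
gpu_paint_ms = 6.0    # hypothetical GPU time to draw it

frame_ms = max(cpu_build_ms, gpu_paint_ms)          # the slower stage sets the pace
fps = 1000.0 / frame_ms
gpu_utilization = 100.0 * gpu_paint_ms / frame_ms   # roughly what an OSD would show

print(f"{fps:.0f} fps, GPU ~{gpu_utilization:.0f}% busy -> CPU-bound")
```

Swap the two numbers and the same math reports ~100% GPU usage, i.e. GPU-bound, which is why the GPU usage readout is the quick tell.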
I agree with your points, but I also think this is heavily modern developers' fault for horrible optimization. Jedi Survivor doesn't look bad by any means, but there is no reason the strongest CPU and GPU combo on the market cannot run the game at max settings at 1440p at, bare minimum, a steady 100+ fps. Time and time again we see these current-gen games releasing with piss-poor optimization, and it's getting annoying to the point that I don't even feel like playing them until a gen later for better performance. Hell, I only just got into Cyberpunk and GTA V so I could run them at 4K (without RT) with high FPS on my new build.
100% this. This is an optimization issue much more so than it should be a CPU issue. But I fear that this is the new normal, shoddy and horrible optimization is here to stay because they will just shrug their shoulders and say "use upscaling" or "use frame gen" or "get a faster computer" instead of paying a team of devs hundreds of thousands of dollars to optimize the code.
Yeah man. Nvidia and AMD introduced all this new tech that is shifting the work from all-GPU to more of a 50/50 load.
CPUs never needed this in the past, and your performance was more dependent on the GPU.
Now they have lessened the load on the GPU, and that will allow them to hold back on GPU development.
This is my theory. I have not seen/heard it anywhere else.
If I am wrong, I am wrong.
It's because a lot of modern devs are inexperienced because studios don't want to pay experienced devs what they're worth, and they don't want to pay to train the newer devs either, so as a result, we get rushed, unoptimized slop that barely runs on the fastest hardware available. Then we get to wait for 40 patches to roll through before the game runs like it should've upon release, meanwhile some games never truly get fixed.
"No reason the strongest CPU GPU combo cannot run it max settings at 1440p 100+ fps" based on what exactly? Do you have some technical explanation or are you just pulling numbers out of nowhere based on how "good" you think the game looks? I find it weird that people who have zero experience in game development will make comments like this with such confidence.... PC gaming always has games that push past what even the current best hardware can run maxed out, it isn't a new thing....
@@Kryptic1046 That is complete nonsense.....
It is very game dependent, too. In Red Dead Online I went from 100 FPS at 4K with my 7900 XT and a 7700, to 101 FPS when I swapped the 7700 out for a 7800X3D. Literally 1% with the exact same settings. That's it.
But what's your GPU utilization? If that was already at 100% (which sounds rather plausible for a 7900XT at 4k) then a faster CPU helps bugger all.
Not all games will yield the same benefits. Many older games often don't even use all of your cores. It's not uncommon for many games from the early and mid 8th gen to really only use 4 cores
RDR 2 in general is very optimized on the CPU front, even more than GTA V, so your results don't surprise me at all especially if you play at 4k. I think it's the least CPU intensive open world game I've ever played (also the fact that it can run on an old ass mechanical hard drive with zero issues is still a miracle for me).
@@thebaffman4898 not to mention GTA 5 also falls apart when the frame rate hits 180fps
I feel bad for you doing that... I've ended up that way in the past, where I upgraded a part only to be at the same fps. It sucks so bad; CPU and GPU are never gonna be equal... it looks like the GPU will always have the CPU bottlenecking it on high-end gear. We always seem to have problems with our PC gear...
Dragon's Dogma 2 is also heavily CPU bound in cities.
People do not accept CPU limitations. 🤣🤣🤣
I remember an older video you did about bottlenecking... I think you were also talking about resolutions, and it blew my mind, because I totally had not thought about how, when we use lower resolutions, we're actually gaming at those lower resolutions, so CPU bottlenecking would definitely be a thing. Thank you for being an American hero!
I was one of those who said it doesn’t matter at 4k and I’m really glad someone finally made a clear video about this. I see it now. But what’s confusing me is that I don’t see CPU intensive things happening in these games you demo’d. They aren’t even simulation games. Makes me think developers are used to cutting corners on CPU optimization and that this is fixable with better coding.
Thumbs up for choosing not to be ignorant. I don't know the showcased games very well, but especially in RPGs a lot of CPU-intensive work can happen in the background without you really seeing it. For example, NPC logic can be very elaborate: regularly checking variables like the player's level and various skill levels, how they relate to variables on every single NPC, and how the NPCs are supposed to react — aggro range and other behaviour. There's also what the NPC is doing: is it just running down a scripted path, or is there a realistic simulation of its routine, like going to and from work depending on the time of day? NPCs all run some algorithm that chooses their path and avoids obstacles, including the player and other NPCs. Additionally, a lot of graphics-related work can also be demanding on the CPU. Then you have various game assets constantly being loaded in the background, for example scripted events that trigger as soon as you cross a certain point or perform a specific action. There are tons of other things that can happen in the background without you seeing anything on screen. Without knowing the source code of a specific game, it's really impossible to tell the exact reason for frametime spikes.
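The "how often does this really need to be polled?" idea is easy to sketch. This is an illustrative pattern only — the NPC/update_ai names are hypothetical, not from any real engine — showing how spreading AI updates across frames trades a little reaction latency for a big cut in per-frame CPU cost:

```python
# Sketch: time-sliced NPC updates. Each NPC's AI runs once every `slices`
# frames instead of every frame, cutting per-frame CPU cost to roughly
# 1/slices at the price of slightly staler reactions.
class NPC:
    def __init__(self, npc_id):
        self.npc_id = npc_id

    def update_ai(self, player_pos):
        # aggro checks, path planning, daily-schedule logic, etc. (placeholder)
        pass

def update_npcs(npcs, player_pos, frame_index, slices=4):
    for npc in npcs:
        # Stagger NPCs across frames by id so the work is spread evenly.
        if npc.npc_id % slices == frame_index % slices:
            npc.update_ai(player_pos)
```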
The game is only using 1 or 2 CPU threads at the highest clocks due to the way the software had to be written! Not many games are good at multi-threading 😊
@@Ottomanmint Why did it have to be written like that? UE4?
@@jose131991 Not sure where I wrote that, but sounds like Unreal Engine 4 to me.
Excellent video again, thank u.
It's definitely because of how monitoring software shows CPU usage, and how games rely on single-core performance in most cases.
Thanks for this video. Even most enthusiasts aren't educated when it comes to the importance of CPU performance in gaming, it's really tiresome to argue against ignorance.
God forbid these devs optimize their games. Much better to make i9/Ryzen 9 CPU the minimum requirement!
That's the intent.
game devs and gpu makers are colluding with each other to force gamers to buy unnecessarily more powerful hardware
@@soniablanche5672 True
The games are optimized for the PS4/PS5, which is why they run fine on lower-end PCs as well. Demanding that every game be capable of maxing out any CPU and/or GPU is hilarious.
"Cinematic" 30FPS is what devs are after and you are trying to run this 30FPS game at 300FPS.
@@terrylaze6247 The Devs can take their "cinematic" performance targets and shove it. Games are for the players not the devs. If you design a game with no intent to maximize your player's experience then you shouldn't be a game dev.
Good video, man. I always wondered whether better CPUs would still give me these types of stutters in these EXACT kinds of situations, and it's actually really hard to find footage of it, especially in 4K.
Easy: there is always something bottlenecking every system, that's an undeniable fact. The way to play games on a PC is to lock the frames where the system is comfortable, relax and enjoy.
They should fix those lows tho
Locking fps doesn't work on a CPU bottleneck; it only affects the GPU.
@@bdha8333 I don't know what you're talking about. When the frames are locked, the frame limit becomes the bottleneck.
You should create a poll one day if any of your audience actually plays the game you showcase, because every time you pull up a "modern" game to showcase some point about hardware limitations, it is something I would never dream of playing. These days I play mostly retro and indie games because modern AAA feels like it's moving backwards, but maybe that's just me.
If you play retro and indie games you usually don't need any good hardware. There is no point in benchmarking a game that runs smoothly on 20-year-old hardware.
@stonythewoke9921 Not really true. Take Pools for example, which is a very small indie game, but it has a focus on realistic rendering so it gives my 4090 a run for its money. Or Minecraft with shaders and other mods. Or any of the nvidia mods like Portal RTX.
@@seeibe I am not sure what you are trying to say, are you confused why pools does not require a strong cpu compared to hogwarts or jedi?
@stonythewoke9921 No I'm saying from my perspective it seems artificial to pick out these games to make a point just because they're new. It's a fallacy to assume if someone buys the best hardware it's to play the latest AAA titles. And without a poll there's no telling what his audience is actually playing. Kind of like how compared to the general public, a much higher percentage of people who watch this channel use AMD.
@@seeibe There is nothing artificial or arbitrary about the games he picked. He simply picked games that prove the comments about CPUs being fast enough wrong. If you think playing popular games is not a realistic scenario, I can't help you.
Try without RT as well, I'm also starting to think that RT is hitting some weird bottlenecks that are not necessarily CPU related.
Just think about it for a second, if RT is to blame for most of your issues then arguably it's the GPU's fault since its the component responsible for RT.
Is it offloading too much on the CPU? Yeah maybe but how do we know that the RT cores isn't the part that's bottlenecking the system and what you are seeing is rasterised performance left on the table?
We kinda had this issue all the way from the 20 series, Nvidia could be processing RT inefficiently vs rasterised or the RT was badly implemented in the game causing this and everyone would blame the cpu and/or game anyways.
One reason why I stay away from RT like the plague since I get spiky performance almost every time that I enable full RT in any game, though there are some that runs better than others.
RT, I think, brings on the CPU bottleneck faster... it puts extra load on the CPU, so it stands to reason. However, one day games won't allow you to turn ray tracing off; there'll be no off-switch for a lot of settings, and then more problems will come. The days of smooth PC gaming are either far behind us or far ahead in a future we haven't met yet, but at present it does suck...
Just think about it: why does the video card have RT cores if the processor is the one that bears the load? Shouldn't the video card do this? Another jab at Nvidia.
@@robot_0121 this is basically what I said.
@@dawienel1142 ok_hand
@@robot_0121 It should, but it goes back to the devs, because they can actually code it so that RT is primarily done on either the GPU or (wrongfully) the CPU. It's supposed to be done mainly on the GPU, though.
I love your point, but I think it also speaks to reviewers needing to add more "realistic in-world" gaming settings to their reviews, and not only the "in-lab" settings that maximize the performance differences. I understand that's better for getting the numbers right, but it doesn't speak to real-world scenarios, which is really what people should be expecting.
Using a horribly performing game... You're not CPU limited, you're "bad code" limited...
I built a PC from used parts last year and got a 3060 with 12GB, for photo editing and a little gaming. I was really surprised to see how badly bottlenecked that system is. I can play Battlefield 1 more smoothly in 4K than at 1080p :D. I want to upgrade to a cheap Ryzen at some point so I can use my 3060's potential, but as I'm not gaming a lot it's not a high priority.
Informative and on point as always thanks owen!
A lot of people are totally missing the point in the comments, unfortunately.
Daniel was trying to demonstrate that a CPU bottleneck can still happen with a high-end GPU at high resolution; people think he is defending badly optimized games.
@@Blafard666 We get that but he is doing so using the small % of bad optimized games and the hardware a very small% of gamers have. It is kinda stupid as it makes no sense to use what 1% of PC gamers use to show PC gamers look these bad games can even make the best hardware bottleneck. What is the actual point? Everybody who played these games knows this and it will do the same thing to the 5090 and 9800x3d because it is not a harware problem.
@@jmangames5562 The demonstration wasn't about the games. Again. Its about the persistent myth that high end GPU + high resolution can't generate CPU bottleneck.
For this demonstration coupling the best GPU outhere and the best CPU is the smartest combo. What pourcentage of gamers is using that hardware is irrelevant.
Speaking as a developer, the entire idea of "CPU vs GPU bottleneck" is not a good one to use for analysis. It's a perspective that tries to blame one piece of hardware for the overall performance of the software. That's a useful perspective when you're trying to decide which parts to spend more money on, given a budget and the particular games you play. But that's it! That's as far as the usefulness of thinking that way goes.
Daniel , we desperately need your take on pcie 4 x 8 cards on pcie 3 motherboards.
Most budget gamers are only going to upgrade to RTX xx60 / RX x600 tier GPUs, and will still be using their old PCIe 3.0 motherboards. So they'd be stuck with PCIe 4.0 x8 bottlenecks from this current gen to, most likely, the next gen of GPUs.
We need to know how much performance is lost, especially when they run out of VRAM, because most of them are 8GB cards.
It’s not a problem at all. Its pcie x4 gpus that show a significant decrease in performance. I run my 6600 on a b450 board and my performance is the same as what all the benchmarks say. At most its like 2-5% slower
@@virtual7789 But for the very little performance these cards offer, you'd want that 2-5% difference, don't you think?
I'm running a 4070 (4.0 x16) on a B450M (3.0 x16) board, so I wouldn't need to worry about it. But still, being aware of this would clearly help out a lot of budget gamers.
That's why we need Daniel's take.
@@clem9808 2-5% performance shouldn’t affect your purchasing decision. Also, why not just buy the used high end cards from previous gen instead of settling for the trash Nvidia and amd offer in the low end? Based on the leaks the generational gains aren’t gonna be there because the cards are so cut down. Just buy a rx 6800 for $250 or a 5700xt for $150. The used market is amazing right now
Point taken. A very helpful video.
Gaming non-competitively and not requiring DLSS at 4K, I tend to see no such effects, especially after I learned to turn it off to take pressure off my CPU 😂.
Turning off ray tracing helps too; to me it's an upsell with mostly diminishing returns in most of the games I play. There are trends meant to get us to buy the latest and greatest, driven by exactly that kind of tech. That's ray tracing.
I have noticed that the lower your GPU usage is, the more frames you can squeeze out of frame generation. You can get close to 80% more frames if you are CPU bottlenecked, versus 30% more frames if you're GPU bound. So if you're not that sensitive to latency, just turn on frame gen and everything will run at 120. I agree real 120 would be better, but ¯\_(ツ)_/¯
Yes that’s already well know about FG which is another side step to actual CPU optimization. FG is designed to bypass the CPU so the the bigger the bottleneck from that the bigger the get back from FG. Doesn’t help people on 60hz TV’s or monitors that regularly drop below 60 🤷🏾
Trying to do flight sims in VR (so ideally 4K per eye at 90fps) I can never have enough of anything!
In the past, it was often said that games only utilized 4 CPU cores. Given that your CPU has 8 cores, but the game only uses 4, that's why Afterburner shows around 50% CPU usage. Instead of solely blaming the CPU, why not criticize the game for not taking advantage of all the available cores?
I am not Daniel's particular demographic of viewer. I watch him occasionally, and I find his benchmark stuff useful, but I feel like his demographic is people who are newer to the scene. That said, he's right. Another way to know this is that these GPUs are far more powerful than their CPU counterparts and have been for a long time. That's why many much older CPUs didn't even bottleneck Ada when it came out and still don't. GPUs are progressing far quicker. I feel like that's what made X3D so exciting. Also, competitive gamers want every single bit of fps they can get at 1080p, and it doesn't matter that 200 is a lot or 300 is a lot; they want THE MOST they can get. If he throws in the next tier down in CPU his fps will drop, and that's a perfect indicator that he is CPU limited. For most people this will not matter, but for the future of GPUs, when 70-class cards are as powerful as a 4090, we need to understand and be advocating for CPUs that can handle it. That means better flagship CPUs and the trickle-down it will inevitably cause. Why buy a 5090 if the performance is the same as a 5080? It encourages Nvidia and AMD to give us worse GPUs to try and artificially keep a larger performance gap between the different GPU tiers.
those who say cpu performance is secondary don't appreciate triple-digit frame rates even in single-player games. these comments will always exist. you can't save everyone.
So the evolution would be going from single-threaded to multi-threaded optimization, but that is up to the engine and game devs, not CPU manufacturers, to implement.
or faster CPUs 🤣✌
I'm SO SICK of reviewers' misinformation about CPU bottlenecks. IT'S POOR TO AWFUL GAME OPTIMIZATION that's to blame. The CPUs today kick butt.
What's fascinating is that he doesn't get the obvious, and yet he has this many subscribers.
What? It's still a CPU bottleneck.
The problem is that for most games the solution is "just brute force it" to fix the poor optimization. That worked until games began to use the newer features of PC GPUs; before that you could just throw better hardware at it to get the desired results, but now optimization is needed. Now that we have games pushing the limits of what GPUs can do, the "the CPU doesn't need to be optimized for" way of thinking isn't working so well.
@@I_Jakob_I Do you understand that it's useless to push the brute force of the hardware if we keep letting software developers get away with shitty optimization? It's a zero-sum game, a capitalistic and braindead game. It's pathetic. CPUs nowadays are super powerful.
@@Varil92 yes I know
you're absolutely right.
I am using triple screen 1440p @240hz monitors and the FOV affects the CPU .. A LOT!!!!!!!!!!!!!!!!
there are so many games I can't play with very good frames, and my overall CPU usage is much higher than most users'.
I have been thrown shit at for such a long time, and now you're explaining the shit I've been explaining on reddit so well... great job Daniel!
That must be a very immersive setup and certainly demanding. I hope you really have some fun with that. It would be interesting if the FOV was affecting CPU. I play at higher FOV in games even on a single monitor as I want immersion in the scene. I couldn't compare it directly though as you are pushing triple the resolution too, by utilizing more screens.
That is a lot of pixels you are pushing, which would also work the GPU a lot. You are effectively talking about a higher resolution. You're running something equivalent to 6K (2K x3).
My usage of FOV is not as demanding, but also not as nice. I'm sure there is confusion, as FOV may refer to how wide of an angle the camera is. It can even refer to how far the camera sits from a third-person character you control. In those cases the resolution is the same. You are increasing the FOV, but you are also changing the amount of rendering needed with multiple monitors. You may or may not have changed the original camera; sometimes I think the game just adds another two cameras. That has got to be nice, having more vision. I think it would only be outdone by a good VR experience. I haven't ever tried VR, well not at home.
Odd it wasn't understood on reddit, perhaps because of effective resolution. I'd stick with pointing out that the same basic rule applies: if you are getting less than 95% GPU utilization, the CPU is generally the bottleneck. (IMHO it may actually be a RAM, VRAM, or bandwidth issue, as that is quite a demanding scenario and outside what I budget for gaming. As you probably know, CPU is sensitive to RAM speed, and GPU is sensitive to VRAM and bandwidth issues in some scenarios.)
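If anyone wants to sanity-check that rule of thumb while a game is running, here's a rough sketch that polls GPU utilization through the NVML bindings. It assumes an NVIDIA card and the nvidia-ml-py package (`pip install nvidia-ml-py`), and the 95% cutoff is just the heuristic from the comment above, not a hard rule.

```python
# Poll GPU utilization for ~10 seconds and flag likely CPU/engine limits.
# Assumes an NVIDIA GPU and the nvidia-ml-py package; 95% is only a heuristic.
import time
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

for _ in range(10):
    util = int(pynvml.nvmlDeviceGetUtilizationRates(gpu).gpu)
    verdict = "likely CPU/engine limited" if util < 95 else "GPU limited"
    print(f"GPU utilization: {util:3d}%  -> {verdict}")
    time.sleep(1)

pynvml.nvmlShutdown()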
8:00 the cpu is asleep as well, under 40% CPU utilization.
That's because of the core count with SMT. The game would have to use all 16 threads fully to get to 100%, and games usually can't do that. So they end up bottlenecked by the single thread performance.
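To put rough numbers on that (a minimal sketch, with the per-thread loads purely assumed for illustration): one pegged main thread plus a few half-busy helpers on a 16-thread CPU reads as a surprisingly low overall percentage, even though the game is completely CPU-bound.

```python
# Rough arithmetic sketch (not a benchmark): why "overall CPU %" hides a
# single-thread bottleneck on an 8-core/16-thread part like a 5800X3D.

def overall_usage(per_thread_loads, total_threads=16):
    """Average load across all hardware threads, roughly what Afterburner
    or Task Manager reports as 'CPU usage'."""
    return sum(per_thread_loads) / total_threads * 100

# One main thread pegged at 100%, three helpers at ~50%, the rest idle:
loads = [1.0, 0.5, 0.5, 0.5] + [0.0] * 12
print(f"Reported CPU usage: {overall_usage(loads):.0f}%")  # ~16%, yet fully CPU-bound
```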
@@yavnrh Shitty optimization. Battlefield 2042 had similar problems, but right now it's one of the best optimized games ever. It uses all 16 threads of my 5800X3D and it can run over 300fps. A game with 128 players, destruction, wetter effects, big open maps and vehicles has better parallel CPU utilization than a single-player game. Unreal Engine 5 is in general a stutter mess.
@@coffee7180 you mean weather, lol at the wetter.
best benchmark channel on youtube right now
Poorly optimized and coded games are not a CPU problem, they're a game problem. This also has a lot to do with ray tracing, which again was introduced far too early for the hardware.
No one said that the hardware is really the problem. Poorly optimized games are the problem, you are right on that, but having a more powerful CPU would make it easier to run poorly optimized games. Otherwise you wouldn't see a difference going from an i3 to an i7, or from a non-X3D chip to one.
You're right. Also, you're missing the point.
Facts. This is the majority of modern games in the last 4 years.
Ray tracing was introduced too early for AMD
@@Conumdrum If it were fine for nvidia they wouldn't have invented dlss and frame gen.
Really good video. I hate how all the benchmark videos compare CPUs at 1080p mid settings with a 4090. A much more relatable benchmark would be whether there's an impact in these poorly optimized games at 4K. Well done.
I believe Hardware Unboxed did a similar test on how important the CPU is even at higher resolutions
They also explained they had to do that video because of that persistent myth that high end GPU+high resolution means no need for a good CPU cause "impossib' to CPU bottleneck boyz"
Those people haven't played any MMOs at all
Man, those animation cycles really suck in Jedi Survivor. It's like the animators never saw a real human run before.
Do you think this is why AMD is sticking to mid range???
i think you might be onto something here.
It's why I paired my 7800x3d with a 7900xtx. 4090 is an extra 8-900 bucks for quality that is limited by shit optimization in most modern games.
What? Bad cpu optimization in a few games is why amd is sticking to mid range GPUs? What's the logic here?
only 0.1% of gamers actually buy the top range GPUs, AMD is actually trying to make money. Nvidia gets away with it because they have the much bigger market share.
Doesn't make much sense, since AMD is also making the CPUs. Unless they're limited in how well their CPUs can run. Talking about it on a per-core level: Intel has gone the same route by going wide with multiple cores, which is great for heavily threaded applications. A few games are; most are still reliant on a single or just a couple of main threads. Meaning 12c, 24c, 1024 cores, it doesn't matter if the 2-3 necessary cores are hamstrung. Single-core performance almost always matters (very few loads are lightweight multithreaded, and those that are parallel typically run off GPU acceleration anyway).
AMD's sticking to mid range because on the GPU front they can't compete. They briefly mentioned they could if they pushed the power (they didn't say by how much). Shoving more power in to boost clocks for diminishing returns shows they're up against a wall. Yes, Nvidia uses more power, but it's not power consumption alone stretching the 4090 well beyond anything AMD has right now. They know they can't catch up for now, so they're sticking with what they can do. It's not speculation; AMD has actually come out and said as much. They did try to say they could compete if they pushed more power, but 'pics or it didn't happen'.
11:54 So the point is: don't buy a 5080 or 5090, because you won't have a CPU that can keep up.
Daniel Owen: this game, Star Wars Jedi Survivor, is NEVER GOING TO BE WELL OPTIMIZED. Why? Because it's using an iteration of the Unreal 4 engine released in 2013 and developed through the years 2008-2013, which was the era of Win XP and its DX 9.0c. That engine was then, for 2013, translated to the DX 11 API, which was the current Win 7 API back then. DX 11 is heavily based upon DX 9.0c and has one fundamental flaw: it cannot efficiently utilize more than 1-2 cores in parallel. So in practice it needs a core-cut-down i9 with a 2c/4t setup running at 6.0+ GHz to utilize its speed.
That's why measuring any kind of CPU/GPU bottlenecking in this kind of game is incorrect, because it won't show proper and true utilization due to an abundance of cores that aren't utilized and the lack of the very high single-core frequency that UE4 circa 2013 needs.
The only thing here that is correct is the usage of an RTX 4090, because Nvidia uses an older principle of architecture design here and currently, and it is utilized very LINEARLY, which such old graphics engines like.
Fantastic explanation. I wish a UE4 dev would tell hardware YouTubers that, but that will never happen, because it would hurt CPU sales. 😁
@@residentCJ true....completely true.
UE4 has been supporting DX12 for years now, and many other games running on even older versions with DX11 don't have those issues... but you are right, they won't optimise it, because the selling cycle is over. They made their millions and they are not going to spend resources that won't generate more sales anyway...
@@jagildown Unreal Engine 4 doesn't support the DX 12 API natively... it partially supports the DX 11 API... and it translates through DX 11 to DX 12 in software, a.k.a. emulated.
@@lflyr6287 So what? Why don't other games using older UE4 versions have those issues, then? The real issue is elsewhere; it's not that they can't do it, it's that they don't want to do it.
Hey Daniel, you'll never read this but:
Can you do some testing regarding the CPU cost of Ray Tracing and Path Tracing enabled? Almost every reviewer and gamer ignores the CPU cost of enabling RTX. But according to Digital Foundry's Cyberpunk testing the 5800x can drop to mid-40s with Ray Tracing Ultra, and the 12600k performs 43% faster under those conditions.
It was commonly said only a few years ago that AMD CPUs didn't perform as well in games with RT on. Not sure if that still holds true, but no one talks about it anymore.
@@iang3902 i think you mean gpus
If I had a 4090 I wouldn't enable DLSS at 1440p. I get the analogy, but I run a 4070 Ti Super and a 165Hz 1440p monitor, and I hit 165 FPS at High to Ultra settings without DLSS and with ray tracing enabled in most games. My CPU is an i7-12700, which is considered not optimal for gaming. Poorly optimized games are the real problem.
Yes, ray tracing is unrealistic, as this video demonstrates how slow it is. Turn it off and most CPUs should be fine?
I run ray tracing in my games. This video cherry-picks poorly optimized games. Frankly, it's misinformation.
@@FastGPU Please provide video proof of 1440p native @ 165fps with High-Ultra + RT (wherever possible) in: Wukong, Space Marine 2, SW: Outlaws, Starfield, Remnant II, Hellblade 2, Silent Hill 2 Remake, FF XVI, Ghost of Tsushima, GoW Ragnarok, Horizon Forbidden West, CP2077, Alan Wake II. I'll wait :) Why do you guys feel the need to lie so blatantly ? You know people can fact-check your BS and just look at benchmarks/reviews, right ? Most games my sweet behind :)
@@parowOOz Good point. A lot of stupidity here in the comments. The discussion was technical, not about optimisation, but about the fact that Intel, AMD and Nvidia are not giving us better products with the new gen, yet they keep the same prices or worse. Also, the idea that you cannot get bottlenecked at 4K is just stupid.
@@sausages932 How is this misinformation? You pay 2k for a GPU, or even for that stupidly priced RTX 4070 Ti, only to not enable RT? The point of the video was that you sometimes need a better CPU, and Arrow Lake + Zen 5 are not delivering that this generation, but I did not hear them say "we'll lower the prices"; on the contrary, they raise them.
I hate that game graphics are starting to get out of hand and being designed only for top-of-the-line hardware (and not even that in cases like this one), especially when ultra 1080p gaming is becoming more affordable than ever (I think?) with cards like the RX 6600.
Try turning off Nvidia reflex low latency.
Great video
If you wanna come at the argument from that point of view, sure, current hardware isn't fast enough to make sure games never get bottlenecked. That's not because the hardware isn't fast enough, that's because there's effectively no upper limit on how poorly optimized code can get.
If you made the game it would've been perfect. 👍👍
@@BlackJesus8463 What a braindead take. Should I bake a perfect lemon pie before I set about criticizing someone who shat on a plate, put a candle in it and called it dessert?
@@BlackJesus8463lol
And the main blame falls on Nvidia.
Thanks to DLSS, they buried game optimizations.
And above all, games are not meant to be played on ultra details... Ultra details cause quite a bottleneck.
It has happened to me many times that on ultra I got 80FPS and GPU usage of 70-80%, and on high it jumped to 150FPS and GPU usage of 99%.
A beautiful example is the game Crysis Remastered. There you set the water quality to ultra and you immediately have a bottleneck and 50fps worse performance.
I don't understand why God of War Ragnarok can run absolutely perfectly and look like it's on Unreal Engine 5, and then games like Star Wars come out...
In Hogwarts Legacy at 4K I was CPU bottlenecked with a 4080 and an overclocked 5900X, so I went to a 7800X3D and now I'm GPU bottlenecked.
Without dlss.
Frame generation is the only solution for the CPU bottleneck today.
X3D
@@BlackJesus8463 10700K + Nvidia FG = +-90 fps in MFS 2020 New-York with smooth framegraph.
7800X3D without FG = +-60 fps in MFS 2020 New-York with spikes.
Hardware is just nothing without software.
So-called "frame generation" ** is not a solution. It's a cruddy workaround.
** "Frame generation" is a ridiculous term, because that's exactly what every game does --- generate frames.
This is such a terrible take. It’s not a CPU problem, it’s a trash Engine problem.
Absolutely. The hardware is more than capable...complaining that shoddy code runs poorly and blaming the hardware. Lol
It's an RT problem. On my 7950X3D Jedi Survivor hits the ceiling at ~120FPS if I disable RT. I can set everything lower and it stays at 120FPS. Cranking on RT and it slices it basically in half. Same with Hogwarts, btw. I don't have Spiderman, so I can't confirm there... but I've seen this enough to know what's up.
TL;DR: Nvidia pushed a crappy way of doing RT on the world and now we are stuck with it.
Yeah absolutely.
Not all the engines showcased here were CPU limited. Spiderman uses Insomniac's custom engine which has been shown to be well optimized
@@andersjjensen It's not a crappy way of doing RT, it's moreso that optimizing the CPU side of RT is usually not well done in most games.
Outside of the trend of games being generally unoptimized because 'frame gen fixes all', it genuinely looks like you've got a serious leech on your system, unless you're capturing footage and pulling game capture at the same time while gaming. Over 20GB of system RAM usage with 10-14GB of VRAM usage consistently across multiple titles would make me start looking at what is sucking off the teat while I'm trying to cram frames.
When he switched to DLSS Performance it looked so bad, then I realised that at the same time my video had switched to 480p 😂😂😂
😆
Odd, my phone did the same thing.
Lmao
Great video, thanks for clarifying this point.
I feel like you either read my mind or saw my posts :D My god, losing my mind when people are saying how they are always GPU “bottlenecked” with some older CPU + top tier GPU. This is so easy to find out if people would monitor their AAA titles even a minute.
Some of them are Intel shills. Ignore them.
Tech literacy seems to be dire among the PC gaming userbase now. I don't even think they know more about PCs than the average console gamer at this point. So many people can't even handle an options menu to tune their own performance, they just open it, leave it as is or adjust the general preset and call it a day. Understanding deeper things is out of the question. The worst part is thanks to social media they repeat wrong information to each other a lot.
@@PowellCat745 These were AMD users. I just think people try to tell themselves that they made a right choice, and what they have is perfect. When someone says anything that breaks their illusion, they go into defense mode. I just trust data.
@@Monsux Yeah those too. I saw someone refusing to upgrade from their 3900X, a huge bottleneck for their 4080.
@@albert2006xp This seems to be the issue. It's weird when other PC users can't understand even simple concepts. We have more information available, but people don't even bother finding out things.
Yes, thank you. Especially with modern games being less optimized, this is an issue.
2:00 my god, these frame time spikes are atrocious
With the best PC today. And they still say that the game is super playable.
Turn off RT....
Clearly the 7800X3D can't keep up with an RTX 4090 in the games you demonstrated. My question is, how far down the GPU stack do you have to go to be GPU bound in those same games? Thanks for your insights.
If you enable RT in Jedi Survivor or Hogwarts you'll always struggle to get much above 60FPS. When I disable RT on my 7950X3D I get just above 120FPS in Jedi Survivor.
But to answer your question: He was at ~70% GPU utilization in the first example. A 3090Ti is about 70% the performance of a 4090, so that would about max out. A 4080 is about 80% the performance of a 4090 so that would still be slightly under utilized.
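A back-of-the-envelope sketch of that reasoning (the relative-performance figures are the rough numbers from the comment above, not measurements): if the CPU caps the frame rate while the 4090 sits at ~70% utilization, a GPU with ~70% of the 4090's throughput would need roughly 100% utilization to deliver the same FPS.

```python
# Back-of-the-envelope sketch; all numbers are assumed for illustration.
cpu_fps_cap_util_on_4090 = 0.70   # observed 4090 utilization at the CPU-limited FPS

# A weaker GPU with X% of the 4090's throughput needs (0.70 / X) utilization
# to deliver that same CPU-limited frame rate.
for name, relative_perf in [("3090 Ti (~70% of a 4090)", 0.70), ("4080 (~80% of a 4090)", 0.80)]:
    needed = cpu_fps_cap_util_on_4090 / relative_perf
    status = "GPU-bound (maxed out)" if needed >= 1.0 else f"~{needed:.0%} utilized, still CPU-limited"
    print(f"{name}: {status}")
```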
In Hogwarts Legacy turn off DLSS and also turn off Nvidia Reflex and the CPU bottleneck is gone.
nerd
Damn, thanks for this video: "Some people were spreading misinformation"....and now I know that I am "Some people"
*I finally get it!!*
The moral of all this is to play at 8K/24FPS for the ultimate "Cinematic Experience!"
8K 60 with some upscaling is a reasonable way to maximize your setup with some of these games. You could also turn off RT and run native. (Yes, RT is heavy on the CPU.)
A true cinematic experience will always be superior to 420 FPS @ 720p blaze it... 1337 PC G4M3R 7153...
@@christophermullins7163
I'm not sure if you got that I was joking...
Anyway, the deal with 8K screens is that it's stupid. In fact, I cannot think of a SINGLE benefit for an 8K screen.
You physically cannot resolve more pixel density than 4K provides unless you're sitting unreasonably close. And if a game benefits from rendering at a higher resolution than 4K you can do that in a 4K monitor (i.e. NVidia DSR to render at higher than native then downscale to fit)... but...
UPSCALING has a processing cost. So you'll get a lower FPS if you render at 4K then upscale to 8K than you would rendering at 4K for a 4K screen (using DLAA)...
So...
There's just no scenario where an 8K screen makes sense. Quite the opposite. That doesn't mean we won't get 8K screens so we need to upgrade. Of course it'll go that way.
@@photonboy999 do you play on a 4k screen? I have several and in every normal scenario I play in.. I could most definitely see the difference in 4k and 8k. I see jaggies in every game. "8k has no benefits" is indicative of copying what you hear and having no clue about the reality of the situation. Respectfully of course. 😛
@@christophermullins7163
I clearly said 8K screens serve no benefit I can see.
I did NOT say rendering at 8K served no purpose. It can, and that's why I discussed DSR.
Do you have an 8K screen? If so, have you compared rendering at 8K on an 8K screen to rendering at 8K on a 4K screen?
Unless you have an 8K screen to test then you can't dispute what I've said.
Faster memory would also help. While I agree that a lot of people do not understand bottlenecks, I feel like a lot of this is less of a CPU issue and more of a poor optimization issue. However, that isn't likely to change, and it definitely highlights that we're not at peak CPU performance with how lazy many of these developers are.
People don't want to think outside the box. GPUs are flying and processors are falling behind.
Except this was all with RT. RT absolutely rapes the CPU. The FPS ceiling in Jedi Survivor is ~130FPS on the 7800X3D and ~120FPS on the 14900K as soon as you disable RT. Remember that Nvidia solely designed and forced how RT is done on everyone. They don't give a fuck if that standard tanks CPU performance. They just wanted a new gimmick that took up as little space as possible on the GPU die so they could tell gamers they needed an AI feature (DLSS).
DLSS is amazing though. @@andersjjensen
@@andersjjensen UE4 is pretty inefficient with how RT is done. The engine is known for being notoriously heavy on the CPU with ray tracing. That's why Hogwarts Legacy and Jedi Survivor take such a big CPU hit when ray tracing is enabled. Plenty of other games do not have such heavy CPU usage with RT on, such as Cyberpunk, Guardians of the Galaxy, Minecraft RTX, and Metro Exodus (both versions), among others. Quit blaming Nvidia and ray tracing for the inefficient CPU usage with RT on when it's not even present outside of UE4 games. UE5 is by default pretty inefficient with the CPU even without the ray tracing done via Lumen or other traditional RT solutions. Also, the standard, DXR, isn't just supported by Nvidia; it's supported by AMD and Intel as well, who also tend to follow these examples.
That's what he explained right at the beginning: GPUs always benefit from getting wider due to the nature of what they are processing, while CPUs don't benefit from getting wider, because most things the CPU has to process don't have that much data to crunch; all of that was moved to the GPU years and years ago.
Yet when the 5090 comes out, people will be rushing to get it, ignoring that CPU bottleneck, and then wondering why the fps are no better with RT on, lmaoooooooooo. I'm not upgrading from my 4090, fk this, I'm just gonna go a few years only upgrading my CPU to try and even it out.
I've got a good analogy for why parallel processing doesn't always make sense for gaming workloads: if you have to calculate the result of "1+1+1+1+1+1+1+1" in your head, would it be faster to find 4 other people, tell each of them to calculate "1+1", ask each of them for their result, then tell two of them to each calculate "2+2", ask them for their results, and finally calculate "4+4" in your head to get the result, or would it be faster to do the "1+1+1+1+1+1+1+1" calculation yourself? The answer is that communicating the tasks to the other people and then fetching their results takes a lot longer than doing everything by yourself. That's why not everything in a game workload is parallelized, and you usually have one main thread that makes the most important state-dependent calculations by itself. Stuff that can easily be parallelized, like rendering the frame for your monitor to display, is actually highly parallelized and done by the GPU, which utilizes thousands of cores. But calculating the game logic, including e.g. what effect your inputs have on the state of the game, cannot be effectively parallelized in the same way. So you end up with one main thread that fully utilizes the core it is assigned to at the moment, while a few support threads run on other cores but usually don't max out the computing potential of those cores.
Parallelization in game logic can not only slow everything down substantially, it can also cause mistakes in the game logic that manifest as things like item duping, game crashes and all kinds of stuff. So claiming that a game is badly optimized or the programmers are lazy only because it mainly utilizes one core fully while 4 other cores are just used slightly isn't really a valid argument. At most it shows that you don't really know how these things work.
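A toy illustration of that coordination cost, as a sketch (a hypothetical example, not how a real engine schedules work): farming a trivial sum out to worker processes and merging the partial results is far slower than just computing it on one core, because the communication dwarfs the work.

```python
# Toy illustration: coordinating workers costs more than a tiny task is worth.
import time
from concurrent.futures import ProcessPoolExecutor

def add(pair):
    a, b = pair
    return a + b

if __name__ == "__main__":
    # Do "1+1+1+1+1+1+1+1" directly on one core:
    t0 = time.perf_counter()
    total = sum([1] * 8)
    t1 = time.perf_counter()

    # Farm the same work out to 4 workers, then combine the partial results:
    with ProcessPoolExecutor(max_workers=4) as pool:
        t2 = time.perf_counter()
        partials = list(pool.map(add, [(1, 1)] * 4))   # four "1+1" jobs
        total_parallel = sum(partials)                  # the "2+2" / "4+4" step
        t3 = time.perf_counter()

    print(f"single-threaded: {total} in {t1 - t0:.6f}s")
    print(f"parallelized:    {total_parallel} in {t3 - t2:.6f}s (communication overhead dominates)")
```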
This is not cpu bottleneck. This is bad code or game engine/software bottleneck.
The CPU never goes above around 70% max in those games, so it seems you may be right. I'm no expert so someone could prove me wrong.
You are completely missing the point of the video.
There are going to be badly optimised games that are CPU bottlenecked, and the only way to deal with it is better CPUs.
If you compare this CPU with lower performance CPUs, in these badly optimised games and at these high resolutions, the better CPU will do better. Therefore the point of the video is: better CPUs will run better even at high resolutions. So the people who say "yOu dOnT nEEd a hIGh eNd CpU fOr hIgH rEsolutIOn" are talking nonsense. Can't explain it more simply than that.
Kindergarten basic logic, over.
@@thomasantipleb8512 Your point is valid only if you are ready to spend whatever it takes on expensive hardware.
@@thomasantipleb8512 You are accepting game developers releasing badly optimized games as a fact of life.
The sane thing to do is to not buy a game until it is proven to have at least decent optimization.
Honestly, there are so many good games nowadays that you can skip all the terribly optimized ones and still have more games to play than there is time to play them.
Especially if you have other things to do in life besides playing games.
@@peterfischer2039 I don't disagree with you, but that's not the point.
It would have been good if you had shown per-core utilization, to show how many cores the game is actually using.
This video is so misleading and does a disservice to gamers. What people are saying about CPU bottlenecks is generally true. In most games at 1440p or higher you will be GPU bottlenecked, and CPU improvements will have marginal impacts. Of course there are going to be outliers, but most games aren't poorly optimized like the examples you showed.
exactly
I wish that were true, but many studios are switching over to the mess that is Unreal, e.g. the new Silent Hill. For the way it looks, it runs very poorly.
Misleading how? What he's trying to say is that some people are under the impression that you can offload from CPU to GPU by increasing resolution and thus get better frames. He's just trying to say that is not how it works.
In some types of games, there could be more optimization to use more cores. For example, Cities: Skylines 2 can use up to 64 cores, and that might very well be a soft Windows limit. I believe the bulk of the CPU usage is pathfinding for the agents. If so, many strategy games could use multithreading much more aggressively. That's a small fraction of the game, but it would be a good start.
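As a rough sketch of why per-agent pathfinding parallelizes so naturally (a toy grid and BFS, purely hypothetical, not how Cities: Skylines 2 actually does it): each path request is independent, so a pool of workers can chew through them with no coordination beyond handing out jobs.

```python
# Sketch: each agent's path request is independent, so they spread across cores.
from collections import deque
from concurrent.futures import ProcessPoolExecutor

GRID_W, GRID_H = 64, 64  # open grid with no obstacles, to keep the toy small

def find_path(request):
    """Breadth-first search from start to goal on the open grid."""
    start, goal = request
    prev, frontier = {start: None}, deque([start])
    while frontier:
        x, y = frontier.popleft()
        if (x, y) == goal:
            break
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nxt[0] < GRID_W and 0 <= nxt[1] < GRID_H and nxt not in prev:
                prev[nxt] = (x, y)
                frontier.append(nxt)
    # Walk back from goal to start to recover the path.
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = prev[node]
    return path[::-1]

if __name__ == "__main__":
    requests = [((0, 0), (63, i)) for i in range(64)]   # 64 agents, 64 goals
    with ProcessPoolExecutor() as pool:                  # one worker per core by default
        paths = list(pool.map(find_path, requests, chunksize=8))
    print(f"computed {len(paths)} independent agent paths in parallel")
```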
Try this without RT. Nvidia pushed an absolutely stupid standard on everyone. It costs an enormous amount of CPU power to calculate the BVHs... and that should have been done on the GPU (which, you know, is pretty good at geometry). The problem is that it would require dedicated silicon like ROPs and TMUs and Nvidia didn't want to take space from their precious compute... which is what the professionals pay big dollars for. So now CPUs have to handle that on top of doing the character animations, physics and building the scene geometry before dispatching it to the GPU.
Ray tracing isn't the issue
Blaming RT & Nvidia is a cope. Cyberpunk 2077 already proved you can have very advanced RT and have the game scale CPU performance beautifully. The only people at fault for the crummy game performance are the devs who made the game.
@@atiradeon6320 Cyberpunk also tanks massively on the CPU when you enable RT. It's not nearly as egregious as the UE5 games (which are particularly bad due to Nanite constantly recalculating the BVHs because polygon levels change all the time). Hybrid rendering just categorically sucks. It's not until you get to full scene path tracing that you get out of that pickle.
And no, being on Nvidia's ass is not "cope". I've watched the motherfuckers for over 30 years now. I could write an entire book about how they've consistently tried to lock competition out.
@@crestofhonor2349 I have a 7950X3D. Both Jedi Survivor and Hogwarts Legacy drop to 60-75FPS when I enable the highest RT at 720p native while my GPU is under 50%. If I disable it I get 120-130FPS and my GPU is still under 50% (obviously, RT is GPU heavy too).
@atiradeon6320 I only ran the cyberpunk benchmark a few times, never played it but we were turning RT on and off and I could not really tell much of a difference. Nothing as good as the difference between 60fps and 100fps with it off.
While technically true that some games are "CPU limited" in terms of single-threaded performance, it's also true that even modern games that use a couple of threads are still limited by single-threaded performance. This is likely not something that we'll see huge leaps from one generation to the next on. As you stated, GPU scaling is simpler because making GPUs wider makes games run faster on them. NVIDIA could release an RTX 5099 that's double everything the RTX 5090 is and it'd be more or less twice as fast (and expensive, and consume 800-1000 watts or whatever, but it'd be possible).
So I'd put the blame mostly on game developers and current game engines in use.
Also, if you really wanted the fastest possible gaming CPU, you'd see a very slight increase in performance with a 7950X3D with the non-3D CCD disabled, as the 3D CCD can turbo slightly higher than the 7800X3D's CCD.
We're also holding a what, $400 CPU up to a $1500+ GPU, right? I just don't think CPU manufacturers super selectively binning their CPUs to get a 10% faster version and then selling that for $1000+ would sit very well with consumers.
So yeah, even if we got a +15% uplift in gaming performance for CPUs this year, it would just shift the point of where the "bottleneck" is occurring slightly further back, but nowhere near enough to keep up with the upcoming RTX 5090 in these scenarios you've shown, even though that +15% improvement in CPU gaming performance would be very impressive generation-on-generation.
I don't know, Daniel, you're always gonna be bottlenecked by something... Your point is indeed well taken with regards to the online discourse about "CPU performance don't matter duh", but the more devs rely on upscaling and FG to provide the fps figures instead of optimizing the underlying engine, the more glaringly obvious the need for CPU performance for everything else that makes a game will become. Love your videos, keep up the good work. Cheers
I'm fed up with people on the internet saying "it works OK for me", or "it runs smooth as butter", or "no stutter here". Well, here we have it: some games in which you throw more than 2000 dollars in components and you still get a sub-par experience. It's like higher-end parts are meant to brute-force through badly optimized games instead of delivering super next-gen graphics. What a waste of money, energy and people's time. Thank you, Daniel, for exposing such data always.
Actually, I want to make the point that this doesn't always mean a CPU bottleneck, unless you see one or more threads hitting 85-90% utilization. The point I am trying to make is that the game engine or the game itself may be very bad, which makes neither the CPU nor the GPU the culprit. For example, if I write code to wait for 2ms, the CPU will just wait for 2ms doing nothing (or something else), and no matter how good a CPU we get 100 years down the line, it still has to wait 2ms because that's the command it was given. So if I don't see any cores being utilized much and the GPU is also underutilized, then I start suspecting the optimisation of the game, or a game engine that doesn't fit the game's scenes well.
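A minimal sketch of that "wait 2ms" point (a hypothetical loop, just to show the cap): the tick rate is limited by the hard-coded wait, not by how fast the CPU is.

```python
# A fixed wait in the loop caps the tick rate regardless of CPU speed.
import time

TICKS = 200

start = time.perf_counter()
for _ in range(TICKS):
    time.sleep(0.002)   # the code *asks* to idle 2 ms; no CPU can shorten this
elapsed = time.perf_counter() - start

print(f"{TICKS / elapsed:.0f} ticks/s (capped around 500 or below, whatever the hardware)")
```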
Bad optimization literally makes the CPU or GPU a bottleneck when it doesn’t need to be. The reason why it’s bottlenecking doesn’t really matter when there’s nothing you can do to make a game perform better.
Wow. It would be very interesting to test how memory optimization and some CPU tweaks affect your system, with that top CPU & GPU combo.
I have been dipping my toes into memory tweaking/overclocking and CPU undervolting/overclocking since, well, forever at this point, but in my current system my GPU is inadequate to scale significantly with changes on the CPU side of things. In your system any improvement will be immediately obvious.
People who say you don't need a better CPU nowadays obviously havn't played Tarkov.
It's a dogshit coded game - it's not the hardware's fault.
the 7800x3d is a beast for tarkov though
@@ConnorH2111 It is! It's what I have, but funny enough I'm sure a better CPU would still help for that game.
Just a note - the main thing that improved the performance in Jedi Survivor was the removal of the Denuvo DRM, which just shows us that all games running Denuvo have the potential for greater performance, if only the developers would remove it from their games.
RT cripples CPUs, as you showed with Jedi Survivor. Other games are also CPU dependent, like MMOs, but it's a dying genre for kids, so they probably have just never experienced it. Heck, I think SWTOR would bottleneck a PS4-level GPU due to being DX9 and single-core dependent. Add to all this that Intel is a desperate company atm. Never underestimate bots and astroturfing. It's rampant on all social media, it's cheap, and the FTC rarely fines the companies for it.
I thought RT was completely on the GPU?
@@ArdaSReal Nah, it has a huge impact on the CPU. Which is why Nvidia made the key feature of the 4000 series "frame generation": that doesn't need much CPU, so Nvidia could sell the 4000 series as "better graphics and more fps".
Thank you for the video. Unfortunately there are many reasons why a CPU becomes a bottleneck; often it's poor optimization and incomplete utilization of the available resources, including the individual cores. It would be very useful, for example, to monitor the % usage of each logical core instead of the whole CPU, to show which core saturates and creates a bottleneck for the GPU.
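For anyone who wants to do exactly that, here's a small sketch using the psutil package (assuming it's installed; any per-core monitor works just as well): it prints every logical core, so a single pegged core stands out even when the overall number looks low.

```python
# Watch per-core load while a game runs (requires `pip install psutil`).
import psutil

for _ in range(10):                                   # sample for ~10 seconds
    per_core = psutil.cpu_percent(interval=1, percpu=True)
    hottest = max(per_core)
    line = " ".join(f"{p:5.1f}" for p in per_core)
    print(f"cores: {line} | hottest: {hottest:.1f}%")
```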
To echo Daniel's point about CPU bottlenecks: I have a 7800X3D + 4070 Ti @3440x1440, playing Spiderman Remastered (a 2022 game) on max graphics settings with regular ray tracing, and I'm getting 100% GPU usage and 70% CPU usage.
People forget ray tracing costs a lot of CPU, and the higher the resolution, the more workload there is on the CPU to handle ray tracing.
When I got a new 40 series I was shocked at how CPU heavy ray tracing is. Stark difference. What I find odd is there aren't many videos doing CPU comparison WITH RT ON specifically.
@@christophermullins7163 I think that's because an average PC user doesn't have a GPU powerful enough to run RT on every game, so to cater to an average user, they test games without RT
FG also adds to cpu usage
For those saying "its not the CPU, its bad optimisation".
You are completely missing the point of the video.
There are going to be badly optimised games that are CPU bottlenecked, and the only way to deal with it is better CPUs.
If you compare this CPU (7800X3D) with lower performance CPUs, in these badly optimised games (that unfortunately exist and will continue to exist) and at these high resolutions, the better CPU will do better. Therefore the point of the video is: better CPUs will run better even at high resolutions. So the people who say "yOu dOnT nEEd a hIGh eNd CpU fOr hIgH rEsolutIOn" are talking nonsense.
Can't explain it more simply than that.
Kindergarten basic logic, over.
This video only proves what I've always thought
I will never enable RT on any games since the difference is mostly minimal and it's really costly on your PC.
Raster performance is what you mostly want and it pains me that devs are baking RT right into games' engines and settings
RT as a technology is better by a mile. The technology to use it well just isn't there yet, but people are using it anyway.
RT is a far better solution for lighting. We can't stay on rasterization forever, especially since it can also take a lot of time and authoring work, since it's just an approximation. There are times when rasterization is slower than ray tracing.
It's not a minimal difference. I don't know why, but I played Callisto Protocol today (it was free on Epic some time ago, but I never found time to play it). Anyway, playing it without ray tracing is a different experience: it looks like an older game, not up to modern standards, until I turn ray tracing on. I don't mind ray tracing in a single-player campaign even if it comes with some stutter hiccups. When I paid for the GPU, I intended to use it to its limit. Also, maybe it's not such a bad thing that the best gaming CPU is falling behind GPU performance. It may force software developers to improve game engines, and we get a free fps boost instead of continually upgrading hardware for minimal gains.
Some games use it well, but yes, in most titles I don't see much difference in looks, while the difference in fps is very obvious. I'm sure it will be standard in the future though, in AAA games at least.
This is only true if you have a low-quality monitor. I agree with you if you're not using an OLED. I have both, and ray tracing does look way better on a properly tuned OLED.
I play at 1440p/PSVR2 using a 4070 paired with a 7600 & 32GB DDR5 and I'm 99% of the time GPU limited. Don't really need a faster CPU until a couple years later.
You're not really making the case for CPU bottlenecking when you're using a 4090, which about 0.01% of gamers have.