Since I had many people ask me about this, here is a short video on how to set up the on-screen overlay as seen in this video: ua-cam.com/video/EgzPXy8YYJw/v-deo.html
My analogy for what the CPU and the GPU do is that every frame is a pizza and the GPU is baking them. When the GPU gets a pizza ready it hands it to the CPU, which folds a box for the pizza to deliver it. If you're GPU bound, the pizzas are being made as fast as possible and the CPU has no trouble putting them in boxes. If you're CPU bound, the GPU starts piling up pizzas and has to slow down because the CPU can't keep up folding boxes and putting the pizzas in them to be delivered.
And when you lower your resolution, you make smaller pizzas, which are quicker to bake, which means boxes have to be folded for them at a faster pace.
It makes sense, but I think the CPU is not the last stage but the first, with a set of instructions for what the GPU must create. So I'd rather think of it as the CPU preparing the dough and the GPU adding the ingredients and baking it. The GPU can only assemble and bake a pizza from dough the CPU has previously prepared. If the dough gets too complicated (with many different ingredients and layers and whatever), then the CPU will take more time. But even if it's simple, the CPU can only prepare a limited number of pizza-ready doughs, and some GPUs have incredibly large ovens. In this case, even though we could bake 100 pizzas together, our CPU can only prepare 50 of them each time, so we'll use only 50% of our capacity. Many CPU cores handling different amounts of work can be like different people doing different parts of the dough preparation, like when putting the dough to rest becomes a much longer and more time-consuming task than simply shaping it. A GPU bottleneck would be when there is enough dough prepared by the CPU but the oven is too small, or when the oven is big enough but there are more complex steps in the assembly phase, such as when the client asks for filled borders, extra ingredients, or simply more calabresa that takes more time to cut.
Very good and interesting video, glad to see more people explaining things! We're going to make a video about this, but I will now add some of your case scenarios here and show your channel as well
Been watching your videos for years, and as you know, had a few pleasant interactions with you on Twitter as well, but never did I think you'd actually watch one of my videos! This really means a ton to me, really appreciate it!
One easier way to see when you have a CPU bottleneck is to use Process Explorer to examine the individual threads of the game. Cyberpunk will run dozens of different threads, but not every aspect of the game engine is multithreaded, and the scheduler will rapidly move some of those threads between cores depending on the load (perfectly normal when a game launches 100+ threads). If you look at the individual threads, you will often see 1 thread using 1 core's worth of CPU time, and at that point, frame rates stop improving. A simple test is to use settings that get the GPU usage in the 85-95% range, then, while gaming, downclock the GPU (e.g., lower it by 500MHz in Afterburner); that gets the GPU pegged at 100% while also lowering frame rates, ensuring a GPU bottleneck. Then gradually increase the clock speed of the GPU while looking at the threads in Process Explorer. You will notice a gradual increase in frame rates that stops as soon as one of the threads in the game reaches 1 core's worth of CPU time. (To figure out what 1 core's worth is, take 100 and divide it by the number of threads available to the CPU, including hyperthreading. For example, a 16-thread CPU would allow up to 6.25% of CPU time, representing 1 core's worth.) Since a single thread cannot use more than 1 core's worth of CPU time, that thread will have encountered a CPU bottleneck. PS: a system memory bandwidth bottleneck is very rare. The only time I have been able to replicate one on a modern system was with a lower-VRAM card (e.g., 8GB or less) in a game where the engine did not disable the use of shared memory. Once shared memory is actively in use (you can tell by the PCIe bus usage increasing significantly), and once it hits 50%, indicating saturation in one direction, you will notice a scenario where both the GPU and CPU are underutilized.
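To make the "1 core's worth" arithmetic concrete, here is a minimal sketch (the function names are mine, purely for illustration):

```python
# One core's worth of CPU time, as a percentage of total CPU time,
# on a usage meter where 100% = all logical threads busy.
def one_core_worth(logical_threads: int) -> float:
    return 100.0 / logical_threads

def is_thread_bottlenecked(thread_cpu_percent: float, logical_threads: int,
                           tolerance: float = 0.5) -> bool:
    # A single thread pegged at roughly one core's worth of CPU time
    # has hit the per-core limit and cannot go any faster.
    return thread_cpu_percent >= one_core_worth(logical_threads) - tolerance

print(one_core_worth(16))               # 6.25
print(is_thread_bottlenecked(6.2, 16))  # True
print(is_thread_bottlenecked(3.0, 16))  # False
```

So on a 16-thread CPU, any game thread hovering around 6.25% in Process Explorer is the one holding your frame rate back.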
PS: in those cases, you can get small boosts in performance by tightening the timings of the system RAM to improve latency. Modern cards typically have around 25GB/s of DMA access over a PCIe 4.0 x16 bus, but in cases where full saturation is not reached, lower latency minimizes how much performance is lost while shared memory is in use. Once full saturation happens, frame times will become very inconsistent and the game will hitch/hang as well. A storage bottleneck will show dips in both CPU and GPU usage (most easily seen as simultaneous drops in power consumption).
A memory bandwidth bottleneck? Like being limited by a 128-bit bus? I mean, AMD and NVIDIA have now released 16 gigs on entry level (RX 6600 XT + 4060 Ti) where the memory bus width is 128-bit, which is potentially limiting the performance.
@@i_zoru For the video card, VRAM throughput largely impacts how well it handles throughput-intensive tasks, such as many large textures and higher resolutions, as well as game engines that dynamically resample assets. GPUs can resample textures hundreds of times per second with very little overhead, but it is more intensive on VRAM throughput. For games that are not using functions like that, a card like the RTX 4060 16GB works decently, but for games that rely heavily on throughput-intensive tasks, the 4060 underperforms. It is also why the RTX 4060 seems to scale so well from VRAM overclocking, to the point where some games can get a 5%+ performance boost from VRAM overclocking alone, whereas the RTX 4070 may get 1-2% at best in gaming. Outside of gaming, though, tasks like Stable Diffusion and other memory-hard workloads will scale extremely well from VRAM overclocking, especially when using AI models, render resolutions and settings that get the card to use around 90-95% of the VRAM. PS: Stable Diffusion will get a massive slowdown if there is any spillover to system RAM. For example, an RTX 3060 12GB and an RTX 4070 12GB will end up performing the same once system RAM is being used; you will also notice the power consumption of the cards drop significantly. The RTX 4060 drastically underperforms when there is spillover to system RAM; the crippled PCIe x8 interface instead of x16 makes the bottleneck far worse.
You could enable GPU time and CPU time in msec. For example, if the CPU sits at 5ms per frame and the GPU at 3ms per frame, you'd conclude a CPU bottleneck, and vice versa for a GPU bottleneck.
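That comparison is simple enough to write down (a toy sketch using the hypothetical 5ms/3ms numbers from above):

```python
def limiting_factor(cpu_ms: float, gpu_ms: float) -> str:
    # Whichever stage takes longer per frame is the limiter:
    # the faster stage ends up waiting on the slower one.
    return "CPU bound" if cpu_ms > gpu_ms else "GPU bound"

print(limiting_factor(5.0, 3.0))  # CPU bound
print(limiting_factor(3.0, 5.0))  # GPU bound
```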
Yeah, that's a good option too, but you would need to chart it as the on-screen overlay doesn't update quickly enough. But it can give you an idea at least 👍
A faster way, or one used in conjunction with this, is to look at the GPU power consumption. The 4070 Super is a 220W card. When CPU bottlenecked, it was running under 150W.
Yeah indeed. I spoke about this, saying it gives you a good idea and indication, and then GPUBusy can confirm it. Well, it's still not 100% accurate, but 99% is good enough for most people 🤣
That's what I've been looking at as well. It's pretty accurate too. I have a 4070 super as well. When it drops to e.g. 210w in cyberpunk, it's always a CPU bottleneck situation, at least in my experience. Adding GPU Busy to my overlay now just confirms it.
This was an excellent video. From someone who started to understand this after getting into PC gaming 2 years ago, it's always quite jarring seeing the comments in other benchmark videos or even in steam forums such as "my system is fine because my CPU is only 30 percent utilized so it must be the game" or "why test the 4090 at 1080P" in CPU benchmark comparisons.
Appreciate the kind words! And I agree, often the comments on some of these videos are mindblowing. And they are also often made by people that are so sure that what they say is correct that it just spreads misinformation, unfortunately. But it's great that you educated yourself on stuff like this within only 2 years of PC gaming. People think stuff like this is not important, but I think it is, as it can really help you plan your build and not overspend on something that'll go to waste. I see so many people that still have 4th gen CPUs rocking 4070 Tis or 4080s because "CPU doesn't matter".
@@Mostly_Positive_Reviews I am curious though, 10:57, does frame gen always get rid of the CPU bottleneck? I notice in some games like Hogwarts Legacy and Immortals of Aveum, where I am CPU bound, the GPU is unusually underutilized at times. Digital Foundry did a video about frame gen on console in Immortals of Aveum, and the increase in performance was relatively modest despite a probable CPU bottleneck, which I suspect is frame gen not entirely getting rid of that bottleneck. Could there be other limits or game engine limitations frame gen doesn't take into account?
@@Stardomplay I only used frame gen in this scenario because, when this close to being fully GPU bound, it almost always ensures a GPU bound scenario. But it doesn't always. Hogwarts Legacy is actually a very good example where even with frame generation you aren't always GPU bound, especially when using ray tracing. Another good example is Dragon's Dogma 2 in the cities. Sometimes other instructions just take up so much of the CPU in terms of cycles that it is almost always CPU bound, even at 4K. In Dragon's Dogma 2, the devs blame the poor CPU performance on AI in the cities for example, and there, even at High settings, 4K with frame gen on this 4070 Super, I am CPU bound. Step outside of the city and you become fully GPU bound. When things like that happen people usually call a game unoptimized, and in the case of DD2 it is definitely true. Render APIs can also impact CPU bottlenecks significantly. For example, DX11 doesn't handle multithreading well at all in most games, and the chance of you running into a CPU bottleneck with a DX11 game is greater in my experience. With regards to Immortals, I have only ever played the demo in the main city, and I personally find the game to be very stuttery there, regardless of the CPU / GPU used. Something in that game just feels slightly off in terms of motion fluidity. But to answer your question, no, frame gen doesn't always eliminate a CPU bottleneck, but it can help a lot in alleviating it under the correct conditions.
This man's a genius.. Whilst experimenting with disabling cores ( the Intel issue of the day ).. turned it into one of the best explanations I've ever heard on a completely different issue.. can't wait for his Intel cores video ( wonder what I'll learn from that :-)
I only ask to choose whether my content is paid or free, because it seems like it (and I personally) get treated like paid pro work but generate income like free, jobless work, which is very confusing, and the conversation has been about everything but the question asked
How to identify a hard CPU bottleneck in 10 seconds: turn down your (native) resolution, and if your framerate stays about the same (or exactly the same) you are CPU bottlenecked, say ~30FPS at 4K and ~32FPS at 1440p. Why? Because if you lower your resolution your GPU should render WAY more frames, since there are A LOT fewer pixels to render. In this case your CPU would not be fast enough to (for example) process the NPC AI, BVH for ray tracing etc. to get more than about 30ish FPS, despite the fact that the GPU now "only" has to do maths for 3.7 million pixels (1440p) instead of 8.3 million pixels (4K)
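That 10-second test boils down to a simple heuristic; here is a sketch (the ~10% gain threshold is my own arbitrary cutoff, and the pixel counts are just for reference):

```python
PIXELS = {"4K": 3840 * 2160, "1440p": 2560 * 1440, "1080p": 1920 * 1080}

def cpu_bottlenecked(fps_high_res: float, fps_low_res: float,
                     threshold: float = 0.10) -> bool:
    # If dropping resolution barely moves the framerate, the GPU had
    # headroom all along, so the CPU must be the limiter.
    gain = (fps_low_res - fps_high_res) / fps_high_res
    return gain < threshold

print(cpu_bottlenecked(30, 32))  # True: only ~6.7% gain going 4K -> 1440p
print(cpu_bottlenecked(30, 55))  # False: the GPU was the limiter
```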
Yeah indeed. When people ask me whether they are bottlenecked but they don't want to check it with something like this, then I always tell them to lower their resolution. If the framerate doesn't increase much, you are held back by the CPU.
@@bigninja6472 Notifications? Meaning as soon as a new video goes up you get notified? I do set it on each video to send notifications to people who have notifications enabled, but it almost never works, even on big accounts :(
I cap the FPS on my GPU so it runs cooler & smoother, since FPS performance over a certain quality threshold is just benchmarking giggle numbers, unnecessary strain on components, & extra heat generation for my bedroom. Running a GPU at max FPS is more likely to create occasional stuttering & less-smooth play, since it has no additional headroom to handle the sudden loads that create drop-off lows. So, my R9-7900 CPU likewise is probably just jogging along with my capped GPU instead of sprinting. Works well for me.
That's the great thing about PCs, there's a setting / configuration that caters to everybody! I also prefer to try and max out my monitor's refresh rate with a little headroom to spare to try and keep a more consistent framerate.
One is a CPU bottleneck (the 4-core example); the other is a game engine bottleneck that makes it seem like a CPU bottleneck, which, as you said, sometimes is the CPU itself and sometimes isn't. There is a difference and it is pretty damn important to note.
Sure, it's the game engine not properly using every core, but that's the case for 99% of games nowadays. Still, if you used, let's say, a 7GHz 16900K, you would still have low CPU % usage but that GPU would be closer to 100%. But by that time you'll probably have a better GPU and the issue will remain, though your framerate might have doubled lol
@@blakey_wakey If the game only uses one or two cores (like older games such as StarCraft II), or if the game uses multiple cores but at a very low % in your Task Manager or Afterburner readouts. Software usually leaves a lot of compute performance on the table when it comes to CPUs. This is why you can have a lower-powered console use a CPU from 4 or 5 gens behind PCs yet still be GPU limited: in the console's case it is much easier to program for that specific set of hardware, and the limiting factor is almost always GPU compute. However, those are just two very simplified examples of a game engine being the bottleneck. There are others, but they are more in the weeds. Something like... a cache bottleneck (aka memory latency within the core), or a core-to-core latency bottleneck (like with the Ryzen chips with two dies on them). Every computer has either a software or a hardware bottleneck though, as if it didn't, your games would run at infinite fps.
I also noticed NVIDIA users in Cyberpunk say they are CPU bottlenecked at about 50% CPU usage, but on AMD it often goes right to 100% in this game; AMD GPUs use the CPU differently
Also, even if your CPU load says 100%, that is most likely a lie: that is actually the average of the CPU's P-cores' load. Meaning that if you have both the GPU and the CPU usage at 100%, you can still get a better GPU and not bottleneck it. As long as you don't play something like Star Citizen; in that game, every single GPU gets bottlenecked no matter what CPU you have 😂
For the layman out there: if you see your GPU usage below 90-100%, then you most likely have a CPU bottleneck. That CPU bottleneck, however, could be due to your CPU being underpowered compared to your GPU, or it could be due to the game/engine itself (like Dragon's Dogma 2, Jedi: Survivor, Witcher 3 remaster)
All CPUs are currently bottlenecked at 4K, for example; it wouldn't matter much what CPU you have at 4K. If you have an older CPU, the biggest difference you would notice would be in the 1% lows. This man talks a whole lot, using so many words to describe something very simple.
Lol! Firstly, not every CPU will max out every GPU at 4K. There are plenty of benchmarks out there showing that even something as recent as a 5600X can bottleneck a 4090 at 4K. Even a 4080 can be bottlenecked at 4K by a slower CPU. Secondly, I explained it in "a whole lot of words" because it's not always that simple. Yes, you can get a good idea by checking GPU usage, but using GPUBusy you can get a lot more info. And that's what the video was about...
Thankfully there's a solution: the 7800X3D, the fastest gaming CPU on the market. Even Digital Foundry ditched Intel and went AMD. Do the right thing, people
Since I discovered this video over a month ago, thanks to you, my games now have better FPS. Yeah, it's bad if the GPU usage is only 50% or lower even though the CPU usage is also at 50% or lower. Getting the GPU usage to 100% does in fact give me way better FPS and way less stuttering. Taking the load off of the CPU really helps significantly. What is really strange is that going from 1080p to 1440p in some games makes almost zero difference; it's because the lower your resolution & graphics settings are, the more likely you are to come across a CPU bottleneck.
I am really glad I could help at least one person. And you are 100% correct, if you go from 1080p to 1440p and there is no difference you have a very big CPU bottleneck. Not many people understand that fact. And another important one is that when you are GPU bound your CPU has some breathing room, resulting in better lows. So even though you might not always gain 50% FPS by getting rid of a CPU bottleneck, you will almost always have better lows / less stuttering. Thanks for leaving this comment, it is really appreciated!
@@Mostly_Positive_Reviews Yeah, it's better to be GPU bound; CPU bound, no thanks. I also subbed. You do know what you're talking about when it comes to PCs, you are the man! Edit: I do like the fact that if you go from low to max graphics settings, in some cases there's little to no penalty; sometimes you might even get an FPS increase because you're taking the load off of the CPU.
@@niko2002 Thank you for the sub. Just hit 3000 yesterday so I am very grateful for that! I've been around the block when it comes to PCs, but I am still learning every day, and that's what makes it so enjoyable for me!
@@Mostly_Positive_Reviews Yeah, since my brother and I built a PC with a Ryzen 7600X and an RX 6700 (non-XT), I've been very interested in computers. But I also love cars & motorcycles; it's just more fascinating to learn outside of high school. I'm glad UA-cam exists, that's how I got interested in computers, cars & motorcycles. Because of that, I know how to build computers, which I learned from reading a manual. I mean, you can make custom power switches for a PC and it can go anywhere you want, make a PC case out of cardboard, lmao, the creativity just never stops, and that's what I love about computers.
@@niko2002 In the olden days of UA-cam you could actually find videos on how to make an atomic bomb, so yeah, there really is something for everyone, and it is a great place to learn for free!
I was bottlenecked with my Intel 9900X and my 3080 Ti. I simply doubled my FPS in all games. And now I use the Lossless Scaling software to get frame generation and run all games at 150-250 FPS. That works great.
In almost every high-end game I played, I realized that I had stuttering issues, and when I checked my CPU usage it was at 100%. At the time I didn't really know what that meant, but then I showed it to a friend and he told me that my CPU was bottlenecking my GPU, and told me to cap the FPS to 60 and use frame gen. Then my CPU usage went down and I no longer had the stuttering issues.
Yeah, CPU at 100% will definitely do that. I made a video a while ago showing how capping your framerate can resolve a lot of stuttering issues when that happens.
Brilliant video, very clear and concise! It's crazy how many people in comments on benchmarks spread false information. I think everyone watching benchmarks should be made to watch this video first. Thanks for the upload!
Shout out to my fellow South African🤙, great video buddy, very informative. I’ve been out of the pc gaming scene for a while now, so I started searching for videos before I upgrade my hardware, stumbled onto your video. Subscribed, I will definitely support your videos in the future.👍
You can also have a CPU bottleneck even if only 1 core hits 100%. On the other side, if you play online games where you need lower input lag, it's better to have a CPU bottleneck. The alternative is to cap your FPS (or Reflex & Anti-Lag), but a CPU bottleneck is preferable if you can get much higher FPS than the monitor's Hz. Actually, both CPU% & GPU% should be as low as possible while maintaining high FPS (lower than 60-70% on the GPU & the individual CPU cores). Even if the monitor can't show more frames than its refresh rate, the input lag improves (higher 1% & 0.1% FPS).
Yeah, agree! In multiplayer shooters most people prefer much higher fps and don't care about screen tearing, as it is all about the lowest input lag and high framerates.
It is not preferable to have a CPU bottleneck as this results in frame pacing issues. Always always always use a variable refresh cap just under your CPU limit for the best combination of input latency and motion fluidity. The exception to the VRR cap rule is if your game runs well in excess of your VRR window (600 FPS on 240hz display for example). In this case cap your game to a multiple of the refresh rate for even lower input lag. Do not. Ever. And I mean ever. Leave your framerate free to do whatever it wants.
@@edragyz8596 I'm talking exactly about much higher FPS than your monitor's Hz. You're talking about the average gamer. What you're saying makes sense, but I'm talking about competitive gaming (3v3, 5v5, 8v8 etc.), especially if you play with a worse connection than your enemies. Fluid or consistent frame pacing doesn't mean much if it's higher & you get worse input lag in a game where you need every possible ms. VRR has an inherent latency penalty depending on your system; if you have an expensive PC & a good monitor, around 1ms is the best possible penalty. If the FPS is around the monitor refresh rate, I prefer the driver-based Ultra Latency Mode if the game doesn't have a built-in NVIDIA Reflex option. I play on 360Hz with a 780fps in-game cap, which allows my CPU cores to stay around 55-75% max & the GPU around 60-70%, at 720p. This is the best setup I've found, and while it's not always consistent, it gives me the best possible input lag. When I cap at 354FPS I get the same 1% lows but worse results, even though the game feels better. You need at least 4-500fps in these games. If you don't feel the difference you probably play with low ping, which compensates, & that slight delay isn't going to have a big impact on your game, but every player whose location is far from the server will feel the difference. Also, it doesn't matter whether you can feel a difference or not: if you have an opponent you have trouble killing, you will find you have better chances to kill him in 1v1 situations this way.
Crazy indeed! The 2700X was such a good CPU when it launched, and still isn't terrible, but it's amazing to see how far things have improved since then. You do have a saving grace in the 5800X3D though, which you can just plop in.
Thank you, appreciate it! Yeah, the 5600G is definitely holding that 4070S back. Good thing is that you can just slot in a 5800X3D without changing anything else and you should be good.
@@Mostly_Positive_Reviews or just play at 4K DLAA and problem solved lol, the CPU won't be the bottleneck. I have a 4070 paired with a 12700K, and at 1080p I am CPU bound, but not at 4K. If you aim for 4K 60 then a bottleneck is a much rarer scenario. If you aim for 1080p 240fps then you really need a strong CPU for sure, a 7800X3D even, maybe.
What I'm hearing is: unless the CPU is maxed out at 100% and the GPU is chilling, which indicates an obvious CPU bottleneck, anything else will be damn near impossible to tell due to the game/software being used.
Yeah, pretty much. You can check other things, but not just with normal software. So for instance, if you reduce your CAS latency and the framerate increases, you know it's more RAM related. Similar thing when you increase your RAM speed. I mean, on a very basic level. It obviously goes a bit deeper than that, but there are things you can do to somewhat determine why the GPU is not reaching its max potential.
Rivatuner has built-in options to show "GPUbusy" time in milliseconds and "frame time" in milliseconds. Whichever takes longer is the limit. It uses PresentMon preset to do the math and spits out a "Limited by: GPU/CPU" message during the benchmark
One of my computers has a 12400F, an RTX 3050 and 3200MHz RAM; no matter what settings, I was never able to saturate the GPU to anything close to 100%, whether it was running 1920x1080 or in the 720p range. Great video, great information. Thanks.
I have a 12400F in my recording system. It's a very underrated CPU and it doesn't break the bank. It should be able to fully saturate a 3050; I use mine with a 3060 and it can keep it at 99% usage in most games I play.
@@Mostly_Positive_Reviews must be the 100 dollar motherboard; it was a build I threw together to replace the system, built in 2007, that I had plugged the 3050 into. If I had the time I would do some investigating as to why that 2nd system cannot peg the GPU.
This might sound like a nitpick, but as an RTSS overlay user myself (or at least yours looks pretty much the same), it bugs me to see reversed usage and temp placements for the CPU and GPU: GPU is temp - usage and CPU is usage - temp @.@
Hahaha, you aren't the first to point this out. It really was an honest mistake. The default order in CapFrameX is slightly different and I didn't pick it up. It has been fixed in subsequent videos.
There's also something else that isn't often mentioned: GPU usage impacting input delay even in CPU-bound scenarios. Explanation: I use an FPS limiter in competitive games, for example V-SYNC (nvcpl global On) + G-SYNC with a 200Hz monitor in Overwatch 2. That automatically locks my FPS to 189 to prevent V-SYNC activation at all times, which results in a certain amount of input delay. Then if I reduce my render resolution below 100% and use FSR, for example, this results in the same 189FPS cap but lower GPU usage, and therefore lower input delay, because GPU Busy times are reduced. This is why people who play competitive titles like Overwatch, Valorant or CS still use low graphics settings even on an RTX 4090. Additionally, features like NVIDIA Reflex + Boost result in even lower input delay in these scenarios, because the +Boost setting causes your GPU clocks to stay at a minimum of the default max clock. It's the same value as if you used the Prefer Maximum Performance setting in nvcpl. This reduces your GPU Busy times even further, on top of the reduced render queue, which is the main purpose of NVIDIA Reflex itself without +Boost enabled.
Yeah, you're right, this topic does not get a lot of airtime, and I think it's because it's not that well known. I might think of a way to string a video together to explain the basics. Many people aren't aware that Reflex + V-sync actually caps your framerate just below your monitor's refresh rate for this exact reason. It only needs very little additional GPU headroom to reduce latency. For example, my 165Hz panel gets capped at 158 fps with V-sync + Reflex + G-sync. It also prevents the framerate from hitting the V-sync limit, which would also increase input latency, as you said.
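Funny enough, both cap values mentioned here (189 at 200Hz, 158 at 165Hz) line up with a commonly quoted approximation for that automatic cap. This formula is an assumption on my part about how the cap is computed, not something officially documented:

```python
import math

def reflex_style_cap(refresh_hz: float) -> int:
    # Often-quoted approximation for the automatic FPS cap that keeps
    # the framerate just under the V-sync limit of a VRR display.
    return math.ceil(refresh_hz - refresh_hz * refresh_hz / 3600.0)

print(reflex_style_cap(200))  # 189
print(reflex_style_cap(165))  # 158
```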
The main problem here might not be what you described. Frame generation has a lot of different GPU units doing different types of work; shaders or RT are usually the slowest ones, so the rest of the GPU rests. And then after that there's also post-processing for a ready frame... that's how you get an underloaded GPU in most cases. And BTW, the logic of how and in what order a frame is generated, and what the game engine does, is different in every game, so you can get the same or a totally different CPU load on different GPUs!
You can also have GPU-Z running on your second screen. Go to the Sensors tab; it will give you a PerfCap Reason. It might be Thermal, which is obvious, but it can also give voltage-related reasons, such as Reliability Voltage, which means the card cannot boost higher at the current voltage, or it will indicate something else.
Great video!! How do you get the RAM and VRAM to appear in GB instead of MB, and how do you put the names explaining what is what on screen? Can you provide your Afterburner profile? 😅
I wish someone would create an algorithm/chart into an app where you selected cpu & gpu then selected “desired fps” & it would automatically print the graphics settings you need to apply for your game.
I always looked at it as: GPU at or near 100% utilisation, then you're GPU bottlenecked. GPU consistently not at or near 100%, CPU bottleneck. This way does fall apart if a game, for example, is FPS capped.
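As a rough sketch of that rule of thumb, with the FPS-cap caveat folded in (the 95% threshold and the cap tolerance are arbitrary assumptions on my part):

```python
def diagnose(gpu_util, fps, fps_cap=None):
    # If the framerate is sitting at a cap, neither CPU nor GPU needs to
    # be maxed, so utilization tells you nothing; check that case first.
    if fps_cap is not None and fps >= fps_cap - 1:
        return "framerate capped"
    # Near-max GPU usage usually means GPU bound; otherwise suspect the CPU.
    return "GPU bound" if gpu_util >= 95 else "likely CPU bound"

print(diagnose(99, 144))             # GPU bound
print(diagnose(70, 144))             # likely CPU bound
print(diagnose(70, 60, fps_cap=60))  # framerate capped
```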
In a huge number of cases where there's low CPU usage and the GPU isn't at 100% with an unlocked framerate, it's either a software problem (not properly optimized) or a memory bottleneck. If the same CPU had much more and faster L1/L2/L3 cache, I'm sure it would get much higher framerates, and faster RAM with lower timings also counts toward increasing CPU performance.
For me, I don't mind as long as it isn't stuttering. If I'm stuttering due to a bottleneck, then it's time to get a new CPU. As we know, GPU bottlenecks aren't as bad as CPU bottlenecks. Also, a CPU bottleneck isn't usually because the game is demanding; it's usually due to the CPU not being able to keep up with communication with the GPU. That's usually a bottleneck. Games bottlenecking a CPU or being CPU limited usually doesn't affect much. Like here, the game was more than likely still running well, so for me that's not as bad. Good explanation on shifting the load. I love the comments on Facebook and Reddit from non-believers. They legitimately think bottlenecks aren't real haha. They don't believe any CPU can bottleneck a 4090. They really think their 7800X3D can keep up, and yes, it performs well, but they still have a bottleneck more than they may think. The 4090 is just that powerful. Or like me, a 4080 Super with a 13400F. I have a bottleneck, and my bottleneck even causes stuttering at times, which is why I will be upgrading soon. But yeah, a lot of people go "naw, that's not bottlenecked". Haha
Appreciate you taking the time to write this comment. Yeah, it's amazing how many people deny stuff like this. But I agree 100% with you: unless it's really impacting your experience, don't worry too much about it. If you still get framerates that are acceptable to you with no stuttering, then it's most probably not worth upgrading your whole platform. But also, a 13400F is definitely holding back that 4080, so in your case you'll get a very decent bump by going for a 13700K or similar. I have a 12400F system as well that I test more entry-level GPUs with, and I once tested the 4070 Ti on there, and it was quite bad. GPU temps were pretty low though, as the GPU hardly got used 🤣
@Mostly_Positive_Reviews Yep, exactly haha. I will say I'm getting over 90% utilization at 4K, which is OK (still bottlenecked), but yeah, it's been running OK at 4K. Def has some stutters, so def going to be upgrading soon.
Thanks. This showed me that my GPU is sitting idle waiting for frames to be generated, and it justified my CPU upgrade if anyone asks :P The 3070 Ti was at 6ms where my 11700K was at 16ms, and the worst part is I am getting a 3090 later this week, so it will just get worse
Thanks. I learnt a lot from your video. But as I checked this with Tomb Raider, even when the GPU Busy Deviation was lower, the CPU usage became higher. Is this inconsistency exactly what you meant about this method not being 100% accurate?
Good video. Cyberpunk is a good game for this because it's been out in some form for 4 years, so it's probably pretty well optimized. Driver issues and engine issues could probably cause some bottlenecking.
Thanks again! Yeah, there are many other reasons for a "CPU bottleneck", and software / engine is just one of them. I touched on it briefly, but because I was more focused on showing how to identify a bottleneck via CapFrameX, I didn't go into too much detail, and some people wanted me to, based on the comments.
I have a 7800X3D paired with a 4080 OC. I am currently playing Skyrim at 1440p without a frame rate cap (via a mod). And with enough mods, textures, grass mods and DynDOLOD, you can absolutely see the CPU bottleneck everything. The FPS will be below the 165 Hz v-sync cap, with the GPU at 90% and the CPU at 20%. I think it's because the game only really uses one thread. So while it seems like the CPU is chilling, it's actually not able to let the 4080 get to 99% on Skyrim's game engine.
For the situation on the right at the beginning of the video, it's not quite a CPU bottleneck but a RAM speed bottleneck. The RAM is not fast enough to deliver all the data the CPU needs. That is why AMD makes 3D V-Cache CPUs, which stack a very large L3 cache for a higher memory hit ratio. For most RAM speed bottlenecks, it's a RAM latency problem; RAM read and write speed isn't a serious problem for the majority of games. Imagine copying files: if you copy and paste a single big file like a zip or a movie, it is very fast. But if you want to copy a lot of small files like pictures, it's slow. Same thing with RAM. Some games are optimized; they will try to combine many small pieces of data into a large one, or simply cut down unnecessary work. But others are not, especially indie games.
It's so crazy that your video showed up for me. I noticed this yesterday: when I increase the game's graphics settings, I get better performance / fps. Now I see why. Thanks for the video!
The GPU may be underutilized (not due to a CPU bottleneck) in the following cases: 1. Old games. They usually need a fraction of modern GPU power to run smoothly. 2. Games that are more CPU bound, like Stellaris. Graphically speaking there isn't much to render in Stellaris, but the CPU calculations and loads are tremendous. In graphically challenging games you should see 95-100% GPU utilization. If the number is lower, then it's either: 1. Poor optimization. 2. A CPU bottleneck.
Your 2nd scenario isn't necessarily CPU bound. All it shows is that you aren't GPU bound. Your limiting factor could be the CPU, but not usage related, like the cache or something. Or it could be engine limitations, or IO. Or it could be GPU memory bound, which won't show up in utilization. Typically, if you're CPU bound at less than 100% utilization, you'll still see one of the threads pegging 100%. I'd imagine frame gen also moved the limiting factor to your GPU memory speed as well.
Yeah, for sure a lot of variables in the second scenario, and I touched on it briefly to say it's not necessarily just the CPU in this case, but many people still just call it a CPU bottleneck for simplicity.
@@Mostly_Positive_Reviews it’s definitely a rabbit hole I got sucked down for a while. Really the only conclusion is how much we lack in monitoring capabilities tbh.
@@sjones72751 100%. If there was an app to tell me definitively "RAM bottleneck" or "PCIE Bandwidth bottleneck" or "SSD bottleneck" I would buy it in an instant!
There are more and more South Africans finding these videos, and I love it! In this video I used CapFrameX instead of MSI Afterburner. It still uses RTSS but it has additional overlay options 👍
@@Sebastianino Hmm, make sure you have the latest version installed perhaps? Other than that, check for the correct labels in the Overlay tab. It is called GPU Active Time Deviation, GPU Active Time Average and Frame Time Average.
@@Mostly_Positive_Reviews I did not download the latest... Funny, because the search result says 1.7.1 is the latest, but inside the website 1.7.2 is the newest.
Simple: if your CPU is at 100% but your GPU is not even close to 90%, you're CPU bound. And with dynamic boost on both AMD and Nvidia based GPUs, that 80% could simply be the P2 state rather than P1.
Sure, but not all CPU bottlenecks present themselves as 100% CPU usage. Your CPU usage can be 40% and you can still be CPU bound. It's possible that a single thread is running at 100% while overall usage is reported lower, because it's averaged out across all cores. Or your most-used thread can be sitting at 60% and you're still CPU bound, because it's just not fast enough to keep the GPU fully utilized.
The best way of identifying if you're CPU or GPU bound is by looking at the GPU usage. In general you want your GPU usage to be pinned at 95-100%. If it's significantly lower on average, then you're most likely CPU bound.
During the first scenario, I only enabled 4 cores, no hyper-threading, and no E-cores. That's why, at 100% CPU usage, power usage is lower than it would be if all cores were enabled.
To determine a CPU bottleneck, besides obviously looking at GPU load, also enable per-core CPU monitoring. The overall CPU load means little, because one core may be doing 95% of the job while the others sit without any load, or are even sleeping (parked).
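To make the per-core point concrete, here is a small hypothetical Python sketch. The function name, thresholds, and sample numbers are all my own for illustration; in practice the per-core percentages could come from any monitor (e.g. psutil's `cpu_percent(percpu=True)`):

```python
# Sketch: why overall CPU % hides a single-core bottleneck.
# Thresholds and sample numbers are illustrative guesses, not from any tool.

def looks_single_thread_bound(per_core, core_hot=95.0, overall_low=60.0):
    """True when one core is pegged while the average looks relaxed."""
    overall = sum(per_core) / len(per_core)
    return max(per_core) >= core_hot and overall <= overall_low

# 8-core example: the render thread pegs core 0, the rest mostly idle.
sample = [100.0, 35.0, 20.0, 15.0, 10.0, 10.0, 5.0, 5.0]
print(sum(sample) / len(sample))          # overall average is only 25.0%
print(looks_single_thread_bound(sample))  # True: still CPU bound
```

The point being that a monitoring tool showing "25% CPU usage" can coexist with a hard single-thread limit.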
This applies more to older games. Most games have an engine limit where, no matter your CPU, you can't push past a certain frame rate. Normally this is an absurdly high framerate, and you wouldn't want to go past 200 fps in most games anyway, unless they are super competitive esports titles. Something like GTA 5 will let you push into the 180-200 fps range, for example, but it will be extremely unstable, which is why most people suggest capping it at around 120-144 fps. It's worth doing research on certain titles, especially if you have a high-end machine that is more likely to run into that sort of problem.
Yeah, GTA V was notorious for breaking above 180 fps, and some Bethesda games have their physics tied to the framerate, so if it goes over 60 fps things start to freak out. There are patches that help with that, but agreed, it's best to find out if a game has an engine limit and stay below it. Many new games do have uncapped frame rates though. If you do all you can with frame gen, DLSS and overclocking, you can push Cyberpunk to 300 fps or more. But there definitely are games with engine limits, or with issues when going above certain framerates.
Yeah, that's the correct approach. Depending on what performance I am getting it's not worth it to upgrade a whole platform for a 10% improvement kind of thing.
The best CPU bottleneck indicator is your GPU, because the CPU usage indicator only shows the average usage across your threads. If your CPU behaves well, it will make enough calculations and your GPU will always be loaded. Otherwise, the GPU could deliver frames, but the CPU won't be able to calculate enough to keep up, and you will see a huge performance decrease. The issues with Intel's 13th and 14th generation flagship CPUs may be caused by architecture problems, so you'd better stick to 12th gen or switch to AMD's AM5-based X3D CPUs. The 7800X3D is a good and reliable CPU that will serve you for years. According to AMD, the 7000-series X3D CPUs will still be better than the upcoming 9000 series in games, which means there's no need for X3D owners to switch. By the way, the 9000-series X3Ds are rumored to launch in October. Good luck achieving system stability, folks!
Yeah, which is exactly what GPUBusy does. It measures how well your GPU is being used, meaning if it's not being utilized properly you have a bottleneck somewhere.
Nice video, I like it. I have an i7 9700 and a GTX 1660 Super with a 600 W PSU. I am planning to upgrade my GPU, probably to an RTX 4060 Ti. I know this card is questionable; I would certainly like to get a 4070, but my system won't handle a 4070 for sure. I mean, I don't even know yet if the i7 9700 would be okay with a 4060 Ti. What do you guys think?
The only issue with the 4060 Ti is the price. Other than that it performs very well. I used to run a 9700K with a 3070 and it ran fine, with a slight bottleneck at 1080p, but nothing justifying a CPU upgrade. Your combination will be fine.
@@Mostly_Positive_Reviews Thanks for the reply, I appreciate it. There are very few videos with the i7 9700 and cards from the 40-series, so I was a bit confused. If the i7 9700 can handle a 3070, then the 4060 Ti shouldn't be any different indeed. I agree about the price. There are no bad cards, only bad pricing.
A CPU can be a bottleneck as a result of hitting the single thread performance limit. And, technically, it is the GPU that is being bottlenecked; the CPU IS the bottleneck, in both of these scenarios.
While I do love the video and the way you explained it, you're always going to be bottlenecked by either the CPU or the GPU, because not all games are designed the same. Some games rely on heavy CPU calculations, others rely on heavy graphical fidelity, and some on both. So no matter which way you go, you're always going to have a bottleneck.
Thank you, and yeah, 100% correct. The only reason to really care about a CPU bottleneck in particular is that your GPU is almost always the most expensive component, and it would be better overall if that is your bottleneck, as it means you are getting the most out of your system. That said, you shouldn't rush out and buy a 7800X3D if you are 10% CPU bottlenecked. I'd say even up to 20% is fine and not necessarily worth upgrading a whole platform over.
You can either spend 13 minutes going through this video, or just look at the GPU usage: if it's lower than 100%, you are CPU bound. That one method will do fine 99% of the time. As for the video itself, the first half is old news for most people who follow tech updates; the GPU Busy part in the second half is the real meat for anyone who wants to know more about this stuff.
Actually bottlenecks are a product of software issues as much as hardware. If a game uses CPU inefficiently then of course CPU can become the bottleneck. Unless it uses GPU inefficiently too.
It's incredible how many people don't understand this concept and keep sticking new-gen GPUs into machines with almost 10-year-old CPUs. Then they complain that it's the games being unoptimised...
Yeah indeed. A while ago if you had a 3770K or something you could go from a GTX 560 to a 660 to a 760 without worry. That's not the case anymore. There are actually people running 4770K CPUs with 4070 Ti GPUs.
I like to cap my CPU so it doesn't go over a certain temperature (laptop), so the CPU and GPU aren't always at their best. I think I have a good pairing: Ryzen 7 7735HS, RTX 4060, 2x8GB (4800), 1080p. Nice explanation, thanks!
There are definitely instances where limiting components is a good thing indeed. But yeah, your components are pretty well paired. Would even be able to power a more powerful GPU with that CPU as well, so you are fine.
🤣🤣🤣 Even if your CPU is at 80% and your GPU usage is at 99%, you're fine. I just wanted to show that even if your CPU is not at 100%, it can still cause a bottleneck 👍
@@Mostly_Positive_Reviews Can you give a short answer on why that's happening? Like, is it because of the game or the CPU itself? I watched the video btw, but it's hard to understand.
I also watched a 4070S + i5 13600K benchmark at 1080p, and the GPU was hitting 90+% throughout the whole video with different settings, so I'm kinda confused why you're getting those numbers with a 14600K, which should be the better CPU.
I purposely gimped the CPU to show what a CPU bottleneck will look like when displayed in RTSS / CapFrameX. I have done proper benchmarks with this CPU if you want to check it out 👍
It's okay. Sometimes the GPU can render more frames than what the CPU is able to prepare, and then your GPU usage will be lower. Say your CPU can prepare 120 frames per second (resolution has no effect on this number). Then your GPU, for example, can render 160 fps at 1080p, 120 fps at 1440p, and 60 fps at 4K (all hypothetical numbers, but let's use them). So if you play at 1080p, the GPU can do 160 fps, but the CPU only 120 fps, meaning your final framerate will be 120 fps. That's what we call a CPU bottleneck. Your GPU won't be fully utilized, as it can do 160 fps, but it's only been given 120 frames to render by the CPU. Up that to 1440p: now the GPU can do 120 fps, the CPU still does 120 fps, so the GPU will now be fully utilized, showing 99% usage. CPU usage will most likely be high in this scenario as well. Up that to 4K: the GPU can only do 60 fps, so the CPU only has to prepare 60 fps, and your GPU will be fully utilized. Does that make better sense?
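The arithmetic in that reply boils down to "final fps is the slower of the two stages". A hypothetical sketch (function names and all numbers are illustrative, not from any real tool):

```python
# Sketch of the CPU-prepares / GPU-renders pipeline from the reply above.
# All figures are the hypothetical ones used in that explanation.

def final_fps(cpu_fps, gpu_fps):
    # The pipeline runs at the pace of its slower stage.
    return min(cpu_fps, gpu_fps)

def approx_gpu_usage(cpu_fps, gpu_fps):
    # GPU usage roughly tracks how much of its capacity is actually used.
    return 100.0 * final_fps(cpu_fps, gpu_fps) / gpu_fps

cpu = 120  # frames/s the CPU can prepare, roughly resolution-independent
for res, gpu in [("1080p", 160), ("1440p", 120), ("4K", 60)]:
    print(res, final_fps(cpu, gpu), "fps,", f"{approx_gpu_usage(cpu, gpu):.0f}% GPU")
# 1080p 120 fps, 75% GPU   <- CPU bottleneck, GPU underutilized
# 1440p 120 fps, 100% GPU
# 4K    60 fps,  100% GPU
```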
Thanks! I tried to explain it in a way that made sense to me, because I watched many a video on GPU Busy and I don't think I found one that explained it in a way I understood. So after playing around for a few weeks, I thought I'd give explaining it a shot haha!
So in the case of scenario 2, what would be the bottleneck? You mentioned that it could be anything system related bottlenecking the CPU and thus bottlenecking the GPU. How would you go about diagnosing the cause of the bottleneck on the CPU?
Identifying exactly what the issue is becomes a bit more tricky, but it can be done. You can first overclock the CPU a bit and see if it improves your framerate. You can also tune memory a bit to see if that helps. But don't do two things at the same time, otherwise you won't know what improved it. It can be that the CPU is just not fast enough, memory speed, memory latency, or all three. But in the case of the 2nd scenario, I wouldn't really worry about it too much unless your performance is severely impacted, and then in most cases all that will fix it to a greater degree is a new CPU and/or new RAM, which is rarely worth the cost unless, as I said, your performance is severely impacted.
I am 99% sure it limits CPU performance, as the GPU frametimes still show, say, 6 ms, while the CPU frametimes show 16.6 ms, for example. But that would reduce all usage if you are well below the max your system is capable of rendering. It can help with smoothness indeed. G-Sync does not limit the framerate, so if you go above your monitor's refresh rate, you will see tearing. Limiting it a few frames below your monitor's refresh rate will prevent tearing, and with G-Sync enabled you will get a much smoother presentation.
@Mostly_Positive_Reviews Thank you for responding. I'm using a 165 Hz Dell monitor; when G-Sync is enabled, I'm getting 158 fps without any frame capping. Should I still limit frames in games to, let's say, 155, or cap at 158? I'm getting strange tearing while gaming; I'm getting 120 fps and more, but I can still see the tearing. 😞
@@V3ntilator More cores don't always mean better performance in games. Here are the technical challenges game developers face when trying to leverage multiple cores for better performance. - Not all tasks in a game can be easily parallelized to run on multiple cores simultaneously. Some processes are inherently sequential, meaning they need to be executed one after the other, which limits the benefit of having more cores. - There is a fundamental concept in parallel computing called Amdahl's Law. It states that the speedup of a program from parallelization is limited by the sequential portion of the program. In other words, if a game has tasks that can't be parallelized, more cores won't provide a proportional increase in performance. - Coordination and communication between multiple cores introduce overhead. If not managed efficiently, the time spent on communication between cores can offset the potential performance gains from parallelization. - Writing code that effectively utilizes multiple cores requires additional effort and expertise. Game developers need to carefully design and implement algorithms that take advantage of parallel processing without introducing bugs or performance bottlenecks. - PCs come in a wide range of hardware configurations. Developers need to create games that can scale across systems with diverse CPU architectures and core counts. This makes optimizing for a specific number of cores challenging.
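The Amdahl's Law point above can be put into numbers. A minimal sketch (the 70% parallel fraction is just an illustrative guess, not a measurement of any real game):

```python
# Amdahl's Law: speedup = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction and n the core count.

def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# A game loop that is 70% parallelizable barely gains past 8 cores;
# the ceiling is 1 / 0.3 ≈ 3.33x no matter how many cores you add.
for n in (2, 4, 8, 16, 32):
    print(n, "cores:", round(amdahl_speedup(0.7, n), 2))
```

This is why doubling core counts keeps giving smaller and smaller gains in games with a meaningful sequential portion.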
60 fps is super smooth and more than enough for any game. But of course, with modern unrealistic and totally unnecessary trends, we've come to the point where you need 200 fps to play a game; anything lower is pure shame and PC hell lol. But seriously, the only thing you get is more heat and bigger energy bills, while gaining nothing important. And then come the monitor manufacturers with 360 Hz monitors, and now even 500 Hz lol, so in 5 years if you don't play games at 300 fps you'll be put on trial and get the death penalty haha. Jokes aside, if your CPU has enough power to push 60 fps, bottleneck or not, it doesn't matter, except for pure benchmarking.
Yeah, 60 fps is fine, but it is also the minimum for me. Personal preference of course. The video is more aimed at people running heavily underpowered CPUs with very fast GPUs as you ultimately want to make full use of your expensive GPU. A slight bottleneck is far from the end of the world though.
Thank you! The easiest way is to use CapFrameX. It has it built-in already. You can do it with Afterburner but you have to go add plugins in RTSS and configure it from there. It's possible but a bit of a mission.
Hello brother. If the Intel i5 13600K has better 1% lows than the AMD R5 7600X in gaming with an RTX 4070 or RX 7800 XT, then sir, please tell me the right answer: should I buy Intel or AMD?
Tough one. If you go 13600K, you won't be able to upgrade much, except to a 14700K for example, whereas the AM5 platform should get a few more upgrades. It also depends on price. If you don't care about the upgrade path, then go for the 13600K, as it generally has better memory overclocking as well.
@@Mostly_Positive_Reviews But what if I never overclock either of the two, the AMD R5 7600X or the Intel i5 13600K, because I'm afraid I'll burn my CPU by overclocking? Then which processor should I buy? Brother, if you give me your Instagram ID or any contact for picking the right components, I'll pay you some money.
@@goluop0982 If you don't want to overclock, then I'd say go for the 7600. That way you can still upgrade to a 9000-series CPU later without having to change motherboards. And the lows on 13th gen are only slightly better anyway. They get much better with overclocking, but if you aren't going to overclock, I'd say go for the system with the better upgrade path. I don't have IG or anything else, except Twitter, but that account is on hold for now. You really don't need to pay me ;) You'll be perfectly fine with a 7600X, a B650 motherboard, and 32GB of DDR5 6000 MHz CL30 memory. If you want to save a bit of money, then go 5600 MHz CL28 / CL30, but the price difference shouldn't be that big between these kits anyway. You can then get a 750 W power supply, which would be perfect for the 7800 XT / 4070, whichever you decide to buy. I will say that if you decide to buy the AMD CPU, go for the AMD GPU as well, as Smart Access Memory on AMD is supported in more games than Resizable BAR from Nvidia. But really, either system will be perfect for 1440p gaming. The 7800 XT should last a bit longer due to more VRAM, but the 4070 will be slightly faster in RT, and it also has DLSS, which currently has better image quality than FSR.
You can check if the Ryzen 5 7500F is available in your area. It will save you some money, and it's basically the same processor as the 7600, but without the iGPU that you don't need since you have a discrete GPU.
@@Mostly_Positive_Reviews Brother, tell me: is there any reason to buy the Intel i5 14600KF? Is it faster in gaming than the AMD R5 7600X at stock speeds?
Yeah indeed. GPU Power combined with GPU Usage will give you a good idea. Using PresentMon is just a bit more accurate, but we all have our own ways of doing these things and coming to the same conclusion.
Great video, I just had 2 questions. 1: Is GPU Busy only for Intel, or can I use it with an AMD CPU as well? 2: When I play games, I can see buildings and signs either pop in or render slower than the surrounding area, like a billboard's texture loading later than the buildings around it. I've noticed it in every game from Spider-Man to Cyberpunk and Alan Wake 2. Do you think it can be a CPU issue? Btw my CPU is a Ryzen 5 7600, and for GPU I have a 7900 GRE with 32 gigs of DDR5 6000 MHz RAM. And the percentages are normal, meaning GPU is 90-100% and CPU is 60-80%.
Thanks! You can use it on all systems, no issue. With regards to pop-in, Cyberpunk especially is notorious for low LOD; it has a lot of pop-in. Your system is pretty well balanced, so unless something is wrong with one of your components, I don't think it's that. Maybe check the level of detail, view distance, and texture detail settings to see if that helps. It doesn't in Cyberpunk, but I don't remember AW2 having that issue for me, though it has been a while since I last played it.
Maybe a stupid question, but will increasing the resolution be enough to reduce a CPU bottleneck in some cases? For example, I have a 2700X with an RX 6600 and I was thinking of buying a 1440p 144 Hz monitor. Would that help me reach a stable 60 more easily by reducing the CPU bottleneck? Is it a good idea?
Not a stupid question at all. If you are CPU limited at, say, 50 fps at 1080p, then increasing the load on the GPU by raising the resolution will still only get you 50 fps max, as the CPU can only prepare 50 fps regardless of resolution. But shifting the load to the GPU will most likely get you better frametimes and less stutter.
In the above example where your cpu can only put out 50 fps you will be cpu bottlenecked until you increase the graphics settings enough to drop below 50 fps. You can only ever get the highest fps of your weakest part.
Man, could you help me? I need help and nobody has been able to yet. I have an RX 6650 XT and a Ryzen 5 3600. Other people with the exact same setup, or an even better GPU, don't have the huge stutter I have in Battlefield V, Battlefield 2042, Palworld, etc. I've tried EVERYTHING, and it looks like a CPU bottleneck to me. But why do other people have a slight bottleneck without stutter, while mine makes games unplayable, with the exact same setup? I have fast 3600 MHz CL18 RAM, an NVMe M.2 980 Pro, temps well under the limit, and my mobo is great too. XMP on, SAM / ReBAR on, etc...
Hey man, sure, let's see if I can help. Can you perhaps upload a video while using an overlay to show CPU / GPU usages etc? You can upload to your UA-cam, list it as "Unlisted" and mail me a link so I can have a look? My email address can be found under my info on my channel 👍
@@Mostly_Positive_Reviews I will do that in a couple of days. I found the program called PresentMon and I'm trying to figure out if I just have an insane bottleneck. But thank you so, so much dude. I will do that :)
Anytime! If you need help, I have a video on here about setting up RTSS with MSI Afterburner. It shows you step-by-step how to set it up to show CPU and GPU usages etc. Once you are ready in a few days, send that email and we'll take it from there 👍
@@Mostly_Positive_Reviews I have an i7 6700K with an RTX 3060 Ti and I don't know if I have a bottleneck or not hahah. My GPU is at 95-99% and my CPU at 70-80% with ultra settings and RT, thanks to DLSS frame generation. Or should I thank AMD for open-source FSR 3.0 🤣🤣
Since I had many people ask me about this, here is a short video on how to setup the on-screen overlay as seen in this video:
ua-cam.com/video/EgzPXy8YYJw/v-deo.html
My analogy for what the CPU and the GPU do is that every frame is a pizza and the GPU is baking them. When the GPU gets a pizza ready, it hands it to the CPU, which folds a box for the pizza to be delivered in. If you're GPU bound, the pizzas are being made as fast as possible and the CPU has no trouble putting them in boxes. If you're CPU bound, the GPU starts piling up pizzas and has to slow down, because the CPU can't keep up folding boxes and packing the pizzas for delivery.
And when you lower your resolution, you make smaller pizzas, which are quicker to bake, which means boxes have to be folded at a faster pace.
As silly as it sounds, it actually makes sense!
Pizza pepperoni reconstruction, featured now in the latest update! Make up the shape of the pepperonis on the go!
It makes sense, but I think the CPU is not the last stage, but the first, with a set of instructions for what the GPU must create. So I'd rather think of it as the CPU preparing the dough and the GPU adding the toppings and baking it. The GPU can only top and bake a pizza from dough the CPU has previously prepared. If the dough gets too complicated (with many different ingredients and layers and whatever), then the CPU will take more time. But even if it's simple, the CPU can only prepare a limited number of ready doughs, and some GPUs have incredibly large ovens. In that case, even though we could bake 100 pizzas together, our CPU can only prepare 50 of them each time, so we'll use only 50% of our capacity. The thing with many CPU cores handling different workloads is like different people doing different parts of the dough preparation, like when putting the dough to rest becomes a much longer and more time-consuming task compared to simply shaping it. A GPU bottleneck would be when there is enough dough prepared by the CPU but the oven is too small, or when the oven is big enough but there are more complex steps in the topping phase, such as when the client asks for stuffed crusts, extra ingredients, or simply more calabresa, which takes more time to cut.
Mmmm pizza
Very good and interesting video, glad to see more people explaining things! We're going to make a video about this, and I will add some of your case scenarios and show your channel as well.
Been watching your videos for years, and as you know, had a few pleasant interactions with you on Twitter as well, but never did I think you'd actually watch one of my videos! This really means a ton to me, really appreciate it!
It's the man himself 😱😱
Like, how did this even happen 🤣 I am awestruck here 🥳
Appreciate it my man 🙏
Good stuff. The more educated consumers become, the harder it is for CPU and GPU vendors to BS us with their garbage marketing and slides.
One easier way to see when you have a CPU bottleneck is to use Process Explorer to examine the individual threads of the game. Cyberpunk will run dozens of different threads, but not every aspect of the game engine is multithreaded, and the scheduler will move some of those threads between cores rapidly depending on the load (perfectly normal when a game launches 100+ threads).
If you look at the individual threads, you will often see one thread using one core's worth of CPU time, and at that point frame rates stop improving. A simple test is to use settings that get the GPU usage in the 85-95% range, then, while gaming, downclock the GPU (e.g., lower it by 500 MHz in Afterburner); that gets the GPU pegged at 100% while also lowering frame rates, ensuring a GPU bottleneck. Then gradually increase the GPU clock speed while watching the threads in Process Explorer. You will notice a gradual increase in frame rates that stops as soon as one of the game's threads reaches one core's worth of CPU time. (To figure out what one core's worth is, take 100 and divide it by the number of threads available on the CPU, including hyperthreading. For example, on a 16-thread CPU, 6.25% of CPU time represents one core's worth.) Since a single thread cannot use more than one core's worth of CPU time, that thread has hit a CPU bottleneck.
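The "one core's worth" arithmetic above can be sketched in a few lines. The function names and the tolerance are my own choices for illustration:

```python
# Sketch of the single-thread ceiling described above: on an N-thread CPU,
# one thread can use at most 100 / N percent of total CPU time.

def one_core_worth(logical_threads):
    return 100.0 / logical_threads

def thread_is_pegged(thread_cpu_percent, logical_threads, tolerance=0.5):
    # A game thread sitting at roughly one core's worth has hit its
    # single-thread limit; the tolerance absorbs sampling jitter.
    return thread_cpu_percent >= one_core_worth(logical_threads) - tolerance

print(one_core_worth(16))         # 6.25, as in the example above
print(thread_is_pegged(6.2, 16))  # True: that thread is at its ceiling
print(thread_is_pegged(3.0, 16))  # False: plenty of headroom left
```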
PS: a system memory bandwidth bottleneck is very rare. The only time I have been able to replicate one on a modern system was with a lower-VRAM card (e.g., 8 GB or less) in a game whose engine did not disable the use of shared memory. Once shared memory is actively in use (you can tell by the PCIe bus usage increasing significantly), and once it hits 50%, indicating saturation in one direction, you will see a scenario where both the GPU and CPU are underutilized.
PS: in those cases you can get small boosts in performance by tightening the system RAM timings to improve latency. Modern cards typically have around 25 GB/s of DMA access over a PCIe 4.0 x16 bus, but in cases where full saturation is not reached, lower latency minimizes how much performance is lost while shared memory is in use. Once full saturation happens, frame times become very inconsistent and the game will hitch / hang as well.
A storage bottleneck will show dips in both CPU and GPU usage (most easily seen as simultaneous drops in power consumption).
Actually very useful info, appreciate it 🙏
A memory bandwidth bottleneck? Like being limited by a 128-bit bus? I mean, AMD and Nvidia have now released 16 GB on entry-level cards (RX 6600 XT + 4060 Ti) where the memory bus width is 128-bit, which potentially limits performance.
@@i_zoru For the video card, VRAM throughput largely impacts how well it handles throughput-intensive tasks, such as many large textures and higher resolutions, as well as game engines that dynamically resample assets. GPUs can resample textures hundreds of times per second with very little overhead, but it is more intensive on VRAM throughput. For games that don't use functions like that, a card like the RTX 4060 Ti 16GB works decently, but for games that rely heavily on throughput-intensive tasks, it underperforms. It is also why the RTX 4060 seems to scale so well with VRAM overclocking, to the point where some games get a 5%+ performance boost from VRAM overclocking alone, whereas the RTX 4070 may get 1-2% at best in gaming. Outside of gaming, though, tasks like Stable Diffusion and other memory-hard workloads scale extremely well with VRAM overclocking, especially when using AI models, render resolutions, and settings that get the card to use around 90-95% of its VRAM. PS: Stable Diffusion gets a massive slowdown if there is any spillover to system RAM. For example, an RTX 3060 12GB and an RTX 4070 12GB end up performing the same once system RAM is being used, and you'll also notice the power consumption of the cards drop significantly.
The RTX 4060 drastically underperforms when there is spillover to system RAM; the crippled PCIe x8 interface (instead of x16) makes the bottleneck far worse.
You could enable GPU time and CPU time in milliseconds. For example, if the CPU sits at 5 ms per frame and the GPU at 3 ms per frame, you'd conclude a CPU bottleneck, and vice versa for a GPU bottleneck.
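That frametime comparison can be sketched as a tiny classifier. The function name and the 0.5 ms margin are arbitrary choices of mine (the margin just avoids flip-flopping when the two times are nearly equal):

```python
# Sketch: whichever stage takes longer per frame is the limiter.

def bound_by(cpu_ms, gpu_ms, margin_ms=0.5):
    if cpu_ms > gpu_ms + margin_ms:
        return "CPU bound"
    if gpu_ms > cpu_ms + margin_ms:
        return "GPU bound"
    return "balanced"

print(bound_by(5.0, 3.0))   # CPU bound, as in the 5 ms vs 3 ms example
print(bound_by(4.0, 12.0))  # GPU bound
print(bound_by(6.9, 7.0))   # balanced: too close to call
```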
Yeah, that's a good option too, but would need to chart it as the on-screen overlay doesnt update quickly enough. But it can give you an idea at least 👍
in MSI Afterburner ? how?
A faster way, or one used in conjunction with this, is to look at the GPU power consumption. The 4070 Super is a 220 W card; when CPU bottlenecked, it was running under 150 W.
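This power-draw heuristic can be sketched as a rough check. The 0.8 cutoff and the function name are my own guesses, not from any tool; the 220 W figure is the rated board power mentioned above:

```python
# Rough sketch: a GPU drawing well under its rated board power while you
# still want more frames is often sitting idle waiting on the CPU.

def likely_cpu_bound(gpu_watts, board_power_watts, cutoff=0.8):
    return gpu_watts < cutoff * board_power_watts

print(likely_cpu_bound(150, 220))  # True: well under board power
print(likely_cpu_bound(215, 220))  # False: GPU is working flat out
```

It is only a hint, of course; power-saving states, frame caps, and light scenes can also lower power draw without any bottleneck.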
Yeah indeed. I spoke about this, saying it gives you a good idea and indication, and then GPUBusy can confirm it. Well, it's still not 100% accurate, but 99% is good enough for most people 🤣
That's what I've been looking at as well. It's pretty accurate too. I have a 4070 super as well. When it drops to e.g. 210w in cyberpunk, it's always a CPU bottleneck situation, at least in my experience. Adding GPU Busy to my overlay now just confirms it.
This was an excellent video. As someone who started to understand this after getting into PC gaming 2 years ago, it's always quite jarring seeing comments on other benchmark videos, or even on Steam forums, such as "my system is fine because my CPU is only 30 percent utilized, so it must be the game", or "why test the 4090 at 1080p?" under CPU benchmark comparisons.
Appreciate the kind words! And I agree, often the comments on some of these videos are mindblowing. And they are also often made by people that are so sure that what they say is correct that it's just spreading misinformation unfortunately.
But it's great that you educated yourself on stuff like this within only 2 years of PC gaming. People think stuff like this is not important, but I think it is, as it can really help you plan your build and not overspend on something that'll go to waste. I see so many people that still have 4th gen CPUs rocking 4070 Tis or 4080s because "the CPU doesn't matter".
@@Mostly_Positive_Reviews I am curious though, 10:57, does frame gen always get rid of the CPU bottleneck? I notice in some games like Hogwarts Legacy and Immortals of Aveum, where I am CPU bound, GPU usage is unusually low at times. Digital Foundry did a video about frame gen on console in Immortals of Aveum, and the increase in performance was relatively modest despite a probable CPU bottleneck, which I suspect means frame gen is not entirely getting rid of that bottleneck. Could there be other limits or game-engine limitations that frame gen doesn't take into account?
@@Stardomplay I only used frame gen in this scenario because, when this close to being fully GPU bound, it almost always ensures a GPU bound scenario. But it doesn't always. Hogwarts Legacy is actually a very good example where even with frame generation you aren't always GPU bound, especially when using ray tracing. Another good example is Dragon's Dogma 2 in the cities. Sometimes other instructions just take up so much of the CPU in terms of cycles that it is almost always CPU bound, even at 4K. In Dragon's Dogma 2, the devs blame the poor CPU performance on AI in the cities for example, and there, even at High settings at 4K with Frame Gen on this 4070 Super, I am CPU bound. Step outside of the city and you become fully GPU bound. When things like that happen people usually call a game unoptimized, and in the case of DD2 it is definitely true.
Render APIs can also impact CPU bottlenecks significantly. For example, DX11 doesn't handle multithreading well at all in most games, and the chances of you running into a CPU bottleneck with a DX11 game are greater in my experience.
With regards to Immortals, I have only ever played the demo in the main city, and I personally find the game to be very stuttery there, regardless of CPU / GPU used. Something in that game just feels slightly off in terms of motion fluidity.
But to answer your question, no, frame gen doesn't always eliminate a CPU bottleneck, but it can help a lot in alleviating it under the correct conditions.
This man's a genius. Whilst experimenting with disabling cores (the Intel issue of the day), he turned it into one of the best explanations I've ever heard on a completely different issue. Can't wait for his Intel cores video (wonder what I'll learn from that :-)
You get snarky sarcasm, and then you get well-written sarcasm. I gave this a heart because I do appreciate some well-written sarcasm :)
How to identify a hard CPU bottleneck in 10 seconds:
Turn down your (native) resolution, and if your framerate stays about the same (or exactly the same) you are CPU bottlenecked, e.g. ~30FPS at 4K and ~32FPS at 1440p.
Why? Because if you lower your resolution your GPU should render WAY more frames, since there are A LOT fewer pixels to render.
In this case your CPU would not be fast enough to (for example) process the NPC AI, the BVH for ray tracing, etc. to get more than about 30ish FPS, despite the fact that the GPU now "only" has to do maths for ~3.7 million pixels (1440p) instead of ~8.3 million pixels (4K).
Yeah indeed. When people ask me whether they are bottlenecked but they don't want to check it with something like this, I always tell them to lower their resolution. If the framerate doesn't increase much you are held back by the CPU.
Even simpler: if your GPU isn't at max utilization and you don't have an FPS limit, you're bottlenecked by something, probably the CPU.
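The 10-second resolution test above can be expressed as a quick check. The 15% gain threshold is my own rough assumption; any "barely moved" cutoff works the same way:

```python
def resolution_test(fps_native: float, fps_lowered: float,
                    gain_threshold: float = 1.15) -> str:
    """Drop the render resolution and compare FPS: if the framerate
    barely moves, the GPU was never the limit, so the CPU is."""
    if fps_lowered < fps_native * gain_threshold:
        return "CPU bound"
    return "GPU bound (or mixed)"

# The example above: ~30 FPS at 4K barely rising to ~32 FPS at 1440p
print(resolution_test(30, 32))  # CPU bound
print(resolution_test(30, 55))  # GPU bound (or mixed)
```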
Gosh, I really need to pay attention when typing as I constantly make typos. Apologies for typing "Scenario" as "Scenarion" 🤦♂️
Please enable notifications 🙏
@@bigninja6472 Notifications? Meaning as soon as a new video goes up you get notified? I do set it on each video to send notifications to people who have notifications enabled, but it almost never works, even on big accounts :(
You are now attention bottlenecked
@@vash42165 One of my more popular videos and I make rookie mistakes like that 😥
I cap the FPS on my GPU so it runs cooler & smoother, since FPS performance over a certain quality threshold is just benchmarking giggle numbers, unnecessary strain on components, & extra heat generation for my bedroom. Running a GPU at max FPS is more likely to create occasional stuttering & less-smooth play, since it has no additional headroom to handle the sudden loads that create drop-off lows. So my R9-7900 CPU is likewise probably just jogging along with my capped GPU instead of sprinting. Works well for me.
That's the great thing about PCs, there's a setting / configuration that caters to everybody! I also prefer to try and max out my monitor's refresh rate with a little headroom to spare to try and keep a more consistent framerate.
do you actually get better 0.2% lows if you cap your framerate?
Exactly what I do too. Especially for casual gaming.
One is a CPU bottleneck (the 4 core example) the other is a game engine bottleneck that makes it seem like a CPU bottleneck, which as you said sometimes it is the CPU itself sometimes it isn't.
There is a difference and it is pretty damn important to note.
Like the stuttering from Amnesia: A Machine for Pigs? Even with nice hardware, you get micro stuttering.
Sure, it's the game engine not properly using every core, but that's the case for 99% of games nowadays.
Still, if you used, let's say, a 7GHz 16900K you would still have low CPU % usage but that GPU would be closer to 100%.
But by that time you'll probably have a better GPU and the issue will remain, though your framerate might have doubled lol
@@cuysaurus Ready or Not seems to suffer from that too unfortunately
But how do you know it's a game engine bottleneck?
@@blakey_wakey If the game only uses one or two cores (like older games such as StarCraft II),
or if the game uses multiple cores but at a very low % in your Task Manager or Afterburner readouts.
Software usually leaves a lot of compute performance on the table when it comes to CPUs. This is why you can have a lower-powered console use a CPU from 4 or 5 gens behind PCs yet still be GPU limited. In the console's case it is much easier to program for that specific set of hardware. The limiting factor is almost always GPU compute.
However, those are just two very simplified examples of a game engine being the bottleneck. There are others, but they are more in the weeds: something like a cache bottleneck (aka memory latency within the core), or a core-to-core latency bottleneck (like with the Ryzen chips with two dies on them).
Every computer has either a software or a hardware bottleneck though, because if it didn't, your games would run at infinite FPS.
I also noticed NVIDIA users in Cyberpunk say they are CPU bottlenecked at about 50% CPU usage, but on AMD it often goes right to 100% in this game; AMD GPUs use the CPU differently.
First time I've seen this example explained so well. Thank you!
Appreciate the comment, thank you!
Also, with careful adjustment in MSI Afterburner you can set all the values to align. In yours, CPU % is in line with GPU °C.
CPU % should be in line with CPU temp, and GPU % in line with GPU temp.
Also, even if your CPU load says 100%, that is most likely a lie: it's actually the average of the CPU's P-core loads. Meaning that even if both the GPU and the CPU usage are at 100%, you can still get a better GPU and not bottleneck it. As long as you don't play something like Star Citizen. In that game, every single GPU gets bottlenecked no matter what CPU you have 😂
Star Citizen is indeed the destroyer of CPUs 🤣
cyberpunk is so CPU
@@raiden_131 Cyberpunk is fine! Go play any PS5 exclusive that released on PC and see for yourself how much worse it is in comparison to Cyberpunk.
For the layman out there: if you see your GPU usage below 90-100%, then you most likely have a CPU bottleneck.
That CPU bottleneck could be due to your CPU being underpowered compared to your GPU, or it could be due to the game/engine itself (like Dragon's Dogma 2, Jedi: Survivor, The Witcher 3 remaster).
At 4K, pretty much every current CPU ends up waiting on the GPU; it wouldn't matter much what CPU you have at 4K, for example. If you have an older CPU, the biggest difference you would notice would be in the 1% lows. This man talks a whole lot, using so many words to describe something very simple.
Yeah, pretty much.
Lol!
Firstly, not every CPU will max out every GPU at 4K. There are plenty of benchmarks out there showing that even something as recent as a 5600X can bottleneck a 4090 at 4K. Even a 4080 can be bottlenecked at 4K by a slower CPU.
Secondly, I explained it in "a whole lot of words" because it's not always as simple. Yes, you can get a good idea by checking GPU usage, but using GPUBusy you can get a lot more info. And that's what the video was about...
Shut up kid, it depends on the settings or even how well the game is optimized. It doesn't make any sense otherwise that in some games your GPU is at 90-100% and in some it's not.
Thankfully there’s a solution: the 7800X3D, the fastest gaming CPU on the market. Even Digital Foundry ditched Intel and went AMD. Do the right thing, people.
fastest is the 14900K atm
@@iikatinggangsengii2471 Specifically in gaming, no, the 7800X3D is better.
i dont want amd thanks
@@IBaknam amd shit
@@raiden_131 You'd rather enjoy blue screens, I get it, to each their own.
Since I discovered this video over a month ago, thanks to you, my games now have better FPS. Yeah, it's bad if the GPU usage is only 50% or lower even though the CPU usage is also at 50% or lower.
Getting the GPU usage to 100% does in fact give me way better FPS and way less stuttering. Taking the load off of the CPU really helps significantly.
What is really strange is that going from 1080p to 1440p in some games makes almost zero difference; it's because the lower your resolution & graphics settings are, the more likely you are to run into a CPU bottleneck.
I am really glad I could help at least one person. And you are 100% correct, if you go from 1080p to 1440p and there is no difference you have a very big CPU bottleneck. Not many people understand that fact.
And another important one is that when you are GPU bound your CPU has some breathing room, resulting in better lows. So even though you might not always gain 50% FPS by getting rid of a CPU bottleneck, you will almost always have better lows / less stuttering.
Thanks for leaving this comment, it is really appreciated!
@@Mostly_Positive_Reviews Yeah it's better to be GPU bound, CPU bound no thanks, I also subbed. You do know what you're talking about when it comes to PC, you are the man!
Edit: I do like the fact that if you go from low to max graphic settings, in some cases there's little to no penalty, sometimes you might even get an fps increase because you're taking the load off of the CPU.
@@niko2002 Thank you for the sub. Just hit 3000 yesterday so I am very grateful for that!
I've been around the block when it comes to PCs, but I am still learning every day, and that's what makes it so enjoyable for me!
@@Mostly_Positive_Reviews Yeah since me and my brother Built a PC with a Ryzen 7600x with an Rx6700 (non-xt), I've been very interested in computers.
But I also love cars & motorcycles, it's just more fascinating to learn outside of highschool. I'm glad UA-cam exists, that's how I got interested in computers, cars & motorcycles.
Because of that, I know how to build computers, which I have learned from reading a manual. I mean you can make custom power switches for PC and it can go anywhere you want, make a PC case out of cardboard, lmao the creativity just never stops, and that's what I love about computers.
@@niko2002 In the olden days of UA-cam you could actually find videos on how to make an atomic bomb, so yeah, there really is something for everyone, and it is a great place to learn for free!
I was bottlenecked with my Intel 9900X and my 3080 Ti. I simply doubled my FPS in all games. And now I use the Lossless Scaling software for frame generation and run all games at 150-250 FPS. It works great.
In almost every high-end game I played I noticed stuttering issues, and when I checked my CPU usage it was at 100%. At the time I didn't really know what that meant, but then I showed it to a friend and he told me my CPU was bottlenecking my GPU, and told me to cap the FPS to 60 and use frame gen. After that my CPU usage went down and I no longer had the stuttering issues.
Yeah, CPU at 100% will definitely do that. I made a video a while ago showing how capping your framerate can resolve a lot of stuttering issues when that happens.
@madrain can I message you rq?
Brilliant video, very clear and concise! It's crazy how many people in comments on benchmarks spread false information. I think everyone watching benchmarks should be made to watch this video first. Thanks for the upload!
Thank you! Really appreciate the kind words! It's funny you say this because even on this video some comments are way out there 🤣
@Mostly_Positive_Reviews I have seen them. Some people are just stuck in their ways I suppose lol. Keep up the great content 👏🏻
@@BenchmarkGaming01 Thanks! Just subscribed to your channel as well, will check your videos out soon 👍
@Mostly_Positive_Reviews oh wow. Thank you very much! Highly appreciate it. Any feedback/ suggestions would be highly appreciated. Thank you again!
Shout out to my fellow South African🤙, great video buddy, very informative.
I’ve been out of the pc gaming scene for a while now, so I started searching for videos before I upgrade my hardware, stumbled onto your video.
Subscribed, I will definitely support your videos in the future.👍
Ha, another Saffa! Glad to have you here buddy, appreciate watching the video and subscribing 🙏
Nice video. I have an i5-13600K with an RX 7800 XT and 32 GB of DDR5 5600 MHz RAM. Running everything at 1440p native. I'm GPU bound all the time.
Yeah, decent pairing. Not much difference between a 13600K and 14600K really.
You can also have a CPU bottleneck even if only 1 core hits 100%. On the other side, if you play online games where you need lower input lag, it's better to have a CPU bottleneck. The alternative is to cap your FPS (or Reflex & Anti-Lag), but a CPU bottleneck is preferable if you can get much higher FPS than the monitor's Hz. Actually, both CPU % & GPU % should be as low as possible while maintaining high FPS (lower than 60-70% for the GPU & the individual CPU cores). Even if the monitor can't show more frames than its refresh rate, the input lag improves (higher 1% & 0.1% FPS).
Yeah, agree! In multiplayer shooters most people prefer much higher FPS and don't care about screen tearing, as it is all about the lowest input lag and high framerates.
It is not preferable to have a CPU bottleneck as this results in frame pacing issues. Always always always use a variable refresh cap just under your CPU limit for the best combination of input latency and motion fluidity.
The exception to the VRR cap rule is if your game runs well in excess of your VRR window (600 FPS on 240hz display for example). In this case cap your game to a multiple of the refresh rate for even lower input lag.
Do not. Ever. And I mean ever. Leave your framerate free to do whatever it wants.
@@edragyz8596 I'm talking exactly about much higher FPS than your monitor's Hz. You're talking about the average gamer. What you're saying makes sense, but I'm talking about competitive gaming (3v3, 5v5, 8v8 etc.), especially if you play with a worse connection than your enemies. Fluid or consistent frame pacing doesn't mean much if the input lag gets worse in a game where you need every possible ms. VRR has an inherent latency penalty depending on your system; if you have an expensive PC & a good monitor, around 1ms is the best possible penalty. If the FPS is around the monitor refresh rate, I prefer the driver-based Ultra Low Latency Mode if the game doesn't have a built-in NVIDIA Reflex option. I play at 360Hz with a 780fps in-game cap, which allows my CPU cores to stay around 55-75% max & GPU around 60-70%, at 720p. This is the best setup I've found, and while it's not always consistent it gives me the best possible input lag. When I cap at 354FPS I get the same 1% lows but worse results, even though the game feels better. You need at least 4-500fps in these games. If you don't feel the difference you probably play with low ping, which compensates, & that slight delay isn't going to have a big impact on your game, but every player whose location is far from the server will feel a difference. Also, it doesn't matter whether you can feel a difference or not: if you have an opponent you have trouble killing, you'll find that this way you have better chances to kill him in 1v1 situations.
@@n1kobg Cap your framerate at 720 for the best results with your setup man.
Can't believe my Ryzen 2700x came out 6 years ago and can't handle 1% low anymore. Time flies so fast
Crazy indeed! The 2700X was such a good CPU when it launched, and still isn't terrible, but it's amazing to see how far things have improved since then. You do have a saving grace in the 5800X3D though, which you can just plop in.
The Ryzen 2000 and 3000 series came out and were still slower than Intel's Core i9-9900K, so it's not really a surprise.
Great video dude, well explained. I have a 5600G paired with an rtx 4070 super and definitely get a cpu bottleneck.
Thank you, appreciate it! Yeah, the 5600G is definitely holding that 4070S back. Good thing is that you can just slot in a 5800X3D without changing anything else and you should be good.
@@Mostly_Positive_Reviews Or just play at 4K DLAA and problem solved lol, the CPU won't be the bottleneck.
I have a 4070 paired with a 12700K, and at 1080p I am CPU bound, but not at 4K. If you aim for 4K 60 then a bottleneck is a much rarer scenario. If you aim for 1080p 240 FPS then you really need a strong CPU for sure, maybe even a 7800X3D.
Very informative video, thank you for your efforts !
My pleasure! Thank you for watching and for the kind words, it is much appreciated!
What I'm hearing is: unless the CPU is maxed out at 100% and the GPU is chilling, which indicates an obvious CPU bottleneck, anything else will be damn near impossible to tell due to the game/software being used.
Yeah, pretty much. You can check other things, but not just with normal software. So for instance, if you reduce your CAS latency and the framerate increases, you know it's more RAM related. Similar thing when you increase your RAM speed. I mean, on a very basic level. It obviously goes a bit deeper than that, but there are things you can do to somewhat determine why the GPU is not reaching its max potential.
Rule of thumb for games: less than 100% GPU utilization means you are CPU bottlenecked. >90% of games are graphics heavy.
Love these vids!
Not as much as I love you! Wait, what...
RivaTuner has built-in options to show "GPU busy" time in milliseconds and "frame time" in milliseconds. Whichever takes longer is the limit. It uses PresentMon under the hood to do the math and spits out a "Limited by: GPU/CPU" message during the benchmark.
I've seen this in some videos but haven't played around with it before, and only used CapFrameX for the GPU Busy metric. Will check it out, thanks!
HWinfo64 does frame time /busy too
I've done this with Intel PresentMon and GPU/CPU Wait to see what next I would change
Yeah, this is using Intel's Presentmon just tied to the overlay. But you can use Intel PresentMon standalone too like you did to get the same result.
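Since PresentMon can also log captures to CSV, the same "whichever takes longer is the limit" logic can be applied offline. A minimal sketch, assuming the capture has per-frame frame-time and GPU-busy columns; the column names vary between PresentMon versions, so `msBetweenPresents` and `msGPUActive` here are placeholders for whatever your build actually emits, and the 90% occupancy cutoff is my own assumption:

```python
import csv
import io

def limited_by(csv_text: str, frame_col: str = "msBetweenPresents",
               busy_col: str = "msGPUActive") -> str:
    """Average frame time vs average GPU-busy time over a capture.
    If the GPU is busy for nearly the whole frame it is the limit;
    a large gap means the GPU sat idle waiting on the CPU."""
    frame_times, busy_times = [], []
    for row in csv.DictReader(io.StringIO(csv_text)):
        frame_times.append(float(row[frame_col]))
        busy_times.append(float(row[busy_col]))
    avg_frame = sum(frame_times) / len(frame_times)
    avg_busy = sum(busy_times) / len(busy_times)
    return "GPU" if avg_busy >= 0.9 * avg_frame else "CPU"

# Toy capture: ~10.5 ms frames but the GPU only busy ~4.8 ms of each
capture = "msBetweenPresents,msGPUActive\n10.0,4.5\n11.0,5.0\n10.5,4.8\n"
print(limited_by(capture))  # CPU
```

In a real capture you would read the CSV from disk; the averaging also hides frame-to-frame swings, which is exactly why the per-frame overlay metric is more precise.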
One of my computers has a 12400F, an RTX 3050 and 3200MHz RAM, and no matter what settings, I was never able to saturate the GPU to anything close to 100%, no matter if it was running 1920x1080 or in the 720p range. Great video, great information. Thanks.
I have a 12400F in my recording system. It's a very underrated CPU and it doesn't break the bank. It should be able to fully saturate a 3050; I use mine with a 3060 and it can keep it at 99% usage in most games I play.
@@Mostly_Positive_Reviews Must be the 100 dollar motherboard. It was a build I threw together to replace a system built in 2007 that I had plugged the 3050 into. If I had the time I would do some investigating as to why that 2nd system cannot peg the GPU.
@@videocruzer Could be. Might have some PCIe limitations or very poor VRMs holding the CPU back or something.
This might sound like a nitpick, but as an RTSS overlay user myself (or at least yours looks pretty much the same), it bugs me to see reversed usage and temp placements for CPU and GPU: GPU is temp - usage and CPU is usage - temp @.@
Hahaha, you aren't the first to point this out. It really was an honest mistake. The default order in CapFrameX is slightly different and I didn't pick it up. It has been fixed in subsequent videos.
There's also something else that isn't often mentioned - GPU usage impacting input delay even in CPU bound scenarios.
Explanation: If you use an FPS limiter in competitive games. For example, I use V-SYNC (NVIDIA Control Panel, global On) + G-SYNC with a 200Hz monitor in Overwatch 2. That automatically locks my FPS to 189 to prevent V-SYNC activation at all times.
This results in a certain amount of input delay. Then if I reduce my render resolution below 100% and use FSR for example, this will result in the same 189FPS cap, but a lower GPU usage and therefore lower input delay, because GPU Busy times are reduced.
This is why people who play competitive titles like Overwatch, Valorant or CS would still use low graphics settings even on a RTX 4090.
Additionally features like the NVIDIA Reflex + Boost will result in even lower input delay in these scenarios, because +Boost setting causes your GPU clocks to stay at the minimum of the Default Max Clock.
It's the same value as if you used the Prefer Maximum Performance setting in nvcpl. This results in your GPU Busy times to be reduced even further on top of the reduced render queue which is the main purpose of the NVIDIA Reflex itself without +Boost enabled.
Yeah, you're right, this topic does not get a lot of airtime, and I think it's because it's not that well known.
I might think of a way to string a video together to explain the basics.
Many people aren't aware that Reflex + V-sync actually caps your framerate just below your monitor's refresh rate for this exact reason. It only needs very little additional GPU headroom to reduce latency. For example, my 165Hz panel gets capped at 158 FPS with V-sync + Reflex + G-sync. It also prevents the framerate from hitting the V-sync limit, which would also increase input latency, as you said.
Amazing video.
How do you make it so that RTSS shows the correct CPU % utilization? Task Manager shows the correct value while RTSS is wrong for me.
Thanks!
I didn't do anything special, but make sure you have the latest version installed. I think it's 3.2.7 now.
The main problem here might not be what you described. Frame generation has a lot of different GPU units doing different types of work. Shaders or RT are usually the slowest ones, so the rest of the GPU rests. And then there's also post-processing for a finished frame... that's how you get an underloaded GPU in most cases. And BTW, the logic of how and in what order a frame is generated, and what the game engine does, is different in every game, so you can get the same or a totally different CPU load on different GPUs!
You can also have GPU-Z running on your second screen. Go to sensors tab. It will give you PerfCap Reason. It might be Thermal, which is obvious but it can also give voltage related reasons, such as Reliability Voltage which means cannot boost higher at current voltage, or something else it will indicate.
Great video !!
How do you make the RAM and VRAM appear in GB instead of MB, and how do you put the labels explaining what is what on screen?
Can you provide your afterburner profile ? 😅
Busy uploading a video now, as there have been a few people that asked 👍
Here is a quick video of my CapFrameX setup: ua-cam.com/video/EgzPXy8YYJw/v-deo.html
I wish someone would create an algorithm/chart in an app where you select your CPU & GPU, then select a "desired FPS", and it would automatically print the graphics settings you need to apply for your game.
This would indeed be a good idea, and would be popular as well. Would take a lot of effort, but I think it can work.
You are amazing, your explanation is clear 😊
Thank you! 😃 I really appreciate the kind words, a lot!
Great video. Very informative and enjoyable!
Thank you! And thanks for watching and commenting, really appreciate the support and engagement!
I always looked at it as: GPU at or near 100% utilisation means you're GPU bottlenecked; GPU consistently not at or near 100% means a CPU bottleneck. This way does fall apart if a game is FPS capped, for example.
In a huge number of cases with low CPU usage and below-100% GPU usage at an unlocked framerate, it's either a software problem (not properly optimized) or a memory bottleneck: if the same CPU had much more and faster L1/L2/L3 cache, I'm sure it would get much higher framerates, and much faster RAM with lower timings also counts for increasing CPU performance.
you get more stutters in CPU bound scenarios than GPU bound scenarios
100% 👍
For me, I don't mind as long as it isn't stuttering. If I'm stuttering due to a bottleneck, then it's time to get a new CPU. As we know, GPU bottlenecks aren't as bad as CPU bottlenecks.
Also, a CPU bottleneck usually isn't because the game is demanding; it's usually due to the CPU not being able to keep up with communication with the GPU. That's usually a bottleneck. A game bottlenecking a CPU, or being CPU limited, usually doesn't affect much.
Like here, the game was more than likely still running fine, so for me that's not as bad.
Good explanation on shifting the load.
I love the comments on Facebook and Reddit from non-believers. They legit think bottlenecks aren't real haha. They don't believe any CPU can bottleneck a 4090. They really think their 7800X3D can keep up, and yes it performs well, but they still have more of a bottleneck than they may think. The 4090 is just that powerful.
Or like me, a 4080 Super with a 13400F. I have a bottleneck, and my bottleneck even causes stuttering at times, which is why I will be upgrading soon. But yeah, a lot of people go "naw, that's not bottlenecked". Haha
Appreciate you taking the time to write this comment. Yeah, it's amazing how many people deny stuff like this. But I agree 100% with you: unless it's really impacting your experience, don't worry too much about it. If you still get framerates that are acceptable to you with no stuttering, then it's most probably not worth upgrading your whole platform. But also, a 13400F is definitely holding back that 4080, so in your case you'll get a very decent bump by going for a 13700K or similar. I have a 12400F system as well that I test more entry-level GPUs with, and I once tested the 4070 Ti on there, and it was quite bad. GPU temps were pretty low though, as the GPU hardly got used 🤣
@Mostly_Positive_Reviews Yep, exactly haha. I will say I'm getting over 90% utilization at 4K, which is OK, still bottlenecked, but yeah, it's been running OK at 4K. Def has some stutters, so def going to be upgrading soon.
Thanks. This showed me that my GPU is sitting idle waiting for frames to be generated and it justified my CPU upgrade if anyone asks :P
My 3070 Ti was at 6ms where my 11700K was at 16ms, and the worst part is I am getting a 3090 later this week, so it will just get worse.
Oh right, you already saw this video hahaha. Just replied to your other comment 🤣
Thanks. I learnt a lot from your video. But as I checked this with Tomb Raider, even when the GPU Busy Deviation was lower, the CPU usage became higher. Is this inconsistency exactly what you said about this method not being 100% accurate?
I got vertigo watching this.
Good video. Cyberpunk is a good game for this because it's been out in some form for 4 years, so it's probably pretty well optimized. Driver issues and engine issues could probably cause some bottlenecking.
Thanks again! Yeah, there are many other reasons for a "CPU bottleneck", and software / engine is just one of them. I touched on it briefly, but because I was more focused on showing how to identify a bottleneck via CapFrameX I didn't go into too much detail, and some people wanted me to, based on the comments.
@@Mostly_Positive_Reviews The video was good. You'll never be able to hit every single point. Your videos would be hours long.
Yeah, not a lot of people realize that. Appreciate the kind words and watching the video 👍
I have a 7800x3d paired to a 4080 OC.
I am currently playing Skyrim at 1440p without a frame rate cap (a mod).
And with enough mods, textures, grass mods and DynDOLOD you can absolutely see the CPU bottleneck everything.
The FPS will be below the V-sync 165Hz cap, and the GPU will be at 90% with the CPU at 20%.
I think it's because the game only really uses one thread.
So while it seems the CPU is chilling. It's actually not able to let the 4080 go to 99% on Skyrims game engine
Yeah, Skyrim being quite an old game at this point means it's not optimized for multithreading. And mods can be brutal too!
Skyrim is horribly optimised
Interesting
@@Greenalex89 yeah I figured it was at least noteworthy
For the situation on the right at the beginning of the video, it's not quite a CPU bottleneck but a RAM speed bottleneck: the RAM is not fast enough to deliver all the data the CPU needs. That is why AMD's 3D V-Cache CPUs stack a very large L3 cache for a higher memory hit ratio.
For most RAM speed bottlenecks, it's a RAM latency problem. RAM read and write speed isn't a serious problem for the majority of games. Imagine copying files: if you copy and paste a single big file like a zip or a movie, it is very fast. But if you want to copy a lot of small files like pictures, then it's slow. Same thing with RAM.
Some games are optimized: they will try to combine many small pieces of data into a large one, or simply cut down unnecessary processing. But others are not, especially indie games.
It's so crazy that your video showed up for me. I noticed this yesterday: when I increase the game's graphics settings I get better performance / FPS. Now I see why. Thanks for the video.
Sometimes the algorithm works in mysterious ways 🤣
The GPU may be underutilized (not due to CPU bottlenecking) in the following cases:
1. Old games. They usually need a fraction of modern GPU power to run smoothly.
2. Games that are more CPU bound, like Stellaris. Graphically speaking there isn't much to render in Stellaris, but the CPU calculations and loads are tremendous.
In graphically challenging games you should see 95-100% of GPU utilization. If the number is lower, then:
1. Poor optimization.
2. CPU bottleneck.
Yeah indeed, many cases where GPU can be underutilized.
Your 2nd scenario isn't necessarily CPU bound. All it shows is that you aren't GPU bound. Your limiting factor could be the CPU, but not usage related, like the cache or something. Or it could be engine limitations, or IO. Or it could be GPU memory bound; that won't show up in utilization. Typically, if you're CPU bound at less than 100% utilization, you'll still see one of the threads pegging 100%. I'd imagine frame gen also moved the limiting factor to your GPU memory speed as well.
Yeah, for sure a lot of variables in the second scenario, and I touched on it briefly to say it's not necessarily just the CPU in this case, but many people still just call it a CPU bottleneck for simplicity.
@@Mostly_Positive_Reviews it’s definitely a rabbit hole I got sucked down for a while. Really the only conclusion is how much we lack in monitoring capabilities tbh.
@@sjones72751 100%. If there was an app to tell me definitively "RAM bottleneck" or "PCIE Bandwidth bottleneck" or "SSD bottleneck" I would buy it in an instant!
How could it be an engine limitation if changing to a better and faster CPU gets you higher GPU usage and frame rates?
You can also cap the game's framerate to work around a CPU bottleneck.
How did you get CPUBusy in MSI Afterburner/RTSS? I would like that functionality. Shout out from SA BTW!
There are more and more South Africans finding these videos, and I love it!
In this video I used CapFrameX instead of MSI Afterburner. It still uses RTSS but it has additional overlay options 👍
@@Mostly_Positive_Reviews How do you get GPUBusy and GPUBusy Deviation? I can't see them in CapFrameX.
@@Sebastianino Hmm, make sure you have the latest version installed perhaps? Other than that, check for the correct labels in the Overlay tab. They are called GPU Active Time Deviation, GPU Active Time Average, and Frame Time Average.
@@Mostly_Positive_Reviews I did not download the latest... Funny, because the search result says 1.7.1 is the latest, but inside the website 1.7.2 is the newest.
Simple: your CPU is at 100% but your GPU is not even close to 90%. With dynamic boost on both AMD and Nvidia based GPUs, those 80% could simply be P2, not P1.
Sure, but not all CPU bottlenecks present themselves as 100% CPU usage. Your CPU usage can be 40% and you can still be CPU bound. It's possible that a single thread is running at 100% and the overall usage is reported lower because it is averaged out across all cores, or your most used thread can be sitting at 60% but still be the bottleneck because it's just not fast enough to keep the GPU fully utilized.
The best way of identifying if you're CPU or GPU bound is by looking at the GPU usage. In general you want your GPU usage to be pinned at 95-100%. If it's significantly lower on average, then you're most likely CPU bound.
Can we talk about the CPU power use too? I can see it's not at 100%, so either the clock or the memory isn't at full tilt within that "100% usage".
During the first scenario, I only enabled 4 cores, no hyper threading, and no e-cores, that's why at 100% CPU usage power usage is lower than it would be if all cores were enabled.
@@Mostly_Positive_Reviews thank you
To determine a CPU bottleneck, besides obviously looking at GPU load, also enable per-core monitoring of the CPU. Whole-CPU load means nothing, because one core may be doing 95% of the job while the others sit without any load, or are even sleeping (parked).
100%. I spoke about this as well. If you have 20 threads and 1 is running 100%, and the rest do nothing, CPU usage will be reported as 5%.
Threads. If the game has only 4 threads total and you have a 24-core CPU, you can have 15% CPU usage and still be CPU bottlenecked.
100%. The CPU usage shown is an average usage across all cores / threads.
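To make the averaging point above concrete, here is a small illustrative Python sketch. The numbers are made up for the example (a hypothetical 20-thread CPU), not taken from any real monitoring tool:

```python
# Hypothetical example: a 20-thread CPU where the game's main thread
# is pegged at 100% while every other thread sits idle.
per_thread_usage = [100.0] + [0.0] * 19

# Overall "CPU usage" as typically reported: the average across all threads.
overall = sum(per_thread_usage) / len(per_thread_usage)
busiest = max(per_thread_usage)

print(f"Reported overall CPU usage: {overall:.0f}%")  # 5%
print(f"Busiest thread: {busiest:.0f}%")              # 100%
```

So a reported 5% overall usage can hide a single thread that is completely maxed out, which is why per-core monitoring matters.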
Very useful info, thanks
Glad it was helpful!
This applies more to older games. Most games have an engine limit where, no matter your CPU, you can't push past a certain frame rate. Normally this is an absurdly high framerate, and you wouldn't want to go past 200 fps for most games unless they are super competitive esports titles. Something like GTA 5 will let you push into the 180 to 200 fps range, for example, but it will be extremely unstable, which is why most people suggest capping it at around 120 to 144 fps. It's worth doing research on certain titles, especially if you have a high-end machine that is more likely to run into that sort of problem.
Yeah, GTA V was notorious for breaking above 180 fps, and some Bethesda games also have their physics tied to the framerate, so if it goes over 60 fps things start to freak out. There are patches that help with that, though. But agreed, best to find out if a game has an engine limit and try to stay below it.
Many new games do have uncapped frame rates though. If you do all you can with frame gen and dlss and overclocking you can push Cyberpunk to 300 fps or more. But there definitely are games that have engine limits, or issues when going above certain framerates.
very informative, thank you so much!
Thanks go to you for watching!
I don't mind if I'm CPU bottlenecked, as long as I get the performance I'm after.
Yeah, that's the correct approach. Depending on what performance I am getting it's not worth it to upgrade a whole platform for a 10% improvement kind of thing.
This is a very good video!
Thank you bud. Except I ruined it with typos 🤣🤣🤣
The best CPU bottleneck indicator is your GPU, because the CPU usage indicator only shows the average usage of your threads. If your CPU behaves well, it will make enough calculations and your GPU will always be loaded. Otherwise the GPU will be ready to deliver frames, but the CPU won't be able to calculate enough to keep up, and you will witness a huge performance decrease. The bottlenecking of Intel's 13th and 14th generation flagship CPUs may be caused by architecture issues, so you'd better stick to the 12th generation or switch to AMD's AM5-based X3D CPUs. The 7800X3D is a good and reliable CPU that will serve you for years. According to AMD, the 7000-series X3D CPUs will be better in games than the upcoming 9000 series, which means there's no need for current X3D owners to switch. By the way, the 9000 series of X3Ds is rumored to launch in October. Good luck in achieving system stability, folks!
Yeah, which is exactly what GPUBusy measures: how much of each frame's time your GPU is actually working. If it's not being utilized properly, you have a bottleneck somewhere.
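A rough sketch of the comparison GPU Busy enables, in Python. All timing numbers here are invented for illustration, not from a real PresentMon or CapFrameX capture:

```python
# Hypothetical per-frame timings in milliseconds (invented for illustration).
frame_times = [16.6, 16.8, 16.5, 16.7]  # total time per frame (~60 fps)
gpu_busy    = [6.1, 6.3, 6.0, 6.2]      # time the GPU actually spent rendering

avg_frame = sum(frame_times) / len(frame_times)
avg_busy  = sum(gpu_busy) / len(gpu_busy)

# If the GPU is busy for only a fraction of each frame, something upstream
# (usually the CPU) is the limiting factor, even if GPU "usage" looks okay.
gpu_idle = avg_frame - avg_busy
print(f"GPU idle per frame: {gpu_idle:.1f} ms "
      f"({100 * avg_busy / avg_frame:.0f}% busy)")
```

When GPU Busy is close to the frame time, the GPU is the limiter; when it is much shorter, as in these made-up numbers, the GPU is waiting on the rest of the system.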
Nice video, I like it. I have an i7 9700 and a GTX 1660 Super with a 600W PSU. I am planning to upgrade my GPU, probably to an RTX 4060 Ti. I know this card is questionable; I certainly would like to get a 4070, but my system won't handle a 4070 for sure. I mean, I don't even know yet if the i7 9700 would be okay with a 4060 Ti. What do you guys think?
The only issue with the 4060 Ti is the price. Other than that it performs very well. I used to run a 9700K with a 3070 and it ran fine with a slight bottleneck at 1080p, but nothing justifying a CPU upgrade. Your combination will be fine.
@@Mostly_Positive_Reviews Thanks for the reply, I appreciate it. There are very few videos with the i7 9700 and cards from the 40 series, so I was a bit confused. If the i7 9700 can handle a 3070, then a 4060 Ti shouldn't be any different indeed.
I agree about the price. There is no bad card - only bad pricing
A CPU can be a bottleneck as a result of hitting the single thread performance limit. And, technically, it is the GPU that is being bottlenecked; the CPU IS the bottleneck, in both of these scenarios.
Yeah, indeed 👍
While I do love the video and the way you explained it, you're always going to be bottlenecked by either the CPU or the GPU, because not all games are designed the same. Some games rely on heavy CPU calculations, while others rely on heavy graphical fidelity, and some on both. So no matter which way you go, you're always going to have a bottleneck.
Thank you, and yeah, 100% correct. The only reason to really care about a CPU bottleneck in particular is that your GPU is almost always the most expensive component, and it would be better overall if that is your bottleneck, as it means you are getting the most out of your system.
That said, you shouldn't rush out and buy a 7800X3D if you are 10% CPU bottlenecked. I'd say even up to 20% is fine and not necessarily worth upgrading a whole platform over.
You can either spend 13 minutes going through this video, or just look at the GPU usage: if it's lower than 100%, you are CPU bound. That one method will do fine 99% of the time.
As for the video itself, the first half is redundant for most people who follow tech updates; the GPU Busy part in the second half is the real meat for anyone who wants to know more about this stuff.
Frame Generation really helped with Starfield!
Yeah indeed, from both Nvidia and AMD. That game is just so CPU heavy and Frame Gen is almost a must there.
Actually, bottlenecks are a product of software issues as much as hardware. If a game uses the CPU inefficiently, then of course the CPU can become the bottleneck. Unless it uses the GPU inefficiently too.
It's incredible how many people don't understand this concept, and keep sticking new gen GPUs into older machines with almost 10 years old CPUs.
Then complain it's the games being unoptimised...
Yeah indeed. A while ago, if you had a 3770K or something, you could go from a GTX 560 to a 660 to a 760 without worry. That's not the case anymore. There are actually people running 4770K CPUs with 4070 Ti GPUs.
I like to cap my CPU so it doesn't go over a certain temperature (laptop), so the CPU and GPU aren't always at their best. I think I have a good pairing: Ryzen 7 7735HS, RTX 4060, 2x8GB (4800), 1080p. Nice explanation, thanks!
There are definitely instances where limiting components is a good thing indeed. But yeah, your components are pretty well paired. That CPU would even be able to feed a more powerful GPU, so you are fine.
@@Mostly_Positive_Reviews Keep up the good videos!
Thank you, appreciate it a lot 🙏
I'm playing Cyberpunk and my CPU is at 63% usage. When I saw the part of the video with 63% CPU usage and a bottleneck, I sweated.
After watching the whole video, I'm safe.
🤣🤣🤣
Even if your CPU is at 80% and your GPU usage is at 99%, you're fine. I just wanted to show that even if your CPU is not at 100% it can still cause a bottleneck 👍
@@Mostly_Positive_Reviews Can you give a short answer on why that's happening? Like, is it because of the game or the CPU itself? I watched the video btw, but it's hard to understand.
I also watched a 4070S + i5 13600K 1080p benchmark and the GPU was hitting 90+% throughout the whole video with different settings, so I'm kinda confused why you're getting those numbers with a 14600K, which should be a better CPU.
I purposely gimped the CPU to show what a CPU bottleneck looks like when displayed in RTSS / CapFrameX. I have done proper benchmarks with this CPU if you want to check them out 👍
It's okay. Sometimes the GPU can render more frames than what the CPU is able to prepare, and then your GPU usage will be lower.
Say your CPU can prepare 120 frames per second (resolution has no effect on this number). Then your GPU for example can render 160 fps at 1080p, 120 fps at 1440p, and 60 fps at 4K (all hypothetical numbers, but let's use them).
So if you play at 1080p, the GPU can do 160 fps, but the CPU only 120 fps, meaning your final framerate will be 120 fps. That's what we call a CPU bottleneck. Your GPU won't be fully utilized, as it can do 160 fps, but it's only being given 120 frames per second to render by the CPU.
Up that to 1440p, now the GPU can do 120 fps, the CPU still does 120 fps, so the GPU will now be fully utilized, showing 99% usage. CPU usage will most likely be high in this scenario as well.
Up that to 4K, the GPU can only do 60 fps, so the CPU only has to prepare 60 fps, and your GPU will be fully utilized.
Does that make better sense?
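The explanation above boils down to "final fps = whichever of the two is slower". A tiny Python sketch using the same hypothetical numbers from the reply:

```python
# Hypothetical numbers from the example above (not real benchmark data).
cpu_fps = 120  # frames the CPU can prepare per second, resolution-independent
gpu_fps_by_resolution = {"1080p": 160, "1440p": 120, "4K": 60}

for res, gpu_fps in gpu_fps_by_resolution.items():
    final_fps = min(cpu_fps, gpu_fps)          # the slower side sets the pace
    gpu_usage = 100 * final_fps / gpu_fps      # how busy the GPU ends up
    bound = "CPU-bound" if gpu_fps > cpu_fps else "GPU-bound"
    print(f"{res}: {final_fps} fps, GPU ~{gpu_usage:.0f}% utilized ({bound})")
```

At 1080p the GPU sits around 75% utilized because the CPU caps the pace; at 1440p and 4K the GPU is the limit and runs fully utilized.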
good video mate finally understand this shit now haha
Thanks! I tried to explain it in a way that made sense to me, because I watched many a video on GPUBusy and I don't think I found one that explained it in a way I could understand. So after playing around for a few weeks I thought I'd give explaining it a shot haha!
Is that MSI Afterburner / RivaTuner showing all the info? How did you manage to get it spaced out like that?
This is Riva Tuner combined with CapFrameX for the GPU Busy metric 👍 I used the default layout found in CapFrameX.
Very instructive video, thanks!
I have no idea what's going on =). Lol will forever be confused about cpu/gpu bottlenecks
It's okay 🤣 As long as you don't pair a very entry-level CPU with a very high-end GPU you'll be mostly fine.
So in the case of scenario 2, what would be the bottleneck? You mentioned that it could be anything system related bottlenecking the CPU and thus bottlenecking the GPU. How would you go about diagnosing the cause of the bottleneck on the CPU?
Identifying exactly what the issue is becomes a bit more tricky, but it can be done. You can first overclock the CPU a bit and see if it improves your framerate. You can also tune memory a bit to see if that improves it. But don't do 2 things at the same time, otherwise you won't know which one improved it. It can be that the CPU is just not fast enough, memory speed, memory latency, or all 3. But in the case of the 2nd scenario I wouldn't really worry about it too much unless your performance is severely impacted, and then in most cases all that will fix it to a greater degree is a new CPU and/or new RAM, which is rarely worth the cost.
Does capping frames affect CPU or GPU usage? And does it help with G-Sync and the overall smoothness of the game?
I am 99% sure it limits the CPU side, as the GPU frametimes still show, say, 6 ms while the CPU frametimes show 16.6 ms, for example. But it will reduce usage on both if you cap well below the maximum your system is capable of rendering.
It can help with smoothness indeed. G-Sync does not limit the framerate, so if you go above your monitor's refresh rate you will see tearing. Limiting the framerate to a few frames below your monitor's refresh rate prevents tearing, and with G-Sync enabled you will get a much smoother presentation.
@Mostly_Positive_Reviews
Thank you for responding. I'm using a 165Hz Dell monitor; with G-Sync enabled I'm getting 158 fps without any frame capping. Should I still limit frames in games to, let's say, 155, or limit them to 158? I'm getting strange tearing while gaming; I'm getting 120 fps and more but I can still see the tearing. 😞
How did you get your MSI afterburner to display like that?
This was using CapFrameX instead of MSI Afterburner overlay 👍
Fun fact: your mouse polling rate affects a CPU bottleneck too, so a 500Hz polling rate can be better than 1000Hz or more.
I have the opposite problem. After years there is still no game that utilizes my CPU.
🤣🤣🤣
What CPU do you have?
@@Mostly_Positive_Reviews Ryzen 9 5900X.
My older gaming PC has a CPU bottleneck, so it doesn't matter if you use 1080p or 1440p on it. ;)
@@V3ntilator More cores doesn't always mean better performance in games. Here are the technical challenges game developers face when trying to leverage multiple cores for better performance.
- Not all tasks in a game can be easily parallelized to run on multiple cores simultaneously. Some processes are inherently sequential, meaning they need to be executed one after the other, limiting the benefits of having more cores.
- There is a fundamental concept in parallel computing called Amdahl's Law. It states that the speedup of a program from parallelization is limited by the sequential portion of the program. In other words, if a game has certain tasks that can't be parallelized, having more cores won't provide a proportional increase in performance.
- Coordination and communication between multiple cores can introduce overhead. If not managed efficiently, the time spent on communication between cores can offset the potential performance gains from parallelization.
- Writing code that effectively utilizes multiple cores requires additional effort and expertise. Game developers need to carefully design and implement algorithms that can take advantage of parallel processing without introducing bugs or performance bottlenecks.
- PCs come in a wide range of hardware configurations. Developers need to create games that can scale across different systems with diverse CPU architectures and core counts. This can make optimizing for a specific number of cores challenging.
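The Amdahl's Law point from the list above can be sketched numerically. The 30% serial fraction below is just an illustrative assumption, not a measurement of any real game:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Maximum speedup when only the parallelizable part scales with cores."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# If, hypothetically, 30% of a game's per-frame work is inherently sequential:
for cores in (2, 4, 8, 16):
    print(f"{cores:2d} cores: {amdahl_speedup(0.3, cores):.2f}x speedup")
```

Even with 16 cores, a 30% sequential portion caps the speedup below 3x, which is why more cores alone don't fix a CPU bottleneck.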
60 fps is super smooth and more than enough for any game.
But of course, with modern unrealistic and totally unnecessary trends, we've come to the point where you need 200 fps to play a game; anything lower is pure shame and PC hell lol.
But seriously, the only thing you get is more heat and bigger energy bills, while gaining nothing important.
And then come the monitor manufacturers with 360Hz monitors, and now even 500Hz lol, so in 5 years, if you don't play games at 300 fps, you'll be put on trial and get the death penalty haha.
Jokes aside, if your CPU has enough power to push 60 fps, bottleneck or not, it doesn't matter, except for pure benchmarking.
Yeah, 60 fps is fine, but it is also the minimum for me. Personal preference of course. The video is more aimed at people running heavily underpowered CPUs with very fast GPUs as you ultimately want to make full use of your expensive GPU. A slight bottleneck is far from the end of the world though.
How can I get "GPUBusy Avg", "Frametime Avg" and "GPUBusy Deviation" in MSI Afterburner? Very good video!
Thank you!
The easiest way is to use CapFrameX; it has them built in already. You can do it with Afterburner, but you have to add plugins in RTSS and configure it from there. It's possible, but a bit of a mission.
@@Mostly_Positive_Reviews Oh thank you!!
Very interesting. Thx mate
Thanks for stopping by, it's really appreciated!
Hello brother. If the Intel i5 13600K has better 1% lows than the AMD R5 7600X in gaming with an RTX 4070 or RX 7800 XT, then sir, please tell me the right answer: should I buy Intel or AMD?
Tough one. If you go 13600K you won't be able to upgrade much, except to a 14700K for example, whereas the AM5 platform should get a few more upgrades. It also depends on price. If you don't care about the upgrade path, then go for the 13600K, as it generally has better memory overclocking as well.
@@Mostly_Positive_Reviews But what if I never overclock either of the two, the AMD R5 7600X or the Intel i5 13600K, because I'm afraid of burning my CPU by overclocking? Then which processor should I buy?
Brother, if you give me your Instagram ID or any other contact to ask about the right components, I'll pay you some money.
@@goluop0982 If you don't want to overclock then I'd say go for the 7600. That way you can still upgrade to a 9000 series CPU later without having to change motherboards. And the lows on 13th gen are only slightly better anyway. They become much better with overclocking, but if you aren't going to overclock I'd say go for the platform with the better upgrade path.
I don't have IG or anything else except Twitter, but that account is on hold for now. You really don't need to pay me ;)
You'll be perfectly fine with a 7600X, a B650 motherboard, and 32GB of DDR5 6000MHz CL30 memory. If you want to save a bit of money, go for 5600MHz CL28 / CL30, but the price difference between these kits shouldn't be that big anyway.
You can then get a 750W power supply, which would be perfect for the 7800 XT / 4070, whichever you decide to buy. I will say that if you buy the AMD CPU, go for the AMD GPU as well, as Smart Access Memory on AMD is supported in more games than Resizable BAR on Nvidia. But really, either system will be perfect for 1440p gaming. The 7800 XT should last a bit longer due to more VRAM, but the 4070 will be slightly faster in RT, and it also has DLSS, which currently has better image quality than FSR.
You can check if the Ryzen 7500F is available in your area. It will save you some money, and it's basically the same processor as the 7600 but without the iGPU, which you don't need since you have a dedicated GPU.
@@Mostly_Positive_Reviews Brother, tell me: is there any reason to buy the Intel i5 14600KF? Is it faster in gaming than the AMD R5 7600X at stock speeds?
GPU power usage may be an easier, lazier indicator of full GPU usage.
Yeah indeed. GPU Power combined with GPU Usage will give you a good idea. Using PresentMon is just a bit more accurate, but we all have our own ways of doing these things and coming to the same conclusion.
Great video, I just had 2 questions:
1: Is GPUBusy only for Intel, or can I use it with an AMD CPU as well?
2: When I play games I can see buildings and signs either pop in or render slower than the surrounding area, like a billboard's texture loading later than the buildings around it. I've noticed it in every game from Spider-Man to Cyberpunk and Alan Wake 2. Do you think it could be a CPU issue?
Btw my CPU is a Ryzen 5 7600 and my GPU is a 7900 GRE, with 32 gigs of DDR5 6000MHz RAM. The percentages are normal, meaning the GPU is at 90-100 and the CPU at 60-80.
Thanks!
You can use it on all systems, no issue.
With regards to pop-in, Cyberpunk especially is notorious for low LOD; it has a lot of pop-in. Your system is pretty well balanced, so unless something is wrong with one of your components I don't think it's that. Maybe check the Level of Detail, view distance, and texture detail settings to see if that helps. It doesn't in Cyberpunk, but I don't remember AW2 having that issue for me; it has been a while since I last played it though.
Low CPU usage and a not fully used GPU, with high fps, means the game is well optimized!
God dude stop talking in circles. You could cut this video in half
Ever tried to be nicer on the internet? Thought not...
Lekker vid from my fellow South African 😅
It will never not be funny to me how we can immediately tell when someone is South African 🤣
Maybe a stupid question, but will increasing the resolution be enough to reduce a CPU bottleneck in some cases? For example, I have a 2700X with an RX 6600 and I was thinking of buying a 1440p 144Hz monitor, which may help me reach a stable 60 more easily by reducing the CPU bottleneck. Is it a good idea?
Not a stupid question at all. If you are getting 50 fps and are CPU limited at, say, 1080p, then even after increasing the load on the GPU by increasing the resolution, you will still only get 50 fps max, as the CPU is only able to prepare 50 fps regardless of resolution. But shifting the load to the GPU will most likely get you better frametimes and less stutter.
In the above example, where your CPU can only put out 50 fps, you will be CPU bottlenecked until you increase the graphics settings enough to drop below 50 fps. You can only ever get the highest fps of your weakest part.
Sometimes it's a CPU bottleneck and sometimes a GPU bottleneck, depending on the scene in the same game, but one will be less prominent than the other.
Man, could you help me? I need help and nobody has been able to help me yet. I have an RX 6650 XT and a Ryzen 5 3600. Other people with the exact same setup, or an even better GPU, don't have the huge stutter I have in Battlefield 5, Battlefield 2042, Palworld, etc. I tried EVERYTHING and it looks like a CPU bottleneck to me. But why do other people have only a slight bottleneck and no stutter, while my games are unplayable?
It's the exact same setup. I have fast 3600MHz CL18 RAM, an NVMe M.2 980 Pro, temps are well under the limit, and my mobo is great too. XMP on, SAM / ReBAR on, etc...
Hey man, sure, let's see if I can help. Can you perhaps upload a video with an overlay showing CPU / GPU usage etc.? You can upload it to your UA-cam, list it as "Unlisted", and mail me a link so I can have a look. My email address can be found under my info on my channel 👍
@@Mostly_Positive_Reviews I will do that in a couple of days. I found the program called PresentMon and I'm trying to figure out if I just have an insane bottleneck. But thank you so so much dude. I will do that :)
Anytime! If you need help, I have a video on here about setting up RTSS with MSI Afterburner. It shows you step-by-step how to set it up to show CPU and GPU usages etc.
Once you are ready in a few days, send that email and we'll take it from there 👍
Which program do you use for the benchmark overlay (top left corner)? :)
Busy uploading a short video now to show you how to set it up!
@@Mostly_Positive_Reviews I have an i7 6700K with an RTX 3060 Ti and I don't know if I have a bottleneck or not hahah. My GPU is at 95-99% and my CPU at 70-80% with ultra settings with RT. Thanks, DLSS frame generation! Or should I thank AMD for open source FSR 3.0 🤣🤣
@@toolzgosu Here you go: ua-cam.com/video/EgzPXy8YYJw/v-deo.html
So why does the CPU indicate only 75%?