Awesome! Kudos to him for coming out! Thanks for the video; that's awesome software for not only monitoring and comparing component combinations but much, much more! Sweet
NVIDIA had a prototype feature in some driver versions to fix exactly this, called SILK. It was later removed, but in the games where it worked it was amazing.
Given how few leaks there have been, it seems really unlikely that Battlemage will launch by May. But Intel did, for whatever it's worth, officially tell us that it's coming. They just don't have a date for the public yet.
Indications have been Q2, but it could be _late_ Q2. Can you wait a ~month? Integrated graphics have come a long way. I feel like you could get by until BattleMage releases.
Some feedback for the tool: allow setting up custom triggers that either pop up the overlay or write the configured datapoints to a log file. A trigger could be something like the ratio between GPU Busy and the total GPU time, or the CPU having too little wait. Simply speaking, noteworthy events you'd consider worth investigating, changing settings over, or upgrading hardware for. This would be useful so you don't have to constantly watch the overlay during a session and can focus on the game, only taking a look if something needs attention.
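Something like that can already be roughed out offline against a capture. A minimal sketch in Python, assuming a CSV export where the column names below are placeholders rather than the tool's actual headers:

```python
import csv

def flag_suspect_frames(csv_path, busy_col="GPUBusy(ms)", frame_col="FrameTime(ms)",
                        min_busy_ratio=0.85):
    """Flag frames where the GPU was busy for a much smaller share of the frame
    time than expected: a rough offline stand-in for the 'trigger' idea above."""
    flagged = []
    with open(csv_path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            frame_ms = float(row[frame_col])
            busy_ms = float(row[busy_col])
            # A low busy/frame ratio suggests the GPU spent much of the frame waiting.
            if frame_ms > 0 and busy_ms / frame_ms < min_busy_ratio:
                flagged.append((i, busy_ms, frame_ms))
    return flagged
```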
Yes, this is exactly how it should be looked at. My previous comment on the last video about this subject was to load the GPU to 100% or slightly over. That should be the scale for the worst scene in a game; once easier scenes are rendered, the CPU should have 20-30% headroom to pick the FPS back up, since it should be running at roughly 60-75%. This ensures the hardware is balanced correctly and passes work back and forth with as little wait time as possible, accounting for an unbalanced workload like a game that fluctuates in how it loads the GPU and CPU within their own performance envelopes.
That means if you have a weak CPU and a much stronger GPU, load the GPU until it's at 100%, then use input latency to gauge how overloaded the GPU actually is. High input latency means the GPU is too overloaded; in that case, lowering settings like shadows one step, or in some games ambient occlusion, usually helps. Settings like texture quality, and in some games post-processing, can be set one or two steps higher (sometimes maxed) to stabilise frame fluidity. In some games, with texture quality maxed, you could set everything else to medium or low and then raise each setting individually, aiming for frame fluidity rather than maximum frametimes, then adjust using input latency. High input latency probably means the CPU is waiting more than the GPU, with the GPU holding up the animation and total FPS: the frames will be super smooth but the latency will be very high, so it's not great for shooters. The opposite creates stutters, because the weak CPU is overloaded when the settings are too low and the GPU is no longer holding the rope taut by being fully utilised; this is where the frame graphs start spiking and going all over the place.
The only explanation I can think of is that CPUs clock a lot higher than GPUs in general: a GPU might fluctuate between 1000 MHz and 1600 MHz, whereas most CPUs swing between 2 GHz and 5 GHz, so maybe it's a clock-scaling issue where the wider the gap between min and max clocks, the bigger the fluctuations, and that's reflected in the frame graphs. By maxing out the GPU you're effectively setting a limiter on the CPU. Whatever the cause, smoother frames come from a maxed GPU and a 60-75% CPU.
It's like a synchronised swimming team with a new member: the new swimmer messes up and the faster, more efficient swimmers have to stop to get the new swimmer up to speed. It's stop and start, the same as stutter. The ideal situation would be like two bodybuilders passing a boulder between each other, except that when the boulder passes from the strong GPU bodybuilder to the weaker CPU bodybuilder, the boulder shrinks to accommodate the weaker one; once the smaller boulder gets passed back, the workload is matched to the performance of each bodybuilder. This ensures both can do the task at the same speed and wait the same time between passes even though the loads they're working with are different. Strange metaphor, but this is the way I try to run my games. A strong GPU with a weak CPU isn't the end of the world: crank the settings to match your GPU's performance and take the weight/wait (pun intended) off the CPU so it can run within its limitations.
The stronger component has to be slowed down with load so the weaker part can run at the same "speed" and keep up, like tying 100 kg weights to Usain Bolt so I can run the same speed as him over 100 m. Sorry for the long post, and great video; appreciate it.
Watch our video with Tom Petersen talking about how GPU drivers actually work (and what "optimization" means: ua-cam.com/video/Qp3BGu3vixk/v-deo.html
Learn more about system latency in our interview with an NVIDIA technical expert previously: ua-cam.com/video/Fj-wZ_KGcsg/v-deo.html
Watch our previous interview with Tom Petersen where we discussed GPU Busy: ua-cam.com/video/5hAy5V91Hr4/v-deo.html
Watch our debut of GPU Busy testing here: ua-cam.com/video/raf_Qo60Gi4/v-deo.html
Can anyone explain to me if SAM on the GPU side can cause very low "AMDip" FPS drops?
I've noticed it lately in a lot of console port games.
Please do another real independent story on why the prices of industry GPUs are as high as those of used vehicles. I'm not saying the vehicle is in A-1 condition, but more like a hooptie.
Considering the iPhone 14 can be mass-produced at the price of $10 each, what's the story behind these outrageous prices on these GPUs being produced? What are the hidden costs of mass-producing them?
@@BaBaNaNaBa You're probably hitting a coding problem where they didn't fix the memory allocation when porting the console code to PC. You could report it to AMD's driver team as a bug too, or to the game developers.
This is a second comment. First got deleted. Maybe was too long...
First, I really love that "Animation Error" will finally see the light of day :)
Second, I am myself a graphics engineer and was working on a frame pacer. To measure the frame pacer's performance, I also came up with an "Animation Error" metric (funny, the name is exactly the same). However, I calculated "Animation Error" and "Display Time" differently.
At first, "Animation Error" was comparing "Animation Time" and "Display Time" of each frame. So if "Animation Time" == "Display Time", then we have no animation error, and presumably - smooth animation. However, on practice this was wrong. For example: using 25 ms "Animation Time" for 40 FPS on 60 Hz (no VRR) display produces better visual results than matching the 16.7/33.3ms "Display Time".
Problem in the Sample and Hold nature of modern displays and how our eyes perceive the animation. After long investigation, I introduced "Ideal Animation Time" that is a Display delta time, measured between centers of the Display frames.
Formulas I used:
DisplayTime[i] = DisplayBegin[i + 1] - DisplayBegin[i]
IdealAnimationTime[i] = (DisplayBegin[i + 1] - DisplayBegin[i - 1]) / 2
AnimationError[i] = 20 * log10(AnimationTime[i] / IdealAnimationTime[i])
I hope Tom Petersen will read this comment.
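For anyone who wants to play with those formulas, here is a minimal Python sketch (variable names are mine, not from any shipping frame pacer):

```python
import math

def animation_error_db(display_begin, animation_time):
    """Per-frame animation error in dB, following the formulas above.
    display_begin[i] is when frame i started being displayed (ms);
    animation_time[i] is the simulation step used to build frame i (ms)."""
    errors = []
    for i in range(1, len(display_begin) - 1):
        # "Ideal Animation Time": display delta measured between frame centers.
        ideal = (display_begin[i + 1] - display_begin[i - 1]) / 2.0
        # 0 dB means the animation step matched the ideal exactly.
        errors.append(20.0 * math.log10(animation_time[i] / ideal))
    return errors

# 40 FPS content on a 60 Hz panel (no VRR) alternates 16.7 ms / 33.3 ms on screen,
# while the game animates with a fixed 25 ms step; the error comes out as 0 dB.
display_begin = [0.0, 16.7, 50.0, 66.7, 100.0]
animation_time = [25.0] * len(display_begin)
print(animation_error_db(display_begin, animation_time))
```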
when the computer has STE's O.o
This guy single handedly carrying Intel's consumer relations
Tom is a great engineer. At NVIDIA, he was the one driving a lot of the OC functionality and was the reason NVIDIA kept the cards overclockable. At Intel, he's working with his team to create open source tools that will likely reveal more flaws with Intel GPUs than anyone else, but they are still making and releasing them. Hopefully the corporate overlords realize how important this work is and how Arc will have to lose money for a while before it can find success.
@@GamersNexus Ah, so now that he's not at NVIDIA, we can (maybe by circumstance) see why overclocking is basically dead (or at least any that can be felt by users in typical at-home gaming).
Seriously. Any video with Tom Peterson I will watch. So clearly knowledgeable, passionate, and an excellent communicator. You just want to trust him as an authoritative source.
I'm not sure about other board partners, but "soft overclocking" through a video card's software is usually very easy to do and relatively safe. For example, ASUS GPU Tweak. @Photo-Jay
@@JayJayYUP Cards are pushed so hard out of the box these days that the gains you can get are minimal. They all ship with algorithms that basically continuously overclock the card to whatever is stable at that point.
Something that's absolutely not dead is undervolting. With a bit of time and effort you can significantly reduce power draw for your card, which often even leads to slightly better performance as you won't hit the current limit. On many cards you can drop power draw by as much as a third without giving up performance. If you like a quiet computer, this is a must. Plus it's impossible to damage your card this way. There really is no downside.
to all hardware companies:
let
your
engineers
TALK
to us, customers
Engineers have a nasty habit of telling the truth that executives don't like.
They seem to be figuring it out more and more, at least for our channel! Intel, NVIDIA, and AMD have now all allowed their engineers to go on with us multiple times, so that's been awesome to see. Big change from a few years ago! But we need to keep showing them that people care about these discussions to get them to keep doing it!
It's a great idea if the person talking can break it down in a way people can understand, otherwise people would just be confused.
As an engineer, I can say letting "us" speak is a high-risk, low-reward strategy. Most engineers have a hard time explaining things without unnecessary details. Additionally, strategic wording is important; otherwise you end up with situations like the LTT Labs "retesting every time" comment. People like Tom are rare and therefore too valuable to have them spend resources on presentations like this that only reach a tiny slice of potential customers.
I only care about what engineers say. Don't care for executives, marketing material, PR stunts, landing pages.
Dude... Fraps takes me way back.
Also love this guy, he clearly knows his stuff and is excited to talk about it.
Paid for a FRAPS licence a very long time ago. It was the best way to take screenshots back then. What a blast from the past
Used the free version to record Minecraft videos when I was a kid lol
sameee fraps for runescape lol god dayum an other things too
@@muchen1 XD runescape what a throwback
I used FRAPS in 2000 and onward with optimizing UT99 (and every game I played). I wrote my own AVG functions. You could log framerate in .csv. So I created my own functions that gave me %inMIN, MIN, MIN AVG and AVG. Then I messed with graphical settings so that MIN AVG was as close to AVG, and %inMIN was as low as possible. This was as close to "fluid" gameplay as we could get. Frametime graphs didn't exist back then, and I think there wasn't any FRAPS-like application to log frametimes.
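For anyone wanting to recreate that kind of analysis today, here is a minimal sketch over a CSV of framerate samples; the column name and the exact %inMIN definition (samples within 10% of the minimum) are my assumptions, not the original tooling:

```python
import csv

def fps_summary(csv_path, fps_column="FPS", min_band=1.10):
    """Recreate the AVG / MIN / MIN AVG / %inMIN idea: MIN AVG averages the
    samples within `min_band` (here, 10%) of the minimum framerate, and %inMIN
    is the share of all samples that fall inside that band."""
    with open(csv_path, newline="") as f:
        fps = [float(row[fps_column]) for row in csv.DictReader(f)]
    avg = sum(fps) / len(fps)
    low = min(fps)
    near_min = [v for v in fps if v <= low * min_band]
    return {
        "AVG": avg,
        "MIN": low,
        "MIN AVG": sum(near_min) / len(near_min),
        "%inMIN": 100.0 * len(near_min) / len(fps),
    }
```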
i swear when intel becomes a true competitor to team green and red if Tom becomes the face of the brand they cant lose. dont change Tom
He will get poached before that happens.
Team Red? 13:51 - According to this guy both Intel and Nvidia are sharing something with the world. What "Team Red"? Oh... hold on, there is "US" and... "others" whatever that means.
@@alexkay7823 amd
can't*
@@alexkay7823 How can you be so confidently out of the loop? AMD's logo is red, so Team Red, Nvidia's is green, Intel's is blue
and the cold war ended decades ago, now it's just the standard govt shenanigans from _every country_
Damn, all the bots got nukes
Thanks, Steve.
NUCLEAR LAUNCH DETECTED
@@GamersNexus _"A-Bomb prepping. A-Bomb launch detected."_
@@GamersNexus spawn more overlords
Engineering Discussion is a good name.
It's about as literal as I could make it! hahaha
TAP!! Always nice to see him in your videos. Highly knowledgeable and an excellent explainer.
TAP is definitely among the best for this kind of video!
intel is actually doing interesting stuff with their graphics. I think this will be great for testers and end users alike to see where they are actually losing performance.
No they aren't. It's mainly movie focused and they are designing for energy efficiency now. 15w graphics tile 😢
I don't like their cards because they perform bad in older games. I like my old games literally to fly, to play them like I couldn't at the time. I do that with my RTX 3060, with Intel it's impossible, with Intel it's like I'm with my 1050ti all over again in old games. So unless they repair that, I'll probably never get an Intel card. Also once you step out of the benchmark "stack" and "most popular" Intel performs bad. They probably need 10 or 20 years to catch up with Nvidia.
Tom Petersen is the best GN guest, can't wait for Intel Arc Battlemage
Best PR man at Intel: Tom is incredible, and PresentMon is a very well-made project...
You may want to consider a camera stand or something when you're only in one location for the majority of the time. The jitter can be very odd (motion-sickness-inducing for me) when it's a full-view frame shot. Great video!
Good point.
I just noticed the slight tilts during the video.
I'm noticing now how it's a little jolty, thank you
GN x TAP content always the best 💖
As an Intel engineer, "GO TAP!"
I always enjoy seeing TAP talking about Intel Arc development. Though I do not work on Intel Arc, so it makes me wish TAP could present things on other projects haha.
For real. IFS needs him to talk up what we're doing over in stacking and packaging land. I love seeing people like him out and talking about this stuff. Even if people don't understand everything in a presentation, they do pick up on when the person presenting cares about it and knows their stuff.
De8ar did a deep dive with an Intel engineer a while ago on heat spreader design, so he's definitely not the only one.
@@arthurmoore9488 you mean Roman "Der8auer" Hartung? Yeah he did interview Mark about CPU thermals and power. It is good to see an engineer talking directly to the media, but it's still far too rare. And regardless, I just think TAP has a certain enthusiasm and way of explaining things which is really effective.
The display stand - looks like it could lift a car
Very interesting video - thx
I found it on the side of a road! Literally!
@@GamersNexus 😂
post-consumer frugal chad
3D printing gets a lot of credit and praise for all of the crazy things we can do in rapid prototyping.
But under the radar has been just how easy it is to make a PCB. There are PCB machines no bigger than your computer desk, and plenty of services like PCBWay where you can submit your PCB and have it mass-manufactured quickly.
It's really crazy how fast that has progressed too.
I remember spending hundreds of dollars because I wanted a PCB made in under a month, and getting anything less than a full panel required sending the files to a guy who would act as a middle man.
A concrete example I've run into is a situation where mouse-controlled camera motion is stuttery even though the frametime graph is flat. That's because when the input lag is uneven, the captured mouse motion becomes uneven. Issues like these usually appear when you switch between a CPU and GPU bottleneck or a framerate cap. I wish there was a way to make games operate at a stable input lag that smoothly transitions from a capped framerate to dropping below the cap, or between CPU/GPU bottlenecks, even if it meant the game had to use the worst-case input lag all the time.
1% lows have been bothering me for a long while, because it's not the low framerate represented in that number that I necessarily experience or notice; it's the flat-out halts, the jitters, the massive chunks of redrawn frames. It's hard to parse. What you REALLY want to know is "how often am I going to notice I do not have a new frame ready"; that's what you FEEL. I know it's hard to represent these sorts of things with numbers and averages; by design they don't really illustrate the extremes very well. Software input-to-frame data is gonna be sick as well!
1% lows would represent flat-out halts. They manifest as a huge drop, which we then show in frametime plots.
@@GamersNexus How about standard deviation plots? A wide plot indicates choppy framerates, while a narrow one would indicate a steady framerate.
@@GamersNexus Still, though, 1% lows don't tell you how often it stutters, right? Like, the game freezing for 1 second 3 times would affect 1% lows the same as it freezing for 10 ms 300 times, no? (I picked extreme examples)
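For anyone curious, the two scenarios are quick to compare on synthetic data. A toy sketch, using one common definition of "1% low" (the average of the slowest 1% of frames; reviewers compute it in slightly different ways):

```python
def one_percent_low_fps(frametimes_ms):
    """Average FPS over the slowest 1% of frames (one common '1% low' definition)."""
    worst = sorted(frametimes_ms, reverse=True)
    n = max(1, len(worst) // 100)
    return 1000.0 / (sum(worst[:n]) / n)

# Roughly 10,000 frames of a smooth 60 FPS run, plus the two stutter patterns:
few_long_freezes   = [16.7] * 9997 + [1000.0] * 3   # three ~1 s freezes
many_short_hitches = [16.7] * 9700 + [26.7] * 300   # three hundred ~10 ms hitches
print(one_percent_low_fps(few_long_freezes))
print(one_percent_low_fps(many_short_hitches))
```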
Learn how to overclock RAM. It both disproportionately and massively smooths out frames rather than giving you that many more on average. It's great.
I can agree that picking out just a few percentiles can be frustrating, because they're not reflecting the whole experience comprehensively, but single values plucked out of the data, and so whether your "stutter problem" in-game is just above or just below that arbitrary percentile will make as much impact on the metric as the true overall pacing/experience will have. How tightly clustered or how spread out the slowness is over time is not knowable from percentiles, as others in this thread have said. There is some artificial impreciseness from plucking out a few percentiles and calling it a day.
As the video pointed out, GN does take us in for a closer look sometimes with frametime graphs, which show the whole dataset (or the whole segment of gameplay containing the performance issue plus surrounding context) for a fuller look, instead of plucking out certain percentiles. That's great. But having to go to a graph means it can't happen very often; there's not enough time to graph frametime plots of every game/setting/PC tested, so having to do a whole separate graph is a bit of a barrier to seeing the whole picture very often.
I get that percentiles, as an approximate or rough look at the whole dataset, are very very valuable. Definitely they are. Probably gets us most of the way there to where it's not always meaningfully relevant to look a lot closer in each case. But I also do agree it's frustrating to have to settle for an approximation OR else reach for a separate graph (maybe a separate graphing tool altogether) just to really meaningfully look at the whole dataset, rather than a small sample of the dataset.
How many rabbit holes do you want in one video?
Steve: ALL OF THEM
Tom Peterson is great, it was sad watching the Alchemist PR tour, hopefully BattleMage turns out better
Great video! Thanks Steve and the team. Love content like this. Also just got my commerative pint glasses today, love them!
That's awesome! Thank you so much for ordering!
Of course! Got the modmat too and looking forward to breaking it in with a new build :) Keep up the awesome content! @@GamersNexus
I tested up to the 0.6 beta. My initial impression: after a diagnostic session, I wanted to see an analysis of the logged metrics that highlighted only the relevant dips and peaks in a summary, which could be used for reference when attempting to reproduce a problem with the hardware, driver, or application under test. He also raises a good method of recognizing when the graphics card is performing optimally: when the processor workload and graphics card workload have a wait of a couple of milliseconds, but not too long. For average users that might be a bit hard to judge, so PresentMon should really have a visual cue when the processor or graphics card wait distribution is uneven, for example a simple colour icon that changes dynamically as the frames change over the session duration.
not only open software, but open hardware too. Good to see. Good job Tom
This whole discussion was legit riveting, I feel like I could watch you two talk for hours. Tom has me actually excited for things at Intel.
It's always great when Tom Peterson is on. Love learning from him!
I'd actually been thinking about this recently. I couldn't figure out how the CPU would know how long to advance the game logic without knowing how long the previous frame would be on screen for.
I figured that either I lacked sufficient understanding of the low level workings and had missed something or it simply didn't matter.
But it turns out that I hadn't misunderstood & it does actually matter.
It seems pretty obvious in hindsight but most things do, the fact the Frametime itself was so poorly understood for so long still seems pretty crazy.
I'd love it if someone could come up with a demo that allowed us to play around with the simulation time consistency in realtime so we could play around with this effect for ourselves.
At least that's easy to fix when you're doing vsync, since the simulation step would be always the same. For some games, it is also critical to avoid issues as when changing the framerate the physics will be affected and it can lead to inconsistent results. It is important for any RTS if you want your replays to work correctly.
Unfortunately, most physics engines have that part locked. Last time I looked, which was a while ago, many of them hard-coded the physics to 60fps. So if you have say 120 FPS, and watched a capture of something colliding in slow-mo, you would see it clipping for a frame then going back the next.
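For context, the usual way engines decouple this is the fixed-timestep-with-accumulator loop. A bare-bones sketch of that pattern (generic, not any particular engine's code):

```python
import time

FIXED_DT = 1.0 / 60.0  # physics stepped at a fixed 60 Hz, regardless of render rate

def game_loop(update_physics, render, running):
    """Bare-bones fixed-timestep loop: render as fast as the machine allows, but
    advance the simulation in fixed 1/60 s increments so physics results stay
    consistent no matter the framerate."""
    accumulator = 0.0
    previous = time.perf_counter()
    while running():
        now = time.perf_counter()
        accumulator += now - previous
        previous = now
        # Consume elapsed real time in fixed chunks; leftover carries to the next frame.
        while accumulator >= FIXED_DT:
            update_physics(FIXED_DT)
            accumulator -= FIXED_DT
        # The leftover fraction can be used to interpolate between physics states.
        render(accumulator / FIXED_DT)
```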
Very, very interesting and informative video! Super impressed with Tom's communication skills and ability to be an engineer but talk like a layman. Very rare.
Really great stuff, although I wish they wouldn't call a purely software metric "ClickToPhoton". By definition, that number _cannot_ represent click-to-photon latency if it's not starting in hardware at the click and ending in hardware at the monitor pixels (photons).
Animation error is an interesting metric, and makes me think of some of the tricks used on VR headsets to reduce discomfort (although they're not directly related). Because we're so sensitive to framerate and latency for something strapped to our heads, most VR headsets support rendering a "fake" frame containing the same scene data but skewed to a slightly different camera perspective so that the scene still matches your head position as your turn and move around, even if the game engine itself is updating at a slower rate. This can result in a bit of a weird situation where objects in the scene (objects, enemies) update at a visibly slow animation rate, but the scene itself still "feels" fluid because the viewport is updating more frequently. This is way better than the alternative, and just highlights that perception of a rendered scene is a whole lot more complicated than "high fps = good"!
The brightest minds on the planet are trying to make my CS2 matches as smooth as possible, and I'm very grateful for their skills 😅
No more excuses. You can only blame the chair now.
If these were the only videos you had I would be content. Interviewing these highly technical guys is so entertaining. Plz do all you can (P.S.: not to say your other content isn't great, it's just that this stuff is SO good)
I would pay for more videos with Tom for real. He's the goat
Thank you for learning with the audience and making this channel the most elite in terms of technicality in the overly saturated tech news space.
Not an Intel fan but I love Tom Petersen, he is a bundle of great information..
I like this guy. He's enthusiastic and seems to know his stuff.
There's just something so interesting about listening to two knowledgeable people talk about something very geeky. Keep Tom on the channel, he is very interesting to listen to!
Very interesting discussion. Can't wait for more of these Engineering Discussions!
We need more Tom videos on GN!!! Always a blast to hear it from the actual engineers who make it all possible.
Gotta say, it’s amazing that Intel GPU and Intel CPU are owned by the same company when they seem to be such opposites. The CPU division has been so anti consumer and the gpu division is throwing stuff out as open source and doing interviews like this. Hopefully the GPU division stays like this and doesn’t turn into Nvidia lite in 10 years
Tom's a top tier guest. Love seeing this guy deliver updates on a passion project like this, with such impactful potential. Not sure when this was filmed, but hopefully he got to see some of this good weather and not just the rain we've had.
Mr. Petersen reminds me of my friend's dad who got me into PC building in the first place. Thanks for giving us some new cool toys to play with ❤
Even though this is PR for Intel, I hope people appreciate all the work Tom has done and his overall mindset. This industry needs to be pressured towards an open-source, pro-consumer ideology as much as possible. That is not the direction most Silicon Valley executives would prefer to go. Even if you're someone who just hops on to play games and doesn't think about it outside of occasional driver updates, accessibility to diagnostic tools and advanced control over device behavior will eventually impact your experience one way or another.
Just imagine if the corporate ideology behind many products we see from Apple, Google, Samsung, Dell, etc. became industry standard for PC hardware. Intel certainly isn't perfect but it's people like Tom, Bill at AMD and others behind the scenes trying to advocate for enthusiasts that helps to keep the suits in check.
We'll never get everything we want and will get frustrated with all of them time and time again. This is why it's critical for consumers and reviewers like Steve to push back and keep this industry competitive.
The demo at the end is awesome. These tools are seriously cool. In a nerd sense of course.
It would be awesome if PresentMon could capture and output an interactive frametime chart where, when you click on a frametime spike, it opens up a breakdown of which steps of that frame took how long to complete, as detailed as possible.
That would be good not only for reviewers but to troubleshoot frametime spikes as a consumer and to optimise your settings.
You could have a "record" button within presentmon to start capturing all the details and after you have recorded some minutes of data you can close the game and open up the frametime chart. Then you manually scroll through the chart and click on any specific frame. It opens up a second window showing all the steps of creating the frame and how long each took to complete.
The steps should be colour coded:
green = normal
yellow = took slightly longer than expected
red = took way longer than expected
Now you know which step caused the spike. Then you click on the yellow or red marked step and it tells you the most common causes for it to take longer than expected.
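The colour-coding part of that idea is easy to sketch; the step names and thresholds below are made up purely for illustration:

```python
def classify_steps(step_ms, expected_ms, warn=1.25, bad=2.0):
    """Colour-code each stage of a frame against its expected duration:
    green = normal, yellow = slightly longer than expected, red = way longer."""
    report = {}
    for name, actual in step_ms.items():
        ratio = actual / expected_ms[name]
        report[name] = "green" if ratio < warn else "yellow" if ratio < bad else "red"
    return report

# Hypothetical per-frame breakdown (milliseconds) for one spiky frame.
print(classify_steps(
    step_ms={"simulation": 2.1, "render submit": 3.8, "GPU work": 21.0, "present": 0.4},
    expected_ms={"simulation": 2.0, "render submit": 2.5, "GPU work": 9.0, "present": 0.5},
))
```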
Great topic! Very interesting to watch and can't wait to see the improvements available to the public.
Too bad there's nothing on Battlemage yet...
I've kind of known this for a long time: watching an animation or film shot at 24 fps is worlds apart from trying to play a game that is apparently running at anywhere near that number.
These types of videos are so freakin cool. I love getting a small little "peek" at the actual engineers and developers and what they are doing and working toward. It's so enjoyable seeing his enthusiasm and excitement for it all.
RIP The Tech Report. Scott Wasson's frame time work was HUGE for GPU reviews and finally looking into this performance metric.
Very excited about the opensource LDAT, you saved me some dev time:)
Plus, someone else has double-checked the work. Accidentally misreading a scope trace sucks.
A nice video on simulation time error is "Myths and Misconceptions of Frame Pacing / Alen Ladavac, Croteam / Reboot Develop Blue 2019". It would be important to give developers the tools (e.g., a way to measure display time) they need to avoid simulation time errors (as it is not always possible to pre-emptively avoid it).
ua-cam.com/video/n0zT8YSSFzw/v-deo.html
Thanks guys … you are one of the only channels not pushing clickbait … really appreciate the valuable content you provide.
Very cool! The discussion on GPU Busy was a game changer for me! Allowed me to really understand what to be looking for and how to understand performance. Really excited to try out the beta.
This dude is amazing! really has me waiting for battlemage now
what a guy! I can listen to tom talk for days.
I LOVE the "AMD Ryzen" hovering over Tom's shoulder the entire video hahaha
I love seeing you guys constantly try to raise the bar with increasingly finicky/complicated methodologies.
My favorite part of this vid is that Tom, The Tech Expert who is moving the tech ball forward in a big way by developing Presentmon to provide an in-depth analysis of what's actually happening inside a computer when it's running games, uses a white board.
Letting Tom Petersen just talk is one of the BEST things Intel can do. VERY cool stuff.
TBH this is an educational, university-level review, talk, and discussion, and damn, I wish I had time to watch it completely.
I wanna see Tom and Blur Busters Chief in the same room. :O
That explanation about short vs long frames being perceived as blur really got me thinking about how big the relationship is between a great engine, great hardware and a great display.
Was some knowledge lost between the CRT platformer era and now?
Motion clarity issues from sample-and-hold displays are caused by how our brain works. It's not that the knowledge is lost, it's just hard to blank frames on sample-and-hold displays.
Interesting! We will have to wait and see if this catches on. Holding my thumbs for it to succeed, since everyone benefits from more accurate reviews!
These are some of the most informative videos I've seen on UA-cam. Great guest and great presentation as always.
I'm working on software technology for this problem, and yes there are more steps after this: particularly knowing ahead of time how long it'll take to compute the next frame.
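A common first stab at that prediction problem is just a smoothed history of recent frame costs. A minimal sketch (my own toy example, unrelated to the commenter's actual software):

```python
class FrameTimePredictor:
    """Exponentially weighted moving average of recent frame costs, used to
    guess how long the next frame will take before it has been rendered."""

    def __init__(self, alpha=0.2, initial_ms=16.7):
        self.alpha = alpha           # how quickly the estimate reacts to changes
        self.estimate_ms = initial_ms

    def observe(self, measured_ms):
        # Blend the newest measurement into the running estimate.
        self.estimate_ms += self.alpha * (measured_ms - self.estimate_ms)

    def predict(self):
        return self.estimate_ms
```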
Nice shoutout and thank you again Scott Wasson! RIP TechReport
The simulation/animation time error is something really interesting to consider when analyzing stutters. Sometimes the framerate is really good, but the animation or camera movement is completely borked. I wonder if that is what occurs in Jedi Survivors, in addition to shader compilation and traversal stutters. Even though the frame rate is high, the animation is strangely jerky and stuttery. Digital Foundry's Alex Battaglia talks about it, and shows what it looks like in this video: ua-cam.com/video/lsskwVyPoxs/v-deo.html (Hopefully, I got the time stamp right..) It would be really nice to be able to measure and quantify phenomena like that.
Yes! Steve and crew, please run this Jedi game through the simulation time error beta test if you can! That game is one of the worst "this just doesn't feel right" games in a long time.
Man I really love the talks Tom does, as a Software guy but that finds HW fascinating this is like a dream class to me.
24:48 it’s so satisfying to see GPU metrics that are sooooooo smooth!
A lot of this was pretty advanced but cool to watch. I will test the program, but I'm a little intimidated based on a lot of the language and science. This video probably went over the head of a major portion of the audience, but I'm still glad you made it. I wish there was a summary or idiot version of these videos for us average folks.
I love these deep dives. I know it may not speak to everyone, but to me at least it really does add the extra value of education about computing to the channel that makes GN so special and authentic. Thank you for the informative content and the work it took to pull this off.
As an engineer who has a great personality (I've focused on that in my career as it's really important to accelerate your career) I wish we could talk more to the end customers. There's a duality of purposes. Customers LOVE it, because we have the answers that marketing/management just won't have. Also, it helps us because we don't get a filtered response from customers on what needs improving. We get the raw data from customers.
It's good to see old Tom again; I remember him working at NVIDIA for years.
That sponsor spot placement was diabolical, Steve + Team.
Would love some demonstration videos of game having bad cpu or gpu performance, and how it looks when tweaking with the setings
YES! That would bring it home. AGREE👍
This PresentMon tool is absolutely genius! Thanks a lot Tom for supplying us with such in depth analysis!
That's an absolutely amazing tool.
please add "VRAM filled" & VRAM related thiings to the mix of overlay
Good to see Tom, and good to see PresentMon keep getting meaningful updates.
I don't know if it's related to Animation Time Error, and I don't know if it's a thing that happens in almost every game, but I remember when playing games without enough VRAM or with very low inconsistent FPS, sometimes the games didn't just feel low FPS, they didn't just feel stuttery, but the animations of the characters felt jumpy (even worse with the physics), it seemed like the body coordinates of their animations moved inaccurately and like online rubberbanding. Kinda like PS1 polygon wobbling but based on movement.
Tom is AWESOME!
He's the kind of tech guy I could hear talking all day long.
YO IT WOULD BE REALLY COOL for you to take the PCB and follow it through the whole process, then use it for the thing. I know we have all the bits and pieces, but just following it through would be GIGA COOL.
As a programmer, I can confirm that animations are paced based on how long the last frame took, since the program is not a time traveler and doesn't know how long the current frame will take by the time you see it. So if frame time variance is extreme, you can see jerky-looking animations because the assumed frame time is inaccurate.
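(A minimal sketch, my own illustration rather than any particular engine's code, of the loop pattern described above: animation advances by the previous frame's duration. When one frame suddenly takes much longer, the amount of motion drawn no longer matches how long that frame is actually on screen, which is roughly the mismatch the "animation error" discussion is getting at.)

import time

def run_loop(frame_costs_s):
    # Object moving at 1 unit per second; animation advances by the PREVIOUS
    # frame's duration because the engine can't yet know the current frame's cost.
    position = 0.0
    last_frame_s = 1 / 60          # initial guess for the very first frame
    for cost_s in frame_costs_s:
        position += 1.0 * last_frame_s     # advance animation using last frame's time
        time.sleep(cost_s)                 # stand-in for simulate + render + present
        actual_s = cost_s
        # If actual_s differs a lot from last_frame_s, the motion drawn this frame
        # is wrong for how long it stays on screen -> visible jerk/stutter.
        print(f"animated for {last_frame_s * 1000:5.1f} ms, "
              f"displayed for roughly {actual_s * 1000:5.1f} ms")
        last_frame_s = actual_s

# A single 50 ms spike in an otherwise steady ~16.7 ms stream:
run_loop([1 / 60, 1 / 60, 0.050, 1 / 60, 1 / 60])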
Fraps is now under the yellow bus beside the green tree.
very cool. This will definitely change how I test PC builds.
Completely off-topic, but I love that they used the Kuma trailer as an example for fixed animations. Bear gang
Shout out to Scott Wasson! The Tech Report was my favorite tech site back in the day. It was sad when he left and it fell apart. He really helped move things forward.
Tom is awesome. Also, love these types of interviews. Glad to see more of them.
That's why I love you. Now we get frame times and stuttering as a category in GPU reviews. It blows my mind how badly some games run on certain GPUs. In my case it's an AMD GPU, and it only got better after setting shaders to always-on in the registry. I also saw insane performance in games that pre-cache their shaders, like "Lies of P" for example.
This was an awesome and insightful discussion!
This is excellent information and insight, as "simulation time error" pretty much sums up my online gaming experience. 9900K at a 5 GHz OC, 3080 Ti, 64GB 3600 DDR4, Samsung 32" Odyssey 240 Hz. Escape from Tarkov player. Where did I go wrong lol....
I'm glad FPS is finally being dethroned as the measurement standard for benchmarking graphics cards. For so long it's been argued: "who cares how NVIDIA or AMD are obtaining higher frame rates; raw TFLOPS doesn't tell the whole story." But neither does FPS, and it's nice to see common sense prevailing.
Man. I don't even care that much for the topic, but I could listen to these two talking all day ❤
Always a good day when Tom visits the channel.
Thanks, Tom Petersen, for everything you explain. The only one with better than a 50% chance of telling the truth: humble, informative, and earnest. The lack of use by reviewers of Intel's free, proper metrics tools is upsetting. Who cares about the 99th percentile without a scale to compare against and make it mean something? Is 1% low not useful at all?
Engineering Discussion needs to be a recurring topic. Please ask the algorithm gods to be grateful for informative content like this instead of the usual "100%, call it now, buy now, sponsored link to follow" videos...
Dang! Those histogram charts look awesome!
Show the peaks! Show the widths! Explain distributions!!! Dive into bimodal behavior. THIS is the real performance data. (Hard to explain, but if anyone can, GN can.)
Tom is awesome! Intel should be grateful to have him!
Awesome! Kudos to him for coming on! Thanks for the video; that's awesome software to not only monitor and compare component combinations, but much, much more! Sweet.
Intel needs more of this. It’s so good to hear from the engineers. The financial news about Intel is so gloomy but the engineers are killing it.
NVIDIA had a prototype feature in some driver versions to fix exactly this, called SILK. It was later removed, but in the games where it worked it was amazing.
I need a new GPU in May. I hope Battlemage will be out by then. If not, I'll get an A750.
Given how few leaks there have been, it seems really unlikely that Battlemage will launch by May. But Intel did, for whatever it's worth, officially tell us that it's coming. They just don't have a date for the public yet.
Indications have been Q2, but it could be _late_ Q2. Can you wait a ~month?
Integrated graphics have come a long way. I feel like you could get by until BattleMage releases.
Some feedback for the tool: support custom triggers that either pop up the overlay or write the configured datapoints to a log file. A trigger could be something like the ratio between GPU Busy and the total GPU time, or when the CPU has too little wait time. Simply put, noteworthy events you'd consider worth investigating, changing settings over, or upgrading hardware for.
This would be useful so you don't have to constantly watch the overlay during a session; you could focus on the game and only take a look if something needs attention.
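(For what it's worth, something like this trigger idea can be roughly approximated offline against a PresentMon-style CSV capture. A sketch follows, with the caveat that the column names are placeholders I've assumed, not guaranteed headers; they differ between PresentMon versions, so check your capture file and adjust.)

import csv

FRAME_TIME_COL = "msBetweenPresents"   # placeholder column name - verify in your capture
GPU_BUSY_COL = "msGPUBusy"             # placeholder column name - verify in your capture
THRESHOLD = 0.6                        # flag frames where the GPU was busy < 60% of the frame

def find_suspect_frames(path):
    """Return frames where GPU Busy is a small fraction of the frame time,
    i.e. the GPU likely sat waiting on the CPU or on presentation."""
    suspects = []
    with open(path, newline="") as f:
        for i, row in enumerate(csv.DictReader(f)):
            frame_ms = float(row[FRAME_TIME_COL])
            busy_ms = float(row[GPU_BUSY_COL])
            if frame_ms > 0 and busy_ms / frame_ms < THRESHOLD:
                suspects.append((i, frame_ms, busy_ms))
    return suspects

for idx, frame_ms, busy_ms in find_suspect_frames("capture.csv"):
    print(f"frame {idx}: GPU busy {busy_ms:.2f} ms of {frame_ms:.2f} ms total")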
Yes, this is exactly how it should be looked at. My previous comment on the last video about this subject was to load the GPU to 100% or slightly over. That should be the baseline for the worst scene in a game; once easier scenes are rendered, the CPU should have 20-30% of headroom to pick up the FPS, since it should be running at roughly 60-75%.
This ensures the hardware is balanced correctly and passes work back and forth with as little wait time as possible, taking into account an unbalanced workload such as a game, which fluctuates in how it loads the GPU and CPU within their own performance envelopes.
This means that if you have a weak CPU and a much stronger GPU, load the GPU until it's at 100%. This is where you use input latency to gauge how overloaded the GPU actually is.
High input latency means the GPU is too overloaded. Usually in this case it helps to lower settings like shadows one step, or in some games Ambient Occlusion. Settings like texture quality, and in some games post-processing, can be used to stabilise frame fluidity if set one or two steps higher, and sometimes set to max.
In some games, with texture quality maxed, you could set everything else to medium or low, then raise each setting individually, aiming for frame fluidity rather than maximum frame rates, and then adjust using input latency. High input latency probably means the CPU is waiting more than the GPU, i.e. the GPU is holding up the animation and total FPS. The frames will be super smooth, but the latency will be very high, so it's not great for FPS shooters.
The opposite creates stutters: the weak CPU is overloaded because the settings are too low, and the GPU is no longer holding the rope taut by being fully utilised. This is where the frame graphs start spiking and going all over the place.
The only reason I can think of is that CPUs clock a lot higher than GPUs in general. If a GPU fluctuates, it goes between, say, 1000 MHz and 1600 MHz, whereas most CPUs will go between 2 GHz all the way up to 5 GHz, so maybe it's a clock scaling issue: the bigger the gap between min and max clocks, the bigger the fluctuations you can have, and this is reflected in the frame graphs. By maxing out the GPU you're effectively setting a limiter on the CPU. Whatever it is, smoother frames come from a maxed-out GPU and a CPU at roughly 60-75% (a rough way to sanity-check this from logged frame data is sketched after this post).
Like a synchronised swimming team with a new member: the new swimmer messes up and the faster, more efficient swimmers have to stop to get the new swimmer up to speed. It's stop and start, the same as stutter.
The ideal situation would be like two bodybuilders passing a boulder between each other, except when the boulder passes from the strong GPU bodybuilder to the weaker CPU bodybuilder, the boulder shrinks to accommodate the weaker one. Once the smaller boulder gets passed back, the workload is again matched to that bodybuilder's performance.
This ensures that both bodybuilders can do the task at the same speed and will wait the exact same time between passes, even though the loads they are working with are different.
Strange metaphor, but this is the way I try to run my games. A strong GPU with a weak CPU isn't the end of the world: crank the settings to match your GPU's performance and take the weight/wait (pun intended) off of the CPU so that it can run within its limitations. The stronger component has to be slowed down with load so the weaker part can run at the same 'speed' and keep up.
Like tying 100 kg weights to Usain Bolt so that I can run the same speed as him over 100 m.
Sorry for the long post, and great video. Appreciate it.
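(Picking up the "maxed GPU, ~60-75% CPU" idea above: a rough, hypothetical way to sanity-check it from logged data is to count how many frames were GPU-limited versus waiting on something else. This is only an illustration; the 90% cut-off, the function, and the numbers are arbitrary assumptions, not anything PresentMon or the video prescribes.)

def classify_frames(frame_times_ms, gpu_busy_ms, gpu_limited_ratio=0.9):
    """Count frames where the GPU was busy for almost the whole frame
    (treated here as 'GPU-limited') versus frames where it wasn't."""
    gpu_limited = 0
    for frame_ms, busy_ms in zip(frame_times_ms, gpu_busy_ms):
        if frame_ms > 0 and busy_ms / frame_ms >= gpu_limited_ratio:
            gpu_limited += 1
    total = len(frame_times_ms)
    return gpu_limited, total - gpu_limited

# Made-up numbers purely for illustration:
frames = [16.7, 16.9, 25.0, 16.8, 40.0]
busy   = [16.0, 16.5, 12.0, 16.2, 15.0]
limited, not_limited = classify_frames(frames, busy)
print(f"{limited} GPU-limited frames, {not_limited} frames likely waiting elsewhere")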
This is amazing. I thought I was crazy seeing all the microstutters in my games. Thank you, Tom, for helping move this problem forward.