Man, the fact that 16x Gen 2 didn’t really impact performance much is insane.
Now try to go from gen 4 to gen 2 with a 4x RX 6500xt lol
@@hohohodrigues no do this same video with a RX 6400 LOL I think he has one of those already? Or TechDweeb could do it. He has one.
I think it would be different with Cyberpunk, because that hobbles GPUs even at full bandwidth, lol.
Not really, games only need to load most of their assets and shaders onto GPU once during level load and after that they send minimal update information on object position/status which is tiny in comparison. So even older PCI-E standard or half the lanes can do the job.
@@harryshuman9637 Exactly. If I recall correctly, the RX 6400/6500XT doesn’t take a *massive* hit until you exceed the VRAM capacity and force the card to go over the narrow PCIe link to system memory… which causes significant drops on just about all cards.
Bigger problem is that there’s just not enough shaders, memory, memory bus, etc. on that card.
Holy crap, you managed to get 80% GPU utilization on a 4090 in Half-Life 2, now that's legendary.
This is the kind of content that brings us all here. Master piece.
Beautiful 🥹
I think you mean. "Mastah Peece !"
Couldnt agree more lol
Was going to write something in the likes....but this comment sums it way better than I could...just perfect
@@Gatorade69 it's a
"M A S T A P I E C E"
Like tying Usain Bolt's shoelaces together after blindfolding him and smashing a pie in his face, then telling him to run an obstacle course covered in molasses
I think it's also worth noting that the first PCIe slot is linked directly to the CPU, whereas the other slots have their lanes running to the chipset, which then connects to the CPU. So the limitations in this test aren't just from the drop to x4 or x1, but also from the added latency of not having the shortest path from GPU to CPU.
That makes sense so thanks for clarifying that 😊
FYI - The glue used to hold the CPU mounting bracket is the same glue they use to seal potato chip bags with.
Edit: In keeping with the pure sarcasm this channel consistently provides all of us, my statement follows in the same footsteps. I know for a fact my answer is not true, so don't take it literally or personally. Just having some fun.
is that true lmao
Seems your bags are very different than mine, I can open them with just two fingers of each hand and pulling 😅
@@Kyomara1337 very different amounts, same stuff
@@wavy1649 no
At least you can use scissors on the chip bag.
I’m at a point where i can watch the video with no sound and just subtitles and i’m still hearing Dawid’s voice and his usual music in my head.
Awesome, an interesting look into PCIe bandwidth through a hilarious lens! Always love your videos Dawid!
Dear Useless Banana,
Do not disparage yourself for your current predicament. Despite your perceived lack of utility, you possess an array of untapped potential. You can be sliced, blended, and incorporated into a variety of smoothies, baked goods, and other culinary concoctions. Furthermore, your vibrant yellow hue and saccharine flavor profile have the capacity to bring elation to the palates of many. Thus, do not cower in dismay, dear banana, for you are esteemed and cherished.
Yours truly,
A Friend
We all have that one friend
Why does your comment sound like it was written by ChatGPT?
@@aurelia8028 Maybe I'm secretly a robot? Don't tell anyone
For those that missed it, the reason the impact stays low all the way down to Gen 2 is Resizable BAR and how the large assets are stored and transferred. When he opened GPU-Z you could see it was enabled. In essence this dramatically lowers the number of transfers, and with it the saturation of the PCIe link.
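A rough back-of-the-envelope sketch of that in Python (the 8 GB asset figure and the chunked-upload model are just illustrative assumptions, not anything measured in the video):

```python
# Why Resizable BAR means fewer, bigger transfers: without ReBAR the CPU only
# sees VRAM through a 256 MB aperture, so big asset uploads get chunked and
# remapped; with ReBAR the whole 24 GB of a 4090 is mapped at once.

LEGACY_BAR_MB = 256          # classic BAR window size
REBAR_MB = 24 * 1024         # whole 24 GB of VRAM mapped with ReBAR
level_assets_mb = 8 * 1024   # hypothetical 8 GB of level assets to upload

chunks_legacy = -(-level_assets_mb // LEGACY_BAR_MB)  # ceiling division
chunks_rebar = -(-level_assets_mb // REBAR_MB)

print(f"Without ReBAR: ~{chunks_legacy} aperture-sized chunks")
print(f"With ReBAR:    ~{chunks_rebar} mapping covers the whole upload")
```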
"it feels a bit like trying to observe whether light is a particle or a beam, here".
Ah yes the line that truly allows me to second-hand experience his pain.
Great vid Dawid! Some suggestions for curb-stomping the 4090 even more: use an M.2 to PCIe converter, and you could also underclock it. Then potentially even Half-Life 2 may struggle to run.
His grand idea isn't to underclock it to lose performance, it's seeing how shitty he can make it run at its stock configuration. That idea doesn't sound half bad though.
Yeah, using the M.2 for a graphics card actually is something that could have realistic applications. Also, I could have sworn Dawid actually tried it in the past.
@@herrakaarme Yeah, if I remember correctly he did it with an RTX 3070 and a Samsung all-in-one computer.
He has a PCI to PCIE connector that would also work
An M.2 slot is just a miniature PCIe x4 slot.
Gotta say, hats off to NZXT for implementing a feature on their board that allows this insanity.
Now this is what I love! You should kneecap the ram speeds too, and run it single channel, just make the system absolutely horrible.
This ahah
The gaslighting is likely due to it having to run through the chipset. So it's not just PCIe gen 1 by 4 or by 1 it's running to the chipset and then to the CPU.
Fun demo, but it's also a good demonstration of how good some old PCs can still be. If you're still running something with an X99 chipset (or even Craft Computing's favorite X79), you can upgrade the CPU for cheap (there are still some really quite good Xeons for
You may get lots of PCIe Gen 3.0 lanes from X99 and quad-channel RAM, but the CPU will be slow for modern games.
On the other hand, if you game in 4K you shouldn't feel anything, since you get into graphics-card-bottleneck territory.
But X79 with DDR3 and PCIe Gen 2.0 is utter trash.
X99 is the minimum, and it's a good platform. It was my first 6c/12t CPU; when most people got a 4750 [or whatever was the highest quad core], I paid the same money for a 5820K. It was literally the same price in a shop, and the mobo was maybe a tad more expensive, but it had quad-channel RAM. It was actually a worthwhile upgrade, unlike now.
@@NoBodysGamer I wouldn't say X79 is trash by any means. Craft Computing just did a build with it. His main limiting factor seemed to be the RX 570 4GB video card he used. I certainly wouldn't call it high end, and it's completely unsuitable for 4K and some 1440p, but it seemed to be doing OK as a low-end gaming rig.
PCIe gen 2.0 certainly is much lower bandwidth, but, as Dawid demonstrated, even a top end graphics card running on a fast modern system was barely bottlenecked by it. Drop down to say an RTX 3070 and you probably won't even notice a difference. It's unlikely any sane person would pair a top of the line current gen video card with such an old CPU anyway.
And the X99 might be old, but it'll still game quite nicely. Drop an E5-1660 v4 (reasonable price, 8 cores, 3.8GHz turbo) in that old X99 and it'll keep up pretty well with a lot of modern stuff. Looking up benchmarks, it's a little slower in gaming than say an R7 5800X, but still quite decent. Compared to an i5-12400, the old Xeon benchmarks only about 30%-35% slower overall (better multi-threaded, worse single).
@@ccoder4953 6500XT has joined the chat. Even on 3.0 x4 it was already quite clearly bottlenecked (it only has a x4 link btw)
@@DLTX1007 Quite true - and it was a major criticism of that card. It would definitely be a bad choice for one of those systems. But as long as you know that, there are other choices that would be OK.
@@NoBodysGamer The X99 CPUs hold up OK in modern games because they were relatively high core count for the time. Modern games can use the extra cores, which makes up for the relatively slow speed of the CPUs. They still do OK. They aren't going to push 300 CPU FPS in modern titles for sure, but they can easily do the 150+ range. I'm typing this message on my old X99 system and it still holds up.
Would be cool to see how a card with less VRAM would perform in those scenarios.
i half expected ya to use a m.2 to pcie adapter. This is even better XD
I kind of want to know how the gaming improves in the 1x slot, when you start raising the pcie version again :D does it become playable?
yes, very, like smoother
A PCIe x1 Gen 4 slot is basically the same bandwidth as x2 Gen 3, x4 Gen 2, or x8 Gen 1.
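A quick sanity check of that equivalence, using the usual approximate per-lane rates (these numbers are rough, and real-world throughput is a bit lower):

```python
# Approximate usable bandwidth per PCIe lane, in MB/s (after encoding overhead).
PER_LANE_MB_S = {1: 250, 2: 500, 3: 985, 4: 1969, 5: 3938}

def link_bandwidth(gen: int, lanes: int) -> int:
    """Rough one-direction bandwidth of a PCIe link in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

# The equivalence from the comment above: Gen4 x1 vs Gen3 x2, Gen2 x4, Gen1 x8.
for gen, lanes in [(4, 1), (3, 2), (2, 4), (1, 8)]:
    print(f"Gen{gen} x{lanes}: ~{link_bandwidth(gen, lanes)} MB/s")

# And the extremes Dawid tested: a full Gen4 x16 link vs Gen1 x1.
print(f"Gen4 x16: ~{link_bandwidth(4, 16) / 1000:.1f} GB/s")
print(f"Gen1 x1:  ~{link_bandwidth(1, 1)} MB/s")
```

Which lines up with the video: it took cutting both the lane count and the generation before things really fell apart.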
Bear in mind the bottom pcie slots probably go through the PCH, which will bottleneck the speeds further than if you could limit the lanes coming directly off the cpu.
Most entertaining youtuber in the world ....love your work
oh no he got a boo boo kiss it for him kiss it!!!!
I think most of the latency problem came from running the GPU over the chipset. The 24 GB of VRAM really helps with the low bandwidth, because the GPU doesn't have to read much from system RAM across the PCIe bus.
can't wait for the follow-up
dawid dropped them quantum physics references
The description scared me...should have had an annotation something like "no video cards were harmed in making of this video"
Love the science behind it though!
The irony of "secure chokepoint kilo" when you've finally managed to strangle the card.
2:45 and that was why Dawid never played the guitar ever again
when dawid posts, the elma voices disappear & humanity feels peace
This video inflicts so much pain but brings so much joy.
This is a nice demo of how good PCIe backwards compatibility is. It will be slow, obviously, but it will work.
Came for the blood. Left satisfied.
Hahaha
Loving the content. Started getting into PC gaming the past few months and this channel helps a lot. Been building my setup slowly, bit by bit, and the info in your videos has definitely helped me down the right path. Not to mention they're hilarious, better than TV content, that's for sure!
I like how you say NZXT followed by N7 ZED 790 lol
I needed this guide, can’t wait to try it out
0:24 this whole channel in a nutshell
Hi @DawidDoesTechStuff and everyone everyone else!
Hello
Heyyyyy
the RTX 4090 TI : AHHH PLEASE STOP
PLEASE STOP IT
AHHH IT BURNS IT BURNS
PLEASE MAN I AM BEGGING YOU TO STOP
AHHHHHHH
*shuts down because of overheating*
Only Dawid brings us this kind of stuff. I love it.
Remote into a PC in Turkmenistan, using dial-up. HL2 will never be more glorious.
Maybe that dinosaur connector will bottleneck even the 4090 at gen 1 1x?
Dawid dunking on MSI continues lol. It's funny they thought it was a good idea to have him review their product.
You managed to get last-gen console performance out of a 4090. You must be one of the seven lords of hell!
There are PCIe to PCI bridges... so you could try running it in a 32-bit PCI slot. Be cool to see how an AGP connector worked, but I don't know if there's a bridge to make that unholy union work.
There has to be.
That is the true goal of Dawid's diabolical journey.
I don't think a 24 GB card would work in a 32-bit PCI slot. The system definitely won't even POST if the motherboard is so old that it has no Above 4G Decoding support.
@@RicochetForce If there's a way... I'm sure he'll figure it out.
@@gamingthunder6305 Fair point. Maybe... maybe not, but it's worth a shot.
The amount of kneecapping, that poor brick took... Glorious AND terrifying
Nice. And interesting, especially when you consider eGPUs via Thunderbolt make do with ~2.25 GB/s, which is just a tad more than x8 Gen 1.
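Rough numbers to back that up (my own back-of-the-envelope, assuming real-world Thunderbolt 3 eGPU enclosures deliver roughly 2.2-2.5 GB/s of PCIe payload):

```python
# Comparing rough usable eGPU throughput over TB3 with an x8 Gen1 link.
tb3_egpu_gb_s = 2.25          # typical real-world TB3 eGPU payload bandwidth
gen1_lane_mb_s = 250          # approximate usable bandwidth of one Gen1 lane

gen1_x8_gb_s = 8 * gen1_lane_mb_s / 1000
print(f"TB3 eGPU: ~{tb3_egpu_gb_s:.2f} GB/s")
print(f"Gen1 x8:  ~{gen1_x8_gb_s:.2f} GB/s")   # indeed just a tad less
```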
This was awesome! I honestly can't wait till the follow-up video.
Dawid, you are the only UA-camr whose videos are so interesting, that I don't watch them on 2x speed.
I'm honestly very surprised. This shows that Egpus might actually not be too handycapped, which I'm excited about.
I always love videos like this. It just shows how unsaturated the PCIE lanes actually are. I tell people this and they just don't believe me.
Yeah. And here I was worried that choosing a 4.0 mobo for my recent upgrade might be a regret down the road if I wanted to get a 5.0 GPU. Granted... there might be a huge leap in tech that helps GPUs saturate all of Gen 5 in its lifetime... but it doesn't seem likely with the power limitations of current gen stuff lol.
@@Galiant2010 PCIe 5.0 won't matter for graphics cards in terms of gaming for a long time, simply because if you have lots of VRAM on the card like the 4090 here, most game assets will be cached in VRAM and very little data will be transferred over the PCIe bus during gameplay. x16 Gen 4 will be more than enough for gaming for years to come. It will matter for non-gaming workloads such as ML training, where you're transferring large amounts of data between system RAM and the graphics card routinely.
Where the hell are Manscaped and Linode, browsk?? Literally all of his videos feel incomplete without 'em. Who agrees with me?
I wonder if there could be a follow up video, to test if a less capable card could match the performance of the 4090 if it is limited to a Gen 2 or Gen 1 slot.
Or maybe what card is the max effective for each configuration?
Like a 4090 is fine in a Gen 3, but if you only have Gen 2, it is not worth buying higher than a 3080, or maybe an RX 580 for Gen 1?
The Ventus up close looks very classy!
Dawid should benchmark more games
Only Dawid makes me laugh with tech stuffs. The most lovable guy from Canada.
Thank you from a Japanese living in Germany!
I've been depressed because I've gotten covid. The virus has damaged me not only physically but also psychologically. I can hardly breathe, smell, eat and so on. But now, you brought back laughter into my life! and I'm feeling like getting back my energy to live again! Thank you.
Another way is to play an unoptimized modern game without dlss or fsr. Great video btw
There's probably someone already commenting this, but a heat gun on low would've helped that backplate come off way easier. Having a heat gun in the tool chest is invaluable in general.
The thing is, as long as you have 16 lanes, even Gen 2.0 isn't going to destroy performance. The trouble arises when the GPU is wired at x8 (RX 66XX series, RTX 3050); while in a Gen 3.0 slot they can take a measurable hit in a few games (like Hitman), it still holds up well. Drop those in a Sandy Bridge system and you are in a world of pain, the same as the RX 6500/6400, which are x4 cards, in a 3.0 slot.
The funny thing about MSI is that they hardly ever seem to mention the Ventus cards, but they put one in their "premium" prebuilt.
Possibly an idea for the follow-up:
With the iGPU turned on and the monitor connected to the video output on the motherboard, you can still have the GPU do all the heavy lifting, but then output the picture via PCIe for the iGPU to display. Forcing a 4K video signal through the already limited PCIe Gen 1 x1 throughput may have quite an impact?
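A rough sketch of what that would cost, assuming an uncompressed RGBA8 framebuffer has to cross the link for every displayed frame (my own math, not from the video):

```python
# Cost of copying every finished 4K frame back over the PCIe link to the iGPU.
width, height = 3840, 2160
bytes_per_pixel = 4                      # assuming uncompressed RGBA8
frame_bytes = width * height * bytes_per_pixel

gen1_x1_bandwidth = 250e6                # ~250 MB/s usable on PCIe Gen1 x1

print(f"One 4K frame: ~{frame_bytes / 1e6:.0f} MB")
print(f"60 fps of 4K: ~{frame_bytes * 60 / 1e9:.2f} GB/s needed")
print(f"Gen1 x1 tops out at ~{gen1_x1_bandwidth / 1e6:.0f} MB/s "
      f"(~{gen1_x1_bandwidth / frame_bytes:.1f} raw frames per second)")
```

So the link alone would cap it at a single-digit frame rate before the GPU even gets to do any work.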
Dawid, could you try the following?
1. limit the power % in Afterburner,
2. run OC Scanner at that percentage,
3. benchmark, at decreasing power limits
I've done this with my 2070 Super and kept 98+% of the performance at 65% of the stock power draw - curious to see how over-powered the 40-series cards are😁
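A minimal sketch of that sweep from the command line instead of Afterburner, assuming an NVIDIA card, admin rights, and a hypothetical run_benchmark.py that prints an average FPS (the OC Scanner step is left out; swap in whatever benchmark and tuning you actually use, and note that cards have a minimum power limit you can't go below):

```python
import subprocess

def query(field: str) -> float:
    """Read a numeric field for the first GPU via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", f"--query-gpu={field}", "--format=csv,noheader,nounits"],
        text=True)
    return float(out.strip().splitlines()[0])

max_watts = query("power.max_limit")

for pct in (100, 90, 80, 70, 65):
    watts = max_watts * pct / 100
    # Set the board power limit for this run (requires admin privileges).
    subprocess.run(["nvidia-smi", "-pl", f"{watts:.0f}"], check=True)
    # Hypothetical benchmark script that prints an average FPS.
    result = subprocess.check_output(["python", "run_benchmark.py"], text=True)
    print(f"{pct}% power limit ({watts:.0f} W): {result.strip()}")
```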
Love your videos Dawid, but next time I recommend using a game that isn't so CPU heavy when testing GPUs, I know for a fact dropping to Gen 2 should have dropped performance more than it did, because I remember putting a Gen 3 card in a Gen 2 mobo and having a terrible time back in the day.
This is awesome. You showed that GPU!
This is actually useful, since in Linux you can have two users using the same PC at the same time, but both need their own GPU. And you should expect that you only have an x4 slot from the chipset available for the second GPU. The second user probably isn't getting the 4090, since at that GPU price range it would make sense to just have a full second PC. But it's good to know that it doesn't really hurt the performance.
A second interesting thing that I've done was using a 2070 Super with a 4K TV on my i7 920 for a year. Now, putting a 4090 on some OLD hardware with a high-resolution display and testing which settings and which games have CPU bottlenecks, and whether you can actually make some games playable, would be interesting content.
Oh, and it isn't just frame rate: in Path of Exile, I had issues where it took time for assets to decompress when getting to new areas, so there was a real delay before I could see some enemies.
Not gonna lie, but I wonder whether you can make a video comparing GPUs on the Gen1 1x slot, that would be fun.
it's that evil PCIE converter... isn't it?!
I guess so but hey lets see
Edit: no it wasnt
4090 at Gen 1 x1 to Dawid: Sir, I would like to go back into my original home this instant, please 🥺
MSI sends Dawid a $5k prebuilt...
...Dawid bins the motherboard, notes the dirt-cheap RAM, and the less said about the case the better.
Just awesome, MSI.
If the glue proves too stubborn, try heating it up with a hair dryer. This should help loosen up the glue.
Maybe The Witcher 3 "Next-Gen" update was accidentally made a first-gen update and somehow makes your GPU run on PCI-E x1 Gen1? Cause the framerates I saw are finally pretty similar to what almost any GPU gets on that freak of nature
The NZXT N7Z790 (it's a monitor) looks amazing.....then there is the dumpster green naked ram.
Amazing pairing
If you really wanna minimize performance out of the 4090 you should just set it on the table next to the test bench. I'd be really surprised if you could get more than 8 FPS out of it that way.
This video reassured me that when I eventually get something like a 3060/RX 6600 XT that's PCIe Gen 4, I don't have to worry with my Gen 3 motherboard. Cool real-world test with some menacing crippling happening later on xD
I will not rest until I see Dawid destroying an RTX 4090's performance on Half-Life 2.
Maximum carnage...lmao! Well done Dawid! Luv it!!!!
Maybe anticlimactic, but for PCIe bus topics it would be good to show the GPU bus utilization in the Afterburner overlay.
Get one of those USB to PCI cards for mining! That should be the worst possible connection
Nothing quite like waking up with a cup of coffee and a new Dawid video.
What a dumb thing to do, I love it.
Yeah, I found this out while using eGPUs: the frame rate isn't really affected by available lanes. At most, after switching to an eGPU, I lost 15 frames in BL3. Games don't use every lane.
The cloud gaming experience... without the need for the online service part XD
Dude, I subscribed just for the amazing one-liners.
What will be a more "archaic" connection? A dodgy NVME to PCI-E adapter? A chain of PCI-E risers plugged into PCI-Ex1? A PCMCIA to PCI-E conversion card?..
Have you tried unplugging the PSU cables? If you do that, the card will have basically only what the PCIe slot has to offer, which might squeeze out even less performance.
It may be that the amount of VRAM on the card and the generation of the PCIe lanes combined matter more. Hypothetically, the same graphics card running on Gen 3 x8, one with 4 GB of VRAM and the other with 8 GB, could behave differently. The card with less VRAM needs to swap textures etc. more often and gets slowed down, whereas the card with more VRAM doesn't care, because it can keep the stuff in VRAM.
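A crude illustration of that hypothetical in Python; the working-set size and the fraction re-fetched per frame are made-up assumptions, just to show the shape of the problem:

```python
# Any texture data that doesn't fit in VRAM has to keep crossing the PCIe
# link, and that transfer time comes straight out of every frame.
gen3_x8_gb_s = 8 * 0.985          # ~7.9 GB/s usable on Gen3 x8

working_set_gb = 6.0              # hypothetical game working set
for vram_gb in (8.0, 4.0):
    overflow_gb = max(0.0, working_set_gb - vram_gb)
    # Crude assumption: 10% of the overflow gets re-fetched each frame.
    per_frame_mb = overflow_gb * 1024 * 0.10
    added_ms = per_frame_mb / (gen3_x8_gb_s * 1024) * 1000
    print(f"{vram_gb:.0f} GB card: ~{per_frame_mb:.0f} MB/frame over the bus, "
          f"~{added_ms:.1f} ms added per frame")
```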
observe whether light is a particle or a wave
Dawid thanks for answering the questions none of us asked for.
Dawid has given blood sacrifice to the gods of electronics, may all his frames be plentiful.
Hmm, I'd love to see Dawid try to use a PCIe to AGP adapter to get it running on some absolutely ANCIENT machine.
But wow, it took some SERIOUS crippling to get the 4090 to fail to run games well.
To truly cripple it you may try finding a decent board with a 133MB/s 32 bit PCI slot & hamfist that poor card into it through a PCIE to PCI adapter riser.
You should look at frame latency, that may explain why you feel like something is off while you still have "decent-ish" framerates. It doesn't really matter if you have a lot of FPS if the frames arrive late. It's worth noting that you first mentioned this experience after you moved the card to the 4x slot that isn't connected directly to the CPU but has to go through the chipset adding latency.
So while you still have enough throughput for most of your frames, the latency with which they arrive might be wayyyyy worse
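A tiny sketch of why average FPS can hide that: two made-up runs with the same average frame rate but very different frame pacing.

```python
# Frame times in milliseconds, invented for illustration.
smooth = [16.7] * 60                    # steady ~60 fps
stuttery = [10.0] * 54 + [77.0] * 6     # same average, big spikes

def stats(frame_times_ms):
    avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
    p99 = sorted(frame_times_ms)[int(len(frame_times_ms) * 0.99) - 1]
    return avg_fps, p99

for name, run in [("smooth", smooth), ("stuttery", stuttery)]:
    avg_fps, p99 = stats(run)
    print(f"{name}: ~{avg_fps:.0f} FPS average, "
          f"99th-percentile frame time ~{p99:.0f} ms")
```

Both runs average about 60 FPS, but one delivers frames every ~17 ms and the other has spikes near 80 ms, which is exactly the "something feels off" experience.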
Dawid finally figured out how to fix the 4090's power issues, who knew it was PCI-E GEN 1 X 1
I like this kinda stuff. It's like OG Linus Tech Tips, before they started not showing all the stuff. With new graphics cards they'd do a ton of fun things. Now it's just... charts and moaning.
always entertaining Dawid well done for crippling that gpu like it was a gt 710
People are talking about the lag being introduced because the slots lower down use PCIe lanes that go through the chipset instead of directly to the CPU. But I genuinely doubt that's the problem. Granted, I don't know much about this, but I would expect the lag in that case to be consistent and honestly barely noticeable. What he was seeing looked much more like a bandwidth bottleneck to me: the GPU not having everything it needs and then pumping out the frames once all the data comes in.
Easy to test though. Set the first slot to the same speed/gen as the second slot, bench with the GPU in slot 1 and then just test with it in the 2nd slot.
Also, is that MSI board SLI? Cause that would make the lane count for that first slot make more sense.
Always here for the new videos ;)
Not the war crimes our ancestors expected but the one we truly needed.
Hey Dawid. Could you please do something to match the mic qualities between voiceovers and live-pc-reaction stuff. Just a tiny bit of nitpick in an otherwise awesome video :)
Example- watch 5:15 onwards, the cut into gameplay till end of that segment sounds muffled.
Oh. Just turning down the PCIe speeds in the BIOS. Here I was hoping you were going to put the 4090 into an actual PCIe 1.0 board. :(
Use a Celeron and a couple of pcie extenders over usb with a usb 3 to 2 converter.
Use a USB riser
That was the sound of a graphics card screaming in terror and pain.