One correction of an off-hand remark at 4:25 - the RX 6500 XT has 4 PCIe lanes (Gen4), not 8. We forgot how much of a piece of trash that card was, sorry. Our mistake for not double-checking. Watch our video about the melting RTX 4090 12VHPWR cables here: ua-cam.com/video/EIKjZ1djp8c/v-deo.html The best way to support our work is through our store: store.gamersnexus.net/ Like our content? Please consider becoming our Patron to support us: www.patreon.com/gamersnexus
@@legostarwarsrulez it doesn't have hardware decoding support... or was it encoding?? or was it both?? ... sigh it was so bad i can't even remember how bad it was or if i'm making it even worse (if thats possible)
I know this sounds like it'd be trivial and pointless. But does PCI-E performance differ on platform? Such as AMD vs Intel? I know it seems like a no brainer they shouldn't differ. But never know unless it's tested.
@@wnxdafriz it can do hardware decoding, but not encoding so things like Relive don't actually work (even though the "feature" was still selectable in the software, at least when it released)...
@@budgetking2591 The 5500XT had 8 lanes, and still got bottlenecked on 4 gigs on double the bus size. That card was doomed from the start, and 4 more lanes wouldn't have saved it.
As a owner of a RX 6600 because I was really unhappy with buying a RX 6500 XT or a RX 6400 I can confirm, it's 4 lanes PCI-E gen 4 with memory bandwidth at 64 bits.
@@ChrisM541 Very true, I am honestly surprised at how well Gen3 keeps up with high framerates at 4K. It just goes to show how little bandwidth graphics cards need
I am staggered that something claiming twice the speed (3-4) makes almost no difference, even on the 4090. I feel like the current hardware tests don't accurately show performance scaling (and each GPU generation seems better than they are)
Good devs minimize the amount of data actually transfering to the video card, because we're not idiots. It's *always* a bottleneck, making it less bad doesn't make it good.
> "...PCIe 4.0 is more than adequate for today’s applications." Relative to PCIe 3.0 for sure. But since the card does not support any higher (i.e. PCIe 5.0) it is impossible to say it wouldn't benefit. For sure I would argue it likely doesn't, but in the interests of accuracy, I thought it worth mentioning that no conclusion can be drawn there.
This was really useful for some of the guys who are on am4 with some older motherboards and upgraded to Ryzen 5000. Making a newer GPU purchase will still extend the life of these systems even more.
Wait isnt the minimum spec for Ryzen 5000 a X470 Chipset? X370 seems a little old, since these boards also were not made for the amount of watts that 5900X and 5950X can pull.
@@TheNerd Some X370 boards did get BIOS updates to support 5000 series. In fact, even as low as A320 have some support, but I definitely wouldn't run an R9 on those.
@Gamers Nexus shorts 🅥 This spammer is actually the FieRcE channel on UA-cam, going by "thanks" or Gamers Nexus shorts. Don't click it unless you want to give them a free view, very sad way to try to do this. I wouldn't have even cared if you just didn't try to copy and pretend to be Gamers Nexus including using their logo to trick people.
I think the benefits of pcie gen 5 is more about being able to use less lanes for equal bandwidth. Gpu's who use 16x slots shouldn't see a real difference.
That was true of PCIe 3.0 when it first came out, and is still true of 4.0. But mainboard manufacturers have their heads so far up their asses that they refuse to offer anything beyond a pathetic x8/x8/x4 allocation, and even that's not overly common and often requires forgoing some other feature or port (or set of ports) when it's not really necessary.
@@ffwast cost plus somethings such ad nvme ssds take up pcie lanes so if even if the gpu has less lanes it has the same bandwidth and other devices can use those lanes
@@ffwast because there is many gpu's that dont need more then 8 lanes on pci 4.0, also that way you dont lose performance when you use 2 nvme drives because only 8 pci lanes will be left on a lot boards. im glad my rx6600xt only uses 8 lanes for its max potential, or else i would of lost even more performance, 8 pci lanes is all that is left for the gpu on b450.
I literally just asked myself if my PCIe 3.0 board would be a bottleneck since I also have two NVMe SSDs. And here's GN's video. Perfect timing, as always!
The CPU on the board might be a limit, though. Keep in mind also that lane count may get thin with multiple devices, depending on how the board assigns lanes and on which CPU you have.
For real ive been wondering aswell. Ive a 10700k system so PCIE 3.0 and got both nvme ssd slots occupied AND i got a 4090 recently and i can definitely feel the massive bottleneck the 10700k puts on the 4090. Planning on upgrading when black friday hits though.
This is incredibly useful information to determine whether or not a platform upgrade is necessary if I want a new GPU. The 9900 is still running like a charm and it looks like it will continue to do so. Thanks y’all!
Thank you so much for this testing. I am someone who has a 4090FE in a PCIE3.0 slot, mostly as I was GPU bound running 2k ultrawide. Now I can get my monitors 175hz again, even with DLDSR thrown into the mix, and my 9900k at 5.2 can get another generations worth of use before I upgrade it. :)
I have a 4090 and a 5800X3D on a x370 AM4 board with PCIe 3.0, and 16GB DDR4-3200 RAM. I could not care less about PCIe 5.0 and DDR 5. It does not matter.
Thank you Steve and GN team! I just remembered that my b350 motherboard from 2017 is still on PCIe 3.0 with a 4070 Super on the way. This video alleviated my anxiety around ordering a GPU a bit too early.
@@jacobparis6576 That's how PCie bottleneck can also starts to show. Lows don't get too much effect. Highs get much lower, so average lowers. It's more about when in the benchmark the actual PCIe bottleneck happens.
6:18 In my experience, playing GTA V with the grass setting set to the absolute maximum creates a lot of PCIe traffic. Especially in the area near the "Vinewood" hills. Back when PCIe 3.0 was still recent-ish, I remember taking a huge hit in performance when using PCIe 2.0 around this area. If I'm not mistaken, HWmonitor has sensor for measuring the PCIe traffic, I believe it's called GPU IO or something. Any other game or game area with a high number of draw calls should spike PCIe bandwidth usage too. EDIT: From memory, 3DMark used to have a draw call benchmark and it showed a very clear performance scaling between the PCIe generations.
Maybe I'm asking out of ignorance, but why DX11 and not DX12 or Vulcan. I run all games that can on Vulcan or at least DX12 on my rig (ASUS z790-E, Intel Core-i9-13000K, 128GB DDR5-6000 RAM, nVidia RTX4090-24GB), and I can run all games with all settings at ULTRA with FPS never dropping below 100 on my 49" Samsung G9 NEO SuperUltraWide 5120x1440px (aka 32:9) GSync (240hz) monitor. I love your mentioning of the final death of SLI. Man the headaches I had finding motherboards with enough room for 3 nvidia 1080gtx in tripple SLI or just running two 2080Ti OC in SLI and a Motherboard running two times genuine PCIe x16 with two GPUs mounted in SLI, not to mention the extreme cooling needed with water pipes running everywhere. IT was a pest! I love plugging in just one mf.....ing brick of a 440mm long RTX4090 and get ten times the performance. And literally no noise at all from any fan. I do have one question though. I do have 8 SSD in my rig. four of them are normal SATA 8TB Samsung 870QVO, but the other four are identical Corsair MP600 PRO NH PCIe 4.0 NVMe M.2 - 8TB. Should it be a problem that I've used all four NVMe slots on the motherboard when they are all gen4 capable? I actually thought the RTX4090 Card was the first to take advantage of PCIe gen5 so I set the PCIe x16 slot to gen5 in BIOS. Could thet be the reason I constantly get a startup BIOS error saying that BIOS slot x16 has downgraded to PCI GEN3 (Which is default if I load BIOS default settings) and that I should press F1 if I want to change BIOS settings manually? Should I simply try to set my PCIe x16 to GEN4 in BIOS to avoid the fallback to GEN3?
I noticed that the game "Flower" is actually extremely bandwidth sensitiv and also that the grass density setting impacts this a lot. I was curious so I tested with different pcie bandwidths. I used an RTX 3080. The frame rate scales almost linearly with the bandwidth! 1.0 = 36.4fps 2.0 = 69.9fps 3.0 = 127.7fps 4.0 = 222.4fps It's possible that the bottleneck shifts at 5.0, but even then it could be interesting to test for cards that are only x8.
Thank you. This helps me better plan to use some older hardware more effectively and for longer. Appreciate the straight forward and informative piece.
PCIe 2.0 (or 3.0 x8) tests would also be interesting here since a lot of motherboards go down to 8x when multiple M.2 drives are installed. (X470, in particular, does this and plenty of people slotted 5800X3D chips onto those and X370 boards).
@@kaanozkuscu5079 the main question is how much pci-e speed matters? The gpu is not that important. It could be 4090 or future 4050. If gen 3 x8 is enough or performance hit is relatively small I would certainly keep my motherboard.
@@MegaMoonse theyve already done a video on this topic, testing a 3080 on pcie gen 2 3 and 4. short answer yes, you lose a decent chunk of performance on gen 3 x8 (equivalent to gen 2 x16). if youve got 30 series its worth it to give it the most possible bandwidth.
gen 2.0 chipsets provided many more lanes, My AM3+ has 40+4 PCIe lanes and the board provides x16/x16 or x8/x8/x16 or 4 x8 slots plus two x4. (They are all from the chipset, that generation of CPU did not manage the PCIe on-die). Storage drives and all onboard I/O was handled by the southbridge (which is connected with the "+4" lanes) There was also a critical change in the protocol between v2 and v3, there is another major change between v5 and v6 because with 5 and 6 they are getting into frequencies that cause major circuit engineering problems and high error rates in transmission. The original proposed v5 standard was 1.3x v4 not double v4, because there was no proof that the full speed increase could be feasibly obtained in consumer commodity products.
This is a good news for 2x 4090 in ryzen 7000 series motherboard that supports PCIE 5.0 x8 / x8. I was hoping you'd test some rendering / simulations. Thanks for the gaming benchmarks, it tells a lot too! :)
Are you sure that's what you are going to get? I have a 12900k in an Asus MB and even though I can do Gen5@8x the card only supports Gen4@8x. So if you cut the lanes to 8x (I'm using the other 8x for M.2 drives) you only get the Gen4 speeds so only 16GB/sec not 32GB/sec. If your seeing something else let me know I'd be interested in how you got there.
@@МальвинаКотик-л1ъ it is indeed. While the windows drivers require a special version of Windows pro and Windows Server, the Linux drivers are free and old infiniband gear is dirt cheap. I use it as the backbone for my NAS. This leaves me in the ridiculous position of running my Windows machine in a virtual machine where it's basically the only thing on the device so that I can use Linux drivers for my storage. Realistically the speeds are about equivalent to what you get with 10 gigabit Ethernet, but far lower CPU overhead if you do it right. And when you can get a 8-port switch, nominally 40gbs per port, for $89? It's a great way to get performance cheap.
@@marekkovac7058 ZFS on the server, but NFS over RDMA to share it. Network performance exceeds performance of the array, significantly, even with NVME caching.
You guys rock! This was near enough exactly the info I was looking for. I have an older PC which I use as a secondary machine (formerly my main PC) that's all PCIe Gen 3.0, which currently has a pair (SLI) of GeForce 950s in it. I don't remember why I went with that build specifically, but I remember there were reasons. Anyway, now that prices on last-gen GPUs have come down a bit, I've been thinking about giving it an upgrade. The options I've been considering are between a 2080 Ti (Gen 3.0) and a 6900 XT (Gen 4.0). But I wasn't sure how much the PCIe Gen of the MB would negate any advantages the 6900 XT had over the 2080 Ti. Thanks for this info!
PCIe 3.0 x8 would have been interesting to see. While most people buying 4090s are unlikely to still be using PCIe 3.0 motherboards, it'd be interesting and useful to know just in case.
@@12Burton24 Not really, there are test done that show that PCI E 2.0 x16/ 3 x8 has a loss of 3 to 10 fps tops, which honestly is within margin of error.
The real problem with PCI 3.0 occurs when u have a graphics card that only has 4X or 8X bus, like many AMD gpu's. I have some performance loss with my RX6600XT on PCI 3.0.
I believe the reason 1080p might benefit from upgrading from PCIe 3.0 to PCIe 4.0 is because 1080p nowadays is associated with higher framerates, and higher framerates means the CPU needs to send more commands to the GPU per second (and perhaps data as well). It's just a theory, though.
Great video, always nice to test this when new cards come out. I personally use DeckLink 8K (8x PCIe 3.0), and it does not work reliably over shared chipset lanes at all. They need to be on CPU lanes. So unless one is on Threadripper or something with bunch of CPU lanes it is good to know we can run in 8x/8x split mode on cheaper platforms with no real performance hit.
I don't think it's neccesarily the chipset lanes, but more the amount of them. Asus has an x570 x8/x8/x8 board with the 3rd x8 connected to the chipset. The chipset itself is only connected pcie4.0x4 to the CPU, but that still leaves enough bandwidth to throw 4 12G feeds at the Decklink. We have a bunch of them with a mixture of quad2's and 8K's in the chipset slot without any issues.
@@inkprod yea It may work on chipset lanes just fine, thing is dedicated CPU lanes are certainty. Back in PCIe 2.0 days I had Decklink with single HD-SDI strugle on chipset lanes. It might work on chipset lanes or it might not.
Thanks for making this video. I had a suspicion that PCI-Express 3.0 would be enough for modern cards and your video and my other research leads me to believe it will be fine. I have an older PCI-Express motherboard (Asus Rampage V Extreme) in my PC that I built back in 2015. After recently upgrading the CPU to a second-hand 6950X, adding more RAM and M2 drive I'm now looking at new graphics cards. I've been out of the loop for a while, so your comment about SLI being dead was helpful too.
I want PCIE GEN 6 motherboards to change the gpu slots to using/reserving X8 instead of x16, maybe leaving it on extreme high end only for the professionals (for who know what cards) and take those extra 8 lanes into more discrete i/o rather than "shared with"
Strikes me as a good question relevant to this one, if you're on an older platform limited to PCIe 3 there's a good chance you won't have ReBar either.
@@GamersNexus Do you suppose that NVidia will eventually release a driver that turns on resizable bar on older GPU's, like my 2080TI? I am willing to bet that the hardware supports it, it's just turned off.
As someone who really likes vertically mounted graphics cards, I was blown away by the price for a good PCIe 4.0 riser cable. 100+ € compared to the 15 € I paid for my ROG 3.0 riser cable, that's just insane. Board and GPU both support 4.0, but I am not willing to pay that much extra for a 0.5 to 3 percent performance increase at best. Thanks a lot for this very competent video. I will not buy a 4.0 riser before it provides any actual benefits.
Hybrid graphics is definitely something you could explore as a PCIe heavy workload. Even on desktop it's somewhat relevant as decoding videos on your iGPU is more power efficient than the dGPU + less risk of getting into a memory bandwidth fight with a game
16x pcie 5 has more bandwidth than most DDR4 or LPDDR4 dual channel or LPDDR5 single channel laptops or iGpu desktops, nowadays pcie bandwidth is never the bottleneck with things like CXL, in encoding/decoding your SSD will always be bottleneck, not pcie bandwidth, and dGPU encoders and decoders like nvidia's nvenc can do multiple 4k60 decode/encode streams, even in AV1, so your comment is not valid in 2023, maybe 2015? but nowadays amd and nvidia is way ahead of intel "quicksync"..
As someone with a x370 mobo, a 5800x3d, and considering a 4k series card at some point in the future, thanks for this demo. This gives me the confidence to try and take the same mobo for a whole decade - I built this system in 2017.
Thankyou for testing, I game at 4k resolution & I'm currently using an Intel core i5 10600k which is limited to only PCI express 3.0 speed, for gaming it looks like there is no reason for me too upgrade my CPU then.
In the real world this basically confirms for me that you can run close to the best CPU and GPU on an old B350 board that has PCIE 3.0 and only lose single digit percent performance from it. Hardware Unboxed showed that B350 boards can run the 5800X3D with little performance lose, within reason of course, you should pay close attention to your VRM temperatures and don't take it too far. I personally won't be going that far, the 5700x is the best 65W CPU on AM4 to my knowledge so that's what I will be upgrading to from my 1700 and now I know that PCIe 3.0 won't hurt at 1440p. Now I will wait to see how RDNA 3 and the the rest of the 40 series scales with the 5700x to upgrade my 1070, I wish GN did that testing but HU usually does great scaling videos as well, they did an excellent one for the 5000 series CPUS with 30 series and 6000 series GPUs. AM4 was truly a great platform for longevity. Thank you for doing tests that answer important questions.
Can confirm. Upgraded my 1600 for a 5700x on a b350 board. And upgrade my gtx 1070 with a 6700 xt. I actually had worse performance wifh the new setup until I realized that the bios update reset the ram’s clockspeed. Once I got it back up to 3200mhz, it’s been smooth sailing
Indeed. The same can apply all the way back to IB/IB-E CPUs on P67/Z68/X79, infact it was possible to force 3.0 even with SB-E on X79 (such as the 3930K, because it was a XEON in disguise). All one is then limited by for such older setups is overall CPU strength, but that's a whole other thing. I just find it interesting how Gen 3.0 ended up spanning so many years of different boards and sockets (well, mainly Intel), whereas now the industry seems to be skipping through 4.0, 5.0 and beyond a lot faster. People perhaps forget the utility of what once was, eg. my lowly old 4c/8t 4820K (so easy to oc) has 40 lanes of 3.0 from the CPU, so it can actually do multi-device/GPU things which much later SKUs like the 7820X could not (that only had 28), plus relevant mbds had lots of lanes off the chipset aswell, eg. my P9X79-E WS supports x16/x16/x16/x16, half from the CPU, half from the chipset via two PLX chips IIRC; or it can even run x16/x8/x8/x8/x16/x8/x8 using all seven slots. See: www.anandtech.com/show/7613/asus-p9x79e-ws-review It all became horribly complicated when for a time the no. of lanes from the CPU depended on the SKU (really horrible product segmentation, it meant slot device support and even whether some slots could be used at all depended on which CPU was fitted), eg. the original 4c CPU for X79 (i7 3820) had 40, yet the much later 5820K and 6800K only had 28. So glad when all that malarky came to an end.
My B350 board not only got support for Ryzen 5000 but it got a bios update for Re-Bar as well. I'm glad I can stretch out pcie gen 3 for a bit longer and not lose any meaningful performance.
@Gamers Nexus shorts 🅥 @thanks 🅥 This spammer is actually the FieRcE channel on UA-cam, going by "thanks" or Gamers Nexus shorts or Bully Maguire. Don't click it unless you want to give them a free view, very sad way to try to do this. I wouldn't have even cared if you just didn't try to copy and pretend to be Gamers Nexus including using their logo to trick people.
PCIe generations is more relevent to boards that can bifurcate the GPU slot. If there's not much difference between gen 3 and gen 4 on a 4090 at X16, it would be useful information to have if the card is limited to X8 lanes in these scaling tests. One major use for bifurcated slots is direct to CPU m.2 SSD's add in boards, so the X8 bandwidth results would be useful.
Thanks for the continued barrage of extensive testing GN! I'm curious to know whether DLSS 3 with its higher framerates might be more impacted by the bandwidth... likely not considering that the CPU is only required for every other frame but might be interesting to confirm as upscaling becomes more common.
it shouldn't. the bandwidth is mostly needed to communicate with CPU. dlss 3 generation happens all in the GPU. it's infact cpu independent. higher vram speeds could help to access the on GPU data from previous frames
Thanks for the video. I am getting into video editing and am SERIOUSLY needing an upgrade to my graphics card, but wasn't sure how my motherboard would match up with the new cards. Turns out I'm just a dummy because my MOBO has a PCIe 4 x 16 slot and I thought it only had Gen 3. Even so, I can see upgrading my graphics card would've been a win anyway. Thanks again, love the channel and the no-nonsense data and presentation.
Really glad you did this one. There's a bug in Asus 500-series motherboards with 11th gen cpus and the 4090 where it only runs the card as PCI-E 3.0 (despite running the 3090 at 4.0). I've been wondering how much performance I'm losing.
@@BlueBoy0 likewise, no other issues however other than GEN4 causing no display so that leads me to believe everything else is working correctly and not defective. For now, I'm glad the performance difference is negligible.
I appreciate y'all doing this test. I'm still on a ryzen 2700X with pcie3 and will be buying a 4090. Good to know that I'm not gonna be bottlenecked by that and don't need to upgrade the whole PC immediately
I’m surprised that 3.0 isn’t bandwidth limiting still since this is years into cards with 4.0. Makes me feel better about holding on to my x370 and running a 5800X3D with a 4090 I do think this will continue for awhile, pcie4 5 etc are useful in data centers (and empirically, but maybe not practically, nvme)
@@BrickBazookaGN does really good and thorough testing, I trust it. I didn’t want to buy a new board for an EOL socket and my board has plenty strong of a vrm. How are you like the 4090? I’m very impressed with the FE
@@jolness1 yeah, I'll keep my Board also for my 4090fe. The card is great, but rest should be also latest gen.. later next year I should upgrade to raptor lake or amd 7000
I have x370 with 5800x3d and 4090 and going to change board to b550 because of 4.0 as it cost nothing compared to 4090, it is more about work invoved in switching it. Also i will upgrade my main SSD to NVM 2 TB 4.0 (from 3.0 500 GB). I am using that PC mainly for VR and in Flight simulator 2020 in dense city areas i get quite a lot of stutering, this upgrade should quite help with bottlenecks and it should hopefully solve that stutters. I dont think i will much notice improvments somewhere else than in Fsim 2020. Other mainly racing sims are running fine and are mainly gpu limited, but few more FPS could help in some places to be completly without stutters. VR should be about good imersion and stutters are quite ruining it, so this quite cheap upgrade should make it better and would be shame not to use 100% potencial of that expensive 4090. For non VR gaming i dont think that upgrade make sense, also i dont see much benefit using 4090 for playing games at 4k 60 Hz LCD, it is looking and running almost same as with 3080.
Thank you for your constant great scientific method. I shop the back of the wave and always try to keep up on what features matter when I buy last year's model. As a life long R&D lab tech, I think you have great lab skills. After around half a century in the lab, the only advice I can add to your toolkit is NOTES, lots and lots of NOTES. I promise you, in 2 years you will not remember accurately. It is very frustrating to have to recreate an experiment because you forgot to write something down about the setup. It also keeps people from picking your results apart.
A massive thank you for that video! I've just upgraded from my R7 2700 for a 5700x and was wondering if my B450 was bottlenecking my performance. I'm glad to know that I dont need to worry about that!
I love your videos, Im just a little confused that all this work was done to show the performance of 3 games would be nice to see some encoding software and rendering (Davinci Resolve and Blender) as well. None the less, appreciate the free resources. This channel taught a valuable thing like many others, benchmark and research your usecase online before purchases. No more buyers remorse and less ewaste, thanks for the work over the years.
I would like to know this as well. I don't understand PCIe lane allocation and why they compared 3.0 x16 to 4.0 x8 and now I am further confused when he mentioned lanes from the CPU and lanes from the chipset--pulling lanes from the chipset was possible but not often done. Where are the options for that?
I'm not sure how many are gaming with a 4090 at 1080p. More would be gaming at 1440p and the target is 4k I would imagine. I wonder what the difference would be when 4k gaming is factored in.
Nobody who spends that kind of money on a 4090 should game on anything less than 4k especially when the 4090 will barely be utilized in 1080p and 1440p in a lot of games
@@randomyoutubeuser8509 I have a rx 7900xtx and i know that it is slower but not by a lot. I am Playing Escape from Tarkov and Satisfactory (with mods). Escape from Tarkov runs on almoast maxed settings in 1440P with about 55 Fps (On the newest map). Satisfactory runs moast of the time stable at 144fps but has drops to 100. The maximum benefit of the 4090 that i can imagine is 15% more fps becasuse i am not using rt or dlss/fsr. What I want to say is that many games dont even run in 4k. Escape from tarkov could work in 4k with lower settings and dlss 3.0 but this game is so poorly optimized that dlss looks garbage in this game and makes it litterally unplayable
@@dustingarder7409 The 4090 is generally 30-65% faster (depending on settings and features), not 15%. ua-cam.com/video/f1femKu-9BI/v-deo.html I use 7900XTX in 3440x1440 and definitely wouldn't mind a 4090, but that's just too much for a GPU!
So when they tell me i need PCIe 4 for a 4070 ti, and i only have PCIe 3, I dont need to go out and buy a new Motherboard, in fact i will hardly notice the differance!?
Really appreciate this testing and insight! There are still a few things that an RTX 4090 can't quite handle - in my case specific to some PCVR titles at higher resolutions and when relying on super sampling at 4K to address cosmetic issues with games that don't support DLSS. This video was helpful as I was specifically wondering how much the PCIe 3.0 on my older generation motherboard might be a limiting factor and what options (if any) there are for boosting performance a bit more. Thanks again for another great video!
So even with a 4090 the most powerful GPU on the market the difference between 3.0 and 4.0 is marginal. Now that is interesting because it shows that PCIe is very future proof.
Good to see PCIe 3 is still good for the foreseeable future. On AMD B550 though gen 4 is giving me issues. Think B550/X570 motherboards or the chipsets themselves were not really ready for PCIe gen 4 with the issues it had and has with gen 4.
I have a z490 motherboard and read that m.2 speeds don't improve gaming much. It's mainly for transferring files, I also learned that pcie 3.0 and 4.0 isn't a big enough difference for gpu bottleneck. It's mainly bandwith related
You will probably see the benefits of PCIe 4.0 in machine learning applications. Try playing Leela Chess 0, and look at the evaluations per second. The bandwidth plays a huge role there.
At this point, I'm surprised video cards don't just default to using 8 lanes when using PCIe 5.0 (and beyond). PCIe 6.0 is already around the corner, and the 7.0 spec is expected to be finalized in 2025. There really is no need for video cards to be defaulting to 16 lanes at this point.
I feel like this is as good a place as any to ask: on motherboards where the PCIe 5.0 x16 slot goes to x8 with the PCIe 5.0 M.2 occupied, do you only get an effective 8 lanes of PCIe 4.0 with a 4.0 device? That is to say, are you limited by both interface compatibility and lane bifurcation simultaneously?
Thanks for this. For some reason my card is stuck on 4.0 x8 and I don’t want to go through the whole return craziness especially since I have a custom loop. Anyway thanks for these types of videos you never know when it’ll help
Since this testing shows that the bandwidth difference between PCIe gen 3.0& 4.0 is negligible at x16, I'd be interested in seeing how PCIe 4.0 x8 would do... Will fewer lanes have latency issues, or expose driver problems?
PCI E 4.0x8 will have little advantages ovet PCI E 3.0x16 because CPU lanes and m.2 Drive Lanes are also PCI E 4.0 the Bandwidth itself is the same foe 3.0x16 vs 4.0x8
@@12Burton24 Yes, I said the bandwidth was essentially the same, but testing would show any latency or driver issues with just 8 lanes, and that's what I'm curious about.
Thanks for this . It is good that the board speed is a head of the need of the graphics cards, showing that this will not be the bottle neck for years to come :)
I think Steve may be missing the really interesting findings in his test data. He observes that PCIE Gen 3 produces about the same fps as PCIE Gen 4 from most games at 4k. This means that that bandwidth requirement is only about 15 GB/s to support a roughly 200 fps. This means that only about 75 MB of data is being passed from the CPU to GPU in order to build a given frame, way lower than the data output of the GPU. If we scale up to 8k, then about 300 MB of data per frame would need to be sent (rough scaling). PCIE Gen 4 should be able to support about 83 fps at the bus limit for 8k gaming. At these speeds, card performance is the limiting factor, so it'll be less, but this tells us what the bandwidth limit for 8K gaming is. The 4K gaming bandwidth limit for PCIE Gen 4 would be 4 times that or 333 fps. This will vary depending on the game, but this shows rough scaling between PCIE generations and according to display size. So Nvidia was probably right to leave the 40 series at PCIE 4, since the card throughput will limit the fps, not the PCIE throughput.
Over 1000 pounds for 4080 lol...nvidia trying miner prices still. I used to buy every gpu release not now..not at these ridiculous prices. 800 pounds is the top price the 4080 should be...l will keep 3080 another two years at these prices...l really hope massmarket ie outside of enthusiasts...refuseto pay this much..anyway l refuse besides happy with 3080...
3070? Did you typo? I'm pretty sure the covered the 3080 PCIe 3.0 vs 4.0 like 2 years ago and there was no meaningful difference, so there wouldn't be on the 3070.
@@Cruor34 Oh wait, they totally did, completely forgot. But 3080 has 10 GB of VRAM, whereas 3070 only 8, so there'd be much higher usage of PCIE bandwidth, due to constant VRAM data shuffling, once they fill 8 GB of it.
Sincerely, thank you so much for this, I'm currently building an exploded view wall mounted build and I need a 60 cm PCI riser cable which I can only get in PCI Gen 3 so it's really good to know that on my 3090 it will be more than enough
@@BernhardWeber-l5b oh is it? Kinda makes sense, but didn't know for certain whether bandwidth precisely doubled or even if it correlates linearly at all. Thanks for letting me know!
@@BernhardWeber-l5b Im looking at a 4080 for an upgrade from my 2080ti. I have an nvme boot drive which is forcing my top slot into x8. It's an x470 board and gen 3 to boot. I feel a bit silly asking this. But is it a good idea to upgrade to an x570 board with pcie gen 4? Or am I worried about nothing
@@DasFlank It may have some impact but how much I can't say. I would either get a b550 board or move the m.2 SSD to a PCIe X4 adapter bracket to allow the top slot to operate at x16 speeds.
This comment actually has more relevance than you'd think. Lots of z790 boards, such as the z790 Aorus Master, run their PCIe lanes in x8 if you have a m.2 in the top CPU slot. There's going to be a ton of people who want to run Gen5 m.2 storage when it becomes more widely available, how much is this going to impact GPU performance if at all on these boards?
So would it be preferable to plot an upgrade path through the 40 series if im going to stay at 4th gen PCIe for a little while? Or would it be smarter to look into high-end 30 series?
I just watched a PC repair short video from GamerTechToronto where they discovered that the mobo's top pci-e slot was defective. So they plugged the customer's RTX 4080 into the bottom PCI-e 3.0 x4 slot and called it a day :D
I'm not upgrading my PCIe 3.0 PC yet, got a 5950x/3070 and soon going over to AMD. I still game at 1080p with my 360hz panel, I love the high FPS for now later on I'll go 1440p😁 But if the FPS are already in the 100s or 200s like in the benchmarks here at 1080p then we are good for quite a while 👍 What I want to see is how future games will perform like games using UE5 for example.
Already knew that from your previous testing of PCIE Generations before no?? Look forward to seeing the different Gen PCIE versions and Lane allocation add up against one another!!
Steve, just curious if you have ever considered building and selling a "GN Mark" hardware benchmark test suite along the lines of the upcoming LTT "Mark Bench"?
No, we use all our stuff internally only. It's dangerous to build and distribute that type of thing without an insane amount of controls (that we won't have over user setups) just because it can easily produce bad data that runs rampant for comparisons online - e.g. user benchmark. It's possible. FutureMark has done a good job. But we're not going to try and do that as we aren't set up to do it in a way that I think is responsible (e.g. would be too easy to generate bad data that feels like good data to the users)
One correction of an off-hand remark at 4:25 - the RX 6500 XT has 4 PCIe lanes (Gen4), not 8. We forgot how much of a piece of trash that card was, sorry. Our mistake for not double-checking.
Watch our video about the melting RTX 4090 12VHPWR cables here: ua-cam.com/video/EIKjZ1djp8c/v-deo.html
The best way to support our work is through our store: store.gamersnexus.net/
Like our content? Please consider becoming our Patron to support us: www.patreon.com/gamersnexus
If it had 16 lanes I would’ve bought a couple and used them in some low end budget builds for friends.
@@legostarwarsrulez it doesn't have hardware decoding support... or was it encoding?? or was it both?? ... sigh it was so bad i can't even remember how bad it was or if i'm making it even worse (if thats possible)
I know this sounds like it'd be trivial and pointless. But does PCI-E performance differ on platform? Such as AMD vs Intel? I know it seems like a no brainer they shouldn't differ. But never know unless it's tested.
@@wnxdafriz it can do hardware decoding, but not encoding so things like Relive don't actually work (even though the "feature" was still selectable in the software, at least when it released)...
THANK YOU! I was waiting for this test to be done by you guys! Thank you, Thank you!
4:24 as a proud 6500XT owner i must point out that it has 4 PCI-E lanes, not 8
Thanks for that. That's an error, yes.
exactly, thats its biggest flaw, would of been a decent gpu if it had 8 lanes so there would be no bottleneck for people on pci 3.0.
It's like a raccoon defending it's pile of garbage /s
@@budgetking2591 The 5500XT had 8 lanes, and still got bottlenecked on 4 gigs on double the bus size. That card was doomed from the start, and 4 more lanes wouldn't have saved it.
As a owner of a RX 6600 because I was really unhappy with buying a RX 6500 XT or a RX 6400 I can confirm, it's 4 lanes PCI-E gen 4 with memory bandwidth at 64 bits.
Bottom line: Vaseline does not make bits flow smoother, and PCIe 4.0 is more than adequate for today’s applications.
So is PCIe 3 !!!!
- You'll see, for the vast majority of games, little/no difference from PCIe 3 all the way up to PCIe 5
@@ChrisM541 Very true, I am honestly surprised at how well Gen3 keeps up with high framerates at 4K. It just goes to show how little bandwidth graphics cards need
I am staggered that something claiming twice the speed (3-4) makes almost no difference, even on the 4090.
I feel like the current hardware tests don't accurately show performance scaling (and each GPU generation seems better than they are)
Good devs minimize the amount of data actually transfering to the video card, because we're not idiots. It's *always* a bottleneck, making it less bad doesn't make it good.
> "...PCIe 4.0 is more than adequate for today’s applications."
Relative to PCIe 3.0 for sure. But since the card does not support any higher (i.e. PCIe 5.0) it is impossible to say it wouldn't benefit. For sure I would argue it likely doesn't, but in the interests of accuracy, I thought it worth mentioning that no conclusion can be drawn there.
This was really useful for some of the guys who are on am4 with some older motherboards and upgraded to Ryzen 5000. Making a newer GPU purchase will still extend the life of these systems even more.
I just bought a b450 and 5600 6months ago. It's plenty for most any GPU in 2022.
Im in that boat - just popped in a 5600X and now waiting for a 4070 or RDNA3 GPU
Wait isnt the minimum spec for Ryzen 5000 a X470 Chipset? X370 seems a little old, since these boards also were not made for the amount of watts that 5900X and 5950X can pull.
@@TheNerd Some X370 boards did get BIOS updates to support 5000 series. In fact, even as low as A320 have some support, but I definitely wouldn't run an R9 on those.
@@TheNerd I have an Aorus x370 with a 5800X3D. Most major motherboard manufacturers updated bios to support 5000.
Wow this came at a good time! I was just searching about the effects between these. Thanks Steve!
@Gamers Nexus shorts 🅥 This spammer is actually the FieRcE channel on UA-cam, going by "thanks" or Gamers Nexus shorts. Don't click it unless you want to give them a free view, very sad way to try to do this. I wouldn't have even cared if you just didn't try to copy and pretend to be Gamers Nexus including using their logo to trick people.
FFS spam
I think the benefits of pcie gen 5 is more about being able to use less lanes for equal bandwidth. Gpu's who use 16x slots shouldn't see a real difference.
Agreed
The question is "why build a GPU that doesn't use all 16 lanes to begin with?" What's the point? Hobbling the performance with older boards?
That was true of PCIe 3.0 when it first came out, and is still true of 4.0.
But mainboard manufacturers have their heads so far up their asses that they refuse to offer anything beyond a pathetic x8/x8/x4 allocation, and even that's not overly common and often requires forgoing some other feature or port (or set of ports) when it's not really necessary.
@@ffwast cost plus somethings such ad nvme ssds take up pcie lanes so if even if the gpu has less lanes it has the same bandwidth and other devices can use those lanes
@@ffwast because there is many gpu's that dont need more then 8 lanes on pci 4.0, also that way you dont lose performance when you use 2 nvme drives because only 8 pci lanes will be left on a lot boards. im glad my rx6600xt only uses 8 lanes for its max potential, or else i would of lost even more performance, 8 pci lanes is all that is left for the gpu on b450.
I literally just asked myself if my PCIe 3.0 board would be a bottleneck since I also have two NVMe SSDs. And here's GN's video. Perfect timing, as always!
Well, it's not the bandwidth that's gonna be a bottleneck :D
If direct storage starts making an actual difference it might come up.
The CPU on the board might be a limit, though. Keep in mind also that lane count may get thin with multiple devices, depending on how the board assigns lanes and on which CPU you have.
For real ive been wondering aswell.
Ive a 10700k system so PCIE 3.0 and got both nvme ssd slots occupied AND i got a 4090 recently and i can definitely feel the massive bottleneck the 10700k puts on the 4090.
Planning on upgrading when black friday hits though.
If your on a x470 or b450 with a Ryzen 5000 processor, you'll be fine.
This is incredibly useful information to determine whether or not a platform upgrade is necessary if I want a new GPU. The 9900 is still running like a charm and it looks like it will continue to do so. Thanks y’all!
Thank you so much for this testing. I am someone who has a 4090FE in a PCIE3.0 slot, mostly as I was GPU bound running 2k ultrawide. Now I can get my monitors 175hz again, even with DLDSR thrown into the mix, and my 9900k at 5.2 can get another generations worth of use before I upgrade it. :)
I have a 4090 and a 5800X3D on a x370 AM4 board with PCIe 3.0, and 16GB DDR4-3200 RAM. I could not care less about PCIe 5.0 and DDR 5. It does not matter.
@@T.K.Wellington1996 It does matter. DDR5 improves performance
@@De-M-oN Therefore I have the 96MB L3 3DV-Cache auf dem 5800X3D as compensation.
whats your cpu now
Thank you Steve and GN team! I just remembered that my b350 motherboard from 2017 is still on PCIe 3.0 with a 4070 Super on the way. This video alleviated my anxiety around ordering a GPU a bit too early.
One thing I think is interesting about the Warhammer 1080p results is that the lows are basically identical despite the average being lower.
It certainly makes it look like a driver or software issue, since that's the behaviour I'd expect from a frame limit being applied.
@@jacobparis6576 That's how PCie bottleneck can also starts to show. Lows don't get too much effect. Highs get much lower, so average lowers. It's more about when in the benchmark the actual PCIe bottleneck happens.
6:18 In my experience, playing GTA V with the grass setting set to the absolute maximum creates a lot of PCIe traffic. Especially in the area near the "Vinewood" hills. Back when PCIe 3.0 was still recent-ish, I remember taking a huge hit in performance when using PCIe 2.0 around this area.
If I'm not mistaken, HWmonitor has sensor for measuring the PCIe traffic, I believe it's called GPU IO or something.
Any other game or game area with a high number of draw calls should spike PCIe bandwidth usage too.
EDIT: From memory, 3DMark used to have a draw call benchmark and it showed a very clear performance scaling between the PCIe generations.
There is another person in the comments who experienced similar results with grass density, except different game.
Maybe I'm asking out of ignorance, but why DX11 and not DX12 or Vulcan. I run all games that can on Vulcan or at least DX12 on my rig (ASUS z790-E, Intel Core-i9-13000K, 128GB DDR5-6000 RAM, nVidia RTX4090-24GB), and I can run all games with all settings at ULTRA with FPS never dropping below 100 on my 49" Samsung G9 NEO SuperUltraWide 5120x1440px (aka 32:9) GSync (240hz) monitor. I love your mentioning of the final death of SLI. Man the headaches I had finding motherboards with enough room for 3 nvidia 1080gtx in tripple SLI or just running two 2080Ti OC in SLI and a Motherboard running two times genuine PCIe x16 with two GPUs mounted in SLI, not to mention the extreme cooling needed with water pipes running everywhere. IT was a pest! I love plugging in just one mf.....ing brick of a 440mm long RTX4090 and get ten times the performance. And literally no noise at all from any fan.
I do have one question though. I do have 8 SSD in my rig. four of them are normal SATA 8TB Samsung 870QVO, but the other four are identical Corsair MP600 PRO NH PCIe 4.0 NVMe M.2 - 8TB. Should it be a problem that I've used all four NVMe slots on the motherboard when they are all gen4 capable?
I actually thought the RTX4090 Card was the first to take advantage of PCIe gen5 so I set the PCIe x16 slot to gen5 in BIOS. Could thet be the reason I constantly get a startup BIOS error saying that BIOS slot x16 has downgraded to PCI GEN3 (Which is default if I load BIOS default settings) and that I should press F1 if I want to change BIOS settings manually? Should I simply try to set my PCIe x16 to GEN4 in BIOS to avoid the fallback to GEN3?
Thank you for doing this. It really helps make better informed decisions!
I'm so happy you made this. I am temporarily using my 4090 suprim on a pcie 3.0
dang that PCIE test footage nukes youtube's bandwidth
When things get extreme, Vaseline.
It raises some questions
We still talkin' bout graphics cards? 😂
Jk support the GN store!
Like my uncle always said
@@JorgeMartinez-dp3im it's only a matter of time before overclockers start using lube
oh wow I thought this would have mattered more, especially in the times of ReBar and whatnot. Thanks for testing it!
I noticed that the game "Flower" is actually extremely bandwidth sensitiv and also that the grass density setting impacts this a lot.
I was curious so I tested with different pcie bandwidths. I used an RTX 3080.
The frame rate scales almost linearly with the bandwidth!
1.0 = 36.4fps
2.0 = 69.9fps
3.0 = 127.7fps
4.0 = 222.4fps
It's possible that the bottleneck shifts at 5.0, but even then it could be interesting to test for cards that are only x8.
Ffs more spam.
@@adjoho1 I always report these comments, but I'm not sure if UA-cam actually does anything about it.
@@EVPointMaster they don't
Thank you. This helps me better plan to use some older hardware more effectively and for longer. Appreciate the straight forward and informative piece.
Thank you for including Warhammer 3, lots of places don't do RTS/4x titles, which is one of the main reasons to do PC gaming.
PCIe 2.0 (or 3.0 x8) tests would also be interesting here since a lot of motherboards go down to 8x when multiple M.2 drives are installed. (X470, in particular, does this and plenty of people slotted 5800X3D chips onto those and X370 boards).
Watched the video for this answer. Steve, if you are reading please do one test.
@@MegaMoonse people who get 4090s have the money to buy a proper pro mainboard.
why care for "benchmarks" if you wont ever have the hardware?
@@kaanozkuscu5079 the main question is how much pci-e speed matters? The gpu is not that important. It could be 4090 or future 4050. If gen 3 x8 is enough or performance hit is relatively small I would certainly keep my motherboard.
@@MegaMoonse theyve already done a video on this topic, testing a 3080 on pcie gen 2 3 and 4. short answer yes, you lose a decent chunk of performance on gen 3 x8 (equivalent to gen 2 x16). if youve got 30 series its worth it to give it the most possible bandwidth.
gen 2.0 chipsets provided many more lanes, My AM3+ has 40+4 PCIe lanes and the board provides x16/x16 or x8/x8/x16 or 4 x8 slots plus two x4. (They are all from the chipset, that generation of CPU did not manage the PCIe on-die).
Storage drives and all onboard I/O was handled by the southbridge (which is connected with the "+4" lanes)
There was also a critical change in the protocol between v2 and v3, there is another major change between v5 and v6 because with 5 and 6 they are getting into frequencies that cause major circuit engineering problems and high error rates in transmission.
The original proposed v5 standard was 1.3x v4 not double v4, because there was no proof that the full speed increase could be feasibly obtained in consumer commodity products.
Far out Steve, you and the team have been spitting out great in-depth pieces CONSTANTLY for over a month now. Please take a break for your own sanity!
This is a good news for 2x 4090 in ryzen 7000 series motherboard that supports PCIE 5.0 x8 / x8. I was hoping you'd test some rendering / simulations. Thanks for the gaming benchmarks, it tells a lot too! :)
Are you sure that's what you are going to get? I have a 12900k in an Asus MB and even though I can do Gen5@8x the card only supports Gen4@8x. So if you cut the lanes to 8x (I'm using the other 8x for M.2 drives) you only get the Gen4 speeds so only 16GB/sec not 32GB/sec.
If your seeing something else let me know I'd be interested in how you got there.
@@rdiricco I recently upgraded & see my card running at 4.0x8. It’s technically the same as 3.0x16. Will there be a difference?
I'm fairly new to your vids but I love how thorough and articulate you are. Thanks for all the work you guys do!
You guys keep on pumping out content. Really gratefull for everything you do. So much to learn from your videos!
i was just debating upgrading from gen 3 to gen4/5 with a full mobo and cpu upgrade. thanks for saving me some money for the time being.
Always interesting. I still run my GPU, a 3070 at 3.0x8 because I use the other x8 for an infiniband card. The performance hit is about 5%.
Very interested on why do you need an infiniband card. Isn't that a server thing?
@@МальвинаКотик-л1ъ it is indeed. While the windows drivers require a special version of Windows pro and Windows Server, the Linux drivers are free and old infiniband gear is dirt cheap. I use it as the backbone for my NAS. This leaves me in the ridiculous position of running my Windows machine in a virtual machine where it's basically the only thing on the device so that I can use Linux drivers for my storage. Realistically the speeds are about equivalent to what you get with 10 gigabit Ethernet, but far lower CPU overhead if you do it right. And when you can get a 8-port switch, nominally 40gbs per port, for $89? It's a great way to get performance cheap.
I also have to run my gpu at 8x pci, because both nvme slots are occupied, so im actually glad the rx6600xt only uses 8 lanes.
@@edwardallenthree imteresting.. what kind of files system are you using? is cpu offload due to RDMA ?
@@marekkovac7058 ZFS on the server, but NFS over RDMA to share it. Network performance exceeds performance of the array, significantly, even with NVME caching.
You guys rock! This was near enough exactly the info I was looking for. I have an older PC which I use as a secondary machine (formerly my main PC) that's all PCIe Gen 3.0, which currently has a pair (SLI) of GeForce 950s in it. I don't remember why I went with that build specifically, but I remember there were reasons. Anyway, now that prices on last-gen GPUs have come down a bit, I've been thinking about giving it an upgrade. The options I've been considering are between a 2080 Ti (Gen 3.0) and a 6900 XT (Gen 4.0). But I wasn't sure how much the PCIe Gen of the MB would negate any advantages the 6900 XT had over the 2080 Ti. Thanks for this info!
PCIe 3.0 x8 would have been interesting to see. While most people buying 4090s are unlikely to still be using PCIe 3.0 motherboards, it'd be interesting and useful to know just in case.
Well its the same as gen 2 x16 so if someone tests that scenario at some point, you can go off that.
PCI E 3.0 x 8 can already be overloaded by i think it was 1080ti.
@@12Burton24 Not really, there are test done that show that PCI E 2.0 x16/ 3 x8 has a loss of 3 to 10 fps tops, which honestly is within margin of error.
I was super curious about this, thanks for covering it!
The real problem with PCI 3.0 occurs when u have a graphics card that only has 4X or 8X bus, like many AMD gpu's. I have some performance loss with my RX6600XT on PCI 3.0.
Thank you very much! Thanks to your testing, I bought a PCI 4 motherboard for ~$200 instead of paying almost ~$600 for a PCI 5 motherboard.
I believe the reason 1080p might benefit from upgrading from PCIe 3.0 to PCIe 4.0 is because 1080p nowadays is associated with higher framerates, and higher framerates means the CPU needs to send more commands to the GPU per second (and perhaps data as well).
It's just a theory, though.
This test 100% eased my concern about buying a pcie gen4 board for my new r7 7700x pc
Great video, always nice to test this when new cards come out.
I personally use DeckLink 8K (8x PCIe 3.0), and it does not work reliably over shared chipset lanes at all. They need to be on CPU lanes. So unless one is on Threadripper or something with bunch of CPU lanes it is good to know we can run in 8x/8x split mode on cheaper platforms with no real performance hit.
I don't think it's neccesarily the chipset lanes, but more the amount of them. Asus has an x570 x8/x8/x8 board with the 3rd x8 connected to the chipset. The chipset itself is only connected pcie4.0x4 to the CPU, but that still leaves enough bandwidth to throw 4 12G feeds at the Decklink.
We have a bunch of them with a mixture of quad2's and 8K's in the chipset slot without any issues.
@@inkprod yea It may work on chipset lanes just fine, thing is dedicated CPU lanes are certainty. Back in PCIe 2.0 days I had Decklink with single HD-SDI strugle on chipset lanes. It might work on chipset lanes or it might not.
Thanks for making this video. I had a suspicion that PCI-Express 3.0 would be enough for modern cards and your video and my other research leads me to believe it will be fine. I have an older PCI-Express motherboard (Asus Rampage V Extreme) in my PC that I built back in 2015. After recently upgrading the CPU to a second-hand 6950X, adding more RAM and M2 drive I'm now looking at new graphics cards. I've been out of the loop for a while, so your comment about SLI being dead was helpful too.
I want PCIE GEN 6 motherboards to change the gpu slots to using/reserving X8 instead of x16, maybe leaving it on extreme high end only for the professionals (for who know what cards) and take those extra 8 lanes into more discrete i/o rather than "shared with"
Thank you for testing this because I’ve been wondering about this
I wonder if Resizable Bar is making a bigger difference now with the 4090 than with 3000 series. Could be an interesting video Steve.
Banned the spam account. As for ReBAR - good question. We test with ReBAR on, so it might.
@@GamersNexus Glad you caught them quick, appreciate everything you do! Cheers
Strikes me as a good question relevant to this one, if you're on an older platform limited to PCIe 3 there's a good chance you won't have ReBar either.
Nvidia a cinderblock gpu.. runs at much higher Temps at cost of longevity
@@GamersNexus Do you suppose that NVidia will eventually release a driver that turns on resizable bar on older GPU's, like my 2080TI? I am willing to bet that the hardware supports it, it's just turned off.
As someone who really likes vertically mounted graphics cards, I was blown away by the price for a good PCIe 4.0 riser cable. 100+ € compared to the 15 € I paid for my ROG 3.0 riser cable, that's just insane. Board and GPU both support 4.0, but I am not willing to pay that much extra for a 0.5 to 3 percent performance increase at best. Thanks a lot for this very competent video. I will not buy a 4.0 riser before it provides any actual benefits.
Hybrid graphics is definitely something you could explore as a PCIe heavy workload. Even on desktop it's somewhat relevant as decoding videos on your iGPU is more power efficient than the dGPU + less risk of getting into a memory bandwidth fight with a game
16x pcie 5 has more bandwidth than most DDR4 or LPDDR4 dual channel or LPDDR5 single channel laptops or iGpu desktops, nowadays pcie bandwidth is never the bottleneck with things like CXL, in encoding/decoding your SSD will always be bottleneck, not pcie bandwidth, and dGPU encoders and decoders like nvidia's nvenc can do multiple 4k60 decode/encode streams, even in AV1, so your comment is not valid in 2023, maybe 2015? but nowadays amd and nvidia is way ahead of intel "quicksync"..
As someone with a x370 mobo, a 5800x3d, and considering a 4k series card at some point in the future, thanks for this demo. This gives me the confidence to try and take the same mobo for a whole decade - I built this system in 2017.
I have the same except for a 5900X. I built the original system in 2017 as well with a X370 Taichi. I'm now running a 7900XT and works perfectly good.
Thankyou for testing, I game at 4k resolution & I'm currently using an Intel core i5 10600k which is limited to only PCI express 3.0 speed, for gaming it looks like there is no reason for me too upgrade my CPU then.
It's not margin or error, if it always happens. That's a statistically significant result.
THIS is what I’ve been waiting for! THANKS!
Thank you for this video. I have been waiting for it.
In the real world this basically confirms for me that you can run close to the best CPU and GPU on an old B350 board that has PCIE 3.0 and only lose single digit percent performance from it. Hardware Unboxed showed that B350 boards can run the 5800X3D with little performance lose, within reason of course, you should pay close attention to your VRM temperatures and don't take it too far. I personally won't be going that far, the 5700x is the best 65W CPU on AM4 to my knowledge so that's what I will be upgrading to from my 1700 and now I know that PCIe 3.0 won't hurt at 1440p. Now I will wait to see how RDNA 3 and the the rest of the 40 series scales with the 5700x to upgrade my 1070, I wish GN did that testing but HU usually does great scaling videos as well, they did an excellent one for the 5000 series CPUS with 30 series and 6000 series GPUs. AM4 was truly a great platform for longevity. Thank you for doing tests that answer important questions.
Can confirm. Upgraded my 1600 for a 5700x on a b350 board. And upgrade my gtx 1070 with a 6700 xt. I actually had worse performance wifh the new setup until I realized that the bios update reset the ram’s clockspeed. Once I got it back up to 3200mhz, it’s been smooth sailing
Indeed. The same can apply all the way back to IB/IB-E CPUs on P67/Z68/X79, infact it was possible to force 3.0 even with SB-E on X79 (such as the 3930K, because it was a XEON in disguise). All one is then limited by for such older setups is overall CPU strength, but that's a whole other thing.
I just find it interesting how Gen 3.0 ended up spanning so many years of different boards and sockets (well, mainly Intel), whereas now the industry seems to be skipping through 4.0, 5.0 and beyond a lot faster. People perhaps forget the utility of what once was, eg. my lowly old 4c/8t 4820K (so easy to oc) has 40 lanes of 3.0 from the CPU, so it can actually do multi-device/GPU things which much later SKUs like the 7820X could not (that only had 28), plus relevant mbds had lots of lanes off the chipset aswell, eg. my P9X79-E WS supports x16/x16/x16/x16, half from the CPU, half from the chipset via two PLX chips IIRC; or it can even run x16/x8/x8/x8/x16/x8/x8 using all seven slots. See:
www.anandtech.com/show/7613/asus-p9x79e-ws-review
It all became horribly complicated when for a time the no. of lanes from the CPU depended on the SKU (really horrible product segmentation, it meant slot device support and even whether some slots could be used at all depended on which CPU was fitted), eg. the original 4c CPU for X79 (i7 3820) had 40, yet the much later 5820K and 6800K only had 28. So glad when all that malarky came to an end.
My B350 board not only got support for Ryzen 5000 but it got a bios update for Re-Bar as well. I'm glad I can stretch out pcie gen 3 for a bit longer and not lose any meaningful performance.
Larger margin than i expected was expecting 1-2% difference.
@Gamers Nexus shorts 🅥 @thanks 🅥 This spammer is actually the FieRcE channel on UA-cam, going by "thanks" or Gamers Nexus shorts or Bully Maguire. Don't click it unless you want to give them a free view, very sad way to try to do this. I wouldn't have even cared if you just didn't try to copy and pretend to be Gamers Nexus including using their logo to trick people.
1% is pretty much error/variance in most instances. 2% is real, but still has +/- a bit of range.
Sweet! I was hoping to see this from a reputable channel 👍🏼😁
PCIe generations is more relevent to boards that can bifurcate the GPU slot.
If there's not much difference between gen 3 and gen 4 on a 4090 at X16, it would be useful information to have if the card is limited to X8 lanes in these scaling tests.
One major use for bifurcated slots is direct to CPU m.2 SSD's add in boards, so the X8 bandwidth results would be useful.
This is exactly my curiosity!!
Thanks for the continued barrage of extensive testing GN!
I'm curious to know whether DLSS 3 with its higher framerates might be more impacted by the bandwidth... likely not considering that the CPU is only required for every other frame but might be interesting to confirm as upscaling becomes more common.
it shouldn't. the bandwidth is mostly needed to communicate with CPU. dlss 3 generation happens all in the GPU. it's infact cpu independent. higher vram speeds could help to access the on GPU data from previous frames
"Joe Rules, Steve Drools."
How long has that sign been there in the background?
Thanks for the video. I am getting into video editing and am SERIOUSLY needing an upgrade to my graphics card, but wasn't sure how my motherboard would match up with the new cards. Turns out I'm just a dummy because my MOBO has a PCIe 4 x 16 slot and I thought it only had Gen 3. Even so, I can see upgrading my graphics card would've been a win anyway. Thanks again, love the channel and the no-nonsense data and presentation.
Really glad you did this one. There's a bug in Asus 500-series motherboards with 11th gen cpus and the 4090 where it only runs the card as PCI-E 3.0 (despite running the 3090 at 4.0). I've been wondering how much performance I'm losing.
I'm having a similar issue with a b550i itx mobo from Asus. I'm using a 5800x3d and with gen4 I can't get a display, only when it's set to gen3.
@@NaanStop96 Definitely a BIOS issue. I hope Asus still updates these motherboards...
@@BlueBoy0 likewise, no other issues however other than GEN4 causing no display so that leads me to believe everything else is working correctly and not defective. For now, I'm glad the performance difference is negligible.
I appreciate y'all doing this test. I'm still on a ryzen 2700X with pcie3 and will be buying a 4090. Good to know that I'm not gonna be bottlenecked by that and don't need to upgrade the whole PC immediately
I’m surprised that 3.0 isn’t bandwidth limiting still since this is years into cards with 4.0. Makes me feel better about holding on to my x370 and running a 5800X3D with a 4090
I do think this will continue for awhile, pcie4 5 etc are useful in data centers (and empirically, but maybe not practically, nvme)
I also have an x370 with an 5800x3d and 4090 is coming. Is this really only 1-2% loss compaired to a pcie gen4 board?
@@BrickBazookaGN does really good and thorough testing, I trust it. I didn’t want to buy a new board for an EOL socket and my board has plenty strong of a vrm. How are you like the 4090? I’m very impressed with the FE
@@jolness1 yeah, I'll keep my Board also for my 4090fe. The card is great, but rest should be also latest gen.. later next year I should upgrade to raptor lake or amd 7000
I have x370 with 5800x3d and 4090 and going to change board to b550 because of 4.0 as it cost nothing compared to 4090, it is more about work invoved in switching it. Also i will upgrade my main SSD to NVM 2 TB 4.0 (from 3.0 500 GB). I am using that PC mainly for VR and in Flight simulator 2020 in dense city areas i get quite a lot of stutering, this upgrade should quite help with bottlenecks and it should hopefully solve that stutters. I dont think i will much notice improvments somewhere else than in Fsim 2020. Other mainly racing sims are running fine and are mainly gpu limited, but few more FPS could help in some places to be completly without stutters. VR should be about good imersion and stutters are quite ruining it, so this quite cheap upgrade should make it better and would be shame not to use 100% potencial of that expensive 4090. For non VR gaming i dont think that upgrade make sense, also i dont see much benefit using 4090 for playing games at 4k 60 Hz LCD, it is looking and running almost same as with 3080.
@@kecimalah why b550 , sell 5800x3d and just upgrade everything. That 1-3% from upgrading your pcie Standard isn't worth your nerves
I'm happy to see these results. I'm on a z590 and was going to be pretty bummed if the performance was drastically different.
Please cover VR gaming performance too. 🙏🏽
Thank you for your constant great scientific method. I shop the back of the wave and always try to keep up on what features matter when I buy last year's model.
As a life long R&D lab tech, I think you have great lab skills.
After around half a century in the lab, the only advice I can add to your toolkit is NOTES, lots and lots of NOTES.
I promise you, in 2 years you will not remember accurately. It is very frustrating to have to recreate an experiment because you forgot to write something down about the setup. It also keeps people from picking your results apart.
The difference between PCIe 4.0 x16 x8 x4 would be interesting
A massive thank you for that video! I've just upgraded from my R7 2700 for a 5700x and was wondering if my B450 was bottlenecking my performance. I'm glad to know that I dont need to worry about that!
I feel like this will matter more if you have ANY other pcie card in your system and are running x8. Would love to see those.
It doesn't matter. You either have enough lanes total for all the cards or something doesn't work.
Period….Well said. ✅
I would drop the coin on a 4090, but at this point my entire computer is a bottle neck.
I love your videos, Im just a little confused that all this work was done to show the performance of 3 games would be nice to see some encoding software and rendering (Davinci Resolve and Blender) as well. None the less, appreciate the free resources.
This channel taught a valuable thing like many others, benchmark and research your usecase online before purchases. No more buyers remorse and less ewaste, thanks for the work over the years.
Explain why the quantity of PCIe lanes a device uses are important, and which devices use them, and how many can be used at once.
I would like to know this as well. I don't understand PCIe lane allocation and why they compared 3.0 x16 to 4.0 x8 and now I am further confused when he mentioned lanes from the CPU and lanes from the chipset--pulling lanes from the chipset was possible but not often done. Where are the options for that?
Wow, actually relevant in-video product advertisements... THAT'S refreshing!
I'm not sure how many are gaming with a 4090 at 1080p. More would be gaming at 1440p and the target is 4k I would imagine. I wonder what the difference would be when 4k gaming is factored in.
Nobody who spends that kind of money on a 4090 should game on anything less than 4k especially when the 4090 will barely be utilized in 1080p and 1440p in a lot of games
Still, 3.0 vs 4.0 question clearly is an optimization issue rather than a bottle-neck one.
@@randomyoutubeuser8509 I have a rx 7900xtx and i know that it is slower but not by a lot. I am Playing Escape from Tarkov and Satisfactory (with mods). Escape from Tarkov runs on almoast maxed settings in 1440P with about 55 Fps (On the newest map). Satisfactory runs moast of the time stable at 144fps but has drops to 100. The maximum benefit of the 4090 that i can imagine is 15% more fps becasuse i am not using rt or dlss/fsr. What I want to say is that many games dont even run in 4k. Escape from tarkov could work in 4k with lower settings and dlss 3.0 but this game is so poorly optimized that dlss looks garbage in this game and makes it litterally unplayable
@@dustingarder7409 The 4090 is generally 30-65% faster (depending on settings and features), not 15%.
ua-cam.com/video/f1femKu-9BI/v-deo.html
I use 7900XTX in 3440x1440 and definitely wouldn't mind a 4090, but that's just too much for a GPU!
Side comment. I appreciate these shorter videos once in a while. Makes it easier to fit in a break.
So when they tell me i need PCIe 4 for a 4070 ti, and i only have PCIe 3, I dont need to go out and buy a new Motherboard, in fact i will hardly notice the differance!?
Really appreciate this testing and insight! There are still a few things that an RTX 4090 can't quite handle - in my case specific to some PCVR titles at higher resolutions and when relying on super sampling at 4K to address cosmetic issues with games that don't support DLSS. This video was helpful as I was specifically wondering how much the PCIe 3.0 on my older generation motherboard might be a limiting factor and what options (if any) there are for boosting performance a bit more. Thanks again for another great video!
So even with a 4090 the most powerful GPU on the market the difference between 3.0 and 4.0 is marginal.
Now that is interesting because it shows that PCIe is very future proof.
Good to see PCIe 3 is still good for the foreseeable future.
On AMD B550 though gen 4 is giving me issues. Think B550/X570 motherboards or the chipsets themselves were not really ready for PCIe gen 4 with the issues it had and has with gen 4.
I have a z490 motherboard and read that m.2 speeds don't improve gaming much. It's mainly for transferring files, I also learned that pcie 3.0 and 4.0 isn't a big enough difference for gpu bottleneck. It's mainly bandwith related
And I have the z390. And using 2 nvme drives downs affect the 16 lanes on the first piece port.
@@paullasky6865 what?
@@Skippernomnomnom people keep saying that if you use both m.2 drives you lose pice lanes on the main slot. It isn't true.
@@paullasky6865 Yeah. I have two nvme drives and 4 HDD drives on pcie 3.0 and have no issues
You will probably see the benefits of PCIe 4.0 in machine learning applications. Try playing Leela Chess 0, and look at the evaluations per second. The bandwidth plays a huge role there.
Gen 3 is legendary!
At least you watched the video.
I did not expect there to be almost no difference. PCIe 3 is perfectly ok for even the best of the best.
At this point, I'm surprised video cards don't just default to using 8 lanes when using PCIe 5.0 (and beyond). PCIe 6.0 is already around the corner, and the 7.0 spec is expected to be finalized in 2025. There really is no need for video cards to be defaulting to 16 lanes at this point.
True, might have some problems on 3.0 or 4.0 if they start using 4 lanes on 5.0 and above
when will you guys post 2022 best cases and coolers? maybe also best fans if you guys have time? looking forward. love what you do
I feel like this is as good a place as any to ask: on motherboards where the PCIe 5.0 x16 slot goes to x8 with the PCIe 5.0 M.2 occupied, do you only get an effective 8 lanes of PCIe 4.0 with a 4.0 device? That is to say, are you limited by both interface compatibility and lane bifurcation simultaneously?
Thanks for this. For some reason my card is stuck on 4.0 x8 and I don’t want to go through the whole return craziness especially since I have a custom loop. Anyway thanks for these types of videos you never know when it’ll help
Since this testing shows that the bandwidth difference between PCIe gen 3.0& 4.0 is negligible at x16, I'd be interested in seeing how PCIe 4.0 x8 would do... Will fewer lanes have latency issues, or expose driver problems?
PCI E 4.0x8 will have little advantages ovet PCI E 3.0x16 because CPU lanes and m.2 Drive Lanes are also PCI E 4.0 the Bandwidth itself is the same foe 3.0x16 vs 4.0x8
@@12Burton24 Yes, I said the bandwidth was essentially the same, but testing would show any latency or driver issues with just 8 lanes, and that's what I'm curious about.
@@jgorres And i said that everythingnis faster so even drivers are the same you should see a difference 😉
@@12Burton24 ??? I'm not understanding what you're saying.
Thanks for this . It is good that the board speed is a head of the need of the graphics cards, showing that this will not be the bottle neck for years to come :)
Wouldn't the main difference be in loading the game? Like how fast the memory gets populated?
thankyou for this video, I have a 10980XE and not planing to change platform for a few years, so I will run my next graphics card at Gen 3 x16
you get cpu bottleneck with older cpus before you hit pcie limitation
I think Steve may be missing the really interesting findings in his test data. He observes that PCIE Gen 3 produces about the same fps as PCIE Gen 4 from most games at 4k.
This means that that bandwidth requirement is only about 15 GB/s to support a roughly 200 fps. This means that only about 75 MB of data is being passed from the CPU to GPU in order to build a given frame, way lower than the data output of the GPU.
If we scale up to 8k, then about 300 MB of data per frame would need to be sent (rough scaling). PCIE Gen 4 should be able to support about 83 fps at the bus limit for 8k gaming. At these speeds, card performance is the limiting factor, so it'll be less, but this tells us what the bandwidth limit for 8K gaming is. The 4K gaming bandwidth limit for PCIE Gen 4 would be 4 times that or 333 fps.
This will vary depending on the game, but this shows rough scaling between PCIE generations and according to display size. So Nvidia was probably right to leave the 40 series at PCIE 4, since the card throughput will limit the fps, not the PCIE throughput.
Over 1000 pounds for 4080 lol...nvidia trying miner prices still. I used to buy every gpu release not now..not at these ridiculous prices. 800 pounds is the top price the 4080 should be...l will keep 3080 another two years at these prices...l really hope massmarket ie outside of enthusiasts...refuseto pay this much..anyway l refuse besides happy with 3080...
Do the same now for 3070, differences should be much higher in contrast to 4090, due to much smaller VRAM, similar to what happens with 6500 XT.
3070? Did you typo? I'm pretty sure the covered the 3080 PCIe 3.0 vs 4.0 like 2 years ago and there was no meaningful difference, so there wouldn't be on the 3070.
6500XT is 4 lanes, thats what tanks it
@@Cruor34 Oh wait, they totally did, completely forgot. But 3080 has 10 GB of VRAM, whereas 3070 only 8, so there'd be much higher usage of PCIE bandwidth, due to constant VRAM data shuffling, once they fill 8 GB of it.
We already did that on the 6500 XT. And we did that on the 3080.
nah, i dont think so. the 3070 is simply to weak, to make a difference.
Sincerely, thank you so much for this, I'm currently building an exploded view wall mounted build and I need a 60 cm PCI riser cable which I can only get in PCI Gen 3 so it's really good to know that on my 3090 it will be more than enough
Would have loved to see what happens to fps, when limited to x8 lanes.
With m.2 becoming more widespread, many boards limit from x16 to x8.
Pcie 4.0 x8 is equal to Pcie 3.0 x16, so according to these video's charts, nothing changes
@@BernhardWeber-l5b oh is it? Kinda makes sense, but didn't know for certain whether bandwidth precisely doubled or even if it correlates linearly at all. Thanks for letting me know!
@@BernhardWeber-l5b Im looking at a 4080 for an upgrade from my 2080ti. I have an nvme boot drive which is forcing my top slot into x8. It's an x470 board and gen 3 to boot. I feel a bit silly asking this. But is it a good idea to upgrade to an x570 board with pcie gen 4? Or am I worried about nothing
@@DasFlank It may have some impact but how much I can't say. I would either get a b550 board or move the m.2 SSD to a PCIe X4 adapter bracket to allow the top slot to operate at x16 speeds.
This comment actually has more relevance than you'd think.
Lots of z790 boards, such as the z790 Aorus Master, run their PCIe lanes in x8 if you have a m.2 in the top CPU slot. There's going to be a ton of people who want to run Gen5 m.2 storage when it becomes more widely available, how much is this going to impact GPU performance if at all on these boards?
Nice one Steve,👍 I have a Z690i ITX Gigabyte lite motherboard on order & only supports up to PCI-E 3.0 x 16 slot
So would it be preferable to plot an upgrade path through the 40 series if im going to stay at 4th gen PCIe for a little while? Or would it be smarter to look into high-end 30 series?
Um watch the video? You will get your answer...
I just watched a PC repair short video from GamerTechToronto where they discovered that the mobo's top pci-e slot was defective. So they plugged the customer's RTX 4080 into the bottom PCI-e 3.0 x4 slot and called it a day :D
So its about a 1% to 3% difference. Is it worth not slotting an m.2 into the same lane to keep your 4090 GPU on x16 lanes gen4 then?
Difference is minimal, you can just use M.2 on the same lane
I'm not upgrading my PCIe 3.0 PC yet, got a 5950x/3070 and soon going over to AMD. I still game at 1080p with my 360hz panel, I love the high FPS for now later on I'll go 1440p😁 But if the FPS are already in the 100s or 200s like in the benchmarks here at 1080p then we are good for quite a while 👍 What I want to see is how future games will perform like games using UE5 for example.
The 7000 series AMD cards will support PCIe 5. This testing would be cool to see with those cards.
Correction. They too are only PCIe 4 cards. Bummer.
🤣
Already knew that from your previous testing of PCIE Generations before no?? Look forward to seeing the different Gen PCIE versions and Lane allocation add up against one another!!
Steve, just curious if you have ever considered building and selling a "GN Mark" hardware benchmark test suite along the lines of the upcoming LTT "Mark Bench"?
No, we use all our stuff internally only. It's dangerous to build and distribute that type of thing without an insane amount of controls (that we won't have over user setups) just because it can easily produce bad data that runs rampant for comparisons online - e.g. user benchmark. It's possible. FutureMark has done a good job. But we're not going to try and do that as we aren't set up to do it in a way that I think is responsible (e.g. would be too easy to generate bad data that feels like good data to the users)
As a long-time fan of the channel, I'd just like to say this: *Joe rules, Steve drools.*
Back to you, Steve.