Good stuff! I'm still on an Intel i7-6700 PC build, so anything would be a nice upgrade at this point. I do video/photo editing, streaming, and adjacent creative stuff more than gaming. So far the Ryzen 7900X looks a lot more appealing price-wise, and I hope it continues to get cheaper.
Here's an idea for a benchmark: game + OBS streaming (or recording). Many workloads today aren't just the game running by itself, so I wonder how it would all look in a scenario like the one above.
First! And the first Ryzen 9 reviews I've seen! That admin/Windows trick thing makes me think that a couple months from now these processors will be a little bit better, once either people or AMD/Windows figure their stuff out. Also still makes me excited for the X3D variants.
Almost certainly, and arguably was with the previous gen CPUs as well, though it'll always come down to Linux compatibility for any particular application. I wish Wendell would test Factorio. It's one of the few games that is heavily bottlenecked by memory performance, much more than CPU/GPU. Its (native!) Linux build has always run better than Windows, for a variety of reasons - the most compelling being Linux's support for Large Memory Pages.
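(For anyone curious what that looks like from a program's point of view: a minimal sketch of the Linux side of large-page support, assuming transparent huge pages in madvise mode. The helper name is made up for illustration, and no claim that this is what Factorio actually does internally.)

```cpp
#include <sys/mman.h>
#include <cstddef>

// Allocate an anonymous region and hint the kernel to back it with 2 MB
// huge pages; MADV_HUGEPAGE is only a hint, the kernel may ignore it.
void* alloc_huge_hint(std::size_t bytes) {
    void* p = mmap(nullptr, bytes, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) return nullptr;
    madvise(p, bytes, MADV_HUGEPAGE);
    return p;
}
```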
After jumping ship from the MS advertising and spying platform I was shocked to see that Linux gaming is now not only viable but often on par with Windows when running through Proton. I find it not at all surprising that Windows gets outclassed by Linux on these new processors, as Linux is an actual operating system with a strong focus on performance, as opposed to a vehicle for tracking its users and sending them ads. Where you put your dev efforts matters. (Also Windows is a house of fucking cards)
Still rocking a 12-core 1920X Threadripper in my home server. I love that platform. Seems like yesterday, like you said. If AMD ever launches a 24- or even a 32-core desktop processor, I'll make the jump. But for now the Threadripper lives on.
If you want to max out the RAM on an AMD platform, going with ECC is probably the best way to do so. 48GB ECC UDIMMs are available and the prices are decent at $200-250 each. I mean, with that much RAM, random bit flips from background radiation are inevitable.
Depends on what you're doing - the extra bit flips from large DDR5 sticks are supposed to be handled by the ubiquitous on-die ECC anyway. Of course, not many workloads actually benefit from that much RAM, so you're already selecting for things like home servers, where full ECC is probably still a good call - but it's not mandatory for all high-memory systems by any stretch.
I still side with Wendell on this one. There's something genuinely wrong with Windows. Even the X Elite is a letdown mostly because of Microsoft. Only time will tell lol. I did the right thing and bought the 12-core R9 for $280 brand new one month ago.
I've got a hunch that the efficiency improvements realised on the ARM side are probably at least partially due to Microsoft having to strip out a lot of 30-year-old junk, and they could potentially be realised on x86 too if they got their act together and made Windows work properly instead of focusing on creative ways to juice their metrics and shove ads in users' faces.
For the memory, I had problems with 2x2 sticks on my AM4 board with my 5800X3D; the system would not boot AT ALL if a specific stick was not in the right slot. Once booted, I could apply the 3200MHz without problems. The layout is like you show, each pair on the same channel (and there's a specific order; I lost hours trying to understand, and just brute-forced all the possibilities until it worked), unlike what's specified in the manual.
Interesting comparison at the beginning. Zen 5 is really starting to feel like Zen 1 from the past - something that will be better in future generations, Zen 6, 7, etc.
If you want to run very high memory clocks/timings, keep in mind that your memory will degrade over time if you run it at high voltage (even if your EXPO and XMP profiles say they are designed to run at that voltage). You can run DDR5-6000-8000 at 1.35 or 1.4 V, but it may not be able to run at that speed after 2 years because it degrades very fast. Some mainboards also inject higher voltage than you set in the BIOS; I found some ASUS boards giving the DIMMs 1.38 V instead of the 1.35 V setpoint.
I find it interesting that the 9950X beats almost everything else in minimum (0.1%, 1%) frames in most benchmarks. That's something I would like to understand more. I wonder how much smoother that feels.
@@calldeltosell Steve @ GN & Wendell are friends. They view CPUs & GPUs from different viewpoints. GN is a gaming channel; Level1Techs is an all-around performance channel.
Thank you for the comprehensive look-see, Wendell. 🙏🏼 Your views on these new parts (together with those of _Hardware Busters'_ Aris) are a *refreshing* (see what I did there?) departure from all the _weeping & gnashing of teeth_ that I've borne witness to on YouTube lately. 👍🏼 P/S: The check is in the mail. 🫰
I'm still slumming it on an X99 platform, running everything - all those games plus more - just fine. What helps is a 4K TV, as you can use a desktop window mode for gaming; the size of the desktop window is similar to a large monitor (e.g. a 1920x1080 window is 27 inches).
I do agree that Intel giving up on AVX-512 was a big shame. I rarely give AMD credit, but I do commend them for keeping it around. What many don't realize is that AVX-512 isn't just the extended registers and vector length; it also comes with strong optimizations for previous instruction sets like AVX2 or older, which many apps use. Sadly AVX10 will just be a band-aid.
The reason that 4K is not commonly tested in CPU reviews is that 4K is primarily GPU-bound. You want to create a CPU bottleneck at 1080p, with the GPU maxed out and no upscaling, which allows different CPUs to be easily compared.
Upgrading from Zen 2 is exactly what I'm considering, and I also had my eyes on the 48GB DIMMs. My current PC will probably be used to upgrade my main server, which is a Zen 3 Ryzen 5 on a 'B' board - which I came to discover was a major mistake. I want to virtualize and consolidate all the random stuff I'm hosting on RasPis, so I tend to think the Zen 3 → Zen 2 downgrade will be worth it for double the cores, and I'll happily use the extra PCIe I lack with the 'B' board. Intel GPUs are also dirt cheap right now, so I'll likely grab an A380 for Plex encoding.
Thanks a lot for the unbiased review. You brought up a lot of important subjects that we mostly-non-gamers really like hearing about - like what to expect from DDR5 in a 4-DIMM setup. I am a developer, so I use my system mostly as a server. Currently I have a 5950X, which is a fine little beast for this, but I see it being constrained on memory (DDR4) bandwidth when I start to load the 16 cores up in my applications. I tested with a 7950X3D, and the way my programs allocate memory/CPU cores doesn't benefit much from it vs. the 5950X. Therefore I hoped the 9950X would be a worthy upgrade. I had hoped we'd see an 8x Zen 5 + 16x Zen 5c edition, but it seems that was a dream not coming true before Zen 6. Likewise I had hoped for an IO die with 1GB+ of L4 cache shared between GPU/CPU, a bit like Intel's Broadwell... but no. So even though Zen 5 is both faster and cheaper than Zen 4, I guess it's going to be another wait until Zen 6 gets out... Again, thanks for the review.
This channel brings meaningful reviews for me: 1. Linux-based testing for the PRODUCTIVITY use case; 2. a non-gaming, balanced summary to tell us if it's something worth considering. I'm planning to upgrade from a 3700X to the 9700X.
5:32 "PBO doesn't add anything" - this is not true. You have simply toggled it on without tuning it. Using Curve Optimizer will result in the same watts as stock and yet better performance and lower temperatures than just PBO. I wish people would use the tools given by AMD to benefit themselves. It's just odd to shoot yourself in the foot, especially as an enthusiast. Leo from KitGuru has demonstrated this.
Thanks for another look at Zen 5 :-). A nice follow-up would be to dive deeper into the new Zen 5 microarchitecture vs. Win 11, basically to show/explain the current seemingly strange benchmarking results... and to see what could be done to Win 11 to use the Zen 5 architecture fully.
Hey Wendell, after reading about the "Administrator" account performance gains, my thoughts went directly to large page support. It's locked behind a security group policy AND requires running the exe elevated. The GPO is under Computer Configuration -> Windows Settings -> Security Settings -> Local Policies -> User Rights Assignment, called "Lock pages in memory". Linux, iirc, has large/huge page support by default. This was/is commonly used for mining applications, where the gains are pretty similar. Now, I know there are some stability implications with memory allocation, and I can't confirm if that's the main difference from the disabled Administrator account. Since I don't own a 9xxx series, maybe you could test if this is the setting that gives that extra performance?
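(A minimal sketch of what that setting gates, assuming the standard Win32 large-page path - the helper name is hypothetical. Without the "Lock pages in memory" right and elevation, the VirtualAlloc call below simply fails.)

```cpp
#include <windows.h>

// Enable SeLockMemoryPrivilege (granted by the "Lock pages in memory" GPO),
// then allocate with MEM_LARGE_PAGES. Size must be a multiple of the
// large-page minimum (typically 2 MB).
void* alloc_large_pages(SIZE_T bytes) {
    HANDLE token;
    TOKEN_PRIVILEGES tp{};
    OpenProcessToken(GetCurrentProcess(), TOKEN_ADJUST_PRIVILEGES, &token);
    LookupPrivilegeValue(nullptr, SE_LOCK_MEMORY_NAME, &tp.Privileges[0].Luid);
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    AdjustTokenPrivileges(token, FALSE, &tp, 0, nullptr, nullptr);
    CloseHandle(token);

    SIZE_T large = GetLargePageMinimum();
    bytes = (bytes + large - 1) & ~(large - 1); // round up to page multiple
    return VirtualAlloc(nullptr, bytes,
                        MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                        PAGE_READWRITE);
}
```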
I'm running a 2950X as a streaming PC, so I have some PCIe cards in it for capture, and there might be other things soon. The 5600X in my gaming PC isn't THAT far away from it in processing power, and it's fascinating. For processing power, I could definitely go with many AM4 and AM5 CPUs for the streaming PC, but I'd lack the PCIe lanes, so any upgrade will be very expensive.
If you ask me, the most valuable info in this video was the little "RAM break" talking about how and which RAM runs, since Ryzen CPUs basically "stand or fall" with the RAM choice.
AMD on purpose had board partners remove CPPC from certain boards' BIOSes. There was a function for it on my Gigabyte board... but that was on the F5 BIOS, the last thing I saw before the SOC voltage issue had all the board partners re-tuning, and I noticed the majority of partners just removed it altogether. I'm really not sure it's even working on my 7900X... is it under frequency or driver control? It's why Linux runs faster: its scheduler is different from the Windows NT scheduler, which runs on a different layer, in driver mode; a microkernel running like that gives the user more control, while the Linux kernel does not give the user more control over the hardware. I just hope the Linux side doesn't screw it up, since we have no control over what they adjust, and we Linux users don't have any freedom to control it by software ourselves.
I bought the 7950X, but the 9950X looks like it might have an advantage, though I'm unsure on stability/balance/cores, where the 7950X is at the top. Both the 9900X and 9950X are showing mixed single-core results, so the 7950X is definitely a sweet spot. Not sure if it's worth upgrading yet.
@@aelderdonian that statement is generally true if the update is promised by the same company that sold you the hardware, where the company is not financially incentivized to provide you software updates down the road, but we are talking about Microsoft that needs to fix their OS and specifically their scheduler, AMD has already done their job and their hardware works fine on Linux, it's 100% Microsoft's responsibility to fix their shit here
But Microsoft would have several antiviruses running on each core - the search engine in Windows 10 is as slow as running Cyberpunk on a single-core CPU, and has been for the last 8 years - I guess it has to confer with Microsoft about whether I'm legit before letting me search my own hard drive.
With these prices in mind (25% tax included (EU)), what would you pick for gaming AAA online sandbox shooters and 3D modeling? 9950X = 800€, 7950X = 620€, 9900X = 560€, 7900X = 430€. I plan to have a single 34-inch QHD IPS 1440p monitor, 64GB 6000 CL30 RAM, and possibly pair it with a 4070 or 4080 GPU. ❤ I'm building a new PC from scratch, after 7 years. I would like to keep the CPU air-cooled as well. ❤
Just bought a 9950X and an MSI X870E MPG Carbon WiFi motherboard, and the USB disconnecting issue still exists/has come back, meaning the platform is unusable... yes, the one that surfaced over 3 years ago when X570 was launched! Such a pain, and no influencers/tech sites are covering or mentioning this, so it would be good if influencers got together and held AMD and motherboard manufacturers accountable for this!
Something is very wrong with your Cinebench results on 7950X (1:26). Steve from HUB got 2201, my own system tuned for efficiency (manual 5.0/4.85 GHz @ 1.075 V) got 2062 against 1706 here. In Steve's video, 7900X gets 1697
I got Rocket Lake because of AVX-512 for RPCS3. Love my 11400, but even it can light on fire with a Z board and full limits removed, especially with AVX-512. 150W limit for that chip, 175W burst for my twelve year old Hyper 212 Evo (with new fan) before I get uneasy with temps.
I appreciate this review. I'm someone who just started buying my parts in preparation for this CPU. I'm coming from the Intel i7-7800X and desperately need a new CPU, but when the rumors started circling around about the 9000's I decided to delay building a new PC. Seeing all the doom and gloom around it was a little concerning, but I mostly use Blender/productivity software so seeing the boosts there had me pretty happy, gaming is secondary, so as long as the game runs better than what I currently am dealing with then it's good for me. Though I gotta say I've been thinking of checking out Linux for a while now since I don't agree with the crap they are pushing into Windows. Maybe I'll do a dual boot so I can check it out.
12:24 Is the same RAM channel A1+B1 or A1+A2? My Gigabyte board says to place the RAM like... 1st slot, skip 2nd, 3rd slot, skip 4th, if only using two sticks - which, if I recall, was labeled A1+B1. Soooo I should actually be using A1+A2 for better performance?
A1+A2 for the first kit, B1+B2 for the second kit. If you have only one kit (2 DIMMs total, not 4), then one DIMM per channel is correct - probably A1+B1 in that case.
It's a great take with a solid hypothesis on why Zen 5 isn't performing as it should. AnandTech did a great job analyzing inter-core latency. Maybe it's not the fact that they used the same cIOD they used on Zen 4 - maybe it's the Windows 11 scheduler that isn't supporting this new uArch properly yet?
In regards to administrator providing better performance: this has been known for many years amongst the game-cracking scene. Many cracked game installers will run the game as admin by default. There are also other reasons for this, like trying to reduce problems people may encounter, but this can obviously be abused by dodgy files. Running as admin certainly won't provide performance benefits with every single game, just occasionally. I've no idea why this happens, but it's been a thing for a very long time.
finally a comprehensive review. Gen-on-gen improvement is unimpressive, but fingers crossed it's a stepping stone to something better (10050X perhaps?)
One thing I haven't seen: If you use something like Proxmox, NUMA correctly configured to use a correct CCD and no SMT passed on to the Windows VM, how does the game work? What about Linux? I find that sometimes you can tweak KVM to do stuff that can get pretty close to optimal performance in baremetal. Thanks for the great work!
Bit out of context (or maybe not), but after some recent Windows updates I have noticed that my Python scripts take way too long to start running, probably due to Windows Defender, even though I have all my development folders in the exception list in Bitdefender. However, Windows Defender appears to be turned off in the security settings, so I don't really know what is going on. Maybe other people have similar issues as well. I don't recall having such issues a few months ago.
I am on Zen+, so based on your conclusions I definitely need to upgrade :D My problem is deciding between the 7950X3D and the 9950X. I would run 4 desktop VMs during the day; in the evenings I need something to game on, with occasional finite element analysis and CT image processing for work. I am wondering if the 3D V-Cache could justify the older generation for my mixed-bag use case.
Yes, you should get either (a) a 1-CCD chip, aka the 9700X, or (b) the 9800X3D, because you don't really need tons of cores - they just need to be fast - and the 9950X still has latency between CCDs, and one CCD actually runs, I think, 300-400 MHz higher than the other. In short it's a waste for gaming; I mean it's OK, but there would be no difference vs. using a 9700X. In fact, with 350€ left in your wallet, you could get a better GPU, which would get you more. So yeah, 9700X... but if I think more, nah - in the long run pay 150 more and get the 9800X3D.
Hmmm. I’ve been running 64 GB DDR5 with 4 DIMMs at 6000, just using the EXPO1 profile since day one with my 7950X. Never had a problem. Programmer and gaming in 4K.
Thank you for the nuanced review of Zen 5, I really appreciate the Linux experience in addition to the Windows experience since I dual boot both.
Did I mention that I use Arch Linux?
I'm now a pure Linux user on my desktop. Gaming with Bazzite. Best experience since Windows XP. 😁
I say "desktop" because I use a MacBook Pro as well. The macOS we know today was born from Unix (NeXTSTEP), so I guess I've just favoured that sort of environment over the past few years.
@@eQui253 Nowadays NixOS is the new Arch Linux. I use NixOS, btw.
@@eQui253 nice, I’m using Debian 12. Arch Linux is pretty nice although I wanted to slow down the updates especially since I wanted to use ZFS and not have an update break anything.
LFS is also a bunch of fun to compile. :)
@@Fractal_32 I don't use Linux.. it's a meme.
Windows is doing something strange. It's busy trying to load adverts into everything.
Everyone needs a threadripper to load all the ads !
I can just de-bloat and strip down Win11 when update support ends for Win10 next year, right?
@@handlemonium Why bother? Just run a decent OS.
Windows wants to see where you are going in games so it can advertise - they plan on putting ads in new games.
@@andljoy Doesn't really help when your productivity software is Windows-only.
I'll stand by this: asymmetrical CPUs, where simple core-scheduling mistakes can cause a massive negative impact, are simply terrible CPU design. If you want to save power or costs, then just buy a cheaper CPU - and you also get consistent performance on top, for free.
As a game dev I can say that by default, Windows can schedule the "main thread", aka the main loop, multiple times per frame onto multiple cores, and the same goes for any worker thread. Moving to another core is costly either way: the scheduler carries over all the registers, internal registers, and stack context, but the new core's caches - L1, and partially or totally L2 and L3 - start cold (and cache sizes are getting bigger and bigger, so that's a lot of state to rebuild). Every memory address the thread then asks for is a cache miss, refilling the caches for that call. So the first execution of the thread on a new core is very costly compared with staying at least one frame - and better, many frames - on the same core.
On consoles I always pinned all my threads to a specific CPU. On Windows, yes, you can see some gains by pinning a thread to a CPU, but you can also see that the lowest fps gets lower, in some cases much lower.
The problem: Windows is not a console. On a console we get a guarantee that a list of cores is for the game, and that no system thread will run on those cores. On Windows, you can pin your thread to a core and run your main loop as fast as you can, and it works - for some time - until Windows decides that your thread has run long enough and interrupts you as soon as you make a system call. That call, instead of lasting 80ns as usual, can last 100, even 400 ms, because Windows decided to use "your core" for something else, trashing the cache at the same time. Cohabitation is possible, but it's very, very hard, and when you think you have the solution, you find that sometimes, when you start a thread, it starts only 100ms later... those are the usual lags and stutters in Windows games. So thread pinning = good; on Windows it gives some gains, but also some losses, which can be bigger. This is the reason most musicians prefer macOS: the thread scheduler, not having those 100ms lags that an app under Windows can have. 100ms for a game, or for MIDI or wave audio, is a lot. And the cure is to add a one- or multi-frame delay, fixing the frame rate but introducing lag.
TLDR windows thread scheduler = bad
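(For readers who haven't done this: a minimal sketch of what "pinning a thread to a CPU" looks like in code on both OSes - an illustration of the technique described above, not this commenter's actual engine code.)

```cpp
#ifdef _WIN32
#include <windows.h>

bool pin_current_thread(unsigned core) {
    // Affinity mask with a single bit set: "only run me on this core".
    // Note: Windows can still preempt this thread for its own work.
    return SetThreadAffinityMask(GetCurrentThread(), 1ull << core) != 0;
}
#else
#include <pthread.h>
#include <sched.h>

bool pin_current_thread(unsigned core) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(core, &set);
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set) == 0;
}
#endif
```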
Generally speaking, do you think there's a notable advantage/disadvantage to scheduling in Win11 vs Win10?
@@JJFX- I still need to try Win11.
PS: The "solution" on Windows is to not CPU-starve the OS.
So, give the OS time to do whatever it wants, but at a time that "suits" the game engine.
Usually by adding a simple sleep(1);. The 1ms, in this case, is not guaranteed; it depends on the OS.
The more starved the OS is, the longer the sleep(1) takes.
So, the game manages the OS by giving it a percentage of its time (upside-down world). It's not perfect or "efficient," as you lose "compute time," but it works.
I have yet to try, "for science": there are system functions to control whether a core can be used by Windows! What happens if I turn my PC into a quad-core for Windows and keep all the rest just for my code, to profile actual performance?
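(A sketch of that trick in its simplest form - a hypothetical main loop, not anyone's shipping engine. Sleep(1) on Windows really does mean "at least ~1 ms, often more", exactly as described.)

```cpp
#include <windows.h>

// Yield to the OS once per frame at a moment the engine chooses, instead
// of letting Windows preempt the frame at a random (worse) time.
void main_loop(volatile bool& running) {
    while (running) {
        // update(); render();  // the actual frame work

        // Sleep(1) requests ~1 ms but only guarantees "until some later
        // scheduler tick" - the more starved the OS is, the longer this
        // takes. (Engines often also call timeBeginPeriod(1) from winmm
        // to shrink the scheduler tick.)
        Sleep(1);
    }
}
```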
Does thread priority have any effect on this problem? Does realtime priority fix it at all?
I know nothing about this stuff just curious
@@tiedye001 Same as CPU affinity; you get more time, but you "piss off" more threads with an "important" job, so when the OS can interrupt you, the lag can be violent. Thread priority is suitable for a 3-4 ms task that you want done fast, for example; it's not ideal for constant performance 100% of the time.
This is why a console can beat a more expensive PC by 20% to 30% with a similar CPU at the same frequency (Rainbow Six Siege) - Windows vs. a PS4, for example.
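(The priority knob being discussed, for reference - a trivial sketch, not a recommendation:)

```cpp
#include <windows.h>

// Raising priority buys more time, but as noted above, when the OS finally
// does interrupt you the stall can be violent - it suits short bursty
// tasks, not constant frame pacing.
void boost_current_thread() {
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
    // THREAD_PRIORITY_TIME_CRITICAL / REALTIME_PRIORITY_CLASS go further,
    // but can starve input, audio, and even system threads.
}
```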
Linux reviews from this channel and the Phoronix review clearly show a major difference between the 9950X and 7950X, whereas tech sites and channels using Windows to review the CPUs show the gains to be marginal, or even regressions in some cases.
This sounds like a Windows issue more than anything. Windows really sucks.
Most benchmarks here come from Windows too, it seems; it's all mixed up, unluckily.
SO, the most widely used platform that's been out for years... and a new piece of hardware comes out where the hardware company makes the drivers... and it's the OS's fault?
@@1DigitalFlowno it's not the OS' fault. It's AMDs fault
@@InfernoTrees I know that.. that's why I am replying to a comment that says "windows sucks"
@@1DigitalFlow oh I know I'm trying to back u up xD. Probably could've come off better but, windows does suck ass, but jfc AMD did NOT cook LOL
Thank you Wendell for an amazing and truly honest review.
A couple of my audio plugins can use AVX-512. For example, 2CAudio Breeze which is a nice reverb. I hope Zen 5 encourages audio developers to start using AVX-512 more.
The problem with AVX-512 on the CPU is: how likely is it that a GPU wouldn't run the workload infinitely better than the CPU?
@@budthecyborg4575 GPU isn't great for audio because of latency
@@HolarMusic Good to know.
The only application I have for AVX-512 right now is the Topaz AI upscaler and GPU performance is an order of magnitude better than CPU.
@@budthecyborg4575 As far as I'm aware, AVX-512 workloads are still very much in the high-complexity territory that CPUs beat GPUs at. GPUs tend to be best for stuff that can be broken down into a lot of very, very small tasks (or AI, but that's in no small part because of dedicated AI accelerator hardware).
@@bosstowndynamics5488 AI inference is for the most part just multiplying your input with all the weights and biases in the model, and AI training is just fancy matrix calculus; both run really nicely in parallel.
Phoronix just released his review, and the numbers on Linux are just incredible. Too bad the only thing we are going to hear is how bad these chips are for gaming...
He came up with a geomean of around 17.5% over the 7950X.
The loudest crowd most of the time is the smallest (Gamers)
@@ThaexakaMavro and most are gamers that wouldn't have even bought one of these in the first place, but le circlejerk
That's the thing: the average results are heavily carried by AVX-512. Nothing to do with Linux/Windows.
I can flip it and say that a 20 to 40% perf increase from doubling the execution-unit width is just pathetic... it should be at least 70 to 90%.
@@panjak323 Which makes me wonder: was Zen 4 just a home run, or is Zen 5 an immature/poor architecture?
@@panjak323 You clearly didn't read the review and are trying to spread FUD, and you don't understand geomean. There are graphs with the AVX workloads removed, and it's still way up. God, the comment section is so brain-dead.
Best review I’ve seen on this, answering real questions about utilization and optimization. Not creating a story out of a handful of benchmarks
Blame HUB and Gamers Nexus, and all of their crony tag-along creators that dote on their every word, for that kind of rubbish. It's not all about benchmarks; it's about real-world utility and performance, which they fail to understand for a 12-16+ core CPU. Reviews from people with real expertise are much better.
I'm tired of people thinking benchmarks matter when someone pairs a 4090 with a $200 cpu
@@moonstomper68 One of these days people will maybe get that they are just amateur wannabe-standards, just because one has a lab and the other overbenchmarks like there is no tomorrow.
@@tuckerhiggins4336 The reason you'd benchmark a $200 CPU using a rig with a 4090 is because you don't want it (and all the other CPUs it's being compared to as part of the testing) to be constrained by the GPU.
If you tried benchmarking a $200 CPU in a game and used a $200 GPU as well, all the CPU results on the higher end of the chart are going to cap out at the same framerate because the GPU is the bottleneck.
The benchmarks are meant to compare one CPU to another using the same environment variables in order to show relative performance and value, not to show "how fast does this game run on my PC if I buy that CPU?"
I am so here for your analysis of this scheduling issue; I would really like to know wtf is up with the Windows scheduler, and this seems an excellent case in point. All my machines run Process Lasso, because tbh I have to for audio.
Take your time on it, ima watch the whole thing XD
Michael Larabel at Phoronix loves them. Pay particular attention to the average scores - the average of all creator workloads, all database workloads, all games... About the only thing the 9950 doesn't mop the floor with is power efficiency; the lower 9000-series parts do rather better there. There are a few outliers where Intel 14th gen does stunningly well, but those are outliers.
Compiler performance is stunning!
I'm guessing the 9950x can shine on power efficiency if power limited, maybe set tdp to 105W which would effectively be an eco mode on a 170W default tdp. I want to see those benchmarks.
The main issue is memory bandwidth. If your workload uses AVX-512 in particular, an all-core workload probably saturates the memory bus long before 16 cores are busy - it likely saturates it with 8. It's funny that a 6-core Zen 5 may be the most sensible way to run these types of workloads.
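(Napkin math on why that's plausible, assuming DDR5-6000 in dual channel and a purely streaming AVX-512 kernel - the per-core figure is a deliberately extreme no-cache-hits bound:)

$$ \text{BW}_{\text{DRAM}} \approx 6000~\text{MT/s} \times 8~\text{B} \times 2~\text{channels} = 96~\text{GB/s} $$
$$ \text{demand}_{\text{core}} \approx 64~\text{B/load} \times 1~\text{load/cycle} \times 5~\text{GHz} = 320~\text{GB/s} $$

Caches absorb most of that in practice, but it shows how a streaming all-core AVX-512 job can fill the memory bus long before 16 cores are kept fed.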
I'm not a gamer, just a photographer and fine art printer. So, even with all the bad news and confusion re the Ryzen 9000 series, I decided to go ahead and replace my 7900 with a 9900X anyway. I have an MSI MPG B650i EDGE WiFi motherboard in an open case with an ID-Cooling SE-207-XT air cooler, to which I have attached a second fan. The 7900 runs with DDR5-6000 memory, Game-Boost, and the TDP elevated to 105W. It pulls 142W running Cinebench R23 with a high temperature of 79C for a multi-core score of 27550. When I replaced the 7900 with the 9900X, with the BIOS cleared except for EXPO, Cinebench R23 scored it at 32214 with a high temp of 81C. That's a 17% improvement, much better than I expected. Think I'll be keeping the 9900X and seeing how much I can wring out of it with a better cooler. I have no intention of playing the core-parking game. BTW, B&H is selling the 9900X for $50 less than everybody is reporting. They also pay the tax if you use their PayBoo card. No connection to B&H, just a happy customer for many years. Some good news re the 9000 series is overdue. I appreciate that gaming drives the technology, but other users make up a large part of the marketplace, so it seems wrong for gaming to influence the whole picture. Even though AMD fumbled the rollout, the 9900X is still a great CPU for non-gamers.
@JEHendrix what cooler are you using for the 9900x ?
@ - Thermalright Phantom Spirit 120, dual-tower 120mm. Good up to 180W. I'm overclocking to that power level and getting 34000 multi-core on Cinebench R23, 2222 single-core.
This is Linux vs Windows on the Threadripper 2990WX all over again. And every outlet other than Wendell or Phoronix fails to consider a software issue.
Agreed. As much as I love GN and HUB I'm getting tired of them ignoring Linux. (GN shows Chrome compilation is a good step in the right direction.) When SOME people such as Wendell or Phoronix ARE seeing a performance uplift in Linux then *we should be asking:*
_Is the Windows task scheduler problematic for Zen 5?_
My 3950x still does me pretty good. But I don't do much with it.
Exactly. I mentioned it on HUB and their response was to re-test on Windows, not changing shit or bothering to investigate the issue at all. Like, guys, isn't this your job?
@@putneg97 100%.
It is almost ironic that Linux (open source) is keeping Windows (closed source) benchmarking honest. =P
Considering Linux performance is a great idea, but AMD should not release a desktop product unoptimized for Windows without warning.
Microsoft is more concerned with making money off their customers than with optimizing performance, on so many fronts. Market share and the hit to the Windows brand are not on their radar when AI and Azure are making the shareholders purr.
Microsoft doesn't write chipset drivers, and Microsoft doesn't write the microcode for AMD CPUs... so why are you talking about Microsoft?
@@1DigitalFlow Because Linux does a better job; Zen 5 performance is better on Linux
Because those chips are disappointing, this YouTuber had to blame it on Windows, and this is now the narrative for all the fanboys.
@grimfist79 He literally called out that, as a gamer, it's not wise to upgrade from Zen 4 to Zen 5. But there are more workloads than just gaming. You need to stop pretending that everyone who disagrees with you is a fanboy. He said he found some performance discrepancies that he needs to investigate further. If anything, he's more anti windows than pro AMD
He thinks the user is the customer 😂. It's the product!
You're my hero. Finally a proper benchmark not focused entirely on gaming. Would have loved to see some efficiency benchmarks.
I would've liked to see 7800X3D benchmarks in the gaming section. It is the gaming king after all.
Spoiler it's still the gaming king, you wanna game on ryzen still buy a 7800x3d.
They're probably waiting for the 9x3d to come out to make those comparisons
This CPU doesn't replace the 7950x3D.
just imagine it above whatever is on top in any of the gaming charts.
To be fair, this isn't really a video about gaming CPUs, that section is more about gaming on server-capable and workstation CPUs.
Here's something we're seeing in a test of the 9950X compared to the 7950X, which we did as a Proxmox test. Inside Proxmox hosts, we're seeing a significant improvement in at-the-wall power utilization. But in Windows itself, non-virtualized (not using a QEMU CPU), the benefits are not really being felt; we don't really test for gaming. Now, obviously, we're comparing a virtualized setup, which sits a step below straight-to-hardware. But the benefit does show up, and that is interesting.
Thank you Wendell. Really looking forward to the productivity review of these CPUs.
Java doesn't have the print thing; some random glitch appears, and if I solve those issues it's the same.
Thanks for going over the 4x dimm stuff. Very very helpful.
For a couple of months now I've played on Windows with SR-IOV and IOMMU turned OFF, along with Memory Integrity and Core Isolation also turned off. The difference in 3DMark results and gaming experience is confirmed for the better. I would like to note this is my gaming-only machine; I do have another one for AI LLM testing with a 7900 (no X), where all these options (in the BIOS at least, since there I run Ubuntu) are turned on. But it surprised me that turning those OFF for gaming made my 3DMark results better, and my gaming - on a FreeSync-enabled monitor with vertical sync limited to 120Hz at 5120x1440 - rarely drops in CoD MW3 multiplayer (even for a second or two). My GPU is a 7900XTX from Sapphire.
This is interesting, I haven't heard about SRIOV and IOMMU affecting performance. You're not running Windows in a hypervisor? Do you have a ballpark estimate of how much impact it had?
@@DanielKennedyM1 I suspect that the entirety of the difference came from the software side - SR-IOV isn't even available on consumer hardware, the UEFI option is really just whether or not to let network cards report to the OS as multiple separate devices, and enabling the IOMMU doesn't change any of the performance characteristics of the chip. On the software side, though, core isolation and related features mean that W11 kind of sort of does run on a hypervisor by default, because it uses Hyper-V to sandbox some of the drivers and other system components to prevent privilege escalation. It's honestly one of the few ideas in W11 I actually like (assuming it's actually securely implemented); it brings some of the security-by-virtualisation stuff from niche systems like Qubes to mainstream users. But naturally that does come with a bit of a performance overhead, and that difference is enough that gamers who don't understand the security implications wind up disabling it (or, of course, power users who don't run sensitive workloads on their gaming system, like the above commenter, who knowingly take on the risk).
@@DanielKennedyM1 Yes, no hypervisor; disabled SR-IOV and IOMMU along with the Windows security options mentioned…
Great review, thanks. Very level-headed, with no hyperbole or unnecessary drama. Too many YouTubers concentrate purely on gaming and not on the whole package. I agree with your conclusion - spot on.
I believe some reviewers cater to a younger audience, and I find the lack of professionalism off-putting. Being on YouTube doesn't necessitate clickbait for reviews. This is why I've mostly returned to reading website reviews, such as those on Guru3D.
@@ThaexakaMavro Chips and Cheese should give you a more in-depth architectural analysis of Zen 5; it shows regressions in several instructions as well as in memory. The memory regressions alone would explain the lower-than-expected gaming performance.
Zen5 may well have potential, but I chose to follow Wendell's advice to buy a CPU based on how it performs _now_ rather than on some nebulous possible future capability. This morning, after checking the last round of Zen5 reviews, I bought a 7800X3D. It will anchor my main gaming PC for the next 2-3 years. Maybe Zen6 will make good on Zen5's unfulfilled promises.
Yeah. I bought a threadripper 7970X after zen 5 was delayed because at that point zen 4 was rock solid, I'd been on a 7950X for 18 months and I just needed more for my work.
4 months of no Windows on my gaming pc. Good to see the performance is doing well and will be interesting to see how your testing goes.
Thank you Wendell for all the work. Great review and personally I love to see someone who's leaning more on the relative than the absolute side of conclusions.
Had no idea you had a Linux-specific channel. Glad YouTube just happened to show it under this video. I was confused about where the Linux video was.
I can't believe that after all these years the Windows scheduler still does not fully support AMD's dual-CCD layout, so that AMD has to resort to crazy hacks with Game Bar and drivers to basically shut down half of the CPU you paid for, to avoid performance degradation caused by suboptimal scheduling.
All these years? All these 1 years? Zen 5 is the first time AMD has had any need for special scheduling for symmetric dual CCD chips, prior to this it was only the 7900x3d and 7950x3d that needed advanced scheduling and most x3d buyers were going for the 7800x3d anyway. Yes, Microsoft absolutely should have fixed it (particularly since Linux already accounts for CCDs apparently) but it's not like it's been many years of wide deployment of the core parking stuff
@@bosstowndynamics5488 "symmetric" does not equal monolithic die where all cores have uniform access to caches and memory.
As each CCD carries its own caches, having threads reassigned between them randomly, or threads of a single process running on multiple dies, incurs significant penalties due to cache misses and synchronization.
Dual CCD design first debuted in Ryzen 3000 series, or even earlier if you count 1st gen Threadrippers.
So yes, all these years and Windows still suffers from these problems. I'm not sure how Intel does it with their non-uniform core designs, but it seems they were able to convince MS to care at least somewhat.
While AMD's best effort is to hook up into game bar logic and disable half of your CPU.
@@MikeKrasnenkov Saying Windows "still" suffers from these problems is misleading. The performance penalty from communication between CCDs in everything up to Zen 4, in symmetric configurations, was so small that pretty much no one noticed, so it really shouldn't be a surprise that Microsoft didn't bother to fix it. Intel forced their hand because all of their parts had two radically different types of core in them, whereas even AMD's heterogeneous designs with V-Cache (which have been around for less than 18 months and are far less common, since most gamers go for single-CCD parts) will work just fine with scheduling misses; they'll just be slower. And the issues with the scheduler for symmetric dual-CCD designs have only just now come to light, pretty much today as far as the public is concerned.
@@bosstowndynamics5488 Why was the penalty so low before Zen 5 and suddenly so high now? Zen 5 should be pretty much the same "symmetric" design as previous Zen architectures. What changed?
@@bosstowndynamics5488 All these years. Even the first Zen had two CCXes. Nowadays a CCX is usually a whole 8-core CCD, but not always: the Ryzen AI 9 HX 370 has 2 CCXes, one with 4 Zen 5 cores and one with 8 Zen 5c cores. The first Ryzens had 2 four-core CCXes in one CCD; Ryzen 2000 was the same. Threadrippers of the 1st and 2nd gen not only had 2 CCXes per CCD but also used multiple CCDs. And ever since Ryzen 3000 launched, the basic configuration of desktop CPUs hasn't changed: it's still 1 IOD and 1 or 2 CCDs. It's been almost 5 years since the 3950X launched, and something like 7 or 8 since the first Ryzens debuted. Intel launched Alder Lake three and a half years ago. And it's not the first time both AMD and Intel have suffered because of M$.
BOTH companies had to write their own drivers to improve scheduling on their CPUs because, despite their work and continuous pushing, M$ refuses to make the Windows scheduler better, even after publicly committing to a fix. AMD integrated their driver into the chipset driver and uses Game Bar to assign games to the correct cores. Intel wrote APO, which does just the same thing but, afaik, without the help of Game Bar. If Linux shows improvements and Windows doesn't, while Linux doesn't even need the software hackery, it's safe to say the hardware is not at fault, nor is the hardware vendor.
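You can even experiment by hand with what those drivers effectively do. Here's a minimal sketch for Windows, assuming the common mapping where logical processors 0-15 are CCD0 on a 16-core part with SMT (the real drivers are much smarter, and the mapping can vary, so query GetLogicalProcessorInformationEx on actual hardware):
```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    /* Assumption: logical processors 0-15 map to CCD0 on a 16-core
       dual-CCD part with SMT enabled. */
    DWORD_PTR ccd0_mask = 0xFFFF;

    DWORD_PTR previous = SetThreadAffinityMask(GetCurrentThread(), ccd0_mask);
    if (previous == 0) {
        fprintf(stderr, "SetThreadAffinityMask failed: %lu\n", GetLastError());
        return 1;
    }

    /* ...run the latency-sensitive work here; the scheduler can no
       longer migrate this thread onto the other CCD. */
    return 0;
}
```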
My favorite review. For my uses (with "gaming" about #9 on my list), this review was what I was looking for. Keep up the good work!
I'm wondering if Windows is simply using Intel-optimized code, hence why everything seems to run faster on Intel?
I mean, Intel and MS have always worked together (the "Wintel" name exists for a reason), and the OS favors Intel.
Time to recreate all these benches using Linux!
I'm not really a huge fan of tinfoil-hat theories, but I do think this is somewhat true. Intel and Nvidia have worked with Microsoft many times, and Nvidia especially has a reputation for being in frequent contact. Meanwhile, AMD has quite the reputation for treating their software partners very poorly, so this might actually be a thing.
Great video and a great technical review, not just a benchmark run; the majority of review channels are more superficial and less technical. I remember when you said the performance of 1st-gen Threadripper might not be an AMD problem but a Windows problem, because in Linux it was ripping through everything.
longest week of my life, waiting for this review
Good stuff! I'm still on an Intel i7-6700 PC build, so anything would be a nice upgrade at this point. I do video/photo editing, streaming, and adjacent creative stuff more than gaming. So far the Ryzen 7900X looks a lot more appealing price-wise, and I hope it continues to get cheaper.
Here's an idea for a benchmark: game + OBS streaming (or recording).
Many workloads today aren't just the game running by itself, so I wonder how it would all look in a scenario like that.
Only good reviewer alive nowadays! Great work.
Looks like Zen 5 is alive like Johnny 5! Haven't thought about Johnny 5 in decades. Nice to see it on the desk today.
First! And the first Ryzen 9 review I've seen! That admin/Windows trick makes me think that a couple of months from now these processors will be a little bit better, once either people or AMD/Microsoft figure their stuff out. Also still makes me excited for the X3D variants.
Microsoft is always late to do anything for AMD. I know, I know, conspiracy lol
As usual Wendell is right 👍... I just got back from the future & the memory anomalies are resolved...
Conclusion: Windows is a mess. Overall, is gaming on Linux advantageous over Windows with these new processors?
Almost certainly, and arguably was with the previous gen CPUs as well, though it'll always come down to Linux compatibility for any particular application.
I wish Wendell would test Factorio. It's one of the few games that is heavily bottlenecked by memory performance, much more than CPU/GPU. Its (native!) Linux build has always run better than Windows, for a variety of reasons - the most compelling being Linux's support for Large Memory Pages.
@@DanielKennedyM1 interesting. Thank you.
They smoke Windows in well-optimized Proton games, from what I hear.
@@DanielKennedyM1 GLIBC_TUNABLES=glibc.malloc.hugetlb=2 got me 20% extra UPS in factorio lol.
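For anyone curious, that tunable asks glibc's malloc to back its big allocations with huge pages (iirc =2 means explicit hugetlb pages, while =1 uses transparent huge pages via madvise). If you want to grab one yourself, a minimal sketch for Linux, assuming hugetlb pages were reserved beforehand, e.g. via /proc/sys/vm/nr_hugepages:
```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 2 * 1024 * 1024; /* one 2 MiB huge page */

    /* MAP_HUGETLB requests an explicit huge page; it fails unless
       the kernel has huge pages reserved. */
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");
        return 1;
    }

    memset(p, 0, len); /* touch it: a single TLB entry now covers 2 MiB */
    munmap(p, len);
    return 0;
}
```
Fewer TLB misses is presumably why a memory-bound game like Factorio gains so much from this.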
After jumping ship from the MS advertising-and-spying platform, I was shocked to see that Linux gaming is now not only viable but often on par with Windows when running through Proton.
I find it not at all surprising that Windows gets outclassed by Linux on these new processors, as Linux is an actual operating system with a strong focus on performance, as opposed to one focused on tracking its users and sending them ads. Where you put your dev effort matters.
(Also Windows is a house of fucking cards)
Given the title, I was hoping for some benchmark comparisons to the Threadripper chips, 3970X etc.
same
Still rocking a 12-core 1920X Threadripper in my homeserver. I love that platform. Seems like yesterday, like you said.
If AMD ever launches a 24- or even 32-core desktop processor, I'll make the jump. But for now the Threadripper lives on.
Except that thing gets destroyed by both Intel's and AMD's current mainstream desktop flagship chips.
If you want to max out the RAM on the AMD platform, going with ECC is probably the best way to do so. 48GB ECC UDIMMs are available and the prices are decent at $200-250 each. I mean, with that much RAM, random bit flips from background radiation are inevitable.
Depends on what you're doing; the extra bit flips from large DDR5 sticks are supposed to be handled by the ubiquitous on-die ECC anyway. Of course, not many workloads actually benefit from that much RAM, so you're already selecting for things like home servers, where full ECC is probably still a good call, but it's not mandatory for all high-memory systems by any stretch.
Thank you for this insightful and nuanced review. Truly in a class of its own.
Damn, haven't seen this channel in a while. Makes me happy to see that Wendell has almost half a million followers now.
Amazing work!
I still side with Wendell on this one. There's something inherently wrong with Windows. Even the X Elite is a letdown, mostly because of Microsoft. Only time will tell, lol. I did the right thing and bought the 12-core R9 for $280 brand new a month ago.
I've got a hunch that the efficiency improvements realised on the ARM side are probably at least partially due to Microsoft having to strip out a lot of 30-year-old junk, and that they could potentially be realised on x86 too if Microsoft got their act together and made Windows work properly instead of focusing on creative ways to juice their metrics and shove ads in users' faces.
For the memory: I had problems with 2x2 sticks on my AM4 board with my 5800X3D. The system would not boot AT ALL if a specific stick was not in the right slot; once booted, I could apply 3200 MHz without problems.
The layout is like you show, each pair on the same channel (and there's a specific order; I lost hours trying to understand it and just brute-forced all the possibilities until it worked), unlike what's specified in the manual.
Interesting comparison at the beginning. Zen 5 is really starting to feel like Zen 1 did: something that will be better in future generations, Zen 6, 7, etc.
If you want to run very high memory clocks/timings, keep in mind that your memory will degrade over time if you run it at high voltage (even if the EXPO and XMP profiles stamped on it say it was designed to run at that voltage).
You can run DDR5-6000 to DDR5-8000 at 1.35 or 1.4 V, but it may not be able to run at that speed after 2 years, because it degrades very fast.
Some motherboards also inject higher voltage than you set in the BIOS; I found some ASUS boards giving the DIMMs 1.38 V instead of the 1.35 V setpoint.
I find it interesting that the 9950X beats almost everything else in minimum (0.1%, 1%) frames in most benchmarks. That's something I would like to understand better, and I wonder how much smoother it feels.
Good review, but the title pic wasn't covered - maybe a future video could compare these to Threadrippers? Thanks!
Wow you’re early Wendell - faster than GN and HUB! 🎉
By a couple minutes. GN went up right after.
He's more credible than GN to me. Far more authoritative and 1,000 times less self-impressed than GN. L1 talks to you. GN tries to talk down to you.
@@calldeltosell well said. Steve seems to be on his Louis arc for some unfathomable reason.
@@calldeltosell Steve @ GN and Wendell are friends. They view CPUs and GPUs from different viewpoints: GN is a gaming channel, Level1Techs is an all-around performance channel.
Well, note how tired he looks... I bet the Windows shenanigans kept him from sleeping...
thanks for the info Wendell
Thank you for the comprehensive look-see, Wendell. 🙏🏼 Your views on these new parts (together with those of _Hardware Busters'_ Aris) are a *refreshing* (see what I did there?) departure from all the _weeping & gnashing of teeth_ that I've borne witness to on YouTube lately. 👍🏼
P/S: The check is in the mail. 🫰
I'm still slumming it on an X99 platform, running everything, all those games plus more, just fine.
What helps is a 4K TV, as you can use a desktop window mode for gaming; the size of the desktop window is similar to a large monitor (e.g. a 1920x1080 window is about 27 inches).
I do agree that Intel giving up on AVX-512 was a big shame. I rarely give AMD credit, but I do commend them for keeping it around. What many don't realize is that AVX-512 isn't just the extended registers and vector length; it also comes with strong improvements for previous instruction sets like AVX2 or older, which many apps use. Sadly, AVX10 will just be a band-aid.
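To make that concrete with a toy example: AVX-512VL applies AVX-512 features, per-lane masking here, to ordinary 256-bit AVX2-width registers, something pre-AVX-512 code has to emulate with extra compare/blend instructions. A rough sketch, assuming a capable CPU and compiler flags along the lines of -mavx512f -mavx512vl:
```c
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    int out[8];

    __m256i va = _mm256_loadu_si256((const __m256i *)a);
    __m256i vb = _mm256_loadu_si256((const __m256i *)b);

    /* Add only the even lanes (mask 0x55); masked-off lanes keep
       va's value. Without AVX-512VL this needs extra blending. */
    __mmask8 even = 0x55;
    __m256i vc = _mm256_mask_add_epi32(va, even, va, vb);

    _mm256_storeu_si256((__m256i *)out, vc);
    for (int i = 0; i < 8; i++)
        printf("%d ", out[i]); /* prints: 11 2 33 4 55 6 77 8 */
    printf("\n");
    return 0;
}
```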
I love Wendell's videos. They are excellent. He is awesome.
I use Gentoo and have a 12600k. I'll be upgrading to the 9900x! Can't wait
Great video
The only channel that covers 4k resolutions
The reason 4K is not commonly tested in CPU reviews is that 4K is primarily GPU-bound. You want to create a CPU bottleneck at 1080p, with the GPU maxed out and no upscaling, which allows different CPUs to be compared easily.
Upgrading from Zen 2 is exactly what I'm considering, and I also have my eye on the 48GB DIMMs. My current PC will probably be used to upgrade my main server, which is a Zen 3 Ryzen 5 on a 'B' board, which I came to discover was a major mistake. I want to virtualize and consolidate all the random stuff I'm hosting on RasPis, so I tend to think the Zen 3 > Zen 2 downgrade will be worth it for double the cores, and I'll happily use the extra PCIe lanes I lack on the 'B' board. Intel GPUs are also dirt cheap right now, so I'll likely grab an A380 for Plex encoding.
"Dr Su, a third video has hit the Hardware Unboxed channel"
Hardware unboxed is run by a bunch of histrionic clickbait prostitutes ...
Thanks a lot for the unbiased review. You brought up a lot of important subjects that we mostly-non-gamers really like hearing about, like what to expect from DDR5 in a 4-DIMM setup. I am a developer, so I use my system mostly as a server. Currently I have a 5950X, which is a fine little beast for this, but I see it being constrained by memory (DDR4) bandwidth when I start to load the 16 cores up in my applications. I tested with a 7950X3D, and the way my programs allocate memory/CPU cores doesn't benefit much from it vs. the 5950X. Therefore I hoped the 9950X would be a worthy upgrade. I had hoped we'd see an 8x Zen 5 + 16x Zen 5c edition, but it seems that's a dream that won't come true before Zen 6. Likewise, I had hoped for an IO die with 1GB+ of L4 cache shared between GPU/CPU, a bit like Broadwell Intel CPUs... but no. So even though Zen 5 is both faster and cheaper than Zen 4, I guess it's going to be another wait until Zen 6 gets out... Again, thanks for the review.
Just installed the Ryzen 9950X into my board; it's insane 😊
This channel brings meaningful reviews for me: 1. Linux-based testing for productivity use cases; 2. a non-gaming, balanced summary that tells us whether it's worth considering. I'm planning to upgrade from a 3700X to the 9700X.
5:32 "PBO doesn't add anything": this is not true. You have simply toggled it on without tuning it. Using Curve Optimizer will result in the same watts as stock, yet better performance and lower temperatures than plain PBO. I wish people would use the tools AMD provides to benefit themselves; it's just odd to shoot yourself in the foot, especially as an enthusiast. Leo from KitGuru has demonstrated this.
Thanks for another look at Zen 5 :-). A nice follow-up would be to dive deeper into the new Zen 5 microarchitecture vs. Windows 11, basically to show/explain the current, seemingly strange benchmarking results... and to see what could be done to Windows 11 to use the Zen 5 architecture fully.
Hey Wendell,
after reading about the "Administrator" account performance gains, my thoughts went directly to large page support. It's locked behind a security group policy AND requires running the exe elevated.
The GPO is under Computer configuration -> Windows settings -> Security settings -> Local policies -> User rights assignment, called "Lock pages in memory".
Linux, iirc, has large/huge page support by default.
This was/is commonly used for mining applications, where the gains are pretty similar. Now, I know there are some stability implications with memory allocation, and I can't confirm whether that's the main difference from the disabled Administrator account.
Since I don't own a 9xxx-series part, maybe you could test if this is the setting that gives the extra performance?
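For reference, this is roughly what an app has to do to opt in. A minimal sketch, assuming the "Lock pages in memory" right has already been granted and the process runs elevated (link against advapi32):
```c
#include <windows.h>
#include <stdio.h>

/* Enable SeLockMemoryPrivilege for this process; requires the
   "Lock pages in memory" user right and an elevated token. */
static BOOL enable_lock_memory_privilege(void)
{
    HANDLE token;
    TOKEN_PRIVILEGES tp;

    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return FALSE;

    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValue(NULL, SE_LOCK_MEMORY_NAME,
                              &tp.Privileges[0].Luid)) {
        CloseHandle(token);
        return FALSE;
    }

    if (!AdjustTokenPrivileges(token, FALSE, &tp, 0, NULL, NULL) ||
        GetLastError() == ERROR_NOT_ALL_ASSIGNED) {
        CloseHandle(token);
        return FALSE;
    }
    CloseHandle(token);
    return TRUE;
}

int main(void)
{
    if (!enable_lock_memory_privilege()) {
        fprintf(stderr, "could not enable SeLockMemoryPrivilege\n");
        return 1;
    }

    /* Large-page allocations must be a multiple of the minimum
       large-page size (typically 2 MiB on x86-64). */
    SIZE_T min_large = GetLargePageMinimum();
    void *p = VirtualAlloc(NULL, min_large,
                           MEM_RESERVE | MEM_COMMIT | MEM_LARGE_PAGES,
                           PAGE_READWRITE);
    if (!p) {
        fprintf(stderr, "VirtualAlloc(MEM_LARGE_PAGES) failed: %lu\n",
                GetLastError());
        return 1;
    }

    printf("got a %zu-byte large-page allocation\n", (size_t)min_large);
    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
```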
Is there a better indication that there is a software problem in Windows than getting 10 extra frames by running a game as administrator?
I'm running a 2950X as a streaming PC, so I have some PCIe cards in it for capture, and there might be other things soon. The 5600X in my gaming PC isn't THAT far away from it in processing power, and it's fascinating. For processing power, I could definitely go with many AM4 and AM5 CPUs for the streaming PC, but I'd lack the PCIe lanes, so any upgrade will be very expensive.
My twin 1950X/2080 FTW3 rigs are still going 24/7 years later; I still don't regret building them.
I think you are one of the best YouTube tech gods, you and Steve (Tech Jesus) from Gamers Nexus. The US government should pay you two guys.
If you ask me, the most valuable info in this video was the little "RAM break" talking about which RAM runs and how, since Ryzen CPUs basically stand or fall with the RAM choice.
AMD did it on purpose: on certain boards, their partners removed CPPC from the BIOS. There was a function for it on my Gigabyte board, but that was on the F5 BIOS, the last one I saw before the SOC voltage issue had all the board partners re-tuning things, and I noticed the majority of partners just removed it altogether. I'm really not sure it's even working on my 7900X; is it frequency- or driver-controlled? It's also why Linux runs faster: its scheduler is different from the Windows NT scheduler, which runs at a different layer in driver mode. I just hope Linux users don't get screwed, since we have no control over what they adjust and no way to control it ourselves in software.
I bought a 7950X, but the 9950X looks like it might have an advantage, though I'm unsure about stability/balance/cores, where the 7950X is at the top.
Both the 9900X and the 9950X are showing mixed single-core results, so the 7950X is definitely a sweet spot. Not sure it's worth upgrading yet.
If Microsoft decides to optimize their OS, and vendors do the same, as they should, these CPUs will age like fine wine.
Huh? You want Microsoft to add more useless features that bloat the system and nobody uses? Don't worry, they're on it.
But never buy something based on potential future updates. More often than not the wine just becomes vinegar.
@@aelderdonian That statement is generally true when the update is promised by the same company that sold you the hardware and the company is not financially incentivized to provide software updates down the road. But we are talking about Microsoft, which needs to fix their OS, specifically their scheduler. AMD has already done their job and their hardware works fine on Linux; it's 100% Microsoft's responsibility to fix their shit here.
But Microsoft would have several antiviruses running on each core. The search engine in Windows 10 is as slow as running Cyberpunk on a single-core CPU, and has been for the last 8 years; I guess it has to hold a conference with Microsoft on whether I'm legit enough to search my own hard drive.
If... if... if... We buy products based on their current performance and features, not on something fanboys hope will happen in the future.
With these prices in mind (25% tax included (EU)), what would you pick for AAA online sandbox shooters and 3D modeling? 9950X = 800€, 7950X = 620€, 9900X = 560€, 7900X = 430€. I plan on a single 34-inch QHD IPS 1440p monitor, 64GB 6000 CL30 RAM, and possibly pairing it with a 4070 or 4080 GPU. ❤ I'm building a new PC from scratch after 7 years, and I would like to keep the CPU air-cooled as well. ❤
Damn it was actually 9:30 PM when I started this video, you creeped me out 😮💨
I love the Johnny 5 reference; it was one of my favorite movies as a kid.
You've got some crazy bags under your eyes, I hope you will be able to rest now :)
Thanks for your hard work
Just bought a 9950X and an MSI X870E MPG Carbon WiFi motherboard, and the USB disconnect issue still exists/has come back, making the platform unusable... yes, the one that surfaced over 3 years ago when X570 launched! Such a pain, and no influencers/tech sites are covering or mentioning this, so it would be good if influencers got together and held AMD and the motherboard manufacturers accountable for it!
Something is very wrong with your Cinebench results on the 7950X (1:26). Steve from HUB got 2201, and my own system tuned for efficiency (manual 5.0/4.85 GHz @ 1.075 V) got 2062, against 1706 here. In Steve's video, the 7900X gets 1697.
I got Rocket Lake because of AVX-512 for RPCS3. Love my 11400, but even it can light on fire with a Z board and the power limits fully removed, especially with AVX-512: a 150W limit for that chip, 175W burst for my twelve-year-old Hyper 212 Evo (with a new fan) before I get uneasy with temps.
Another fine review. 👍🏻
But I’ll probably stay on my 3900X.
I appreciate this review. I'm someone who just started buying parts in preparation for this CPU. I'm coming from the Intel i7-7800X and desperately need a new CPU, but when the rumors about the 9000 series started circulating I decided to delay building a new PC. Seeing all the doom and gloom around it was a little concerning, but I mostly use Blender/productivity software, so seeing the boosts there had me pretty happy; gaming is secondary, so as long as a game runs better than what I'm currently dealing with, it's good for me. Though I gotta say I've been thinking of checking out Linux for a while now, since I don't agree with the crap they're pushing into Windows. Maybe I'll dual-boot so I can check it out.
I'm definitely not upgrading right now but I'm excited for when I eventually do.
12:24 Is the same RAM channel A1+B1 or A1+A2? My Gigabyte board says to place RAM like... 1st slot, skip 2nd, 3rd slot, skip 4th, if only using two sticks, which if I recall was labeled A1+B1. Soooo should I actually be using A1+A2 for better performance?
A1+A2 for the first kit, B1+B2 for the second kit. If you have only one kit (2 DIMMs total, not 4), then one DIMM per channel is correct, probably A1+B1 in that case.
It's a great take, with a solid hypothesis on why Zen 5 isn't performing as it should. AnandTech did a great job analyzing inter-core latency; maybe it's not the fact that they reused the Zen 4 cIOD, maybe it's the Windows 11 scheduler not supporting this new uArch properly yet?
It would be interesting to see if running games off a Dev Drive in Windows would somehow improve performance, similar to running them as Admin.
I have a 5900X and I just don't see it yet. I was hoping for a bigger step up this generation. Most really intensive tasks I do on the GPU anyway.
Have you observed a similar performance difference when running games as Administrator on an Intel system?
In regards to administrator mode providing better performance: this has been known for many years in the game-cracking scene. Many cracked game installers will run the game as admin by default. There are also other reasons for this, like trying to reduce the problems people may encounter, but it can obviously be abused by dodgy files. Running as admin certainly won't provide performance benefits with every single game, just occasionally. I've no idea why it happens, but it's been a thing for a very long time.
Finally, a comprehensive review. The gen-on-gen improvement is unimpressive, but fingers crossed it's a stepping stone to something better (a 10050X, perhaps?).
One thing I haven't seen: if you use something like Proxmox, with NUMA correctly configured to use a single CCD and no SMT passed on to the Windows VM, how do games run? What about Linux? I find that sometimes you can tweak KVM to get pretty close to optimal bare-metal performance. Thanks for the great work!
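For reference, this is roughly how I'd pin such a VM under plain libvirt; a minimal sketch, assuming the first CCD sits on host CPUs 0-7, a reasonably recent libvirt, and that the exact CPU numbering is checked per board/CPU (Proxmox exposes similar knobs):
```xml
<!-- fragment of a libvirt domain XML: 8 vCPUs pinned 1:1 to CCD0 -->
<vcpu placement='static'>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
  <vcpupin vcpu='4' cpuset='4'/>
  <vcpupin vcpu='5' cpuset='5'/>
  <vcpupin vcpu='6' cpuset='6'/>
  <vcpupin vcpu='7' cpuset='7'/>
</cputune>
<cpu mode='host-passthrough'>
  <!-- present the guest with 8 real cores and no SMT -->
  <topology sockets='1' dies='1' cores='8' threads='1'/>
</cpu>
```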
I'll wait for a 16-core (or higher) part with 3D V-Cache on both CCDs.
won't happen
@@CornBreadMan264 or 16 cores on one CCD with v-cache.
@@Fractal_32 As long as the 3D v-cache size is 96MB or 128MB, I'll take it.
Bit out of context (or maybe not), but after some recent Windows updates I've noticed that my Python scripts take way too long to start running, probably due to Windows Defender, even though I have all my development folders in the exception list in Bitdefender. However, Windows Defender appears to be turned off in the security settings, so I don't really know what's going on. Maybe other people have similar issues as well; I don't recall having them a few months ago.
Yeah, time to upgrade... next year, from a TR 3970X to a TR 7970X. ❤
I have a 7700X, and I do a lot of Fusion 360, Photoshop, and Lightroom. Do you think I should just get a 7950X or go for the 9950X?
Stick with what you got, save money
I am on Zen+, so based on your conclusions I definitely need to upgrade :D My problem is deciding between the 7950X3D and the 9950X. I would run 4 desktop VMs during the day; in the evenings I need something to game on, with occasional finite element analysis and CT image processing for work. I am wondering if the 3D V-Cache could justify the older generation for my mixed-bag use case.
Let's go!
I want to stream Valorant, Black Ops 6, and GTA 6 in the future, and price is no object. I ordered the 9950X today; did I make a mistake?
Yes. You should get either (a) a 1-CCD chip, aka the 9700X, or (b) the 9800X3D, because you don't really need tons of cores; they just need to be fast. The 9950X still has latency between CCDs, and one CCD actually runs, I think, 300-400 MHz higher than the other. In short, it's a waste for gaming. I mean, it's OK, but there would be no difference vs. a 9700X; in fact, with the 350€ left in your wallet you could get a better GPU, which would gain you more. So yeah, 9700X... though the more I think about it, nah: in the long run, pay 150 more and get the 9800X3D.
Hmmm. I’ve been running 64 GB of DDR5 in 4 DIMMs at 6000, just using the EXPO1 profile, since day one with my 7950X. Never had a problem. Programming and gaming in 4K.
R5 3600 still running strong here. At 3440×1440 it probably still has headroom (paired with a 3070 Ti).