Watch our livestream overclocking the AMD Threadripper 7995WX CPU! Crazy educational too, as the engineers who joined us shared a lot of detail: ua-cam.com/video/vU179_czCnU/v-deo.html
Error/correction - There is a typo in some charts where the 3960X's name is next to an erroneous thread count. This does not affect testing or results. Our apologies for the error. Correct count is 24C/48T.
Watch our coverage of the AMD Threadripper TRX50 & WRX90 motherboards here: ua-cam.com/video/NTnVBIEPz1w/v-deo.html
Find the AMD Threadripper CPU specs here: gamersnexus.net/news/new-amd-threadripper-7980x-7970x-7960x-threadripper-pro-cpus-announced
Support our testing and grab a solder & project mat, modmat, or toolkit on the GN store! store.gamersnexus.net/ (currently 10% off at time of posting!)
I seriously wish it would be closer to the 7950X in gaming, rather than my current 5950X. If that were the case, I'd change platform simply to install more GPUs for other tasks while still being able to game at pretty much peak performance. But unfortunately, the current results show it's still more sensible to run a different PC for more GPUs. It's a shame; I was ready to shell out a stack of cash! 🥲
@@AyoKeito If you just increase the graphics settings, you don't even need to worry about that, and remember, the reason it runs like that is the frequency. Anyway, it's waaay cheaper to just have multiple desktop systems if you aren't a server company struggling to fit more computers into the same building.
Threadripper has always suffered from higher memory latency than consumer-range Ryzen. Even on Zen 2, Threadripper was always +8 ns of memory latency over regular Zen 2 Ryzen with the same memory timings in AIDA64. Games love low-latency memory. And the mandatory use of Registered DIMMs on the TR 7000 series hurts memory latency even more.
@@Splarkszter "If you just increase the graphics" Sorry, this is a fallacy. GPU performance will always top out somewhere; you can't increase it indefinitely. And if you do (e.g., 8K downsampling), you're just dragging overall performance down from what it could have been (e.g., 60 FPS instead of 120 FPS). This is even more important in emulation, which is extremely CPU-dependent for good performance. As someone with a 4090 and 5950X (waiting for the 8000X3D), I am constantly CPU-limited while trying to reach 120 Hz, irrespective of graphics options. And RT drops it down even further, because it still depends a lot on the CPU. If all you have is a 15-year-old 60 Hz monitor, then sure, that's a different discussion. But anyone playing modern PC games in an era of 500 Hz monitors is probably expecting 120 Hz+, and both the GPU and CPU need to be strong for that to happen. All of GN's CPU benchmark graphs show this issue very clearly. Wish they'd run some emulation tests though; no outlet I know of does this.
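The CPU-vs-GPU-limited argument above reduces to a toy model: each frame costs some CPU time and some GPU time, and (ignoring the pipelining and overlap real engines use) the frame rate is capped by the slower of the two. The numbers below are made up purely for illustration:

```python
def effective_fps(cpu_ms_per_frame: float, gpu_ms_per_frame: float) -> float:
    """Toy model: frame rate is capped by the slower pipeline stage,
    either the CPU preparing a frame or the GPU rendering it."""
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

# CPU-limited case: a faster GPU changes nothing.
print(effective_fps(10.0, 6.0))  # 100.0 FPS
print(effective_fps(10.0, 3.0))  # still 100.0 FPS after a GPU upgrade
# Only reducing CPU cost per frame unlocks higher frame rates:
print(effective_fps(8.0, 6.0))   # 125.0 FPS
```

This is why turning graphics settings up just moves the cap to the GPU side; it never raises the ceiling a slow CPU imposes.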
And that's why nobody in the server market sees EPYC as an option. Because what does efficiency matter when you also have to pay penalty fees for power overconsumption in at least 20 states?
@@xXFlameHaze92Xx EPYC has literally been AMD's largest commercial success outside of their PlayStation deal... Server share has gone up 10% in the last 3 years alone...
I'm in the world of molecular dynamics and I regret to inform you that NAMD is unfortunately pronounced "NAM-Dee". Major props for running these simulations; they are not easy to set up. Extremely helpful to me as I plan out lab computers.
I couldn't keep saying "N-A-M-D" and had to put my foot down! haha, thanks for the info, will use for next time. Are those tests useful in figuring things out for your field? It'll help us determine if we keep running them!
@@GamersNexus Totally! Took me about 2 years to finally figure out the "correct" pronunciation. And absolutely! These are ridiculously helpful, as documentation and testing of these programs on different hardware is at best limited and mostly nonexistent. You all are the only ones putting up benchmarks with these tools, and for that I am extremely grateful!
@@brandoncoventry5662 Do you know how these metrics relate to real-world benefit to you/your field, since you said they are hard to come by? Like, is it 1:1 — if it's 30% better in a benchmark, can you expect somewhere around 30% gains in real-world application? Interested to know how much one metric matters more than another!
@@GamersNexus Lots of thanks for including these tests. I also work daily with molecular simulation software and can vouch for these tests being super useful for planning upgrades to workstations and compute clusters. Would it be possible to have some similar tests in the future for GPUs as well? Consumer-grade GPUs perform extremely well in software such as GROMACS, but finding benchmarks for it is extremely difficult.
@@justahologram2230 Nope, he said the horrible gaming performance ruled it out for Skunkworks, especially since it's his daily driver at home for gaming/streaming.
At the very least, we know how to run a test even if the software is foreign to us. Hoping the community lets us know which of those are useful so we can incorporate them fully!
I'm part of the fantasy world of Finance, and I sincerely hope the Science guys know what they're doing. We definitely do not. The FSI bench is measuring HFT in equity markets, particularly derivatives, I suppose. HFT strategy and algorithmic implementation must be unique to each operator for it to work. I'm not even remotely qualified to give a ten-minute PowerPoint presentation about it, but I don't see any practical reason for this test, at least in theory. I'm probably missing something obvious here, so please feel free to correct me.
Being able to turn something you know nothing about into KPIs you can discuss confidently sounds like Steve missed his calling in marketing, but then I realised he's just too honest for it
@@Azureskies01 Wow, I feel pity for Intel: they try all that finicky E-core stuff, and then AMD performs better in low-thread tasks because they have the better-performing cores, and better again in high-thread tasks because all of their threads are full performance cores...
@@blar2112 Yeah, it seems silly; they just can't compete with TSMC's cutting-edge processes. And I see it as a generally bad direction, because consumers benefit from healthy competition. Otherwise it's just a monopoly. I really want Intel to succeed in their ventures and be competitive. But it seems like cutting-edge electronics fabrication is such an expensive process that it will be impossible to push progress 'forever' without some sort of technology consolidation into a single entity.
ARM is pretty interesting by itself. You wouldn't want ARM to die tho, there is a reason the x86 platform is focusing on efficiency. DON'T LET COMPETITION DIE, BE STRATEGIC.
I hope that in the future we see some games optimized for lots of cores. It would be interesting, for example, to have a massive grid in a racing game where each AI has its own dedicated thread.
@@hueanaoor Using the efficiency-core concept from Intel to add some sort of parallel computing ability to your CPU. Just imagine what graphics would be possible if you could outsource something like ray tracing to mostly underused parts. I don't know enough about this stuff for more in-depth ideas, but it seems like a logical point for improvement.
"I know this product is not for my use case, but I want to signal to corporations that I'll buy it even though the value proposition is objectively bad"
That's an interesting thought. Yes. It would definitely change the dynamic: If you were bound to 32 cores max in some application, you could theoretically run 2 instances of it and increase the throughput. Handbrake is a good example of this: You could spawn multiple Handbrake instances with more cores. Great question - thanks for posting!
@@GamersNexus Could you set core affinity in Windows and run two or four "normal" all-core benchmarks? Would be interesting to see if tasks are spread across chiplets, etc.
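For anyone wanting to try this at home: pinning a process to a subset of cores is a one-liner on Linux, and Windows offers the same thing via Task Manager's "Set affinity" dialog or `start /affinity`. A minimal sketch, assuming a Linux box (the Linux-only `os.sched_setaffinity` API):

```python
import os

# Pin the current process (pid 0 = self) to a CCD-sized subset of
# logical CPUs, roughly what "Set affinity" in Windows Task Manager
# does for a benchmark. Linux-only API.
available = sorted(os.sched_getaffinity(0))
subset = set(available[:8])  # first up to 8 logical CPUs
os.sched_setaffinity(0, subset)
print(sorted(os.sched_getaffinity(0)))
# Child processes launched from here inherit the mask, so an all-core
# benchmark started now is confined to that slice of cores.
```

Running two such pinned instances on disjoint core sets would show whether the scheduler spreading work across chiplets is costing anything.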
Yup, virtualization is a very cool thing. Sometimes there may be some shenanigans, but if you run a Linux-based OS made for that, you could run multiple single-core-heavy tasks at the same time, for example.
It's not really applicable to workstation CPUs; I'm not 100% sure why such high-core-count workstation CPUs even exist, but in servers, virtualization is a prime use for such high-core-count parts. There are a couple of virtualization benchmark suites out there that might be relevant, assuming virtualization isn't disabled on these CPUs.
The main benefit of Threadripper is the substantial bump in PCIe lanes and no longer having to deal with that "if I want to use 3x NVMe drives, a PCIe slot gets disabled" nonsense.
23:59 I am technically part of this industry as a quant. In this industry, the models are sometimes run by firms that need the calculations faster than other firms so they can arbitrage first. But more often than not, other models are calculated with similar methods, so I imagine these benchmarks are useful. Also, although there are many types of Monte Carlo simulations, there is really only one type used for time series, so the benchmark is probably using that one. However, I am not too familiar with the details of the benchmark you provided, so I couldn't say anything for certain about it. Lmk if y'all have any questions.
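For readers wondering what that kind of workload looks like: this is not the FSI benchmark itself, just a minimal sketch of a Monte Carlo simulation over geometric Brownian motion, the standard toy model for a price time series. Every path is independent, which is why these runs scale so well with core count:

```python
import math
import random
import statistics

def mc_terminal_prices(s0, mu, sigma, t, n_paths, seed=42):
    """Simulate terminal prices under geometric Brownian motion:
    S_T = S0 * exp((mu - sigma^2/2) * t + sigma * sqrt(t) * Z)."""
    rng = random.Random(seed)
    drift = (mu - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    return [s0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
            for _ in range(n_paths)]

prices = mc_terminal_prices(s0=100.0, mu=0.05, sigma=0.2, t=1.0,
                            n_paths=200_000)
est = statistics.fmean(prices)
print(round(est, 2))  # should land near the analytic mean 100*e^0.05 ≈ 105.13
```

Since each path touches only its own state, splitting `n_paths` across 64 cores scales near-linearly; the real benchmark presumably does something similar at much larger scale.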
As a molecular simulation PhD, I also loved seeing these workstation tests included. Especially LAMMPS is really relevant for me. I actually think that the test case in LAMMPS is also molecular dynamics (just like NAMD); they just implemented some of the calculations/parallelization differently. What might be important to note is that in my field we typically run a simulation 3 to 5 times with identical settings, just different initial configurations (the results have statistical value). So to me, seeing the results of the 32-core compared to the 64-core set up as 2 simulations running next to each other would be very informative as well. (Or both running 16-thread simulations until the CPU is 100% loaded would be good too.) I would be interested in what the CFD people here think of that.
For 3D rendering tests, and since nearly all 3D renderers these days support network rendering, it would be interesting to see how one of these systems compares (in terms of price, performance and power consumption) with two or four render nodes using regular AM5 CPUs.
A 96-core CPU would be pretty incredible for server purposes, yeah? Like for EVE Online: you could host 2 solar systems per core, one per thread, for 192 systems on one CPU, with tons of headroom for having many players connected.
As a Gentoo user (and developer)... many compilation tasks are not well parallelized. For many projects, a lot of time is spent in the configure phase, which is sequential. And the packages themselves are merged sequentially. I would say the compilation tests are enough. Whole-distro compilation is maybe a use case for distribution maintainers, but it's a very rare use case for a typical user. Some really large projects that can benefit, like Chromium, also require lots of RAM for that many parallel compilation tasks. (However, I don't want to be gatekeeping.)
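The sequential configure/link/merge phases described above are exactly what Amdahl's law bounds. A small sketch — the 95% parallel fraction is an assumption for illustration, not a measured number for any real build:

```python
def amdahl_speedup(parallel_fraction: float, n_cores: int) -> float:
    """Upper bound on speedup when part of the job (configure scripts,
    final linking, sequential package merges) cannot be parallelized."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_cores)

# Even at 95% parallelizable, 64 cores cap out far below 64x:
for n in (8, 32, 64):
    print(n, round(amdahl_speedup(0.95, n), 1))  # 5.9x, 12.5x, 15.4x
```

This matches the diminishing returns seen in the compile charts: going from 32 to 64 cores buys comparatively little once the serial tail dominates.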
@@reav3rtm to be honest I was just thinking about how funny it would be to have that as a benchmark, since yeah it's pretty much a long compilation test that can take days on some less powerful/mid range cpus and depending on packages installed, so I shared that thought. You don't want to know how shocked I was that gn actually responded and even said they'd look into it!
@reav3rtm while it might be true, I know from the Internet what's the usual Gentoo specialist outfit. And I really want to see Steve in knee high socks..
I think Photoshop realized the power and speed of 64 Zen 4 cores and said: "Not even Chuck Norris has this much Power" and closed itself in an early defeat.
Thank you for the excellent video! It's super exciting to see the return of HEDT CPUs from AMD. I really miss the 3990X on these charts... 3970X vs 7980X is not exactly the right comparison generation-wise. I appreciate the highlight of the 3970X vs 7970X, which kind of captures what could be expected. Fantastic video!
As an Intel fanboy over the years, I've woken up. Looks like AMD will be taking the cake for future-proof CPUs for gaming, etc. My next build will be AMD after my 13900K becomes obsolete.
Considering how the prices of the 3960X and 3970X have come down and how remarkably efficient they still are, getting yourself a used Threadripper is an insane value proposition right now!
As someone who purchased the 3970X and is planning on building a 7970X machine, one thing that was interesting to me was the inflation comparison between the two parts. I know it was only a couple of years, but I purchased my 3970X in March 2020. The comparative purchasing power today would be about $2,370. So, yes it's more expensive, but so is everything I guess. Just an interesting comparison to make when you're looking at high value parts where the small amount of inflation actually shows up in a reasonable way. Thanks for the great content as always!
One thing I've been wondering is why the 7950X3D isn't on the blender efficiency charts, especially when it appears that it would be as efficient if not more than the 7800X3D. It pulls ~62% the power of the 7950X and is about 7% slower than it in Cinebench. Using the 7950X3D review video for efficiency scaling, it would sit at basically 17.6Wh which is insanely efficient for consumer parts.
I was going to ask about the same thing. It looks like it was omitted unintentionally, but would otherwise be the top scorer for efficiency in that test after the new Threadripper parts. I'd love to see where it officially ends up with that test metric.
Amazing tests. It's interesting to see how well those CPUs perform on code compilation, because despite it being a highly "parallelizable" workload, at the end of the compilation pipeline, where you have to assemble a single artifact (or a couple of individual artifacts), compilers and linkers are very reliant on the performance of individual cores. And those CPUs are clearly worse at single-core performance (not only because of power and thermal concerns, but scheduling also gets harder). Yet they save so much time going through the parallel bits extremely fast that, even if they take longer on the non-parallel bits, they're still faster and more efficient on the job than their lower-core-count counterparts.

Still, like you said, you have to evaluate whether your workload takes advantage of it or not, and also be aware that there's a difference between a highly multithreaded workload and a highly "parallelizable" workload: the latter benefits way more from high core counts than the former, which commonly has more interdependency between threads, so one thread stalling may negatively affect the others.

Also, I'm a little curious whether those CCDs are as powerful as the consumer lineup, and how the gaming performance would fare with only one CCD enabled. Yes, it's crazy to buy this monster and proceed to disable all but one CCD; the question, however, is whether those cores can hold higher clocks for longer when they are not power- and thermal-constrained.
It might be worthwhile to include Apple systems in benchmark showdowns of this type. Creative and high end workflows are one of the areas Apple targets with their chips, and seeing how they compare to threadripper in various tasks, especially Photoshop and Lightroom, would be really instructive in determining the real value proposition of their systems.
I'm not sure it matters that much since the takeaway from any Photoshop chart is always that Photoshop is a monster of bloated spaghetti code that always scales poorly.
Please keep these Threadripper reviews in. Can't wait for 7995WX review and to see what benefits those higher grade chips actually provide and what influence memory channels have. At $11,000, that's a steal.
GN: Guys! Stop looking at gaming benchmark timestamps for these workstation workloads. Us: Don't mind us. We just want to know the bare minimum we need while we game, render 3D animations, transcode videos in Handbrake, run OBS in the background, etc., ALL AT THE SAME TIME.
Cities: Skylines 2 seems to love higher-core-count CPUs. There was a post where someone had a city with 600k population on a 7950X3D with almost no slowdown in the simulation, even at 3x speed.
Thanks for the detailed review as always! Are you planning a thorough review of the 7960X as well? Especially for video production workloads, the 7960X looks on paper like it might hit a sweet spot: you get the lanes of Threadripper for a second GPU, dedicated video output, high-speed networking or storage, and higher clocks than the 7970X or 7980X, at a much lower cost (according to Puget, the 5965WX was almost tied with the higher-tier 5th-gen Threadrippers for content creation). Those features would make it the ideal CPU for a lot of video professionals. The inclusion of Resolve benchmarks would also be really cool; although your test suite is already a lot of work, it might help out quite a few people.
AMD's server products are the most efficient chips to have ever been made. They are burning clean and running on all cylinders. Now if only their GPU division could be as amazing.
All of the money is being gobbled up by the Ryzen team. The Radeon team is so incompetent they couldn't even market a main selling feature right, which is fking sad 😢
@@dogdie147 They are doing what they need to do on the hardware side, even with RDNA3 not being as proficient as they might have projected. Their driver (and whole software) side is the real lacking division. My 7900 XT shouldn't be pulling over 100 watts when I want to load up Oblivion without mods, and it only does that because the card, for whatever reason, needs to run its VRAM at full frequency (which by itself causes the card to draw ~90-100 watts). Hell, the reason the idle power consumption was an issue (and still kind of is) was that the cards wouldn't clock the VRAM lower than 909 MHz (or max frequency).
AMD's GPU division could be running on all cylinders and most _gamers will still not buy anything but Nvidia_ because Nvidia's manipulative marketing is very strong. The fear of missing out is too strong for gamers to resist even at the lowest end where Nvidia fails to stand out.
@@sammiller6631 Radeon didn't have to release FSR3 and could have put all that time and energy into making FSR2 a better DLSS. They didn't, because they are chasing Nvidia again. Radeon didn't have to come out with chiplets when they clearly weren't ready (idle power draw, multi-monitor power draw, crashing in 10-year-old games like FFXIV — something I personally had happen for the first 5 months of owning my 7900 XT). They did anyway. Radeon never misses their feet when they shoot for the moon.
The 7980X is clearly memory-bandwidth-limited in many of these benchmarks. You can sometimes see zero scaling despite the workload scaling well (things like parallel decompression and rendering). In places with less need for memory bandwidth, like compilation or compression, it scales way better (and the remaining difference can be explained by lower base/boost clocks and imperfect scaling of the benchmark itself, i.e. linking during compilation).
I'd really love to see the 7960X benchmarked with these per-core numbers broken out like you did here, and an idle wattage for all of these chips. I run a gaming VM home server setup so energy efficiency is good to know.
I'd love to see benchmarking of VMs running on the Threadripper chips. I can imagine another use case for them is running small thin-client servers for small companies who don't want to dish out 100k for an enterprise solution but still need decent processing power for, say, 8 different stations simultaneously. With 64 cores, you could be running 9 different "6-core" VMs natively off the hardware with a whole 10 cores left over for background tasks. Not to mention all the PCIe lanes that could be running storage in RAID, making disk writes for the VMs redundant and nearly instantaneous. That could be a gamechanger for small businesses.
Wonder how this stacks up against high-core-count ARM CPUs like the Ampere Altra, since they've started experimenting with making workstation stuff based around it, as demoed by Jeff Geerling (and, for that matter, Apple's M3 Ultra whenever that comes out).
I know I'm late to the show but I really appreciate these reviews. Many of us that do software development (compiling) or other types of productive work in addition to gaming need the benchmarks like Chromium compile. Even if we don't compile, it is representative of a class of workloads that isn't just gaming. Same with blender and 7zip. Thanks for doing all the work and cheers from a copper cup.
I do find that the 12 cores in my Ryzen 5900x are barely enough now. Some games are starting to utilize 8-cores, and the remaining cores have to handle a lot from various other background software I use. 16-core will be the minimum next time I upgrade. A low end Threadripper still can be an option.
Another great, neutral, and informative review. I'm one of the people needing these beasts for daily computations, so I'm excited to learn everything before eventually ordering one. Would you have the possibility to run some Passmark tests with the 7980X and 7970X? Those scores track very well with how my own workload will fare against other CPUs. I do a lot of Monte Carlo-style work, so if it's similar to the financial Monte Carlo you ran, it's super promising!
Can you give some more background on why/how Passmark represents your work? That'd help us in planning (and explaining it if we introduce it). Can you give an example of how your work relates to some kind of real world "output" or result? What does it mean for you if a CPU is faster? That'd help a ton. Thank you!
@@GamersNexus Myself and a lot of 'poker colleagues' run a lot of computerized simulations, some kind of Monte Carlo algorithms, to build our strategies. When we look at the internal benchmark solves we run in our software, we see that they relate very closely to Passmark scores in general. Of course there are some small exceptions (as always), usually related to the number of cores/threads. (Generalized: our simulation software runs faster with more cores, as a very simplified explanation, but the scalability relates very much to what we see in Passmark, besides a few exceptions.)
Adobe Try To Be Efficient Challenge (Impossible). Honestly, it boggles my mind to see how efficient this CPU is for production workloads right out of the box.
Please, GN, compile and post your complete benchmark charts on your new website! Every review shows partial charts that include only a (random) selection of models that you've reviewed, which makes sense for presentation purposes but users want to have a complete reference SOMEWHERE that they can use for their own comparisons. This is especially important for PC cases and CPU coolers, where some very old models still outperform many new ones. At times I've had to open several different old reviews to look at partial charts in order to get a sense of how one model compares to another, and it can be frustrating. 🥺
@@pachete. Are you sure? I know they'll post their old reviews, but that's not what I'm talking about. I've never heard them say they'll be posting complete charts, but maybe I've missed something.
G'day Steve, I watched the Livestream first, not only was it lots of fun but also BIG THANKS to Amit & Bill for their time answering technical questions. Also as you mentioned it in the livestream it would be really cool if you did make the GN Logo Blender test available for us to use at home so then we can test our CPUs that are not on your list (like my Athlon 200GE) to see how terrible they are at rendering for a laugh😁.
I must say, it is nice to see these units tested with more "industrial" software, like FEM etc. The regular test suite just does not show what these CPUs are capable of, in my opinion.
Hmm, I'm wondering how these would perform in games if you split them into groups of 8 cores with a video card. In say a really niche case of workstation by day, entire family gaming rig by night.
Until there's a reliable and fast x86 emulation layer for ARM then that's not really possible since the vast majority of software used in these tests are native x86 applications.
CPU review with a WHITEBOARD and a LN2 OC livestream in the same day??? Y'all are crazy. Love the content, to everyone over at Gamer's Nexus: thank you!
A Ryzen 7 1700x is worth about $50, while a similarly-aged Threadripper costs about $30 more _and_ requires a special motherboard. In conclusion, you might expect that a Threadripper part would be good for gaming after a few years, but given the actual cost to get one, you'll always be better off with a newer Ryzen 5 or i5.
Well, not always. For mainstream use, definitely. But there are professional use cases where current-gen R5/i5 parts just won't do what you need in heavy enough workloads.
For some people, nah. Those PCIe lanes are real estate: you can load a fuk ton of PCIe devices without worrying about which one goes where and which gets disabled, or whatever.
@@GamersNexus I do wish those PCIe Gen 5 motherboards could do a super-split of PCIe that turns it into a lower PCIe version but with double the lanes. Like, you'd have cheap HEDT there.
@@GamersNexus That clarification is important. I meant for gaming, of course, and made that comparison because only older Threadrippers would have a similar platform cost to an i5 or Ryzen 5. I guess I didn't make that entirely clear.
@@GamersNexus Yes, but: you need more compute power? Buy anything for pro use that matches your compute needs and fits your price range, including EPYC or Xeon, and get a mainstream CPU (Ryzen, i5, whatever) for everyday purposes. It will be both cheaper and a better experience.

HEDT is dead, and while these TR 7000 and 7000 PRO parts themselves are good, the cost AND the lack of versatility of the platform (mostly through the absurdity that TRX50 and WRX90 are compared to any decent EPYC 9004 mobo, for instance) render the platform a basic scam. I consider HEDT dead for now. If it weren't for the scam that DDR5 currently is (come on, it's been 2 years and we don't even have 64 GB mainstream UDIMMs), and if we at least had those DIMMs so you could run 256 or 512 GB of standard DDR5 in 4 slots, this wouldn't even be a discussion: if you need real professional features like tons of PCIe lanes, compute power, and at least 1 TB of RAM, get a server CPU and board. You're a prosumer doing rendering, huge labs and prototypes? Get a Ryzen 9 or an i7/i9 and 256/512 GB of DDR5.

Let's be real: with CPUs normally progressing in compute power, by the time Zen 5 / Arrow Lake or Zen 6 / Nova Lake are out, if 64/128 GB DDR5 UDIMMs are available, this pseudo-HEDT is probably dead. People who need more of everything will go to server chips. People who only need more RAM will stick to mainstream.
Correction. "Before VRM efficiency losses" should be "Before VRM inefficiency losses". The 1920 buck converter is about 60% efficient. That means the VRM generates almost as much heat as the CPU. You should look into water-cooling your VRM. Of course, the 2011 Ćuk-buck2 has an efficiency of around 99%, is smaller and costs less than the 1920 design. But that would require the board designers to learn some of the 100 years of improvements in power conversion.
Small point: The TR parts don't have more cache than the regular 7000 series, the total cache doesn't matter. What matters is cache per core, and that is identical except for the X3D parts. If they decide to release X3D TR... yeah lol. But those won't have more cache than the 7800 X3D per core either.
You are correct. I wonder if the reduced cores per CCD models (or if disabling cores in each CCD manually) would make a difference. The 24 core part has the same L3 as the 32 core part, granted that's still far from the cache per core of the X3D CPUs. I doubt AMD will sell X3D threadrippers since productivity workloads don't tend to benefit from it. Then again, AMD does sell a few EPYCs with the extra cache, so who knows? Maybe someone will at least try to get one of those EPYCs into a board that allows overclocking. Some people did it with the older EPYCs.
I confess. I watch the gaming segments for Threadripper reviews because I think it's fun to see how a CPU not meant to handle these kind of workloads performs. But it would be amazing to get to work on something that can use a Threadripper lol.
"Roofline model" is a better way to think about benchmarking; FMA is the speed limit. Everything boils down to a ratio of how many arithmetic operations you do per byte transferred. All the different cache architectures are just an attempt to deal with the transfer side of that equation. Also, your charts would get MUCH more interesting if you compiled for the specific CPUs, as there are certain computations that would expose orders-of-magnitude differences between products. An example: multiply and add half floats (f16) and see what happens when you compile for different targets as the buffer size increases (Xeon W vs TR gets interesting...).
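The roofline idea above reduces to a single min(): attainable throughput is either the compute peak or memory bandwidth times arithmetic intensity, whichever is lower. The peak and bandwidth figures below are hypothetical, chosen only to show the shape of the model:

```python
def attainable_gflops(peak_gflops: float, mem_bw_gbs: float, ai: float) -> float:
    """Roofline model: the flat roof is the FMA compute peak; the slanted
    roof is memory bandwidth (GB/s) * arithmetic intensity (FLOPs/byte)."""
    return min(peak_gflops, mem_bw_gbs * ai)

PEAK = 5000.0  # hypothetical GFLOP/s of FMA throughput
BW = 250.0     # hypothetical GB/s of memory bandwidth
for ai in (0.5, 2.0, 20.0, 100.0):
    print(ai, attainable_gflops(PEAK, BW, ai))
# Below the ridge point (AI = PEAK / BW = 20 FLOPs/byte) the workload
# is bandwidth-bound, and extra cores or higher clocks barely help.
```

That ridge point is why some of the 7980X's results show near-zero scaling: low-arithmetic-intensity workloads sit on the slanted (bandwidth) roof no matter how many cores are available.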
There is no way to leverage more than 1 GPU for gaming besides the odd accelerator here and there. It's a dead concept. Now, if you are looking to run some parallel workstation workloads over multiple GPUs, then we're talking options.
Apps that fully utilize GPU compute actually don't care much about PCIe speed, since they don't need to communicate that much with the CPU during processing (and consequently don't load the CPU much). Because of that, performance just scales with GPU performance anyway.
Desire to use server or workstation boards and CPUs for gaming is courtesy of EVGA when they made their SR-2 Classified dual 1366 socket server board capable of using dual Intel XEON processors, 12 dimms of memory, and up to 4x graphics cards. The other cool dream for cool-factor at that time was the Supermicro quad socket AMD Opteron server board. For gaming, my i5-6600 mini-ITX with GTX 1080 Ti still runs most games at high settings on 4k, and zero issues with 3D design rendering. The idea of having a T-rex monster system is always going to be way more awesome than a little gecko, though.
Please, can you guys figure out a viewport test in Blender for these CPUs? No one in their right mind spends this kind of money on a CPU and doesn't have a 4090 to render on. People who use Blender want to know VIEWPORT performance: can the CPU play back an animation in solid mode at the required FPS? How are fluid bake times? How are other physics bake times, etc.? I really appreciate your reviews, but render times are useless.
I think it would be cool to compare that to M3 Ultra or whatever the best apple silicon thing is right now - kinda like best of what prosumers can expect on both ends :)
Love that you include Stellaris in the gaming tests. That's the only game that constantly forces me to upgrade my CPU. I really hope a Ryzen 8000X3D will bring great improvements.
24:05 the financial and probability simulations are very important for machine learning - whether that's training or inference, depending on the model architecture. It's a very big deal for ML but even bigger for time series data analysis in market dynamics when you assess a large system of signals and indicators which compare to one another and are compared across several time scales in addition to many different products whose price action is being recorded. Log complexity on millisecond updates across dozens or hundreds of items needs a ton of processing if you don't want to miss out on an arbitrage or trade intraday algorithmically.
AMD has made no claims about socket support. Expect to retire the motherboard when you need something faster. If you don't like that then you're SOL as Intel's upcoming Fishhawk Falls Refresh will also be EOL after this "generation" (it's basically "14th gen Xeon W").
"Everybody jumps to the gaming [section of the video]" Well who'd have believed that people subscribed to a channel called GAMERS Nexus would have done that? 😁🤣
At least for me, the interest in "checking the gaming scores" for these CPUs is the desire for a single computer that can handle heavy workloads while also still being great for gaming. I'd throw a decent chunk of cash at something that can do both without compromise. These new Threadripper CPUs seemed interesting in that they might have opened up a new, pricier option for people trying to get both gaming and workstation performance, but it looks like you give up a lot on the gaming side here despite them being very impressive workstation CPUs. At some point it starts to make sense just to build two completely separate machines dedicated to each use case, but, aside from that, the sweet spot for a single-machine option is likely still the 7950X3D or 7950X (depending on your tolerance for tinkering with core affinity settings for more gaming perf).
You can pour LN2 on bare skin as long as it runs straight off. LN2 causes problems when it's held against the skin, as in submerging a hand or soaking your clothes.
I think people are jumping to the gaming benchmarks because this is much of your audience. Your channel is called "Gamers Nexus" for a reason. :) These people just don't need or buy workstations, but they are curious about technology and what it can do.
I'm going to buy a 7960X and TRX50 board for gaming and watching UA-cam just to spite Steve :D I'd love to see some VM/hypervisor and database workloads for Threadripper! I miss the glory days of HEDT gaming; my i7 920 D0 stepping and OCed Samsung DDR3 half-height RAM was a BEAST, my 3930K and TR 1920X less so, but we don't talk about those. I'd say Threadripper is the competitor for the Xeon W-2000 series and TR Pro competes with the W-3000 series, especially looking at specs and pricing.
Wow, that efficiency on the TR 7980X. That low v-core helps a lot here and it is very, very good. I hope you're going to be overclocking that particular chip later today as that looks like a good sample. Why were some games run with ECC disabled? I see them on the charts but didn't hear them mentioned. There was some talk of how these are high-cache-count CPUs, but the lower-core-count V-Cache-enabled parts, the Ryzen 7800X3D in particular, were able to trounce them in gaming. I would bet that you can get V-Cache-like gaming performance on the TR 7980X, but you would need to disable 56 of the 64 cores, and in a way where only one core per CCD is active. That'd actually give the TR 7980X more cache per core than even the Ryzen 7800X3D. Really curious how that would compare, as there would be the oddity that the L3 cache is not directly shared on the TR 7980X in that configuration (cache data duplication, more die-to-die coherency traffic, etc.). I'm really curious how the TR 7960X fares. It'd be a nice jump to the higher-PCIe-lane platform without the premium CPU price (premium compared to the TR 7970X and 7980X parts). As for testing, I am curious what the TR Pro 7995WX could do on this platform. Granted, that is an 8-memory-channel part on a 4-memory-channel board, but it is a valid and supported upgrade path. So while AMD only had a single generation of parts for TRX40, the TRX50 platform will have an upgrade path due to being able to run WRX90 chips in 4-channel configurations. As for the future, I'm optimistic that there will be a second or even a third generation of WRX90 parts. I suspect there will be a second generation of TRX50 parts. The main thing I'm hoping for is V-Cache-enabled models, as the performance uplift they bring is generally stronger than additional cores, especially for more day-to-day tasks. 
HEDT and workstations are performance monsters when loaded but v-cache gives a system that extra snappiness when creating the models, moving the data around prior to hitting run on a long simulation.
Thanks Steve n team, this helps heaps with my cost benefit forms for management. they want to see number/bar go up mean workers work harder/less downtime
Appreciated the HPC (high-performance compute*) benchmarks being added to the mix for the high-core CPUs. I, too, have no idea what most of them mean in real-world terms, but it's cool to see some of the numbers these CPUs might actually be crunching. If you come across someone that _does_ know about these things, is doing something cool, and wants to show off the compute, I wouldn't object to seeing that. HPC is cool. (*-defined because too many things use the letters HP)
Watch our livestream overclocking the AMD Threadripper 7995WX CPU! Crazy educational too as the engineers who joined us shared a lot of detail: ua-cam.com/video/vU179_czCnU/v-deo.html
Error/correction - There is a typo in some charts where the 3960X's name is next to an erroneous thread count. This does not affect testing or results. Our apologies for the error. Correct count is 24C/48T.
Watch our coverage of the AMD Threadripper TRX50 & WRX90 motherboards here: ua-cam.com/video/NTnVBIEPz1w/v-deo.html
Find the AMD Threadripper CPU specs here: gamersnexus.net/news/new-amd-threadripper-7980x-7970x-7960x-threadripper-pro-cpus-announced
Support our testing and grab a solder & project mat, modmat, or toolkit on the GN store! store.gamersnexus.net/ (currently 10% off at time of posting!)
Fingers crossed from across the pond that "later today" doesn't mean _too_ much later, because this is definitely something I want to catch
This gonna be fun!
Do you think that Adobe still gives preferential treatment to Intel CPUs over AMD?
YES!!!
What kind of nonsense statement is "time TBD"? At least give a range.
I wonder if Adobe will ever write something that threads efficiently
That'd certainly be nice for us. Resolve apparently does well here, though we haven't used it yet!
nah, they pretty much moved their focus to hardware acceleration w/ Quick Sync support 3 or 4 years ago.
lol. lmao, even.
Adobe sucks anyway, i don't know why people keep using that overpriced crap.
Maybe when Intel stops paying them...
Finally, some new gaming benchmarks
I seriously wish it would be closer to the 7950X in gaming, rather than my current 5950X. If that was the case, i'd change platform simply to install more GPUs for other tasks while i'd still be able to game at pretty much best performance. But unfortunately current results show it's still more sensible to run a different PC for more GPUs. It's a shame, i was ready to shell out a stack of cash! 🥲
@@AyoKeito If you just increase the graphics settings you don't even need to bother about that, and remember, the reason it runs like that is the frequency.
Anyway, it's waaay cheaper to just have multiple desktop systems if you aren't a server company struggling to fit more computers in the same building.
Threadripper has always suffered from higher memory latency than the consumer-range Ryzen. Even on Zen 2, Threadripper was always +8ns memory latency over regular Zen 2 Ryzen with the same memory timings in AIDA64. Games love low-latency memory. And the mandatory use of Registered DIMMs on the TR 7000 series hurts memory latency even more.
@@Splarkszter "If you just increase the graphics"
Sorry, this is a fallacy. GPU performance will always top out at something; you can't possibly increase it any further. And if you do (e.g. 8K downsampling), you're just dragging the overall performance down from something it could've been instead (e.g. 60 fps vs 120 fps). This is even more important in emulation, which is extremely CPU-dependent for good performance.
As someone with a 4090 and 5950x (waiting for 8000x3d), I am constantly CPU limited while trying to reach 120Hz, irrespective of graphics options. And RT just drops it down even further because it still depends a lot on the CPU.
If all you have is a 15 year old 60Hz monitor, then sure, that's a different discussion. But anyone playing modern PC games in an era of 500Hz monitors is probably expecting 120Hz+, and both the GPU and CPU need to be strong for this to happen.
All of GN's CPU benchmark graphs show this issue very clearly. Wish they'd run some emulation tests though; no outlet I know of does this.
@@AceStrife It's funny how 120+ is very little to ask from some games while for others it's near impossible to reach with reasonable settings
AMD's focus on efficiency has been insane when you think about how they're punching up against chips with 2, 3, 4 times the power draw.
and that's why nobody in the server market sees EPYC as an option.
Because what does efficiency matter when you also need to pay penalty fees for power overconsumption in at least 20 states?
@@xXFlameHaze92Xx You don't make any sense. Intel Xeons consume even more power, so companies choosing them will be penalized even more.
@@xXFlameHaze92Xx EPYC has literally been AMD's largest commercial success outside of their PlayStation deal...... Server share has gone up 10% in the last 3 years alone....
Not too long ago it was Intel that was more efficient. Somehow AMD has managed to become the king of everything in the CPU world.
@@xXFlameHaze92Xx sorry what? do you know what efficiency means
I'm in the world of molecular dynamics and I regret to inform you that NAMD is unfortunately pronounced "NAM-Dee". Major props for running these simulations, they are not easy to setup. Extremely helpful to me as I plan out lab computers.
I couldn't keep saying "N-A-M-D" and had to put my foot down! haha, thanks for the info, will use for next time. Are those tests useful in figuring things out for your field? It'll help us determine if we keep running them!
@@GamersNexus Totally! Took me about 2 years to finally figure out the "correct" pronunciation. And absolutely! These are ridiculously helpful, as documentation and testing of these programs on different hardware is at best limited and mostly nonexistent. You all are the only ones putting up benchmarks with these tools, and for that I am extremely grateful!
@@brandoncoventry5662 Do you know how these metrics relate to real-world benefit to you/your field, since you said they are hard to come by? Like, is it 1:1: if it's 30% better in a benchmark, can you expect somewhere around 30% gains in real-world application? Interested to know how much one metric is more important than another!
@@GamersNexus Lots of thanks for including these tests. I also work daily with molecular simulation software and can vouch for these tests being super useful for planning upgrades to workstations and compute clusters.
Would it be possible to have some similar tests in the future for GPUs as well? Consumer grade GPUs perform extremely well in software such as GROMACS, but finding benchmarks for it is extremely difficult.
@@radutazu oh god yes, this would be amazing. I run AMBER mostly in my day job but I'll take any GPU benchmarks for MD.
You know it's serious when Steve whips out the whiteboard.
Who is going to be the first lollipop 🍭 sucker that buys a threadripper cpu just for gaming only?
@@lldjslim That might be JayzTwoCents' plan for the next iteration of Skunkworks
Thanks Steve!
@@justahologram2230 nah Phil might get it so that video editing/rendering time goes down
@@justahologram2230 Nope, he said the horrible gaming performance discarded it for Skunkworks, especially since it's his daily driver at home for gaming/streaming
I really loved the part "we don't know what the numbers mean but they're here". Honesty.
In the very least, we know how to run a test even if the software is foreign to us. Hoping the community lets us know which of those are useful so we can incorporate them fully!
I'm part of the fantasy world of Finance, and I sincerely hope the Science guys know what they're doing. We definitely do not.
The FSI bench is measuring HFT in equity markets, particularly derivatives, I suppose. HFT strategy and algorithmic implementation must be unique for each operator for it to work. I'm not even remotely qualified to give a ten minutes power-point presentation about it, but I don't see any practical reasons for this test, at least in theory. I'm probably missing something obvious here, so please feel free to correct me.
The numbers Mason, what do they mean!
Being able to turn something you know nothing about into KPIs you can discuss confidently sounds like Steve missed his calling in marketing, but then I realised he's just too honest for it
full size performance cores running at 3.5w each is extremely impressive, this is borderline high performance big ARM cores
Wendell (level 1 techs) found the EPYC 128 core chip was using just over 1w per core. ARM is dead as long as AMD keeps this up.
@@Azureskies01 Wow
I feel pity for Intel trying all that finicky E-core stuff: AMD performs better in low-thread tasks because they have the better-performing cores, and then AMD performs better in high-thread tasks too, because all of their threads are full performance cores...
@@blar2112 Yeah, it seems silly; they just can't compete with TSMC's cutting-edge processes. And I see it as a generally bad direction of things, because consumers benefit from healthy competition. Otherwise it's just a monopoly. I really want Intel to succeed in their ventures and be competitive. But it seems like cutting-edge electronics fabrication is such an expensive process that it will be impossible to push progress 'forever' without some sort of technology consolidation into a single entity.
ARM is pretty interesting by itself. You wouldn't want ARM to die tho, there is a reason the x86 platform is focusing on efficiency.
DON'T LET COMPETITION DIE, BE STRATEGIC.
@@Splarkszter Agree, ARM is cool
Sorry Steve, but the channel is called Gamer's Nexus instead of Productivity Nexus, so naturally we are drawn to gaming benchmarks :)
Channel rename on April 1?!
@@GamersNexus Nerds Nexus would be funny. Bonus points if you add this thing 🤓 lol
Steve: I’m gonna call you all out…
Me: Hold that thought GN, I need to skip to the gaming section
The disclaimer doesn't stop me from wanting a 64-core Threadripper for a gaming-first PC
to run Flight Simulator below 2% cpu usage.
I hope that in the future we see some games optimized for lots of cores.
Would be interesting, for example, to have a massive grid in a racing game where each AI has its own dedicated thread.
@@hueanaoor using the efficiency-core concept from Intel to add some sort of parallel computing ability to your CPU.
Just imagine what graphics would be possible if you were able to outsource something like ray tracing to mostly underused parts.
I don't know enough about this stuff for more in-depth ideas, but it seems like a logical point for improvement
Or try an EPYC-X server CPU, it has the extra L3 cache! It's gotta pull big gaming numbers, right? Right?
"I know this product is not for my use case, but I want to signal to corporations that I'll buy it even though the value proposition is objectively bad"
Hey GN, noob thought here, if you ran 2 tests at the same time, would that give different insight into the 32 to 64 core scaling?
That's an interesting thought. Yes. It would definitely change the dynamic: If you were bound to 32 cores max in some application, you could theoretically run 2 instances of it and increase the throughput. Handbrake is a good example of this: You could spawn multiple Handbrake instances with more cores. Great question - thanks for posting!
@@GamersNexus Could you set core affinity in Windows and run two or four "normal" all-core benchmarks? Would be interesting to see if tasks are spread across chiplets, etc.
Yup, virtualization is a very cool thing. Sometimes there may be some shenanigans, but if you run a Linux-based OS made for that, you could run multiple single-core-heavy tasks at the same time, for example.
@@Splarkszter That would change boost behavior
It's not really applicable to workstation CPUs; I'm not 100% sure why such high-core-count workstation CPUs even exist, but in servers virtualization is a prime use for such high-core-count parts. There are a couple of virtualization benchmark suites out there that might be relevant, assuming virtualization isn't disabled on these CPUs.
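For anyone who wants to try the two-instances idea from this thread, here's a rough Python sketch (my own illustration, not anything GN ran) that pins two worker processes to separate core sets before running a toy workload:

```python
# Sketch of the "run two instances with core affinity" idea from this
# thread. os.sched_setaffinity is Linux-only, so it's guarded; on Windows
# you'd use "start /affinity" or psutil's Process.cpu_affinity() instead.
import multiprocessing
import os

def worker(core_set, n):
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, core_set)  # 0 = the calling process
    # Stand-in for a real workload, e.g. one Handbrake/render instance
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    if hasattr(os, "sched_getaffinity"):
        cores = sorted(os.sched_getaffinity(0))
    else:
        cores = list(range(os.cpu_count()))
    half = max(1, len(cores) // 2)
    core_sets = [set(cores[:half]), set(cores[half:]) or {cores[0]}]
    # Two instances run concurrently, each confined to its own half of the CPU
    with multiprocessing.Pool(2) as pool:
        results = pool.starmap(worker, [(s, 1_000_000) for s in core_sets])
    print(results[0] == results[1])  # both instances did the same work
```

Whether this actually helps throughput depends on the boost behavior mentioned above, so it's worth measuring both ways.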
The main benefit of Threadripper is the substantial bump in PCIe lanes, and no longer having to deal with that "if I want to use 3x NVMe drives the PCIe slot gets disabled" nonsense.
This. The ONLY reason I would want the TR is for the lanes.
Seems like there’s a hole in the market for “lane ripper”.
That's a hefty price to pay for PCI lanes.
I love how in less than 30 seconds you went from “wow it’s so efficient” to the LN2 tank.
It’s a long way down that efficiency curve isn’t it lol
But there's so much room to make it faster and pull 1200W! We have to do it!
@@GamersNexus finally a competitor to Intel's chilled 5GHz 28-Core CPU
@@GamersNexus will be looking forward to this those numbers are going to be insane
@@GamersNexus Make sure you guys try out Cities Skylines 2 for maximum efficiency benchmarking
@@GamersNexus Predictions?
Thanks for bringing back the code compilation benchmark, there are huge number of programmers who like their binaries served quickly.
23:59 I am technically part of this industry as a quant. In this industry the models are sometimes built by firms who need the calculations faster than other firms so they can arbitrage first. But more often than not, other models are calculated with similar methods, so I imagine these benchmarks are useful.
Also, although there are many types of Monte Carlo simulations, there is really only one type used for time series, so the benchmark is probably using that one. However, I am not too familiar with the details of the benchmark you provided, so I couldn't say anything for certain about it. Let me know if y'all have any questions
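Since the comment above describes exactly this kind of workload, here's a toy single-threaded Monte Carlo for a price series (geometric Brownian motion) in Python. Every number in it is made up for illustration, and each path is independent of the others, which is why these benchmarks scale so well with core count:

```python
# Toy Monte Carlo estimate of E[S_T] under geometric Brownian motion,
# the classic time-series model alluded to above. Parameters are
# illustrative only, not from the actual FSI benchmark.
import math
import random

def simulate_mean_final_price(s0, mu, sigma, steps, n_paths, seed=42):
    rng = random.Random(seed)
    dt = 1.0 / steps
    drift = (mu - 0.5 * sigma ** 2) * dt
    vol = sigma * math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):      # paths are independent -> trivially parallel
        s = s0
        for _ in range(steps):    # one simulated trading year
            s *= math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += s
    return total / n_paths

# Analytically E[S_T] = s0 * exp(mu), so the estimate should land near 105.
print(simulate_mean_final_price(100.0, 0.05, 0.2, 252, 2000))
```

A real benchmark would fan the path loop out across cores (e.g. with multiprocessing), which is where a 64-core part earns its keep.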
As a molecular simulation PhD, I also loved seeing these workstation tests included. LAMMPS especially is really relevant for me. I actually think that the test case in LAMMPS is also molecular dynamics (just like NAMD); they just implemented some of the calculations/parallelizing differently.
What might be important to note is that in my field we typically run a simulation 3 to 5 times with identical settings, just different initial configurations (the results here have statistical value). So to me, seeing the results of the 32 cores compared to the 64 cores set up as 2 simulations running next to each other would be very informative as well. (Or both running 16-thread simulations until the CPU is 100% loaded would be good too.) I would be interested in what the CFD people here think of that.
Amazing video as always. AMD’s work per watt improvement has been amazing. I can’t wait to see what Zen 5 does next year!
For 3D rendering tests, and since nearly all 3D renderers these days support network rendering, it would be interesting to see how one of these systems compares (in terms of price, performance and power consumption) with two or four render nodes using regular AM5 CPUs.
Glad to see the compilation benchmark.
A 96-core CPU would be pretty incredible for server purposes, yeah? Like for EVE Online, you could host a solar system on each thread, 2 per core, for 192 systems on a CPU, with tons of headroom for having many players connected?
precisely why epyc cpus are so wildly popular for servers with high compute requirements 🙂
Thanks! Have been waiting on the new Threadrippers for a while now.
You know maybe the best benchmark for these cpus would be compiling gentoo.
We can look into that!
@@GamersNexusyes, please!
As a Gentoo user (and developer).. many compilation tasks are not well parallelized. For many projects, a lot of time is spent in the configure phase, which is sequential. And packages themselves are merged sequentially. I would say the compilation tests are enough. Compiling the whole distro is maybe a use case for distribution maintainers, but it's a really rare use case for the typical user. Some really large projects that can benefit, like Chromium, also require lots of RAM for that many parallel compilation jobs.
(however I don't want to be gatekeeping)
@@reav3rtm to be honest I was just thinking about how funny it would be to have that as a benchmark, since yeah it's pretty much a long compilation test that can take days on some less powerful/mid range cpus and depending on packages installed, so I shared that thought. You don't want to know how shocked I was that gn actually responded and even said they'd look into it!
@reav3rtm while it might be true, I know from the Internet what's the usual Gentoo specialist outfit. And I really want to see Steve in knee high socks..
I think Photoshop realized the power and speed of 64 Zen 4 cores and said: "Not even Chuck Norris has this much Power" and closed itself in an early defeat.
Adobe holding back the industry for decades now and going strong!
Woah this is the first time I’ve seen a corrections section integrated into the UA-cam description
Thank you for the excellent video! It's super exciting to see the return of HEDT CPUs from AMD.
I really miss the 3990X on these charts... 3970X vs 7980X is not exactly the right comparison generation-wise...
I appreciate the highlight of the 3970X v 7970X, which kind of captures what could be expected.
Fantastic video!
As an Intel fanboy over the years, I've woken up. Looks like AMD will be taking the cake for future-proof CPUs for gaming, etc. My next build will be AMD after my 13900K becomes obsolete
Considering how the prices of the 3960x and 3970x have come down and how remarkably efficient they still are, getting yourself a used Threadripper has insane value proposition right now!
As someone who purchased the 3970X and is planning on building a 7970X machine, one thing that was interesting to me was the inflation comparison between the two parts. I know it was only a couple of years, but I purchased my 3970X in March 2020. The comparative purchasing power today would be about $2,370. So, yes it's more expensive, but so is everything I guess. Just an interesting comparison to make when you're looking at high value parts where the small amount of inflation actually shows up in a reasonable way. Thanks for the great content as always!
One thing I've been wondering is why the 7950X3D isn't on the Blender efficiency charts, especially when it appears that it would be as efficient if not more so than the 7800X3D. It pulls ~62% of the power of the 7950X and is about 7% slower than it in Cinebench.
Using the 7950X3D review video for efficiency scaling, it would sit at basically 17.6Wh which is insanely efficient for consumer parts.
I was going to ask about the same thing. It looks like it was omitted unintentionally, but would otherwise be the top scorer for efficiency in that test after the new Threadripper parts. I'd love to see where it officially ends up with that test metric.
The Level1Techs jab is the best part of the video; we all love you, Wendell
my god TSMC and AMD are really killing it
Amazing tests, it's interesting to see how well those CPUs perform on code compilation, because despite being a highly “parallelizable” workload, at the end of the compilation pipeline, in which you have to assemble a single artifact (or a couple of individual artifacts), compilers and linkers are very reliant on the performance of individual cores.
And those CPUs are clearly worse at single-core performance (not only because of power and thermal concerns, but scheduling gets harder), but they save so much time by going through the parallel bits extremely fast that, even if those CPUs take longer to go through the non-parallel bits, it's still faster and more efficient on the job than the lower core-count counterparts.
Still, like you said, you have to evaluate whether your workload takes advantage of it or not, and also be aware that there's a difference between a highly multithreaded workload and a highly “parallelizable” workload; the latter benefits way more from high core counts than the former, which commonly has more interdependency between the threads, so one thread stalling may negatively affect the others.
Also, I'm a little curious whether those CCDs are as powerful as the consumer lineup or not, and how the gaming performance would fare if you had only one CCD enabled. Yes, it's crazy to buy this monster and proceed to disable all but one CCD, but the question is whether those cores can hold higher clocks for longer when they are not power and thermal constrained.
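The serial-tail point in this comment is essentially Amdahl's law. A quick illustrative calculation (the 95% parallel fraction is an assumption for the example, not a measured number for any real build):

```python
# Amdahl's law: the serial fraction (configure steps, the final link)
# caps speedup no matter how many cores you add.
def amdahl_speedup(parallel_fraction, cores):
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# A build that's 95% parallelizable (illustrative number):
for cores in (8, 32, 64):
    print(cores, round(amdahl_speedup(0.95, cores), 1))
# Even at 64 cores the speedup is ~15x, not 64x, which is why the
# high-core-count parts still win on compiles but don't scale linearly.
```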
It might be worthwhile to include Apple systems in benchmark showdowns of this type. Creative and high end workflows are one of the areas Apple targets with their chips, and seeing how they compare to threadripper in various tasks, especially Photoshop and Lightroom, would be really instructive in determining the real value proposition of their systems.
I'm not sure it matters that much since the takeaway from any Photoshop chart is always that Photoshop is a monster of bloated spaghetti code that always scales poorly.
Please keep these Threadripper reviews in. Can't wait for 7995WX review and to see what benefits those higher grade chips actually provide and what influence memory channels have. At $11,000, that's a steal.
GN : Guys! Stop looking at gaming benchmarks timestamps for these workstation workloads.
Us : Don't mind us. We just want to know the bare minimum we need while we game, render 3D animations, transcode videos in Handbrake, run OBS in the background, etc. etc., ALL AT THE SAME TIME.
I do think that the 24 cores will be useful for Cities: Skylines 2. But yeah, for the other games, the 7800X3D is enough or better.
Better. 7800X3D is king.
@@trucid2 Threadripper is awesome, but this chart just reinforces how amazing the 7800X3D is for simulation games. It's really in a class of its own.
Cities Skylines needs an RTX 7090!
Cities Skylines 2 seems to love higher core count CPUs, there was a post where they had a city with 600k pop with a 7950x3d with almost no slowdown in the simulation even at 3x speed
I thought that game was GPU-intensive and heavily single-threaded. Or so I heard
Thanks for the detailed review as always! Are you planning on a thorough review of the 7960X as well?
Especially if you're interested in video production workloads, the 7960X looks on paper like it might hit a sweet spot: you get the lanes of Threadripper for a second GPU, dedicated video output, high speed networking or storage, and higher clocks than the 7970X or 7980X, at a much lower cost (according to Puget, the 5965WX was almost tied with the higher tier 5th gen Threadripper for content creation).
Those features would make it the ideal CPU for a lot of video professionals.
The inclusion of Resolve benchmarks would also be really cool, although your test suite is already a lot of work, it might help out quite a bit of people
6:59 Look how excited Steve looks when he pulls out the whiteboard LOL
AMD's server products are the most efficient chips to have ever been made. They are burning clean and running on all cylinders.
Now if only their GPU division could be as amazing.
All of the money is being gobbled up by the Ryzen team. The Radeon team is so incompetent they couldn't even market a main selling feature right, which is fking sad😢
@@dogdie147 They are doing what they need to do on the hardware side even with RDNA3 not being as proficient as they might have projected. Their driver (and whole software side) is the real lacking division.
My 7900XT shouldn't be pulling over 100 watts when I wanna load up Oblivion without mods, and it is only doing that because the card, for whatever reason, needs to run its VRAM at full frequency (which causes the card to draw ~90-100 watts by itself).
Hell, the reason the idle power consumption was an issue (and still kind of is) was the cards not clocking the VRAM down lower than 909 MHz (or max frequency)
Those RDNA GPUs are actually CRAZY efficient if you know how to undervolt
If
AMD's GPU division could be running on all cylinders and most _gamers will still not buy anything but Nvidia_ because Nvidia's manipulative marketing is very strong. The fear of missing out is too strong for gamers to resist even at the lowest end where Nvidia fails to stand out.
@@sammiller6631 Radeon didn't have to release FSR3 and could have put all that time and energy into making FSR2 a better DLSS. They didn't because they are chasing nvidia again.
Radeon didn't have to come out with chiplets when they clearly weren't ready (idle power draw, multi monitor power draw, crashing in 10 year old games like in FFXIV-something i personally had happen for the first 5 months of owning my 7900XT). They did anyway.
Radeon never misses their feet when they shoot for the moon.
3:35 We have been called out!! No, but seriously. He has a point.
how to club intel over the head in the most brute force way possible
For $5,000 you can't just buy any computer; for that cost, you can buy a full, top-of-the-line gaming setup.
The 7980X is clearly memory-bandwidth limited in many of these benchmarks. You can sometimes see zero scaling despite the workload being one that scales well (things like parallel decompression, rendering); in places where there is less need for memory bandwidth, like compilation or compression, it scales way better (and the remaining difference can be explained by lower base/boost clocks and imperfect scaling of the benchmark itself, i.e. linking during compilation)
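As a back-of-envelope check on that bandwidth ceiling, here's a tiny sketch; all numbers in it are illustrative assumptions, not measurements:

```python
# Rough bandwidth-ceiling estimate for the scaling wall described above.
# Per-core demand figure is an assumption, not a measured value.
def peak_bandwidth_gbs(channels, mts):
    return channels * mts * 8 / 1000  # 8 bytes per transfer per channel

def cores_fed(mem_bw_gbs, per_core_demand_gbs):
    return mem_bw_gbs / per_core_demand_gbs

bw = peak_bandwidth_gbs(4, 5200)   # TRX50: 4 channels of DDR5-5200
print(round(bw, 1))                # 166.4 GB/s theoretical peak
# If each core of a bandwidth-hungry workload streams ~4 GB/s,
# only ~42 of the 64 cores can actually be fed; past that point,
# adding cores gives roughly zero scaling.
print(round(cores_fed(bw, 4.0)))
```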
Here’s hoping you get to review the 7995wx Threadripper PRO, after the overclocking sessions!
3.5 W per core? I'd love to have a 56 W 16-core on desktop without manually stepping down the (poorly documented) voltage-frequency curve in the BIOS
Just buy a 7950X and activate the 65W-ECO mode.
I'd really love to see the 7960X benchmarked with these per-core numbers broken out like you did here, and an idle wattage for all of these chips. I run a gaming VM home server setup so energy efficiency is good to know.
I'd love to see benchmarking on VMs running on the Threadripper chips. I can imagine another use case for them is running small thin-client servers for small companies who don't wanna dish out 100k for an enterprise solution, but who still need decent processing power for, say, 8 different stations simultaneously.
With 64 cores you could be running 9 different "6-core" VMs natively off hardware with a whole 10 cores left over for background tasks, not to mention all the PCIe lanes that could be running storage in RAID, making disk writes for the VMs redundant and near-instantaneous. That could be a gamechanger for small businesses.
Wonder how this stacks up to high-core-count ARM CPUs like the Ampere Altra, since they've started experimenting with making workstation stuff based around it, as demoed by Jeff Geerling (and for that matter Apple's M3 Ultra, whenever that comes out)
I know I'm late to the show but I really appreciate these reviews. Many of us that do software development (compiling) or other types of productive work in addition to gaming need the benchmarks like Chromium compile. Even if we don't compile, it is representative of a class of workloads that isn't just gaming. Same with blender and 7zip. Thanks for doing all the work and cheers from a copper cup.
I put mine on LN2. You and Bill better have some good bins ;)
I do find that the 12 cores in my Ryzen 5900x are barely enough now. Some games are starting to utilize 8-cores, and the remaining cores have to handle a lot from various other background software I use. 16-core will be the minimum next time I upgrade. A low end Threadripper still can be an option.
Another great, neutral, and informative review. I'm one of the people needing these beasts for daily computations, so I'm excited to learn everything before eventually ordering one. Would you have the possibility to run some PassMark tests with the 7980X and 7970X? These track very well with how my own workload will fare against other CPUs. I do a lot of Monte Carlo-style work, so if it's similar to the financial Monte Carlo you ran, it's super promising!
Can you give some more background on why/how Passmark represents your work? That'd help us in planning (and explaining it if we introduce it). Can you give an example of how your work relates to some kind of real world "output" or result? What does it mean for you if a CPU is faster? That'd help a ton. Thank you!
@@GamersNexus Myself and a lot of poker colleagues run a lot of computerized simulations, some kind of Monte Carlo algorithms, to build our strategies. When we look at the internal benchmark solves we run in our software, we see that they relate very closely to Passmark scores in general. Of course there are some small exceptions (as always), usually related to the number of cores/threads. (Generalized: our simulation software runs faster with more cores, as a very simplified explanation, but the scalability relates very much to what we see in Passmark, besides a few exceptions.)
Adobe Try To Be Efficient Challenge (Impossible). Honestly, it boggles my mind to see how efficient this CPU is for production workloads right out of the box.
Thanks for testing such a wide range of software. As the "PC specialist" of my friend groups, it's good to be able to give more informed advice.
You just sold another Threadripper for gaming. Thanks, I will enjoy writing emails and playing Excel and Word on it!
HEDT's value isn't raw performance, but time. If a job takes you 8 hours, and you can save 1/3 of the time, you can do 50% more work.
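The time-savings arithmetic above checks out; as a minimal worked example:

```python
# The comment's math: cutting job time by 1/3 raises throughput by 50%,
# since jobs per hour = 1 / hours per job.
job_hours = 8.0
faster_hours = job_hours * (1 - 1/3)        # 8h -> 5h20m per job
speedup = job_hours / faster_hours          # 1.5x throughput
extra_work = (speedup - 1) * 100            # 50% more jobs in the same time
print(faster_hours, speedup, extra_work)
```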
Please, GN, compile and post your complete benchmark charts on your new website! Every review shows partial charts that include only a (random) selection of models that you've reviewed, which makes sense for presentation purposes but users want to have a complete reference SOMEWHERE that they can use for their own comparisons. This is especially important for PC cases and CPU coolers, where some very old models still outperform many new ones. At times I've had to open several different old reviews to look at partial charts in order to get a sense of how one model compares to another, and it can be frustrating. 🥺
They will in a week's time; they said so in their video about the website.
@@pachete. Are you sure? I know they'll post their old reviews, but that's not what I'm talking about. I've never heard them say they'll be posting complete charts, but maybe I've missed something.
@@zpd8003 ua-cam.com/video/Mrdw1fiqPmI/v-deo.html
G'day Steve,
I watched the livestream first; not only was it lots of fun, but also BIG THANKS to Amit & Bill for their time answering technical questions.
Also, as you mentioned in the livestream, it would be really cool if you made the GN Logo Blender test available for us to use at home, so we can test our CPUs that are not on your list (like my Athlon 200GE) to see how terrible they are at rendering, for a laugh😁.
I must say, it is nice to see these units tested with more "industrial" software, like FEM etc. The regular test suite just does not show what these CPUs are capable of, in my opinion.
I love Threadripper CPUs. So glad they are coming back.
I bought a first gen Threadripper, and it's still running in my home server.
Hmm, I'm wondering how these would perform in games if you split them into groups of 8 cores with a video card. In say a really niche case of workstation by day, entire family gaming rig by night.
Absolutely could.
But it depends on the motherboard and setup anyway.
ARM-based CPUs are getting really interesting now. Please add them to the testing list.
They're not quite there for workstations though, so it's a bit early IMO.
Until there's a reliable and fast x86 emulation layer for ARM, that's not really possible, since the vast majority of software used in these tests are native x86 applications.
How do you guys verify that ECC is indeed enabled and functions correctly on a software side?
CPU review with a WHITEBOARD and a LN2 OC livestream in the same day??? Y'all are crazy. Love the content, to everyone over at Gamer's Nexus: thank you!
But can they play Crisis?
Haha, just wait until they have enough cache to fit Crysis entirely in it.
@@GamersNexus Now that you've said it, I wonder if an X3D Threadripper would make sense.
IIRC, on Ryzen, the X3D is only useful for gaming.
As an AMD shareholder, I hope every one of you viewers builds a new threadripper machine to play games. Preferably one new TR machine per game.
A Ryzen 7 1700x is worth about $50, while a similarly-aged Threadripper costs about $30 more _and_ requires a special motherboard.
In conclusion, you might expect that a Threadripper part would be good for gaming after a few years, but given the actual cost to get one, you'll always be better off with a newer Ryzen 5 or i5.
Well, not always. For mainstream use, definitely. But there are professional use cases where current-gen R5/i5 parts just won't do what you need in heavy enough workloads.
For some people, nah
All that PCIe is real estate; you can load a ton of PCIe devices without worrying about which one goes where or gets disabled or whatever.
@@GamersNexus I do wish those PCIe Gen 5 motherboards could do a super-split of PCIe, turning it into a lower PCIe version but with double the lanes.
Like, you'd have cheap HEDT there.
@@GamersNexus That clarification is important. I meant for gaming, of course, and made that comparison because only older Threadrippers would have a similar platform cost to an i5 or Ryzen 5. I guess I didn't make that entirely clear.
@@GamersNexus Yes, but: you need more compute power? Buy anything for pro use that matches your compute needs and fits your price range, including EPYC or Xeon, and get a mainstream CPU (Ryzen, i5, whatever) for everyday purposes. It will be both cheaper and a better experience.
HEDT is dead, and while these TR 7000 and 7000 PRO parts themselves are good, the cost AND lack of versatility of the platform, mostly through how absurd TRX50 and WRX90 are compared to any decent EPYC 9004 board for instance, render the platform a basic scam.
I consider HEDT dead for now. If it weren't for the scam that DDR5 currently is (come on, it's been 2 years and we don't even have 64 GB mainstream UDIMMs), and if we at least had those DIMMs, letting you fit 256 or 512 GB of standard DDR5 in 4 slots, this would not even be a discussion: you need real professional features like tons of PCIe lanes, compute power, and at least 1 TB of RAM? Get a server CPU and board. You're a prosumer doing rendering, huge labs, and prototypes? Get a Ryzen 9 or an i7/i9 and 256/512 GB of DDR5.
Let's be real: with CPUs normally progressing in compute power, by the time Zen 5 / Arrow Lake or Zen 6 / Nova Lake are out, if 64/128 GB DDR5 UDIMMs exist, this pseudo-HEDT is probably dead. People who need more of everything will go to server chips. People who only need more RAM will stick to mainstream.
Correction: "Before VRM efficiency losses" should be "Before VRM inefficiency losses". The 1920s-era buck converter design is about 60% efficient, which means the VRM generates almost as much heat as the CPU. You should look into water-cooling your VRM. Of course, the 2011 Ćuk-buck2 design has an efficiency of around 99%, is smaller, and costs less than the 1920s design. But that would require board designers to learn some of the 100 years of improvements in power conversion.
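As a hedged sketch of the arithmetic behind that claim (the 60% and 99% efficiency figures are the commenter's, and the 350 W draw is a made-up example):

```python
# At a given conversion efficiency, VRM heat is P_out * (1/eff - 1):
# the regulator dissipates whatever power it doesn't deliver to the CPU.
def vrm_loss_watts(p_out_watts: float, efficiency: float) -> float:
    return p_out_watts * (1.0 / efficiency - 1.0)

cpu_power = 350.0  # hypothetical package draw in watts

print(round(vrm_loss_watts(cpu_power, 0.60), 1))   # 233.3 W of VRM heat at 60%
print(round(vrm_loss_watts(cpu_power, 0.99), 1))   # 3.5 W at 99% efficiency
```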
Small point: the TR parts don't have more cache than the regular 7000 series; total cache doesn't matter. What matters is cache per core, and that is identical except for the X3D parts. If they decide to release X3D TR... yeah, lol. But even those wouldn't have more cache per core than the 7800X3D.
You are correct. I wonder if the reduced cores per CCD models (or if disabling cores in each CCD manually) would make a difference. The 24 core part has the same L3 as the 32 core part, granted that's still far from the cache per core of the X3D CPUs. I doubt AMD will sell X3D threadrippers since productivity workloads don't tend to benefit from it. Then again, AMD does sell a few EPYCs with the extra cache, so who knows? Maybe someone will at least try to get one of those EPYCs into a board that allows overclocking. Some people did it with the older EPYCs.
@@5467nick I just saw der8auer's video before this one and he tries out some CCD and core configurations. Definitely worth a watch.
I confess. I watch the gaming segments of Threadripper reviews because I think it's fun to see how a CPU not meant for these kinds of workloads performs. But it would be amazing to get to work on something that can use a Threadripper, lol.
Who would have thought the most popular part of GAMERS Nexus videos would be GAMING, lol
"Roofline model" is a better way to think about benchmarking; FMA is the speed limit.
Everything boils down to the ratio of how many arithmetic operations you do per byte transferred. All the different cache architectures are just an attempt to deal with the transfer side of that equation.
Also, your charts would get MUCH more interesting if you compiled for the specific CPUs, as there are certain computations that would expose orders-of-magnitude differences between products.
An example would be: multiply and add half floats (fp16) and see what happens when you compile for different targets as the buffer size increases (Xeon W vs. TR gets interesting...).
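A toy version of the roofline idea described above; the peak-compute and bandwidth numbers are invented for illustration, not measurements of any real part:

```python
# Roofline model: attainable throughput is the lesser of the compute
# ceiling (FMA peak) and memory bandwidth times arithmetic intensity
# (FLOPs performed per byte moved).
def roofline(peak_gflops: float, bw_gbps: float, flops_per_byte: float) -> float:
    return min(peak_gflops, bw_gbps * flops_per_byte)

PEAK = 4000.0    # hypothetical FMA peak, GFLOP/s
BW = 250.0       # hypothetical DRAM bandwidth, GB/s

print(roofline(PEAK, BW, 0.5))    # 125.0  -> memory-bound
print(roofline(PEAK, BW, 32.0))   # 4000.0 -> compute-bound (FMA speed limit)
```

Caches effectively raise the usable bandwidth term for workloads that reuse data, which is the "transfer side" point above.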
Don't forget the additional benefit of the increased I/O. Would some of these GPU tests be able to run with multiple GPUs installed in the system?
There is no way to leverage more than one GPU for gaming besides the odd accelerator here and there. It's a dead concept.
Now, if you're looking to run some parallel workstation workloads over multiple GPUs, then we're talking options.
Apps that utilize full GPU compute don't really care about PCIe speed, since they don't need to communicate much with the CPU during processing, which also keeps CPU load low.
Because of that, performance just scales with GPU performance anyway.
The desire to use server or workstation boards and CPUs for gaming is courtesy of EVGA, when they made their SR-2 Classified dual-socket 1366 server board capable of using dual Intel Xeon processors, 12 DIMMs of memory, and up to 4x graphics cards. The other cool-factor dream at that time was the Supermicro quad-socket AMD Opteron server board. For gaming, my i5-6600 mini-ITX with a GTX 1080 Ti still runs most games at high settings at 4K, with zero issues in 3D design rendering. The idea of having a T-rex monster system is always going to be way more awesome than a little gecko, though.
Please can you guys figure out a Blender viewport test for these CPUs? No one in their right mind spends this kind of money on a CPU and doesn't have a 4090 to render on. People who use Blender want to know VIEWPORT performance: can the CPU play back an animation in solid mode at the required fps? How are fluid bake times? How are other physics bake times, etc.? I really appreciate your reviews, but render times are useless.
First thing I did was look at the scrub bar and see that huge peak at the gaming benchmarks.
I think it would be cool to compare that to M3 Ultra or whatever the best apple silicon thing is right now - kinda like best of what prosumers can expect on both ends :)
Love that you include Stellaris in the gaming tests. That's the only game that constantly forces me to upgrade my CPU. I really hope a Ryzen 8000X3D will bring great improvements.
24:05 The financial and probability simulations are very important for machine learning, whether that's training or inference, depending on the model architecture. It's a very big deal for ML, but even bigger for time-series data analysis in market dynamics, when you assess a large system of signals and indicators that are compared to one another and across several time scales, on top of many different products whose price action is being recorded. Log complexity on millisecond updates across dozens or hundreds of items needs a ton of processing if you don't want to miss an arbitrage or trade intraday algorithmically.
Let's see how long this one is going to stay supported.
As compared to what exactly, Intel's power-hungry heaters performing at half the speed or less? Warranty is all the support I need, gimme performance 🤩
AMD has made no claims about socket support. Expect to retire the motherboard when you need something faster. If you don't like that then you're SOL as Intel's upcoming Fishhawk Falls Refresh will also be EOL after this "generation" (it's basically "14th gen Xeon W").
Excited for the livestream later today!
"Everybody jumps to the gaming [section of the video]" Well who'd have believed that people subscribed to a channel called GAMERS Nexus would have done that? 😁🤣
At least for me, the interest in "checking the gaming scores" for these cpus is the desire for a single computer that can handle heavy workloads while also still being great for gaming. I'd throw a decent chunk of cash at something that can do both without compromise. These new threadripper cpus had seemed interesting in that they might have opened up a new pricier option for people trying to get both gaming and workstation performance, but it looks like you give up a lot on the gaming side of things here despite them being very impressive workstation cpus. At some point it starts to make sense just to build two completely separate machines dedicated for each use case, but, aside from that, the sweet spot for a single machine option is likely still the 7950x3d or 7950x (depending on your tolerance for tinkering with core affinity settings for more gaming perf).
Thanks Steve, now I know I can play games on Threadripper.
Thanks, this was very helpful for me. I'm a software engineer doing cross-platform compilation and other compile work, even the Linux kernel.
Jesus, that was a confident LN2 pour with bare hands.
You can pour LN2 on bare skin as long as it runs straight off. LN2 causes problems when it's held against the skin, as in submerging a hand or soaking your clothes.
Look up "the Leidenfrost effect" if you want to know why he (for good reason) wasn't the slightest bit worried.
I think people are jumping to the gaming benchmarks because this is much of your audience. Your channel is called "Gamers Nexus" for a reason. :) These people just don't need or buy workstations, but they are curious about technology and what it can do.
I'm going to buy a 7960X and TRX50 board for gaming and watching UA-cam just to spite Steve :D I'd love to see some VM/Hypervisor and database workloads for ThreadRipper!
I miss the glory days of HEDT gaming; my i7 920 D0 stepping with OC'd Samsung DDR3 half-height RAM was a BEAST. My 3930K and TR 1920X less so, but we don't talk about those. I'd say Threadripper is the competitor for the Xeon W-2000 series and TR Pro competes with the W-3000 series, especially looking at specs and pricing.
24:41 for the gaming benchmarks! Never mind that other unimportant stuff ;)
Wow, that efficiency on the TR 7980X. That low v-core helps a lot here and it is very, very good. I hope you're going to be overclocking that particular chip later today, as it looks like a good sample.
Why were some games run with ECC disabled? I see them on the charts but didn't hear them mentioned. There was some talk of how these are high-cache CPUs, but the lower-core-count V-Cache-enabled parts, the Ryzen 7800X3D in particular, were able to trounce them in gaming. I would bet that you can get V-Cache-like gaming performance on the TR 7980X, but you would need to disable 56 of the 64 cores, and in a way where only one core per CCD is active. That'd actually give the TR 7980X more cache per core than even the Ryzen 7800X3D. Really curious how that would compare, as there would be the oddity that the L3 cache is not directly shared on the TR 7980X in that configuration (cache data duplication, more die-to-die coherency traffic, etc.).
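A back-of-envelope check of that cache-per-core idea, using the publicly listed cache sizes (256 MB L3 on the 7980X, 96 MB on the 7800X3D):

```python
# Cache per core on a 7980X: 256 MB L3 spread over 8 CCDs of 8 cores.
# Disabling down to 1 active core per CCD gives each core a full 32 MB
# CCD slice, versus 12 MB/core on a 7800X3D (though L3 is not shared
# across CCDs, so the slices stay isolated).
l3_total_mb = 256
ccds, cores_per_ccd = 8, 8

full = l3_total_mb / (ccds * cores_per_ccd)   # all 64 cores active
one_per_ccd = l3_total_mb / ccds              # 1 core per CCD active
x3d_7800 = 96 / 8                             # Ryzen 7800X3D for comparison

print(full, one_per_ccd, x3d_7800)            # 4.0 32.0 12.0
```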
I'm really curious how the TR 7960X fares. It'd be a nice jump to the higher-PCIe-lane platform without the premium CPU price (premium compared to the TR 7970X and 7980X parts).
As for testing, I am curious what the TR Pro 7995WX could do on this platform. Granted, that is an 8-memory-channel part on a 4-memory-channel board, but it is a valid and supported upgrade path. So while AMD only had a single generation of parts for TRX40, the TRX50 platform will have an upgrade path due to being able to run WRX90 chips in 4-channel configurations.
As for the future, I'm optimistic that there will be a second or even a third generation of WRX90 parts, and I suspect there will be a second generation of TRX50 parts. The main thing I'm hoping for is V-Cache-enabled models, as the performance uplift it brings is generally stronger than additional cores, especially for day-to-day tasks. HEDT and workstations are performance monsters when loaded, but V-Cache gives a system that extra snappiness when creating the models and moving the data around prior to hitting run on a long simulation.
Watched this after L1T Wendell's vid; interesting to see the different testing and feature explanations. This test suite looks great for this review as well.
I would love to see a Unity compilation benchmark; I spend an hour a day looking at that loading bar every time I make a code change...
The channel name is Gamers Nexus and yet we get a dressing down for jumping to the gaming section of the reviews. Gamers gonna game.
Thanks Steve and team, this helps heaps with my cost-benefit forms for management. They want to see number/bar go up, meaning workers work harder / less downtime.
Wow, I have never been this early. I'm gonna predict the future and say: great review, Steve! :)
I have no need for threadripper but love to see these videos.
GN: They are NOT for gaming. DO NOT buy them for gaming.
Viewers: Yeah... but what if.....
Appreciated the HPC (high-performance compute*) benchmarks being added to the mix for the high-core CPUs. I, too, have no idea what most of them mean in real-world terms, but it's cool to see some of the numbers these CPUs might actually be crunching. If you come across someone that _does_ know about these things, is doing something cool, and wants to show off the compute, I wouldn't object to seeing that. HPC is cool. (*-defined because too many things use the letters HP)
Lmao, the call-out at 3:20 was excellent. Well done, guys.