No you can't, because LTT is doing the usual 'we don't actually know anything about professional computer users' thing, or maybe actively pandering to AMD - the i9 isn't Intel's pro line. Threadripper Pro is *extremely* competitive against Intel's pro offerings for certain applications, and if you're e.g. a trade-from-home type with only one PC, then it's super compelling.
I really respect the editors' notes in the subtitles. Yet another example of LMG's outstanding production quality, but more importantly the research of *both* the writers and the editors. Bravo.
It's more like they don't put too much effort into research up front, and then watch the video to see if they did well or need to put the notes (corrections :c) in the video :v
The real problem is they probably filmed this a month or two ago, and by then new information on the AMD chip was already out. Timeliness in tech is important when the technology changes so quickly.
In absolute awe of the production values of LTT videos. After watching the staff meet and greet video, it's astounding you are where you are given the humble beginnings. So much respect and love for Linus and the entire team.
Compiling Unreal from scratch (and letting it compile the shaders as well when first opening it up) is a good benchmark imho of "how useful it can be for a professional". I upgraded once from an i7-4790K to my current 3990X as I was doing a lot of freelance work remotely, and it cut my wait time for compilation drastically in many instances (from 2h to 10-15 mins, and from a PC I couldn't even watch videos on in the meantime to now being able to watch something on the side while I wait). Honestly, the TR line-up, even at "enthusiast" level, was never meant for gamers, but I feel like it's useful for some freelancers who can't afford to throw down close to $20k on a machine. But again, it really depends on the daily workload you work with... :)
Back when I was a self-employed Graphics Artist Professional and I was building websites using Dreamweaver and Photoshop, time was money. In fact, it was so much so that I would build custom machines with enough RAM that I could create a RAM DISK where I would load Windows, DreamWeaver and Photoshop into RAM. (I also used RAID 10, dual GPU's and cooled it using an independent window AC unit.) It took about 10 minutes for my machine to boot up, but that was perfect to get a muffin and some coffee. For the rest of the day, even the most intensive tasks were completed in seconds. This level of responsiveness paid for itself quickly and made the work easy and fun - waiting is energy-sucking.
I'm glad you've started talking about the business side of tech on this channel. Very useful for future engineers watching this. Hardware is cheap, time is expensive. Also, list price for hardware is very negotiable - perhaps not for LTT, but surely for larger corps :).
I want videos to keep coming out where Linus says the screwdrivers are 'coming soon,' implying an absolutely enormous backlog of like months and months of videos. Maybe even a gag where it's got something that clearly happened after the screwdrivers came out, like someone's watching Andor in the background as he says, "Screwdrivers should be soon, guys!"
I am building a high-performance multithreaded Java application for fintech and would love to get my hands on one of these. When markets are moving, you want as many threads as possible crunching numbers for you.
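Editor's aside: the "fan number-crunching out across threads" idea above can be sketched like this. It's a hypothetical toy (in Python rather than Java, estimating pi by Monte Carlo instead of doing any real market math), but the shape is the same: partition the work, run chunks in parallel, aggregate.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def simulate_chunk(args):
    # One worker's share: count random points inside the unit quarter-circle.
    # Seeded per chunk, so each chunk is deterministic and reproducible.
    seed, n = args
    rng = random.Random(seed)
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))

def estimate_pi(total=400_000, workers=4):
    # Split the sample count across worker processes and aggregate the hits.
    chunk = total // workers
    with ProcessPoolExecutor(max_workers=workers) as pool:
        hits = sum(pool.map(simulate_chunk, [(s, chunk) for s in range(workers)]))
    return 4 * hits / (chunk * workers)

if __name__ == "__main__":
    print(estimate_pi())  # typically prints something close to 3.14
```

A market-data cruncher would use the same partition/fan-out/aggregate shape per instrument or per scenario; more cores just means more chunks in flight at once.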
You know it's a good product when the community starts arguing about whether the benchmarks are sophisticated enough to meaningfully measure its performance capabilities. If you've gotta start measuring things in a whole new way just to do it justice, it's definitely moved the game along.
I had a similar experience with the CPU performing about the same as the GPU at 3:04: rendering in Arnold with a 5900X and a 1660 Ti, they perform about the same, if not better on the 5900X. Fewer passes and slightly less time needed.
@@jsVfPe3 Garbage pairing, huh? I have a 5900X but I paired it with something more reasonable (6800 XT). Beast for 1440p gaming; got it hooked up to the Asus PG279QM, 1440p 240Hz :)
My R7 2700 also renders at about the same speed as my GTX 770 in Blender Cycles, so... But I think he means it in the sense that it's keeping up with a 3090. He's ignoring that the CPU costs a lot more, and that the 3090 won't be able to render anything that needs more than its VRAM can hold (something I suffer with so, so much...). Oh, how much I want GPUs with upgradable VRAM, especially with Nvidia drip-feeding VRAM sizes on consumer cards this dirty. I would be okay with even as low as 2060 performance if that meant having upwards of 48 GB to use.
I have a 3955 with a 3090 in my desktop. It's good enough at playing games, but it's genuinely amazing for office tasks. Applying an ML algorithm to a dataset and outputting the result as a Visio flow is almost instant, which is basically impossible on my high-end business laptop. Identifying process flows from usage data is an amazing tool, a single map could save a company millions of dollars, and this computer is the difference between me charging $50 an hour or $200 an hour. I'm really excited for these chips to hit the consumer market.
Would LOVE to see you build the ULTIMATE water-cooling PC, using a Cooler Master HAF 700 EVO with FOUR 420mm radiators at the SAME TIME, as a single water loop, with water cooling blocks on the CPU, GPU, M.2 drive and even the RAM! This would be the ultimate dream setup for next-gen hardware (like the Intel 13th-gen CPUs, RTX 4090 Ti, PCIe gen 5 M.2 drives and even hot DDR5 RAM). With full RGB it would also probably be the best-looking PC setup ever! I am planning on this setup myself (to ensure I never suffer from thermal throttling, and to allow me to stably overclock as much as I want), but I am nervous as I've never done a custom water loop before. So I, and I'm sure many others who have saved up ready to build a new top-of-the-range setup with the release of the next-gen hardware, would really appreciate it if you could do this as a step-by-step tutorial. Thanks. :)
@Eden of the East Yes, I know; I thought I made it clear that my point was that new and upcoming hardware is just going to keep getting hotter and hotter, requiring larger and larger heatsinks and fans, OR people could wake up to the common-sense solution of water cooling. Sorry if I was unclear. :)
The refurb market (even on Amazon) is a ripe one for server nerds and it might make for some interesting content. Consider that you can get a refurbed 1U system with 64GB RAM + 2x Xeon E5-2670 (16 cores/32 threads total) and 2x 512GB SAS drives for $200 USD.
I could absolutely use this at work. I run full 3D EM (electromagnetic) sims and FEM for a living and I bet this would really help. I use an older Xeon and 128GB of RAM in my daily workstation right now, but commonly use our networked machines with 128 cores and 2TB of RAM. I really wish I could send you guys some HFSS or AWR files to test on rather than Blender or 7-Zip.
This is what they're missing, I think. It's been shown a bunch of times that only a handful of content workloads benefit from these things, but I've heard from people in other industries saying they absolutely slay. STH comes closest, I feel, to being able to properly benchmark these, aside from L1 Techs on dev workflow stuff, but it's still hard to get a clear vision of where they make the most sense.
The Zen4 chips have more than enough compute for me, but not enough PCIe lanes. I want to be able to put a graphics card from each of the major GPU vendors in my machine and there just aren't enough lanes for that.
Jeez, this thing is probably powerful enough to sculpt in ZBrush with Sculptris Pro mode on a 100M-tri model, do several texture bakes at the same time, and still sit under 50% load. Dat raw power
The problem is that even at this price they can't make them fast enough. There's a huge backlog of orders for TR Pro, so why would they cannibalize that market to make a lower-end SKU? I could see it if they had spare fab capacity to saturate that market and then some, but it's a bit more nuanced than them just wanting more money.
If you don't know about it, you're not their target demographic and weren't ever going to buy it anyway. And AMD doesn't need to do any marketing at all for these, they sell like hotcakes anyway
I would very much like to see a new line of "enthusiast grade" chips on a modern DDR5 platform. I'm still hanging onto my 6950x since there's no real reason to upgrade, and I get that these new CPUs are more powerful than anything on the X99 or even X299 platform, but from an enthusiast perspective, something that supports quad channel memory and a whack-ton of SATA/ PCI-E expansion on a modern architecture would be very cool to see. Don't get me wrong, I'm thrilled with how far CPUs have come since X99, and I'm very excited for the upcoming Ryzen 7000/ Raptor Lake launches, but I miss the days of having a more feature rich platform to go alongside the consumer grade ones, not counting the server/ enterprise level stuff. Seeing consumer Threadripper get canned was very disheartening.
@@s.i.m.c.a When I say new, I mean relative to enthusiast platforms. The last consumer grade Threadripper was released on DDR4, and it's the same story with X299
These days server/enterprise equipment is much cheaper than a workstation. No org with any IT/budgetary sense would allow workstations to be purchased over Epyc servers.
@@LeLe-pm2pr X299 was the last from intel I think, and that woulda been 7th gen. I have no personal experience with X299, but I can at least attest to X99 (5th/ 6th gen) having quad channel. Either way, it's certainly been a few years 😂
My family's on a farm, and I feel like getting an LTT screwdriver is similar to getting high speed internet - constantly led on by the phrase "should be available very soon!"
Used to work in an HPC (high-performance computing) environment in the pharma industry. Money matters little to them. I remember when covid started we couldn't get CPUs, so we ended up having to rent HPC servers in AWS, OCI, GCP etc. Sometimes we would get bills of over $20k a day.
I paused the video to finish my homework (which is in Spanish), and when I resumed the video my mind thought that you speak Spanish, so for a split second I went "wtf, Linus is speaking English"
I agree, having supported many in my time... Nothing makes an engineer happier than a speedy beast of a workstation. And you say waiting for 10 minutes... waiting 10 minutes for that render or computation to complete also probably means adding an extra 10 or so that they stay playing table tennis or whatever else 🙂
Super late to the party, but IBM has been doing that for a while in mainframes. The issue you run into is latency. Intel, I think, is also looking into it for their Diamond Rapids server parts.
@@jonathanjones7751 If they're all on the same die you just group the ones that are closer to each other; in fact Apple has been doing similar things with the M2 Ultra: even though the dies aren't unified, there are two CPUs joined and acting as one big CPU
Even if I got this one for free, I'd sell it again and buy a new PCIe 5.0 AMD PC lol. Sweet chip, but I couldn't come up with enough things to do at the same time on my computer for this CPU to be used anywhere close to a way that makes it shine.
After watching this video, I was somehow reminded of AMD and Nvidia GPU launches and how their GPUs only start selling a month or so after the event. It would be so good if every GPU were released at the same time as the event and available just a week after it, or maybe even days.
I'm curious if this CPU is fast enough to make rendering animations on a GPU pointless. With that many cores, I wonder if this thing can render a Blender animation faster than a high-end GPU.
While I am an AMD fan, I am impressed that Intel's top consumer tech is just a factor of 4 slower than the best-performing, extreme-die-size server CPU on the market.
My lab purchased a workstation with a 5995WX about three months ago, and we have been torturing it ever since. The workstation is almost exclusively used for de novo genome assemblies and comparative genomics, and the bottlenecks have traditionally been RAM and cores. The workstation has the 5995WX paired with 1TB of RAM, and came in at just over $13,000. Compared to the cost of our last server (eight 12-core Xeons across 4 nodes, and 1TB RAM), which was closer to $30,000, this actually is an affordable option.
@@Teluric2 For whole-genome assembly of eukaryotes: CANU, HiFiASM, Celera, SOAP2, RagTag, and ABySS (the software varies depending on the type of data used). For more general bioinformatics, it's mostly custom Python (2.7 and 3.8... I wish people would just use the latter) scripts, but we do use some outside packages (FastGBS, CLC Genomics, and Geneious) along with BioPerl and R. The OS is Ubuntu 22.04 LTS, but we also have the workstation set up to dual-boot Windows 10. The older server has 4 nodes, each with Ubuntu 22.04. It runs the same software, with individual nodes used for genome assembly (node 1), gene annotation and synteny (node 2), comparative genomics (node 3), and protein modelling (node 4).
Can you guys start to do audio processing and DAW render/ live audio processing latency benchmarks too? I feel like it would be helpful for lots of people looking to build a music production focused machine
Those clock reductions with many cores in use, oof. As ridiculously expensive as these things are, and given that the customers are almost exclusively businesses that have to justify every cent, I'm surprised CPU manufacturers haven't made more of an effort to optimize for *actually properly cooled* use cases to keep those clocks high. Performance is likely about halved vs its theoretical potential (even assuming they could "only" reach 4.5GHz across all cores), so as long as the cost of a suitable cooler and power delivery is less than a second CPU (and a second entire system to put it in...), that should come out ahead financially.
To be fair, that 2.7ghz clock is likely a worst-case scenario, with all cores being hit hard, AVX on, and a cooler without massive headroom. You'll probably get better clock speeds than that most of the time
They'd need liquid nitrogen; at 4.5GHz it's pulling 400W, and that's not even all-core. The heat is too much for most coolers, and not because of radiator limits but because of block transfer and the IHS. Direct die would help, but pushing much further isn't gonna be "properly cooled" no matter what, unless you're talking single-core boost, I suppose. As far as I can tell (I don't have a chip on hand myself), it's already pushing the limit; the difference is just how far you want to push when OCing, it seems.
They actually are optimized, and designed to scale with cooling capacity. That's what PBO offers. At some point you'd be expecting too much from 64 cores in a single socket.
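A quick back-of-envelope on this thread's clock debate, using assumed figures rather than anything measured: even if perfect cooling held the rated boost clock on all 64 cores, the theoretical gain over the observed all-core clock is about 1.7x, not the "halved" (2x) estimate above.

```python
cores = 64
all_core_clock = 2.7  # GHz, the all-core figure discussed in this thread
boost_clock = 4.5     # GHz, the rated single-core boost

# Naive throughput in GHz-cores, ignoring memory bandwidth and power limits.
observed = cores * all_core_clock
ideal = cores * boost_clock
headroom = ideal / observed
print(f"{headroom:.2f}x")  # 1.67x
```

Real scaling would land below even that, since all-core workloads also contend for memory bandwidth and the package power limit.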
It would have been nice to see a comparison between this and the equivalent Threadripper 3000. It isn't an apples-to-apples comparison otherwise; I guess there is a reason this wasn't included. If you want this comparison, Hardware Unboxed did a video featuring both the Threadripper 3000 and 5000 64-core models a few weeks ago.
If you compare Threadripper Pro 3000 to Threadripper Pro 5000, it's basically a 20% average performance increase in both single-thread and multi-thread.
@@bhaveshsonar7558 Well, I really only wondered about the Adobe results, given that this beast PC didn't do so well on those; quite similar to the 11900K, which Apple says the Mac Studio beats.
The only thing that annoys me in this whole segmentation is that I don't need more than 16 cores but do need way more PCIe lanes than regular Ryzen can offer... As much as it isn't viable for AMD, I REALLY want an in-between
Try to find a somewhat older yet still good Xeon. They generally support tons of lanes and you can find low-core-count variants; just be sure it's on a good architecture and has decent clock speeds. I have a Xeon 1620 v3 and it doesn't even bottleneck my 2060, and it's 8 years old. So something like a 4-year-old Xeon would probably be good for you.
Alas, that market... HEDT, is pretty much limping along and dead in all but name. Intel has more or less given up on this "market", and unless you are talking these workstations, so has AMD (in the consumer market, it's been Zen 2 Threadripper for the last 2 generations). And Apple... Linus just told you what's up there: they have yet to transition the Mac Pro to their M series of chips at this point, and there isn't an update to the Intel models. That is just the x64 side of the "market"; ARM chips don't cover this demographic save for the odd unit or two, and anything else is REALLY specialized... I mean something that you have likely never heard of, which lives in the realm of academics, so one-of-a-kind one-off machines or something like POWER chips. Neither of those would qualify remotely as "affordable".
Had to laugh at the fact that they suggested heading over to L1T to buy some merch; I literally bought some from them the other day. Shipping was a bitch, but I added a few other things to close the gap between the month-long shipping option and the "7-10 day" shipping, and it arrived from the other side of the world in 3 days (4 if you count that the order was placed the day before in the US, but keeping to a single time zone I think it was 3). Much respect for Wendell and the team over there, both for the content and the merch. Hopefully they get the next generation of the DP repeater + splitter soon; 1440p144 on DP 1.2 is good, but 1440p240 on DP 1.4 would be awesome, and while I just bought the 144Hz one, I wouldn't hesitate to buy the DP 1.4 version to get that extra boost... and buy even more merch to get that fast-shipping price difference down ;)
Well, if I win the lottery I'll buy one to edit my YouTube videos on how to waste your money as a lottery winner. If I'm lucky my YouTube channel will stave off bankruptcy for a few extra years.
The AEC firm I work for has 3 5965WX systems on order (128GB RAM, RTX A5000), not top spec systems so “only” about $10k a piece. Will be used for CPU VRay rendering through 3DS Max, Leica Cyclone/3DR, and some Tableau and ArcGIS Pro. They’ll replace some old 9900K based systems so should blow the doors off what we have now, I’m stoked to try them out.
I've been waiting for the 16-core 5955WX since it was announced. I really don't even need 16 cores, but I really need the 128 lanes. Too bad AMD isn't releasing it at retail, only the 24-core and up. That's just too much for my budget.
I was in the exact same situation. I ended up moving from X399 to EPYC instead of TRX40 because I couldn't wait anymore. I went with a first-gen EPYC 7351, which is dirt cheap and can OC with the ZenStates app really easily. I have a few at 4.1GHz with the Enermax TR4 cooler and one machine at 3.8 on air cooling with very low voltages as far as Zen 1 is concerned. Server chips are binned insanely well... also great that the 3.8GHz 7351 is faster than a 1950X and sits between a 3950X and 5950X in benchmarks. Those machines are for our storage- and network-intensive tasks, where we stuff as many ASUS Hyper M.2 cards for NVMe drives as we can into one workstation, plus 10Gb/20Gb NICs.
My gripe with these benchmark scores is that they don't take into account the actual demographic for multi-threaded systems like this. For example, Adobe's Creative Cloud is an abominable suite for testing cores, and while Blender is closer to the demographic, you should really be using V-Ray (non-GPU), PhoenixFD, RenderMan, Octane, Houdini, DaVinci Resolve and other top-tier pro packages. The only reason Adobe is thought of as a "standard" is because of its (very smart) positioning in the market, but we professional animators, VFX and simulation artists use the Adobe packages to a limited extent (AE and Premiere are notoriously buggy). This chip is meant for one thing and one thing only: raw (non-GPU) computational power.
Yeah, this is overkill; a 24-32 core non-Pro Threadripper would have been the sweet spot for every workstation, but AMD refuses to offer that. Hopefully AMD will revive non-Pro Threadripper with Zen 4, or when Intel finally comes out with their HEDT CPUs.
Since Ryzen 7000 is going DDR5, that hope is now dead. I would have happily sold my other kidney for an upgrade from 3970X, but it's just not happening, nor do they seem to have a TRX40 chipset successor on the roadmap.
Thank you for watching! We're writing all the time at work, whether it's emails, drafting video scripts, etc., and having a tool like Grammarly will help improve your productivity and help you work more efficiently! It's FREE, why not? Sign up for a FREE account and get 20% off Grammarly Premium: grammarly.com/LTT
gramar
For god's sake, I've been asking this for ages... what the hell is that damn little blue thing he keeps playing with in vids?
Sadly nothing is free; if something is free it just means that the merch is you
He really needs money to pay off his lab, doesn't he...
Two things, Linus: 1) Are you under some type of contract agreement that makes you give your Canadian viewers prices in US dollars??? 2) Do you honestly think that companies are telling their customers, "yeah, we can do this work in half the time or less, so we will pass the savings on to you"??? If they were charging X amount for the time, and now they do something in half the time or less, they are still going to charge the same amount for their time, so they can make more money
Yes Linus, I indeed want this, but you see the problem is, I'm broke. So I'm gonna continue to watch your product reviews without ever buying them on my Intel HD graphics thank you very much.
burger 🍔
Same!
Intel hd graphics gang ✊😔
If you're broke, you can at least rent one on AWS or Azure - for up to several minutes...
Intel HD graphics op
We got our 512GB RAM 5995WX machine up and running a week and a half ago. Huge time savings for us; it processes our 3D laser scans at 1 scan per 6 seconds. The PCIe lane support is huge for us.
For our applications there are totally viable cases for Ryzen 7000/i9-12900KS systems, but the 128GB RAM and PCIe lane limitations reduce the maximum project size we can effectively work on with those systems. The 64 cores, 512GB, and 7 full-bandwidth PCIe x16 slots allow us to tackle enormous projects without slowing down. The pricing breakdown is totally justified here, as when you price things out linearly you come out quite a ways ahead versus multiple smaller systems, even without the project-size limitations.
What kind of scans are you taking?
Damn what y’all be doing, Webb telescope renders or something 🧐
@@St0RM33 3D scans of primarily oil and gas infrastructure and facilities
@@louistru8652 long story short we make oil and gas piping systems that fit like adult legos
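Editor's sketch of the "price it out linearly" argument from the parent comment, using entirely made-up placeholder prices (not actual quotes): matching the core count with several small boxes duplicates per-node hardware, and each box still caps the project size it can handle.

```python
# All prices below are hypothetical placeholders for illustration only.
big_box = {"cores": 64, "price": 13_000}   # one workstation, shared RAM/PCIe pool
small_box = {"cores": 16, "price": 2_500}  # CPU + board + RAM for one small node
per_node_overhead = 1_200                  # case, PSU, storage, GPU, NIC per node

nodes = big_box["cores"] // small_box["cores"]  # 4 nodes to match the core count
fleet_price = nodes * (small_box["price"] + per_node_overhead)

print(fleet_price)  # 14800: already past the single box, before networking costs
```

And that is before the commenter's real point: four separate 128GB machines still cannot open the one project that needs 512GB of RAM and seven full-bandwidth PCIe slots.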
Linus is like a child on Christmas when there is new tech
And so are we, which is why all our nerd-selves are here lmao
I mean I would be too if I got to fart around with that kinda power
0:00 shows
Most of us here are including me
a child on christmas with a multi million $ business to buy everything :D
I think the lack of substantial improvement in the Adobe-based benchmarks has less to do with the chip and more to do with the legacy, patchwork framework Adobe is still carrying, especially in Premiere. Maybe try a comparison in DaVinci or some other more modern framework where a ground-up effort has been made to juice every available core.
Adobe software runs a lot like a game, with a single main thread that calls every asynchronous task (additional thread). Many parts of the code are still synchronous and will hold up the main thread, making their software at BEST lightly threaded. It's really just hot garbage with a familiar UI and a stranglehold on the market due to being one of the first.
It's still an important test. Ultimately you don't buy a more powerful computer to run benchmarks better, you buy it to get your work done better. If the tool of your trade is Adobe then how much better that runs is what matters to you. It may also be time to change tools if another one can do the job better.
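The "single main thread that everything funnels through" claim above can be illustrated with a small Python sketch. It's a hypothetical illustration, not Adobe's actual code, and in CPython the GIL also serializes pure-Python threads, so this shows the structure of the problem rather than a real speedup:

```python
from concurrent.futures import ThreadPoolExecutor

def heavy_task(n):
    # Stand-in for an effect render or export step.
    return sum(i * i for i in range(n))

def lightly_threaded(jobs):
    # The "main thread" runs each job synchronously, so no matter how
    # many cores you have, only one is ever busy.
    return [heavy_task(n) for n in jobs]

def properly_threaded(jobs):
    # Jobs are handed to a pool and the main thread only coordinates.
    # (In CPython the GIL still serializes pure-Python work; native code
    # such as codecs or NumPy kernels releases it and scales for real.)
    with ThreadPoolExecutor() as pool:
        return list(pool.map(heavy_task, jobs))

jobs = [50_000] * 8
assert lightly_threaded(jobs) == properly_threaded(jobs)
```

When most steps run like `lightly_threaded`, a 64-core chip sits idle no matter how fast it is, which matches the lack of improvement in the Adobe benchmarks discussed above.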
I sincerely miss SGI's workstations. Those were actually, TRULY worth the money. UNIX, baby. I still hope to own an Indigo someday, which I intend to actually use daily.
64 cores and 4.5 GHz turbo... What a monster!
That's single-core only. When all cores are loaded it drops toward the base clock.
@@killersberg1 All depends on cooling and load, as any other Zen.
all that and Tarkov still stutters and lags
Single core turbo, the biggest misunderstanding
@@SpaceRanger187 That's because Tarkov is limited to 4 or 8 threads; fewer but more powerful cores > more cores for games
I have my doubts as to whether you can even measure Threadripper's performance the standard way, that is, with normal benchmarks. I'm also VERY interested in how it performs in engineering fields where FEMs are abused to death. Interestingly enough, some form of FEM calculation could be a great benchmark for CPUs too. Encryption might be another field. It feels like the benchmarks used in testing are pretty narrow and always come back around to the neighbourhood of computer graphics and 3D design.
@Andrew Crews Finite Element Methods, which are numerical algorithms with many variants (Continuous and Discontinuous Galerkin to name a few), that are used to solve partial differential equations in fluid mechanics (elliptic pdes) and in other fields like physics and engineering.
@Andrew Crews Finite element method. It is a technique where you approximate continuous functions, like airflow over an object, by dividing everything into small cells, the finite elements (finite because they have a finite size instead of being infinitesimal, as they ideally should be), and then you use an update function to propagate a discrete (stepped) version of your function through the cells. This is basically how almost every modern physics simulation is done.
Obviously the smaller your cells, the more accurate the result (usually) but also the compute complexity usually explodes. So having more performance allows you to run rough simulations quicker, meaning less idle time, and it allows you to run more accurate simulations at all.
If I could I would send them some files from my job. I wish I could see how fast a threadripper like this could run some of my sims. Because depending on how fast it is I probably could convince my boss to spend that money.
@@jakobwest4811 I've got my disgusting python code from my undergrad thesis on a benchmark method of solving a particular elliptic convection diffusion pde using continuous Galerkin fem I could send them
@@jakobwest4811 I wanna see numbers, I need to see it
256MB of L3 cache? 😳😮
I remember a time when my brand new GPU had like 16 MB of VRAM and my brothers new laptop had an HDD with 8GB. I do feel old now 😨
Dude, my first HDD was 500MB. And I was like: "Wow, it's so much space, it's ridiculous. I mean Doom is only like 12MB and it's the best game ever made..." lol
Ohh, the good times. Sinclair ZX Spectrum with a whopping 48KB at full expansion. First Pentium PC with 4MB of RAM (and that cost more than 64GB of DDR4 RAM). You were happy to get a Quantum HDD for an Amiga 2000 under 2000 DM.
You guys sound young... my first PC was an 8086 Sanyo machine with a 40MB hard drive, 256K of RAM (yes, K), and an all-new 'high end' 64-colour 640 x 480 (or 16-colour 1024 x 768) eVGA display, and it could run at either 4.7 or 8 MHz, switchable by a toggle switch at the back.
Milan-X, in the Epyc lineup, has 768MB of L3. Getting awfully close to 1GB; TechTechPotato and Patrick from STH have suggested we'll see 96-core, 1GB-L3 chips in the next generation.
Wild times.
The first GPU I remember was an ATI Radeon X600 with apparently 256MB of RAM... I am 30 ^^
I love seeing massively over-the-top rigs like this that are used for very specific tasks.
7:03 The fact that Linus believes Apple will update the Mac Pro with anything but Apple Silicon is laughable. Great vid in all other regards!!
7:55 if you're 7x faster at financial applications than the competition then yeah you can basically price your pc however you want
7× faster Excel
True, but those calculations are running in server farms across the street from the NYSE, not on your analyst's desk.
No, you can't, because LTT is doing its usual 'we don't actually know anything about professional computer users' thing, or maybe actively pandering to AMD: the i9 isn't Intel's pro line.
Threadripper Pro is *extremely* competitive against Intel's pro offerings for certain applications, and if you're, e.g., a trade-from-home type with only one PC, then it's super compelling.
@@lucidnonsense942 isn't HFT done on i9 chips with absurd one core overclocks and all but 2 or 4 cores disabled? They're all about clockspeeds.
If your financial work is that time critical.
I really respect the editors notes in the subtitles. Yet another example of LMG's outstanding production quality, but more importantly research of *both* the writers and the editors. Bravo.
Idk, if they really cared they would put in separate, real subs for the whole video.
@@kkon5ti my point was more on the point that the editors catch mistakes of the writing team / presenters (as no one is perfect).
It's more like they don't put much effort into research, and then watch the video to see if they did well or need to put the notes (corrections :c) in the video :v
The real problem is they probably filmed this a month or two ago; by now, new information on the AMD chip is already out. Timeliness in tech is important when the technology changes so quickly.
Even their own screwdriver was out by then. Makes me wonder what delayed this video so much.
In absolute awe of the production values of LTT videos. After watching the staff meet-and-greet video, it's astounding you are where you are given the humble beginnings. So much respect and love for Linus and the entire team.
It’s amazing how slowly I’ve gotten to a point where I watch (and want to watch) every Linus media group video.
Can confirm it's aimed at business. Signing off anything sub $50k for hardware is very easy, so a $6.5k CPU is just a no brainer
Compiling Unreal from scratch (and letting it compile the shaders as well when first opening it up) is a good benchmark imho of "how useful it can be for a professional".
I upgraded once from an i7-4790K to my current 3990X, as I was doing a lot of freelance work remotely, and it cut my compilation wait times drastically in many instances (from 2h to 10-15 mins, and from a PC on which I couldn't even watch videos in the meantime to being able to watch something on the side while I wait).
Honestly, the TR line-up, even at the "enthusiast" level, was never meant for gamers, but I feel like it's useful for some freelancers who can't afford to throw down close to $20k on a machine.
But again, it so depends on the daily workload you work with... :)
I’m pretty happy with my 3960x. All cores at 4GHz all the time. It smokes a couple of the compute clusters I use.
ok FREUD
imagine a whole cluster full of these
@@electroflame6188 imagine a cluster full of COME BACK COME BACK COME BACK COME BACK COME BACK
I'm just loving the look of the case... Far classier than all the RGB on a modern gaming rig. Want!
looks kind of bland tbh, but it's just there for performance.
@@DJSerpent as opposed to the average chassis that looks tacky? 🤷♂️ Different strokes for different folks
@@AntneeUK some look tacky, some look clean, some look bland like this, some actually look classy.
@@DJSerpent I really like the Nvidia DGX Station chassis. The copper designs on professional gear are 👌
Turns out the chassis is a Supermicro CSE-GS7A-2000B, and it doesn't appear to be available separately. Shame
1:06 Maaaan, I'm applauding you! The way you worked in the sponsor is so clever 😂
Back when I was a self-employed Graphics Artist Professional and I was building websites using Dreamweaver and Photoshop, time was money. In fact, it was so much so that I would build custom machines with enough RAM that I could create a RAM DISK where I would load Windows, DreamWeaver and Photoshop into RAM. (I also used RAID 10, dual GPU's and cooled it using an independent window AC unit.) It took about 10 minutes for my machine to boot up, but that was perfect to get a muffin and some coffee. For the rest of the day, even the most intensive tasks were completed in seconds. This level of responsiveness paid for itself quickly and made the work easy and fun - waiting is energy-sucking.
I'm glad you're starting to talk about the business side of tech on this channel. Very useful for future engineers watching this. Hardware is cheap; time is expensive.
Also, List price for hardware is very negotiable, perhaps not for LTT but surely for larger corps :).
I really like that ever since Wendell split off from Tek Syndicate you do more stuff with him. I'd definitely like to see more.
I want videos to keep coming out where Linus says the screwdrivers are 'coming soon,' implying an absolutely enormous backlog of like months and months of videos. Maybe even a gag where it's got something that clearly happened after the screwdrivers came out, like someone's watching Andor in the background as he says, "Screwdrivers should be soon, guys!"
I literally had no problem watching this video with only 6 cores. Take that!
lol I literally only have 4 cores! :)
I am building a high performance multithreaded Java application doing fintech and would love to get my hands on one of these. When markets are moving, you want as many threads crunching numbers for you.
You know it's a good product when the community starts arguing about whether the benchmarks are sophisticated enough to meaningfully measure its performance capabilities. If you've got to start measuring things in a whole new way just to do it justice, it's definitely moved the game along.
I had a similar experience with the CPU performing similarly to the GPU at 3:04: rendering in Arnold with a 5900X and a 1660 Ti, they perform about the same, if not better on the 5900X. Fewer passes and slightly less time needed.
What kind of sick person pairs a 1660Ti with a 5900X?
@@jsVfPe3 garbage pairing huh i have a 5900x but i paired it with something more reasonable. (6800xt) beasst for 1440p gaming got it hooked up to the asus pg279qm 1440p 240hz :)
My R7 2700 also renders at about the same speed as my GTX 770 in Blender Cycles, so... But I think he means it in the sense that it's a 3090. He's ignoring that the CPU costs a lot more, and that the 3090 won't be able to render anything that needs more than its VRAM can hold (something I suffer with so, so much). Oh, how much I want GPUs with upgradable VRAM, especially when Nvidia is drip-feeding VRAM sizes on consumer cards this dirty. I'd be okay with even as low as 2060 performance if that meant having upwards of 48 GB to use.
@@jsVfPe3 I paired my 3900X with a rx570, because gaming comes after compile times. It was tough tho.
@@jsVfPe3 computers are not only for gaming, ya know
AMD : I heard you like cores, so we put cores on your cores
This Episode Exploded My Head. In The End I Was Speechless. And All I Can Say LTT You Have Opened My Eyes.
I have a 3955 with a 3090 in my desktop. It's good enough at playing games, but it's literally amazing for office tasks. Applying an MLA to a dataset and outputting it as a Visio flow is almost instant, which is basically impossible on my high-end business laptop. Identifying process flows from usage data is an amazing tool, and a single map could save a company millions of dollars; this computer is the difference between me charging $50 an hour or $200 an hour. I'm really excited for these chips to hit the consumer market.
Would LOVE to see you build the ULTIMATE water-cooling PC, using a Cooler Master HAF 700 EVO with FOUR 420mm radiators at the SAME TIME, as a single water loop, with water cooling blocks on the: CPU, GPU, M.2 drive and on the RAM! This would be the ultimate dream setup for next gen hardware (like the intel 13th gen CPU, RTX4090ti, PCIe gen 5 M.2 drive and even hot DDR5 RAM). With full RGB it would also probably be the best-looking PC setup ever!
I am planning this setup myself (to ensure I never suffer thermal throttling, and to allow me to stably overclock as much as I want), but I am nervous as I've never done a custom water loop before. So I, and I'm sure many others who have saved up and are ready to build a new top-of-the-range setup with the release of the next-gen hardware, would really appreciate it if you could do this as a step-by-step tutorial. Thanks. :)
RTX4090ti doesn't even exist yet you know that right?
@Eden of the East Yes, I know; I thought I made it clear that my point was that new and upcoming hardware is just going to keep getting hotter and hotter, requiring larger and larger heatsinks and fans, OR people could wake up to the common-sense solution of water cooling. Sorry if I was unclear. :)
This CPU has more L3 cache than my first PC twenty years ago had RAM
Insane
The refurb market (even on Amazon) is a ripe one for server nerds and it might make for some interesting content.
Consider that you can get a refurbed 1U system with 64GB of RAM + 2x Xeon E5-2670 (a total of 16 cores/32 threads), and 2x 512GB SAS drives, for $200 USD.
"Competition all but dried up" means that the competition is still there.
In these days of high energy prices, there are many who can only dream of this. It just becomes watching new tech.
I could absolutely use this at work. I run full 3D EM (electromagnetic) sims and FEM for a living, and I bet this would really help. I use an older Xeon and 128GB of RAM in my daily workstation right now, but commonly use our networked machines with 128 cores and 2TB of RAM. I really wish I could send you guys some HFSS or AWR files to test on rather than Blender or 7-Zip.
This is what they're missing, I think. It's been shown a bunch of times that only a handful of content workloads benefit from these things, but I've heard from people in other industries saying they absolutely slay.
STH comes closest, I feel, to being able to properly benchmark these, aside from L1 Techs on dev workflow stuff, but it's still hard to get a clear vision of where they make the most sense.
I can potentially run something for you on our machine
I think you guys should start compiling Unreal Engine 5 instead of Firefox; I went up to 72 logical cores and got ~linear time scaling.
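For what it's worth, a "~linear scaling" claim like this is easy to sanity-check with a toy harness. This sketch uses CPU-bound busy-work as a stand-in for compile jobs; the function names are made up for illustration and are not part of any Unreal build tooling:

```python
import multiprocessing as mp
import time

def burn(n):
    # CPU-bound busy-work standing in for one compile job
    total = 0
    for i in range(n):
        total += i * i
    return total

def wall_time(workers, jobs=32, work=100_000):
    # Run `jobs` identical tasks across `workers` processes, return seconds.
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        pool.map(burn, [work] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    for w in (1, 2, 4, 8):
        # near-linear speedup until you run out of physical cores
        print(f"{w} workers: {wall_time(w):.2f}s")
```

Real compiles aren't perfectly uniform (header parsing, link steps, and I/O all serialize to some degree), which is why observing near-linear scaling at 72 threads is genuinely notable.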
Watching the title of this video change over the past few days was more exciting than the actual video.
"If you think the price is an issue, you're probably not the target market"
I can't imagine how AMD Genoa-X would be like 😵💫🤯
It's crazy to me that Gigabytes worth of cache is coming closer and closer in the server space
The Zen4 chips have more than enough compute for me, but not enough PCIe lanes. I want to be able to put a graphics card from each of the major GPU vendors in my machine and there just aren't enough lanes for that.
A $26,000 computer was designed around this CPU, and it's amazing.
Jeez, this thing is probably powerful enough to sculpt in ZBrush with Sculptris Pro mode on a 100M-tri model and do several texture bakes at the same time, at less than 50% load. Dat raw power.
Love how AMD was so focused on profit that I didn't even know about the chip until now.
You're not their demographic then.
The problem is even at this price they can't make them fast enough. There's a huge backlog of orders for TR Pro, so why would they cannibalize that market to make a lower end SKU?
I could see it if they had more fab capacity to saturate that market and then some, but it's a bit more nuanced than them just wanting more money.
If you don't know about it, you're not their target demographic and weren't ever going to buy it anyway. And AMD doesn't need to do any marketing at all for these, they sell like hotcakes anyway
Linus remains a brilliant host for the videos on LTT. I hope he never stops doing this ^_^
Maan... And I remember when Linus was going insane about the E5 v3 18-core Xeons.
Technology evolves so goddamn fast!
You are the only person that I watch sponsored messages from. In fact, I sometimes watch your content solely for the sponsored content LMAO
The moment you get an ad for pulseway featuring Linus before a LTT vid
I have watched this man's videos enough to recognize a segue to his sponsor coming a few seconds ahead.
LTT is a staple now and in the 90's they would have had their own show on cable tv
I would very much like to see a new line of "enthusiast grade" chips on a modern DDR5 platform. I'm still hanging onto my 6950x since there's no real reason to upgrade, and I get that these new CPUs are more powerful than anything on the X99 or even X299 platform, but from an enthusiast perspective, something that supports quad channel memory and a whack-ton of SATA/ PCI-E expansion on a modern architecture would be very cool to see. Don't get me wrong, I'm thrilled with how far CPUs have come since X99, and I'm very excited for the upcoming Ryzen 7000/ Raptor Lake launches, but I miss the days of having a more feature rich platform to go alongside the consumer grade ones, not counting the server/ enterprise level stuff. Seeing consumer Threadripper get canned was very disheartening.
I've been using a "new DDR5" platform with Intel for more than a year already...
@@s.i.m.c.a When I say new, I mean relative to enthusiast platforms. The last consumer grade Threadripper was released on DDR4, and it's the same story with X299
These days server/enterprise equipment is much cheaper than a workstation. No org with any IT/budgetary sense would allow workstations to be purchased over Epyc servers.
the only quad channel consumer chip i can remember is some old intel
maybe around 4th gen
@@LeLe-pm2pr X299 was the last from intel I think, and that woulda been 7th gen. I have no personal experience with X299, but I can at least attest to X99 (5th/ 6th gen) having quad channel. Either way, it's certainly been a few years 😂
My family's on a farm, and I feel like getting an LTT screwdriver is similar to getting high speed internet - constantly led on by the phrase "should be available very soon!"
Used to work in an HPC (high-performance computing) environment in the pharma industry. Money matters little to them. I remember when COVID started we couldn't get CPUs, so we ended up having to rent HPC servers in AWS, OCI, GCP, etc. Sometimes we would get bills of over $20k a day.
What are the specs of your work computer?
I just love how excited you always get about new stuff :D
If he doesn't, then he can't expect anyone else to.
His excitement is so infectious, always love it
Just wait until you see a Zen 4 based TR with 3D V-Cache, DDR5, and even more cores. It will be a monstrosity.
I wonder what TDP will be though.
@@michaelsemyanovsky9638 Well it's not like even Threadripper 3xxx likes Air cooling that much either.
I paused the video to finish my homework (which is in Spanish), and when I resumed it my mind thought that you speak Spanish, so for a split second I said "wtf, Linus is speaking English".
I agree, having supported many in my time... Nothing makes an engineer happier than a speedy beast of a workstation. And you say waiting for 10 minutes... waiting 10 minutes for that render or computation to complete also probably means adding an extra 10 or so that they spend playing table tennis or whatever else 🙂
There should be a Core Grouping feature that groups all of these cores to 32 threads so that it can also rip in single core
There is SOME grouping in the speed of getting the cached data from the same chiplet vs a different chiplet.
Like pixel binning on camera sensor
Super late to the party, but IBM has been doing that in mainframes for a while. The issue you run into is latency. I think Intel is also looking into it for their Diamond Rapids server parts.
@@jonathanjones7751 If they're all on the same die you just group the ones that are closer to each other; in fact, Apple has been doing similar things with the M2 Ultra: even though the threads aren't unified, there are 2 CPUs joined and acting as one big CPU.
Linus : "Let's talk about who and why and about our sponsor...... "
Me : give this guy a medal for that
LTT's sponsor segues are just the best!
Gonna become the fastest graphics designer in the west with this thing 😎
more TOKENS. MORE TOKENS
Yes mom, this is the cpu I need for my math class.😊
I'm going to be using this for rendering mandelbrot zooms.
"Ridonculous" - Linus 2022
I honestly thought that the CPU would be way more expensive than that.
He's underselling it a bit. They're retailing for $6,500 USD, assuming you can find stock of them.
Even if i got this one for free, I'd sell it again.
Sweet chip, but I couldn't come up with enough things to do at the same time on my computer for this CPU to be used anywhere close to a way that makes it shine.
And then buy a new PCIe 5.0 AMD PC lol
At about 8:40 he said that the screwdriver was coming soon, but it's already up for sale in case anyone wants to order and get in line.
CHAMPING at the bit. Get it right people.
After watching this video, I somehow got reminded of AMD and Nvidia GPU launch and how their GPUs only start selling after a month or so after the event. It would be so good if every GPU is released at the same time and is available just a week after their event or maybe even days.
I'm curious if this CPU is fast enough to make rendering animations on GPU pointless. With that many cores I'm curious if this thing can render a blender animation faster than a high-end GPU
@@marcogenovesi8570 Right but those cores function entirely differently. 4 CPU cores and four graphics card cores are not equal
While I'm an AMD fan, I'm impressed that Intel's top consumer tech is just a factor of 4 slower than the best-performing, extreme-die-size server CPU on the market.
That's... not something to be impressed by
Intel is faster per core. That could matter more. Not all jobs can be shared to lots of cores.
My lab purchased a workstation with a 5995WX about three months ago, and we have been torturing it ever since.
The workstation is almost exclusively used for de novo genome assemblies and comparative genomics, and the bottlenecks have traditionally been RAM and cores. The workstation has the 5995WX paired with 1TB of RAM, and came in at just over $13,000.
Compared to the cost of our last server (eight 12-core Xeons over 4 nodes, and 1TB RAM), which was closer to $30,000, this actually is an affordable option.
What software and OS?
@@Teluric2 For whole-genome assembly of eukaryotes: CANU, HiFiASM, Celera, SOAP2, RagTag, and ABySS (the software varies depending on the type of data used).
For more general bioinformatics, it's mostly custom Python (2.7 and 3.8... I wish people would just use the latter) scripts, but we do use some outside packages (FastGBS, CLC Genomics, and Geneious) along with BioPerl and R.
The OS is Ubuntu 22.04 LTS, but we also have the workstation set up to dual boot Windows 10.
The older server has 4 nodes, each running Ubuntu 22.04. It runs the same software, with individual nodes used for genome assembly (node 1), gene annotation and synteny (node 2), comparative genomics (node 3), and protein modelling (node 4).
Two decades from now, many of us will be amazed at how impressed we were with such a low-end CPU.
Can you guys start to do audio processing and DAW render/ live audio processing latency benchmarks too? I feel like it would be helpful for lots of people looking to build a music production focused machine
AMD is still cheaper per core than Intel. They are selling for about $100/core which is a steal for rendering.
"I DISARGEE're -Ministry of Magic: Department of Mysteries" -thhardglump
Those clock reductions with many cores in use, oof. As ridiculously expensive as these things are, and being that the customers are almost exclusively businesses that have to justify every cent, I'm surprised CPU manufacturers haven't made more of an effort to optimize for *actually properly cooled* use cases to keep those clocks high. Performance is likely about halved vs its theoretical potential (even assuming they can "only" reach 4.5 ghz across all cores), so as long as the cost of a suitable cooler and power delivery is less than a second CPU (and second entire system to put it in...), that should come out ahead financially.
yeah i wish reviewers would show thermal throttling as part of the comparison
To be fair, that 2.7ghz clock is likely a worst-case scenario, with all cores being hit hard, AVX on, and a cooler without massive headroom. You'll probably get better clock speeds than that most of the time
They'd need liquid nitrogen. At 4.5GHz it's pulling 400W, and that's not even all-core. The heat is too much for most coolers, not because of radiator limits but because of block transfer and the IHS. Direct die would help, but pushing much further isn't going to be "properly cooled" no matter what, unless you're talking single-core boost, I suppose. As far as I can tell (I don't have a chip on hand myself) it's already pushing the limit; the difference is just how you want to push it when OCing, it seems.
They have started doing that for some EPYC being used in water cooled datacenter applications, but I don't think they're off the shelf SKUs
They actually are optimized, and designed to scale with cooling capacity. That's what PBO offers. At some point you'd be expecting too much from 64 cores in a single socket.
I love the analogy of a single core boosting to 4.9ghz being like the wheel of a car flying off at 350mph.
Tricep flex on 6:08 was peak! Lukin good linus!!!
when Linus says its a privilege to buy something, you know it's going to be expensive af
Yes, I want this.
For which Tasks do you need this Power?
@@gabomasterbp8170 Solitaire
@@kevinmorgan7085 understandable
It would have been nice to see a comparison between this and the Threadripper 3000 equivalent.
It isn't an apples to apples comparison otherwise. I guess there is a reason this wasn't included.
If you want this comparison, Hardware Unboxed did a video featuring both the Threadripper 3000 and 5000 64-core models a few weeks ago.
If you want to compare the threadripper pro 3000 to threadripper pro 5000, then it's basically a 20% on average performance increase in single-thread and multi-thread.
I got an LTT ad before the video, and Linus tricked me into watching it like it was the intro to this video.
With the amount of shout-outs Wendell gets, I'm surprised he hasn't hit 1 million subs yet.
Great video! Linus should get a raise!
Disagree Linus should be demoted to the basement of his house
Whoever owns this LTT company should give Linus a raise... Oh wait
hahahaha!!! Yvonne? Give Linus a raise! :D
Would be cool to see this compared with a Mac Studio, especially on the Adobe tests
What makes you think the Mac Studio can even compete with this monster?
@@bhaveshsonar7558 Well, I really only wondered about the Adobe tests, given that this beast of a PC didn't do so well on those; quite similar to the 11900K, which Apple says they beat with the Mac Studio.
This thing is gonna wipe the floor with mac studio
@@paniniman6524 Not going to win by far against this chip. Maserati vs Lexus.
The only thing that annoys me in this whole segmentation is I don't need more than 16 cores but do need way more PCIe lanes than regular ryzen can offer...
As much as it isn't viable for AMD, I REALLY want an in-between.
Try to find a slightly older yet still good Xeon. They generally support tons of lanes, and you can find low-core variants; just be sure it's on a good architecture and has decent clock speeds. I have a Xeon 1620 v3 and it doesn't even bottleneck my 2060, and it's 8 years old. So something like a 4-year-old Xeon would probably be good for you.
There's probably an Epyc SKU that answers your needs.
Alas, that market (HEDT) is pretty much limping along, dead in all but name. Intel has more or less given up on it, and unless you're talking about these workstations, so has AMD (in the consumer market, it's been Zen 2 Threadripper for the last two generations). As for Apple, Linus just told you what's up there: they have yet to transition the Mac Pro to their M series chips, and there isn't an update for the Intel models. And that's just the x64 side of the "market"; ARM chips don't cover this demographic save for the odd unit or two, and anything else is REALLY specialized, the kind of thing you've likely never heard of that lives in the realm of academia: one-of-a-kind, one-off machines, or something like POWER chips. None of those would qualify as remotely "affordable".
Had to laugh at the fact that they suggested heading over to L1T to buy some merch; I literally bought some from them the other day. Shipping was a bitch, but I added a few other things to close the price difference between the month-long shipping option and the "7-10 day" shipping. It arrived from the other side of the world in 3 days (4 if you count that the order was placed the day before in the US, but keeping to a single time zone I think it was 3). Much respect for Wendell and the team over there, both for the content and the merch. Hopefully they get the next generation of the DP Repeater + Splitter out soon; 1440p144 on DP1.2 is good, but 1440p240 on DP1.4 would be awesome, and while I just bought the 144Hz one, I wouldn't hesitate to buy the DP1.4 version to get that extra boost... and buy even more merch to get that fast-shipping price difference down ;)
Love the animation on the graphs so you can see what is being talked about
Well, if I win the lottery I'll buy one to edit my YouTube videos on how to waste your money as a lottery winner. If I'm lucky, my YouTube channel will stave off bankruptcy for a few extra years.
Minecraft chunks test
As we're nearing the size limit of chip design, I expect all of us will have 64 core CPU's by the end of the decade.
thats good i guess
The AEC firm I work for has 3 5965WX systems on order (128GB RAM, RTX A5000), not top spec systems so “only” about $10k a piece. Will be used for CPU VRay rendering through 3DS Max, Leica Cyclone/3DR, and some Tableau and ArcGIS Pro. They’ll replace some old 9900K based systems so should blow the doors off what we have now, I’m stoked to try them out.
I just ordered mine, thanks for showing me this sweet, sweet techno-candy!
Bro, I can't even afford to want that, much less actually want it.
I've been waiting for the 16-core 5955WX since announced. I really don't need even 16 cores, but I really need the 128 lanes.
Too bad AMD isn't releasing them for retail, only the 24-core and up. That's just too much for my budget.
I was in the exact same situation. I ended up moving from X399 to EPYC instead of TRX40 because I couldn't wait anymore. I went with a first-gen EPYC 7351, which is dirt cheap and can OC easily with the ZenStates app. I have a few at 4.1GHz with the Enermax TR4 cooler, and one machine at 3.8 on air cooling with very low voltages as far as Zen 1 is concerned. Server chips are binned insanely well. It's also great that the 3.8GHz 7351 is faster than a 1950X and sits between a 3950X and a 5950X in benchmarks. Those machines are for our storage- and network-intensive tasks, where we stuff as many ASUS Hyper M.2 cards for NVMe drives as we can into one workstation, plus 10Gb/20Gb NICs.
Now imagine that with a 4090
Holy smokes
My gripe with these benchmark scores is that they don't take into account the actual demographic for multi-threaded systems like this. For example, Adobe's Creative Cloud is an abominable suite for testing cores, and while Blender is closer to the demographic, you should really be using V-Ray (non-GPU), PhoenixFD, RenderMan, Octane, Houdini, DaVinci Resolve, and other top-tier pro packages. The only reason Adobe is thought of as a "standard" is its (very smart) position in the market; we professional animators, VFX, and simulation artists use the Adobe packages only to a limited extent. (AE and Premiere are notoriously buggy.)
This chip is meant for one thing and one thing only: raw (non-GPU) computational power.
This is corporate companies version of "Yes mom, I need it for school"
Yeah, this is overkill; a 24-32 core non-Pro Threadripper would have been the sweet spot for every workstation, but AMD refuses to offer that. Hopefully AMD will revive non-Pro Threadripper with Zen 4, or when Intel finally comes out with their HEDT CPUs.
Since Ryzen 7000 is going DDR5, that hope is now dead. I would have happily sold my other kidney for an upgrade from 3970X, but it's just not happening, nor do they seem to have a TRX40 chipset successor on the roadmap.