Rather than use CPU temp for a custom water loop, I took the advice of some YouTube video to use a temperature sensor that uses water temperature to drive radiator fan speed. I use an AMD 5950X to calculate atomic structure spectra and it takes about a week running 24/7 to cover the full spectrum of hydrogen (153 lines). Trial and error gets me closer to guessing the internal structure. I'm better on hydrogen than existing models. I could use Threadripper, but affording it is way out of my budget.
This would likely be overkill for me, but I am strongly considering a lower-tier one for professional use as a programmer. It would easily allow me to run a virtualized Kubernetes cluster, benchmark high load for services I write, and compile my code really fast (thanks to both the high core count and huge RAM). Basically, removing most of the bottlenecks I have.
Some of the use cases for TR outside of a work environment could be:
1) AI hobbyist - machine learning / deep learning training and inference use cases, running LLMs and/or developing LLM-based apps.
2) Hosting personal websites for monetization - blogs etc.
3) Educational investment - learning IT stuff that requires infrastructure, like Kubernetes, Docker Swarm etc.
I have been wanting to do some aerodynamic simulations for work while at home instead of going in. The primary reason I haven't been able to is my computer, so I'm hoping this platform will give me the performance and, more importantly, the RAM for it.
Doing CFD modelling now in my spare time (former work and master thesis looong ago), I can always use more multithreaded CPU performance for dynamic 3D post-processing purposes, not to mention the numerical calculations of dynamic flow modelling in all kinds of fluids, from air and gasses to high-viscosity oils, through complex geometries. I'm in a totally different field now, but as my dept. closed down I could get the cheaper "PC" test bench for almost free, which I'm now using as a hobby project, and I cannot resist doing small tweaks and the few hw upgrades still doable in 2023. And $5,000? No kids of today would believe what amount my small energy department at the university campus invested in computing power for their master degree and PhD students during my engineering studies in the mid 1990s, of course backed up by close collaborations with the companies we did main study projects for. Mainframe costs of which a high-end multicore PC could do today at about 1/100th the cost, 27-28 years later.
I'm curious to see how this would benefit rendering videos, if at all. Also would love to see a water cool loop try to cool that beast so it doesn't throttle!
I could use this for my home vSphere lab. Would be awesome. Would surely beat the old 16 core Xeon on an X99 board that I am using currently. I use the lab machine for testing software and developing POC environments.
I got one of the original Threadrippers (1950X) and thought it was crazy with 32 threads. I wasn't as interested in peak gaming so much as I wanted to not need to close anything while keeping performance up. I'm now replacing it and went with a 7950X3D or whatever it is vs staying with Threadripper, mostly because I don't need more cores/threads currently. Also the price is insane. Outside of a few very niche home-server ideas it does seem like a product in search of a market.
I work at a huge manufacturing company and their servers should be running this! The current servers have comms issues when running hot tests, where we have to restart the test and sometimes restart the computer to clear the memory cache.
Answer = megatasking. When you have 170-plus processes running in Task Manager, with a 38-inch screen hosting 5 main apps, you want to throw CPU cores at those, and the more the better 😎😎😎 As for having all that 1TB memory = RAMDISK 🔥🔥🔥
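On the RAM-disk point, here's a minimal sketch of the idea, assuming a Linux box where /dev/shm is a RAM-backed tmpfs (the path and payload size are just illustrative, not anything from the video):

```python
# Rough sketch of the RAM-disk idea: on Linux, /dev/shm is usually a RAM-backed tmpfs,
# so pointing scratch files there keeps I/O entirely in memory.
import os
import shutil
import tempfile
import time

RAMDISK = "/dev/shm"  # RAM-backed on most Linux distros; falls back to normal temp dir otherwise
scratch = tempfile.mkdtemp(dir=RAMDISK) if os.path.isdir(RAMDISK) else tempfile.mkdtemp()

payload = os.urandom(256 * 1024 * 1024)  # 256 MB of junk data as a stand-in working set

start = time.perf_counter()
path = os.path.join(scratch, "work.bin")
with open(path, "wb") as f:
    f.write(payload)
with open(path, "rb") as f:
    _ = f.read()
print(f"write+read round trip: {time.perf_counter() - start:.3f}s")

shutil.rmtree(scratch)  # clean up the scratch directory
```

The same trick works for compile scratch dirs, caches, or anything else that hammers the disk.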
It's not just the thermal interface material Jay, it's also the huge IHS you can spread the heat over. I.e. the 350W is easier to handle on a smaller cooler because the heat isn't as concentrated and can thus be dissipated more easily.
I really think that if you have really high-end CPUs that you're wanting to test for gaming, you should test games that are more CPU intensive, not the old standbys for graphics testing. Things in the simulator category would be much more of a decent test (like Farming Simulator 22, or Cities: Skylines 2 with a city over 200k population, or especially over 300k, where the game slows down to a crawl even on 3x speed because the simulation processing takes forever, so you end up under 10 fps, sometimes switching over to seconds per frame).
@@organicinsanity2534 (as a long time Farm Sim player, it's pretty damn frustrating at this point how little they're seemingly putting into each new release and how outdated by 'sim' standards it is; it really should be called 'Farming Arcade 22'... ...but that's a topic for another time...) lol
I remember when we got new machines. 3000 dollar workstations for each developer and we were ~10. We did some math and the expense paid for itself within the week. (This was Java and we got modern SSDs, and since the java compiler is horrifyingly I/O starved, this made a massive difference.) The fact that we could now run a local cluster of actual prodlike servers too, meant that we could test a lot more. Instead of having one app server with 3 deployments, we had 3 app servers each with their own deployment. Was a total game changer in terms of how fast things went. (We had one EJB server, one web server and one "REST" server IIRC.)
I run a physics lab, and being able to work from home and run simulations would be great. I also use Redshift, which benefits from more PCI-E lanes for faster rendering. I have a 5950X; I'll be upgrading to a 7995X.
RAM is a bit of a good point, since DDR5 ECC I believe is keyed differently Also I like how Jay is like “it’s only $5000, I can’t believe I’m saying this”
Go for LRDIMMs: it will reduce wattage on the memory controller, so that part of the TDP is reduced, allowing the CPU cores to take more wattage and so increase their share of the TDP. LRDIMMs also make it dual rank rather than single rank; LRDIMM memory is a pretty standard setup on servers and is generally faster overall. Of the wattage, more than 50% is on the memory and cache controller.
How much does your electricity cost? I only pay like $0.17 per kWh... Until recently I never realized how cheap my electricity was compared to a lot of places... I guess I have taken it for granted...
Already dreaming of this CPU and I know I'll have to buy it next year. Currently running 2 OC'd machines - a 10980XE with 256GB of RAM and a 5950X with 128GB of RAM. What am I doing besides playing DCS World? Marketing Mix Modelling. 1 calculation takes about an hour and I have to run 20-30 to deliver a presentable result to the client. Still cheaper (and faster) than cloud computations. This CPU would cut it to a quarter, or allow me to run 4 in parallel without losing speed. Yes, I work from home for my own company and yes, this is the mother of all CPUs.
I'm a 3D animator and VFX artist. This CPU is the dream! I could maybe even binge watch on my 2nd monitor while rendering and not expect my pc to freeze.
I used parallel computation in my Ph.D. thesis for post-process video tracking. This CPU would dominate the algorithm I made, as I could process 100 frames in two seconds compared to a single frame per second.
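For anyone curious what that kind of frame-parallel job looks like, here's a minimal Python multiprocessing sketch; track_frame() is a hypothetical stand-in for the actual tracking algorithm, not the commenter's code:

```python
# Minimal sketch of frame-parallel post-processing with multiprocessing.
# Each worker handles its own frame independently, so a many-core CPU scales well here.
from multiprocessing import Pool

def track_frame(frame_index: int) -> tuple[int, float]:
    # placeholder "tracking" work: pretend we compute an object position per frame
    position = (frame_index * 0.1) % 1.0
    return frame_index, position

if __name__ == "__main__":
    frames = range(100)               # e.g. a 100-frame batch
    with Pool() as pool:              # defaults to one worker per CPU core
        results = pool.map(track_frame, frames)
    print(f"processed {len(results)} frames")
```

With 64 cores and independent frames, the pool just fans the work out, which is exactly where a Threadripper shines.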
For people like myself who are buying this, it would be better to see more Data Science / AI benchmarks, i.e. token generation speed, any of the PyTorch or TensorFlow benchmarks, etc.
Most of the people buying HEDT CPUs are looking at the cost as more of a business expense than a personal expenditure, so yeah AMD knows they can charge much more because of this and it's going to appear obscene by consumer CPU standards.
This is the type of processor that would be completely useless for gaming. This type of CPU would be for people who do heavy video editing all the time to make money, or oilfield calculations, or financial or science calculations.
This would be so useful for 3d modeling and rendering Jay. All kinds of simulations in 3D. Rendering 60fps animation. Get two of these and you already have yourself a mini pixar render farm. This is the ultimate dream of 3d artists like me.
I'm currently on a 7950X and I run a minimum of 4 VMs for MDM testing. My previous machine had the same core count, but it was a Threadripper 1950X for application integration/VM stress testing. I'm still stunned that on a 7950X I can snapshot an entire VM in under a minute when the 1950X takes upwards of four or five minutes. I'm sure some of that comes down to having 64 gigs of DDR5 running at 6K, but it's still amazing how far they've come in just a few years. One thing I do notice is the limited peripherals you have with the desktop version of the CPU compared to the server version; with a 1950X it seemed like I would never run out of I/O, but on this machine (7950X) I'm lucky to be running two different memory channels.
That is some very impressive "piece" of silicon. Really insane what kind of performance AMD manages to squeeze out there - the boost frequency and the efficiency. Also love their decision to bring these kinds of CPUs back; there's certainly demand for it here and there. Since Ryzen launched, AMD has evolved so much and delivered awesome stuff again and again. Many never thought it could happen, but yea, Intel - you guys gotta step up a notch (or three). Competition is always good for the consumer.
In my profession we perform frequent POC testing/simulation/troubleshooting for customers. For these builds we require nested virtualization at scale in our virtual lab environments. As you can imagine, the more 'devices' like virtual routers, switches and servers we spin up, the greater the need for multithreaded compute resources. This CPU would be great for our use case and workflow.
3D Rendering, CAD, editing music (such as tracks for video), software development, graphics apps, and just handling the large number of browser windows I leave open.
I just want more PCIe lanes without dropping the GPU to x8 connectivity. I've got multiple NVMe drives, capture cards, and wireless VR. I wish, so badly, that I could just get a regular Ryzen/mainline Intel chipset that would properly support that.
I really like the differences between each youtuber: JayZ talks a lot more about cooling and RAM, while Steve from GN gets more technical about watt performance, and Linus talks about it overall with a more consumer-based approach. No single youtuber does the same review; I enjoyed watching all 3, and every time I watched I found something new, or got more depth on something the others mentioned but didn't go in-depth on.
Sound production/mixing: different tracks of sound playing at the same time. One of the workers for our security client does sound production at home and uses a 2xxx threadripper (I believe it's been a hot minute since I last talked to him). He told me that each individual sound track uses a CPU thread. So if you need to use around 100 tracks of sound this CPU is for you... I guess
Audio is not that cpu heavy. And one track per thread is simply incorrect. I work on sessions with way over 100 tracks on a 9900k without problems. Single thread speed can still be more important with a lot of audio software and plugins. You'll see more Mac minis and laptops even in big studios these days simply because more processing power is unnecessary.
I use my 3960x threadripper for a home server that does a bunch of VMs, Plex streaming with support for transcoding a couple streams (no GPU acceleration) and the occasional Handbrake transcode. The VMs are the core use case, most are for work stuff. I undervolted my threadripper to achieve better performance/watt as it's on 24/7 and the system consumes the majority of power in my house. I wouldn't need more than the 24 cores I have today, but if this system ever breaks I'm definitely getting another one, I love this platform.
If the "tech press" hadn't been unduly unfair to AMD with the FX-8000 series - (which was way ahead of its time with "CMT" clustered multithreading) - perhaps AMD would have delivered Ryzen by 2015 instead of 2017, and we'd be onto the 9000-series by now. Developers (like Windows it self) didn't care about AMD, which naturally didn't help at all. In fact, developers didn't care about AMD, until AMD got into the consoles, and then they were forced to care and put in the extra work to utilize the extra cores and extra threads. Big what-if, but absolutely more possible than not. You can thank the "tech press" for the stagnation as a result of always promoting Intel with the single core performance doing their disgusting ghoulish 1080p CPU benchmarks, it allowed developers to just squat on garbage code.
You are right that they were forced to care due to cross-play. In my opinion it also came down to AMD trying not to follow suit with Intel. AMD was trying to make a statement, and ultimately NVIDIA was just highly funded by comparison. It was a P2W situation. @@clitmint
As always an interesting and thought-provoking discussion. It's an awesome CPU and no doubt this would be great for forays into numerical analysis. I would love to see what you can put together for water cooling. While I could envision building a 7980X system for grins, the price is a real obstacle and worse, it so far doesn't show on-par gaming performance. For gaming, it's more rational for me to struggle along with my i9-13900KF which despite its advanced age is still a good CPU for gaming, not to mention being as reliable as any I've used. Besides, it's adequately performant, giving scores of 40500 on R23, max temps of 82C and max power of about 300W (ahem, in an ambient of 26C). The OC tuning that gets this is basically pushing the big button on XTU (even with a wholly miserable SP of 96).
Professionals need it for data processing. When I did 3D measurements we would end up with hundreds of thousands of points that we would need to convert into models. The process was multiple steps. Sometimes an engineer would be waiting hours for an individual step. Sometimes we would do major steps overnight. If you could cut that down to 30 mins with something like this it would be tremendous. We had dedicated systems because we had more load than the servers.
I still have a 1900X functioning. If I had this chip, I'd feel at the top of the world. And all I'd use it for was Folding@home. And it would perform admirably.
What a rollercoaster... so much to love, so much to hate. By the way, clicked your link for merch and noticed you don't have any "Tall" available (surprising for a man of your stature). If you add "Tall" as an option (XLT, 2XLT), I'll happily buy one of every shirt in your store. 2X, 3X, 4X, etc. never seem to get longer... just wider. Need More Longer.
We work with a piece of software called Ansys. The software just pulls whatever resources you throw at it. A 128-core and 1TB RAM machine would be right at home doing our simulations. It would probably reduce our simulation time by a few hours at the very least, and our simulations wouldn't be bottlenecked by RAM either.
The potential for so many VMs... I want three, for a cluster, running vSphere. Yes, for my home lab. No, I couldn't even come close to affording three, or making use of them. I still want.
I would love this machine to run the World Community Grid. It runs all cores at 100%. I run 5 machines at home doing this. I never noticed the electricity bill as an issue until I added 2 7950X CPUs into the mix.
We are looking at building a TRX system for our structural engineers that they can remote into to run different models. Although 5000 usd is a lot for a home system, it’s very reasonable for a relatively small engineering department 🙏🏻
My use case would be UE5. I compile code and shaders with UE5 and based on tests that I have read the 7980X compiles UE5 in under 10 minutes (9 something, and that's stock!) while my 5950X does it in around 25 min. My other workloads are training neural nets and for the time being my training can only be done on a CPU.
@ZaHandle not all algorithms are supported on GPUs, also when exploring new kind of NN it's faster to evaluate the architecture on CPU and when you are done perhaps you might be able to optimise for GPUs or on other AI accelerators
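Since this thread is about CPU-only training: here's a tiny sketch of what that can look like in PyTorch, pinning the thread pool to the core count (assumes PyTorch is installed; the model and data are throwaway examples, not the commenter's workload):

```python
# Tiny sketch of CPU-bound training, pinning PyTorch's thread pool to the core count.
import os
import torch
import torch.nn as nn

torch.set_num_threads(os.cpu_count() or 1)   # let the big core count actually get used

model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(4096, 128)                   # synthetic data
y = torch.randn(4096, 1)

for step in range(10):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(f"final loss: {loss.item():.4f}")
```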
Hey, booting up a computer years ago when I first started building them wasn't all that fast either. Granted, it didn't take 15 minutes, but it sure seems like a long time in retrospect because it never got any faster, and the system had to check the RAM back then too, even if it was only 4 MB or so. A couple of years later, I built a computer with a much faster processor and put 128 MB of RAM in it. Man, did it take a long time to boot up with all that. I used that computer as a server for my old desktop and the kids' computers (hard-wired ethernet), so I rarely ever shut it down.
When you work for a company that doesn't have a render farm, and one you work remotely for. Rendering, especially for some people who need to render ultra-realistic scenes with multiple angles, maybe even walkthroughs. And every time someone asks for a change you have to re-render it all. Each render might take 10 minutes and you have to do 100-300 different renders; if you could cut that time in half and get double the work done, this becomes worth it.
I would use it for real-time terrain generation from satellite data, 3D modeling/rendering and video editing/encoding; it's what I was using my Threadripper 2920X for... Since the 2920X sucks now and my 7950X3D outperforms it in every way, I'm doing all those tasks on the 7950X3D.
Hi Jay, it's archviz, aka architectural visualization. I work from home, currently using a 24-core Threadripper 3960X and lots and lots of hours in 3ds Max and V-Ray. That's it, 1 computer is fine for me, so till now there wasn't much reason to upgrade, but this new Threadripper looks juicy :-) And of course if it can run Skyrim, Doom Eternal or Cyberpunk well enough then it's perfect.
Please do a video series with extreme cooling shenanigans, OC/undervolting etc. with Threadripper. HEDTs are so overkill and extreme in nature that it's only fitting to do some crazy "stuff" with them. :)
I am the market for this lol. Game dev that specifically works in environment creation/simulation. I need something that clocks higher than my dual EPYC workstation but still keeps my core count up.
I'm a firmware engineer, and I would love to be able to use one of these. I don't play many games on my PC, but compiling Linux, or other software/firmware where you have hundreds of files to compile, means that the cycle time is cut down!!! Way down. I remember when Linux took more than 3 hours to compile. Also CAD/CAM software, especially something like 3D modeling, would be nice on this CPU.
BRAVO! Thank you for a well thought out ENGINEERING analysis!!! This chip could be used for Molecular Dynamics simulations. I would like to see you run LAMMPS, GROMACS and other codes against Threadripper and EPYC chips.
When I worked at Nickelodeon, we had a render farm in the building.. it was an entire room with floor to ceiling rack mounted render blades.. dual xeons in each. This single CPU is more powerful than that entire room.
I don't know what's more incredible, the rate of advancement, or the degree to which almost everyone takes it for granted lol
What a time to be alive! 😂
Maybe if that was 20 years ago...
Aaah the og cartoon channel loved it when it was around
@@HenrySomeone 20 years ago or not that is still impressive ?
I used to work on a code base that was roughly 100000 files and several million lines of code. On the 20 core Xeon workstation I used at the time, it took more than 30 minutes to compile the product.. Sure, sometimes you just touched a small part of the code and it just took a few minutes, but often you touched some core header file and almost everything needed to be rebuilt - a CPU like this would have saved me tons of trips to the coffee machine while waiting for a build to complete (multiple times daily). It would have paid for itself in just a few weeks.
Would love to know exactly what Xeon you were running in there 'cuz there are many E5 26XX V4 Xeons selling for a nickel a dozen in various places. Cheap MoBos have popped up with X99 chipset as well.
A graph on a monitor behind Jay is so much better than a full screen with voice only in my opinion. I was surprised how much difference it makes to me.
Anyways, good job with the video as always. :)
Yeah, I kinda liked it also!
Agreed
I would just like to note that I agree with this so much that I liked your comment, despite the knowledge that doing so would increment it from 69 (giggity) up to 70.
@@owd200 The pain is there, but I am willing to forgive you :D
RIP "69th" like. You will be missed.
PS
I guess, we are now waiting for the butcher of the "420th" hahah
I... Didn't even realize this until you said it. I agree too.
I love the new way of going through the charts. Sitting at a desk and explaining is much better than just charts and music. Love the channel Jays2Cents team.
I do 3D, owned a 3970X, and would happily take the new 7980X, or even the 7995X. The speedup that you get is amazing. Outside of that, even running multiple render nodes out of this single CPU would quickly pay off the electricity cost. You can split it into 4x 16-core render nodes which, if optimized, will for sure draw less power than 4x 7950X.
I've been told that 40 series is best for 3D rendering for Daz/Blender
@@AdrianGarcia-cj7ux We're talking CPU's here not GPU's. AMD doesn't have a "40 series" in the Threadripper line up.
I have the 3970X as well. It's a great value for a 3D workstation. Getting the faster 64 threads would be nice, but you could get a 4090 with the 32-thread version for the same price. For 3D, GPU rendering is the thing now (even though I prefer CPU).
@@AdrianGarcia-cj7ux You are correct. An overclocked 3970X has the same rendering performance in Daz as a 1080 ti. A RTX 4090 is almost twice the performance of an RTX 3090 in Daz.
@@JWPanimation Depends on software does it not?
I write HIGHLY multithreaded and long-running software at home for fun (nothing practical). This kind of CPU is exactly the kind of thing I would use to do that. Right now I'm using a dual socket AMD EPYC 75F3 system. I'm using a 5950X as my desktop processor for development, but I'd end up using Threadripper as a server for this kind of compute.
EPYC has tons more memory bandwidth and lanes over Ryzen processors. I too have a couple of 5950X (Linux workstation and Proxmox) processors. So I would love to run EPYC for my Proxmox servers, but I'm waiting on prices to come down a bit on the secondary market.
I definitely need this at work. Our security team hasn't found a security suite it doesn't like and thinks they all need to be installed on all systems.
This may let me actually use my PC while letting those background tasks run lol
😬🤔 😂😂
As a security professional, I felt that shot haha. It really does feel like, between endpoint security and network teams, they have a bunch of endpoint clients that all want 100% priority on the system, oh and daily health checks and scans that just fuk your day lol.
@@whoopass2rb when I was doing applications, almost all the developers wanted to assume they had 100% of the machine and everything had to run at top priority.
It meant users had to sit & twiddle their thumbs for things that were basically background long running tasks, when they could have started doing something else.
It's risk aversion, "something might go wrong".
I found one busy-waiting bug that had caused a large company's weekly billing reports to take more than the weekend, so the developers had had to automate a cloned second machine which was purely for reports... doh.
It was a single character fix on 1 line of code to an OS call argument.
@@whoopass2rb Huntress EDR and Bitdefender Total Protection suites (or Cylance + Windows Defender) are just as bad. Even breaking it down to only two products, or even one, you can't get away from patch and scan days.
It would be cool to see you compile often-used software, like Chromium and the Linux kernel, both before and after an overclock. That would be a cool video, Jay. Hope you see this.
I enjoy this layout. Also Zen 4 Threadripper is insanely efficient. AMD has done an amazing job.
This would be a killer CPU for a DCS World server. A large mission with a lot of units could possibly keep a high fps with this.
Assuming it doesn't crash nonstop like every amd thing.
@@antoniohagopian213?? Wake up bro, it's not 2014 anymore 💀
@@antoniohagopian213 I've built many systems, 90% are AMD. Not one system is unstable or crashes at all. I've had more issues with Intel CPUs. The old stigma is still alive and kicking.
@@antoniohagopian213 Dude, I used to be an AMD hater and even I know that the old AMD driver issue days are over.
@@antoniohagopian213 This shit needs to stop, honestly. If AMD crashed constantly they would not be as successful.
I appreciate the container ship analogy. I like Jay’s content but don’t always understand everything because I haven’t been interested in computers until recently. So the container ship analogy was very helpful for understanding the purpose of this type of CPU
Yeah, it would be better to use the analogy of a container ship versus a cargo plane.
The error-correcting memory matters too; if you have a whole department relying on results, any data corruption experienced will be traumatic.
Real informative, always like how you give a comprehensive and understandable breakdown of the hardware information, as well as the testing methodology and terminology used.
Also: Props to whoever named the E: drive 'BALLZ'. I approve of the occasional silliness in a work environment, it helps make things feel way less tense.
It can be used at home, most definitely, but for professional workflows in the vast majority of cases. I have a friend who uses the 3990X for numerical simulations with programs such as ANSYS, LS-DYNA, ABAQUS, Simulink, etc. And this year it is time for an upgrade. Most probably he will get one of the Pro series Threadrippers, if they prove to offer much more performance than the HEDT Threadripper in the programs that he uses.
Numeric computations are memory bandwidth bound. Octa-channel RAM could, in theory, up to double the performance in comparison with HEDT.
@@falsevacuum1988 Agreed! Honestly I do not know much about the subject. What I do know is that on his current platform with the 3990X, due to budget constraints, he is limited to 256 GB of RAM, where 512 GB would have been a sweet spot for his needs. Since the new Threadripper (Pro) platforms are already an option, over the years he managed to balance himself financially, and he can afford to upgrade to the newest platform, whichever he finds best suits his needs within the budget he'll decide to allocate.
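For a rough sense of why the octa-channel point above matters, here's the back-of-the-envelope peak-bandwidth math (DDR5-5200 is just an assumed example speed, not a number from the video):

```python
# Rough theoretical peak bandwidth: channels x transfer rate x 8 bytes per transfer.
def peak_bandwidth_gbs(channels: int, mt_per_s: int, bytes_per_transfer: int = 8) -> float:
    return channels * mt_per_s * bytes_per_transfer / 1000  # GB/s

print(peak_bandwidth_gbs(2, 5200))   # dual-channel desktop:  ~83 GB/s
print(peak_bandwidth_gbs(4, 5200))   # quad-channel HEDT TR:  ~166 GB/s
print(peak_bandwidth_gbs(8, 5200))   # octa-channel TR Pro:   ~333 GB/s
```

Real-world numbers land below these peaks, but the channel-count scaling is the point.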
Simulations in engineering, medicine, and physics are ideal for this type of processor. I used an earlier Threadripper model for Monte Carlo simulation of the interaction of high energy radiation with human anatomy to plan cancer treatments. The highly parallel nature of these calculations benefits from having lots of threads.
Seems a GPU would help.
@@brodriguez11000 This is true. The application could make use a Tesla card if one was present, but that was a bit too much for my home workstation budget. The TR processor was surprisingly good on its own.
The first thing I learned with my Threadripper 3960X build is that cores multiply watts... for that reason overclocking is a big no: even if each core only uses 3W more, across 64 cores that adds 192W just like that, and if it's 4W it's plus 256W. It rises very quickly. For the same reason, undervolting becomes very interesting.
Never thought about it like that, but I suppose that makes sense...I have never had a chance to tinker with HEDT CPUs but it seems like it could be really fun!
Each 7980X core sips only 800 mV under full load at default clocks. AMD tuned the efficiency already about as far as it can go - but there is OC headroom in these chips, it just takes a lot of power and completely ruins that efficiency. I think it was Steve at GN that got about a 30% perf increase OCing just on an AIO. It sucked 2x the power over stock, though.
@@WaspMedia3D i guess for 96 core it's going to be either multiple rads custom liquid cooling or some type of sub-atmospheric shenanigans
@@manoftherainshorts9075 It's not Intel ... lol. Actually the 96 core parts have all power reduced cores - the TDP is the same for all of the threadrippers - pretty much the same as a 14900k under full load.
@@WaspMedia3D we are talking about overclocking here pal
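Going back to the per-core power math at the top of this thread, here's the same arithmetic as a tiny script (the 3 W / 4 W deltas are the commenter's numbers, not measurements):

```python
# Back-of-the-envelope version of the "cores multiply watts" math.
cores = 64
for extra_watts_per_core in (3, 4):
    total = cores * extra_watts_per_core
    print(f"+{extra_watts_per_core} W per core x {cores} cores = +{total} W package power")
# +3 W per core x 64 cores = +192 W package power
# +4 W per core x 64 cores = +256 W package power
```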
Currently running a Proxmox box I built a few years ago with a 3970X and 256GB RAM, running multiple gaming VMs passing through an RTX 3080 to each VM. Also running several other VMs and docker containers all on one box. This is definitely for me; unfortunately it sounds like the gaming performance took a hit on this new processor, but Threadrippers are the GOAT!
I doubt this CPU will bottleneck a 3080 if you play at 1440p or 4k...Unless you upgrade all your cards to a 4090 you should be just fine...And even if you did, I doubt it would bottleneck most games very much unless you play at 1080p or 1440p at lower than ultra settings...
Multiplayer every, single player game.
For gaming, I know of ONE use scenario that would want this: EvE online multi boxing. I used to run 3-6 accounts simultaneously, along with discord, a mapping tool, and a few other small things. My (now very old) Threadripper didn’t go over 60% load, but a regular gaming cpu I had previously to the Threadripper would instantly max out and get throttled.
Would love to see a full length minimal cut video of building a custom water loop for this setup. Video would probably be very long but it would be awesome
Also 4 yrs to build... it's Jay, he still has builds from 4 yrs ago to finish!!!!!!!
Just remember the FX series was the first step on this power-efficient multi-core journey; sharing cache on those cores was their first hard lesson, and their refusal to use multithreading years sooner was their biggest.
AMD lost a class action lawsuit over FX. Them calling them "8 physical cores" turned out to be false, it was closer to a 4c/8t so they were sued for false advertising and lost. So they were using hyper threading a long time ago, they just didn't call it that.
That case was a technicality; it still had 8 physical cores but the shared memory made them 4 in the "eyes of the law"... I have delidded and counted 8 physical cores though, and CPU-Z and Windows both show 8 physical cores also... like I said, sharing cache was the first hard lesson...
The first "hyperthreading" was not until the Zen chips.
Still 5GHz per core over a decade ago, while people still celebrate that number now. @@falcie7743
Bit ironic since FX is the reason people still call AMD inefficient space heaters even though the current situation is now the complete opposite of 10 years ago. Same way people still call AMD drivers buggy and hit-or-miss when there's literally nothing wrong with them, old stigma from a decade ago that for some reason refuses to die.
I never got the claims about heat, never went above 75-80 on my system, maybe a touch beyond when the OC hit 4.9/5GHz... but I'm in Britain, not Death Valley...
I think Intel and Nvidia still play their games against AMD... Microsoft too, why would they stop... Microsoft likes to unilaterally uninstall AMD drivers with updates, it's a PITA... @@JohnDoeWasntTaken
@@mrwpg Mainly had to do with the top end chips like the FX-9590, I remember that thing was a beast but pretty much needed liquid cooling.
It annoys me a lot how Intel and Nvidia still have enough mindshare for fanboys to continue spreading lies about AMD. I waited years for AMD to be competitive (because they were a major underdog back then) and now that they are, people still parrot the same stigmas from the past and gloss over how Intel is now in the position AMD was a decade ago. Even when they weren't even into PC building at the time so they are just repeating what they heard from someone else who won't let go of the past.
I'm building my first AMD powered PC this Christmas since all my past ones have been Intel, with a Ryzen 7700X. Intel may edge it out a little in certain tests, but Intel also runs way hotter and uses way more power, not to mention the stupid (imo) p-core and e-core design of Intels. I want all p-cores in my CPU. Windows barely knows how to handle e-cores.
I'd use something like this to do finite element acoustic simulations using COMSOL for projects that I work on. I don't know if this exact chip would be the best, but my last machine was a 2019 Mac Pro with a 24-core Xeon chip and ~700GB of RAM. I could arbitrarily use more RAM by making the model bigger / modeling a larger design to higher frequencies, or I could run more models simultaneously. I'd typically use ~300GB of RAM per model and run two models simultaneously. The software would use all the threads / cores available. Each model would take 4-5 hours to run. I'd queue up models with geometry iterations and leave it running continuously for days. I had a more complex model that I tried to only run once per project that would usually take more like 48+ hours to run. So more cores / more speed / more RAM capability would all directly allow me to work faster and do more advanced analysis of the projects I work on at home.
I rely on you guys. While I'm not just sure, I'm positive the use case isn't only one for science: it helps make studio technology more affordable for music production. I'm still using a 2013 Mac Pro strictly for Pro Tools/Logic; it's fully loaded with the exception of RAM (128GB is enough) and it still eats anything I throw at it. Music production can be done with a decent phone these days. However, the racks and other such equipment I use - the hardware will always beat the digital plugin 9 times out of 10. It could use some more RAM nowadays though, although very few paying projects require 600-something lines in the DAW besides recording a full symphony for a movie score.
I love the giant screen setup behind Jay when they do these type of videos. Very sports caster vibes
I own a production print shop. I need this so much to process image files to send to the equipment. I currently use a water-cooled 5900X and when we process it brings it to its knees! Strongly considering one of these with a custom loop like my 5900X. Being a business expense lessens the sticker shock, but since it's my business it's still a big pill.
You should do a CPU Cinebench video of every CPU you have tested. It would be cool to see how CPUs have evolved and at what speed that change happened. I just want to see all the graphs.
This cpu would be a killer for computational fluid dynamics software, which can scale almost linearly with the number of cores.
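A quick Amdahl's-law sanity check on that "almost linearly" claim - the parallel fractions below are illustrative, not measured CFD numbers:

```python
# Amdahl's law: speedup = 1 / ((1 - p) + p / n) for parallel fraction p and n cores.
def amdahl_speedup(cores: int, parallel_fraction: float) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.95, 0.99, 0.999):
    print(f"p={p}: 64 cores -> {amdahl_speedup(64, p):.1f}x speedup")
# p=0.95  -> ~15.4x
# p=0.99  -> ~39.3x
# p=0.999 -> ~60.2x
```

Even at 99% parallel code, 64 cores only buy about a 39x speedup, which is why the "almost" matters.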
I think you should bring out the A/C to see what it does first. Cinebench is the new 3DMark for overclockers. It would be insane if you could break a 120k Cinebench score. But yeah, I would say Threadripper would be amazing for those old-fashioned arcades that have their own computers running games off of a server. Threadripper makes the footprint smaller, where you will only need 1 machine compared to 4 to run games off of 3 different virtual computers. Now it would be interesting to see how Intel will respond to the return of HEDT computers.
Anyone doing freelance or just-for-fun rendering in Blender (Cycles) or Max with Corona would use this. The faster the render times, the more enjoyable the artistic process and the less fatigue from waiting for renders and re-rendering to fix mistakes, etc.
GPU rendering with a single 4090 is still many times faster, and you can get about 3 of those for the price of a 7980X. This makes more sense for simulations, which tend to be CPU and memory heavy.
@@WereCatStudio You will be severely limited in scene size with a 4090 and will have to split the entire project into a lot of smaller render runs, and even then you might have to do multiple passes depending on the complexity. Depending on the workload, your projects will take a quality hit rendering on the GPU.
@@xfy123 Yeah, I agree that once you run out of VRAM, CPU rendering is pretty much the only thing that makes sense at that point, but I personally never dealt with anything more than 8GB (as a hobby 3D artist) so I have no idea how big some scenes can actually be in production. But in that case you would still probably be looking at something like an A6000 with 48GB of VRAM most of the time... perhaps.
@@WereCatStudio I'm not a 3D art guy, but I know a few of them that work as freelancers; they all mainly use CPUs because more of the render engine features are supported and they can craft more complex scenes and render them in a single run. Some of the projects I've seen had so much memory utilization that even today a top-of-the-line GPU wouldn't fit them. A friend was working on a short animated film, trying to make it as photorealistic as possible, and a small scene of a living room - couch, armchair, coffee table, TV and a cat - had almost capped his 256GB of RAM.
@@xfy123 I only have experience in Blender so I can't comment on anything else, but typically even a photorealistic model of a person can fit just fine into 8GB of VRAM. You can make a chair take 250GB easily if you have no concern for any kind of mesh optimization at all.
The biggest scene I've done was around 50GB, just because I subdivided almost everything and ended up with over 300M polygons. With a bit of optimization I could have squeezed that into 8GB just fine without affecting visual detail.
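(A rough, purely illustrative back-of-the-envelope of why heavily subdivided scenes outgrow GPU VRAM; the per-vertex sizes below are assumptions, and real renderers add BVH, textures, motion blur and more on top.)

```python
# Rough, illustrative arithmetic only: why subdivided meshes outgrow VRAM.
# Assumes ~1.5 unique vertices per triangle and typical per-vertex data;
# real renderers vary a lot (BVH, textures, etc. add considerably more).
triangles = 300_000_000
vertices = triangles * 1.5                # crude estimate of unique vertices
bytes_per_vertex = 12 + 12 + 8            # position + normal + UV (32-bit floats)
bytes_per_triangle = 3 * 4                # three 32-bit vertex indices
total_gb = (vertices * bytes_per_vertex + triangles * bytes_per_triangle) / 1e9
print(f"~{total_gb:.0f} GB of raw geometry")  # ~18 GB before BVH and textures
```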
Incredible really.. always a joy to see things you wouldn't otherwise get to see, so thank you!
An Optimus Threadripper block is the only way to go with Threadripper CPUs. This big boy CPU deserves a big boy block.
You make an interesting point about efficiency... I'd love to know the power draw and thermal performance on this CPU but somehow restricted to 12c/24t.
Even better, run a few dozen virtualisations with assigned cores or smth
@@lordyhgm9266 I'm not sure why you would drop 5k on a CPU when, for the total cost, you could have a pair of systems behind a load balancer running off a Raspberry Pi or something like that, with GPUs and more supporting it. Especially when you consider that for that 5k CPU you still need a motherboard, the RAM, and so on, which easily triples the price of making it worthwhile.
The truth is, consumer/mainstream platform hardware has gotten so damn good that running a pile of VMs with GPU pass-through and a lot of other nice features has become feasible; with good part selection it's even fairly straightforward. With an 8c/16t CPU, you could easily toss in a pair of GPUs and enough RAM to run a half dozen fairly demanding VMs, with a small pile of containerized applications on the host OS, to run everything from email, to file backup, to automatic encryption/upload to cloud storage, streaming/transcoding, and more.
The problem with these super-high-core-count parts is that many applications literally can't take advantage: you run into the problem that spinning up new threads is slower than existing threads completing, resulting in a saturation limit. The only way to get past this in some cases is to run multiple instances, and given home use conditions, odds are you simply don't need that much compute for long enough to really make it worth the while.
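(A small sketch of that saturation point, assuming trivially cheap tasks; Python's GIL exaggerates the effect, but the dispatch-overhead argument applies in any language.)

```python
# When each unit of work is tiny, the cost of handing it to another thread
# can exceed the work itself, so piling on cores stops helping.
import time
from concurrent.futures import ThreadPoolExecutor

def tiny_task(x):
    return x * x  # far cheaper than the overhead of scheduling it

N = 200_000

start = time.perf_counter()
results = [tiny_task(i) for i in range(N)]
inline = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=32) as pool:
    results = list(pool.map(tiny_task, range(N)))
pooled = time.perf_counter() - start

print(f"inline: {inline:.3f}s, 32-thread pool: {pooled:.3f}s")
# On most machines the pool is *slower* here; extra cores only pay off
# when each task does enough work to amortise the dispatch cost.
```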
Now, the 24c/48t part gets into something more people could feasibly use. And in a one-box solution, the ability to tinker with power draw and such would likely allow fairly low power levels (talking in the 2-3W per core range) without sacrificing all that much top-level performance. Realistically, though, you are going to want in the range of 15-20TB of storage in a RAID configuration, and at least one decent GPU if not 2 or 3, set up to do different tasks (like, I dunno, an instance of Stable Diffusion configured to output images based on prompts you create for a D&D campaign, so you can generate lots and lots of images to sort through periodically and pick out the better ones). But even at $1,500 for the CPU, once we factor in the rest of the system, odds are we are looking at the range of $5,000 to make it worthwhile; at that point it might actually make more sense to go with multiple mainstream-based systems rather than a single HEDT one, given the ability to do load balancing, mirroring of services you deem critical, etc., and by doing this, create secure redundancy so you don't lose access.
I mean, if you have the money for one, and just want a cool all out, all in one system: Go for it. But if you want to think cost to performance ratio and efficiency in terms of redundancy and reliability - for home use it's really damn difficult to find a reliable use case for it.
Rather than use CPU temp for a custom water loop, I took the advice of some YouTube video to use a temperature sensor that employs water temperature to drive radiator fan speed. I use an AMD 5950X to calculate atomic structure spectra and it takes about a week running 24/7 to cover the full spectrum of hydrogen (153 lines). Trial and error gets me closer to guessing the internal structure; I'm better on hydrogen than existing models. I could use Threadripper, but affording it is way out of my budget.
This would likely be overkill for me, but I am strongly considering a lower-tier one for professional use as a programmer.
It would easily allow me to run a virtualized Kubernetes cluster, benchmark high load for services I write, and compile my code really fast (thanks both to the high core count and the huge RAM). Basically, removing most of the bottlenecks I have.
Some of the use cases for TR outside of a work environment could be:
1) AI hobbyist - Machine Learning/ Deep Learning - Training and Inference use-cases and running LLMs and/or developing LLM-based apps.
2) Hosting personal websites for monetization - Blogs etc.
3) Educational Investment - Learn IT stuff that requires infrastructure like Kubernetes, Docker Swarm etc.
I have been wanting to do some aerodynamic simulations for work while at home instead of going in. The primary reason I haven't been able to is my computer so I'm hoping this platform will give me the performance and more importantly the RAM for it.
For when "remote work" comes home.
Doing CFD modelling now in my spare time (former work and master thesis, looong ago), I can always use more multithreaded CPU performance for dynamic 3D post-processing purposes, not to mention the numerical calculations of dynamic flow modelling in all kinds of fluids, from air and gasses to high-viscosity oils, through complex geometries. I'm in a totally different field now, but as my dept. closed down I could get the cheaper "PC" test bench for almost free, which I'm now using as a hobby project, and I cannot resist doing small tweaks and the few HW upgrades still doable in 2023. And $5,000? No kid today would believe what amount my small energy department at the university campus invested in computing power for their master's and PhD students during my engineering studies in the mid 1990s, of course backed by close collaborations with the companies we did main study projects for. Mainframe costs for which a high-end multicore PC could do the same work today at about 1/100th of the cost, 27-28 years later.
I'm curious to see how this would benefit rendering videos, if at all. Also would love to see a water cool loop try to cool that beast so it doesn't throttle!
As a VFX artist I would use it for 3D CPU rendering and simulation of water, fire, cloth and hair.
I could use this for my home vSphere lab. Would be awesome. Would surely beat the old 16 core Xeon on an X99 board that I am using currently. I use the lab machine for testing software and developing POC environments.
I got one of the original Threadrippers (1950X) and thought it was crazy with 32 threads. I wasn't as interested in peak gaming so much as I wanted to not need to close anything while keeping performance up.
I'm replacing it now and went with a 7950X3D or whatever it is vs. staying with Threadripper, mostly because I don't need more cores/threads currently. Also, the price is insane.
Outside of a few very niche home-server ideas it does seem like a product in search of a market.
Pretty insane. Makes me wish I had like 10k lying around.
I work at a huge manufacturing company and their servers should be running this! The current servers have comms issues when running hot tests, where we have to restart the test and sometimes restart the computer to clear the memory cache.
5k, wow. Now that's an expensive CPU.
Lightning McQueen: "Wow."
Answer = megatasking. When you have 170-plus processes running in task manager with a 38 inch screen hosting 5 main apps you want to throw CPU cores at those, and the more the better 😎😎😎
As for having all that 1TB memory = RAMDISK 🔥🔥🔥
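(A hedged sketch of the ramdisk idea: on most Linux systems /dev/shm is a RAM-backed tmpfs, so scratch files can be staged there; paths and sizes below are illustrative, and on some distros /tmp is already tmpfs as well, which would make the comparison moot.)

```python
# Staging scratch data in RAM-backed storage vs. on disk (illustrative).
import os, time

payload = os.urandom(256 * 1024 * 1024)   # 256 MB of scratch data

def timed_write(path):
    """Write the payload, force it out, and return the elapsed seconds."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed

print("RAM-backed (/dev/shm):", timed_write("/dev/shm/scratch.bin"), "s")
print("Disk-backed (/tmp):   ", timed_write("/tmp/scratch.bin"), "s")
```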
It's not just the thermal interface material, Jay, it's also the huge IHS you can spread the heat across. I.e., the 350W is easier to handle on a smaller cooler because the heat isn't as concentrated and can thus be dissipated more easily.
I love jay in the foreground with graphs at the background rather than graphs just flashing on the screen
I really think that if you have really high-end CPUs that you're wanting to test for gaming, you should test games that are more CPU intensive, not the old standbys for graphics testing. Things in the simulator category would be a much better test (like Farming Simulator 22, or Cities: Skylines 2 with a city over 200k population, or especially over 300k, where the game slows to a crawl even on 3x speed because the simulation processing takes forever, so you end up under 10 fps, sometimes switching over to seconds per frame).
Games like this slow down more from RAM speed; you'll look at your Task Manager and see it's only using like 20% of the CPU. Same with Factorio.
Farming sim is engine limited and runs best at 60 fps. The physics really just suck. I’d love to see what skylines does tho.
@@organicinsanity2534 (as a long time Farm Sim player, it's pretty damn frustrating at this point how little they're seemingly putting into each new release and how outdated by 'sim' standards it is; it really should be called 'Farming Arcade 22'... ...but that's a topic for another time...) lol
"cpu limited" in benchmarking means it's only using one thread for the main task.
I remember when we got new machines. 3000 dollar workstations for each developer and we were ~10.
We did some math and the expense paid for itself within the week. (This was Java and we got modern SSDs, and since the java compiler is horrifyingly I/O starved, this made a massive difference.) The fact that we could now run a local cluster of actual prodlike servers too, meant that we could test a lot more. Instead of having one app server with 3 deployments, we had 3 app servers each with their own deployment. Was a total game changer in terms of how fast things went. (We had one EJB server, one web server and one "REST" server IIRC.)
I guess it's time for a Skunkworks upgrade
I run a physics lab, but being able to work from home and run simulations would be great. I also use Redshift, which benefits from more PCIe lanes for faster rendering. I have a 5950X; I'll be upgrading to a 7995X.
RAM is a bit of a good point, since DDR5 ECC I believe is keyed differently
Also I like how Jay is like “it’s only $5000, I can’t believe I’m saying this”
I wonder how the Phil : Jay conversation is going to pan out... but Jay, you said "only" $5K like that's a good deal. Can I PLEEEEEEEEAASSE have one?
Go for LRDIMMs: they reduce wattage on the memory controller, so less of the package power budget goes to memory and the CPU cores can take more of it. LRDIMMs also make it dual rank rather than single rank; LRDIMM memory is a pretty standard setup on servers and is generally faster overall. Of the package wattage, more than 50% is on the memory and cache controller.
For a home user like me, the electricity bill alone for running that monster would be a big no no for me 😂
How much does your electricity cost? I only pay like $0.17 per KWh...Until recently I never realized how cheap my electricity was compared to a lot of places...I guess I have taken it for granted...
The chip at idle can draw as little as 1 watt; you can check Level1Techs for that.
@@brucepreston3927 I'm in the UK and pay £0.29 per kWh, and on top of that we have a standing charge.
Already dreaming of this CPU and I know I'll have to buy it next year. Currently running 2 OC'd machines: a 10980XE with 256GB of RAM and a 5950X with 128GB of RAM. What am I doing besides playing DCS World? Marketing mix modelling. One calculation takes about an hour and I have to run 20-30 to deliver a presentable result to the client. Still cheaper (and faster) than cloud computation. This CPU would cut that to a quarter, or allow me to run 4 in parallel without losing speed. Yes, I work from home for my own company, and yes, this is the mother of all CPUs.
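(A minimal sketch of that 4-in-parallel idea, assuming each run is an independent, single-threaded calculation; fit_model below is a hypothetical stand-in for the actual MMM code.)

```python
# Hypothetical sketch: running independent model fits side by side.
from multiprocessing import Pool

def fit_model(scenario):
    # ... the hour-long calculation for one scenario would go here ...
    return {"scenario": scenario, "r2": 0.0}   # placeholder result

scenarios = [f"scenario_{i}" for i in range(24)]

if __name__ == "__main__":
    # With enough cores and RAM, 4 (or more) fits run concurrently with
    # little loss of per-run speed, as long as each fit is single-threaded.
    with Pool(processes=4) as pool:
        results = pool.map(fit_model, scenarios)
```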
Does this mean we need to upgrade skunkworks again
I'm a 3D animator and VFX artist. This CPU is the dream! I could maybe even binge watch on my 2nd monitor while rendering and not expect my pc to freeze.
Is this strong enough to run Roblox? (Asking 4 a friend)
I used parallel computation in my Ph.D. thesis for post-process video tracking. This CPU would dominate the algorithm I made, as I could process 100 frames in two seconds compared to one frame per second.
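(Illustrative only: one common way such per-frame post-processing is fanned out across cores; track_frame is a hypothetical stand-in for the thesis algorithm.)

```python
# Fanning per-frame work out across processes, one worker per core.
from concurrent.futures import ProcessPoolExecutor

def track_frame(frame_index):
    # ... per-frame tracking work (~1 s each when run serially) ...
    return frame_index, []          # e.g. (index, detected track points)

if __name__ == "__main__":
    frames = range(10_000)
    with ProcessPoolExecutor(max_workers=64) as pool:
        results = dict(pool.map(track_frame, frames))
```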
For people like myself who are buying this, it would be better to see more data science / AI benchmarks, e.g. token generation speed, any of the PyTorch or TensorFlow benchmarks, etc.
Like the graphs on the screen in the back and you explaining them in the front
I would expect a build with this to cost $5000 not 5000 for the CPU alone.
Considering 4080/4090 are over $1100, 6900xtx is 1000 bucks, h100 etc over 1000 bucks. I would not have that same expectation.
Most of the people buying HEDT CPUs are looking at the cost as more of a business expense than a personal expenditure, so yeah AMD knows they can charge much more because of this and it's going to appear obscene by consumer CPU standards.
This is the type of processor that would be completely useless for gaming. This type of CPU would be for people who do heavy video editing all the time to make money, or oilfield calculations, or financial or science calculations.
This is not a gaming CPU 🤦♂️🤦♂️
Dang i wanted to see 2000 fps in rocket league with this bad boy!
For my professional use, running NCBI BLASTP and BLASTN to search the NR database will use all the cores with perfect scaling.
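(A sketch of such a run, assuming BLAST+ is installed; the query and output file names are placeholders, and -num_threads is what lets a single search spread across the cores.)

```python
# Placeholder file names; "nr" is the protein database mentioned above.
import os
import subprocess

subprocess.run(
    [
        "blastp",
        "-query", "proteins.fasta",              # placeholder input
        "-db", "nr",
        "-num_threads", str(os.cpu_count() or 1),  # use every core
        "-out", "hits.tsv",
        "-outfmt", "6",                          # tabular output
    ],
    check=True,
)
```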
Bed Ripper
Just love the BALLZ name on your drive XD Great review as usual
This would be so useful for 3d modeling and rendering Jay. All kinds of simulations in 3D. Rendering 60fps animation. Get two of these and you already have yourself a mini pixar render farm. This is the ultimate dream of 3d artists like me.
Not if you use blender, a gpu will run circles around any cpu.
I'm currently on a 7950X and I run a minimum of 4 VMs for MDM testing. My previous machine had the same core count, but it was a Threadripper 1950X for application integration/VM stress testing. I'm still stunned that on the 7950X I can snapshot an entire VM in under a minute when the 1950X took upwards of four or five minutes. I'm sure some of that comes down to having 64 gigs of DDR5 running at 6K, but it's still amazing how far they've come in just a few years. One thing I do notice is the limited I/O you get with the desktop version of the CPU compared to the server version: with the 1950X it seemed like I would never run out of I/O, but on this machine (7950X) I'm lucky to be running two memory channels.
For $5000 the VERY LEAST it can do is function utterly perfectly..... for 5k I better not have to replace this thing for 34 years, what the fuck
That is some very impressive "piece" of silicon.
Really insane what kind of performance AMD manages to squeeze out there, both the boost frequency and the efficiency.
I also love their decision to bring these kinds of CPUs back; there's certainly demand for them here and there.
Since Ryzen launched, AMD has evolved so much and delivered awesome stuff again and again.
They never thought it could happen, but yeah, Intel, you guys are going to have to step it up a notch (or three).
Competition is always good for the consumer.
In my profession we perform frequent POC testing/simulation/troubleshooting for customers. For these builds we require nested virtualization at scale in our virtual lab environments. As you can imagine, the more 'devices' like virtual routers, switches and servers we spin up, the greater the need for multiple threaded compute resources. This CPU would be great for our use case and workflow.
3D Rendering, CAD, editing music (such as tracks for video), software development, graphics apps, and just handling the large number of browser windows I leave open.
I just want more PCIe lanes without dropping the GPU to x8 connectivity. I've got multiple NVMe drives, capture cards, and wireless VR.
I wish, so badly, that I could just get a regular Ryzen/mainline Intel chipset that would properly support that.
I really like the differences between each YouTuber: Jay talks a lot more about cooling and RAM, Steve from GN gets more technical with per-watt performance, and Linus covers it overall with a more consumer-based approach. No single YouTuber does the same review. I enjoyed watching all 3, and every time I watched I found something new, or got more depth on something the others mentioned but didn't go into.
Sound production/mixing: different tracks of sound playing at the same time. One of the workers for our security client does sound production at home and uses a 2000-series Threadripper, I believe (it's been a hot minute since I last talked to him). He told me that each individual sound track uses a CPU thread, so if you need to use around 100 tracks of sound this CPU is for you... I guess.
Audio is not that CPU heavy, and one track per thread is simply incorrect. I work on sessions with way over 100 tracks on a 9900K without problems. Single-thread speed can still be more important with a lot of audio software and plugins. You'll see more Mac Minis and laptops even in big studios these days, simply because more processing power is unnecessary.
I have no idea on how audio stuff works that's just how it was explained to me. He broke it down Barney style.
Love the name of the Drive you’re pulling graphs from. On another note, I like the format of you and the graph more versus the graph with voice-over.
Please bring back LN2, pretty peas!!!
Would love to see you tinker with competition score, modding, liquid metal perhaps.....! ;)
I use my 3960x threadripper for a home server that does a bunch of VMs, Plex streaming with support for transcoding a couple streams (no GPU acceleration) and the occasional Handbrake transcode. The VMs are the core use case, most are for work stuff. I undervolted my threadripper to achieve better performance/watt as it's on 24/7 and the system consumes the majority of power in my house. I wouldn't need more than the 24 cores I have today, but if this system ever breaks I'm definitely getting another one, I love this platform.
I love how far technology has come in the past decade, insane improvements in every department.
If the "tech press" hadn't been unduly unfair to AMD with the FX-8000 series (which was way ahead of its time with "CMT", clustered multithreading), perhaps AMD would have delivered Ryzen by 2015 instead of 2017, and we'd be onto the 9000-series by now. Developers (like Windows itself) didn't care about AMD, which naturally didn't help at all. In fact, developers didn't care about AMD until AMD got into the consoles, and then they were forced to care and put in the extra work to utilize the extra cores and extra threads.
Big what-if, but absolutely more possible than not. You can thank the "tech press" for the stagnation: by always promoting Intel's single-core performance with their disgusting, ghoulish 1080p CPU benchmarks, they allowed developers to just squat on garbage code.
You are right that they were forced to care due to crossplay. In my opinion it also comes down to AMD trying not to follow suit with Intel. AMD was trying to make a statement, and ultimately NVIDIA was just highly funded by comparison. It was a P2W situation. @@clitmint
As always, an interesting and thought-provoking discussion. It's an awesome CPU and no doubt this would be great for forays into numerical analysis. I would love to see what you can put together for water cooling. While I could envision building a 7980X system for grins, the price is a real obstacle and, worse, it so far doesn't show on-par gaming performance. For gaming, it's more rational for me to struggle along with my i9-13900KF, which despite its advanced age is still a good CPU for gaming, not to mention being as reliable as any I've used. Besides, it's adequately performant, giving scores of 40500 in R23, max temps of 82C and max power of about 300W (ahem, in an ambient of 26C). The OC tuning that gets this is basically pushing the big button in XTU (even with a wholly miserable SP of 96).
Professionals need it for data processing. When I did 3D measurements we would end up with hundreds of thousands of points that we needed to convert into models. The process had multiple steps; sometimes an engineer would be waiting hours for an individual step, and sometimes we would run major steps overnight. If you could cut that down to 30 minutes with something like this it would be tremendous. We had dedicated systems because we had more load than the servers could handle.
I still have a 1900X functioning. If I had this chip, I'd feel at the top of the world. And all I'd use it for was Folding@home. And it would perform admirably.
Because of Jay I will be doing a custom watercooled pc. Just need a few more bits 😊
What a rollercoaster... so much to love, so much to hate. By the way, I clicked your link for merch and noticed you don't have any "Tall" sizes available (surprising for a man of your stature). If you add "Tall" as an option (XLT, 2XLT), I'll happily buy one of every shirt in your store. 2X, 3X, 4X, etc. never seem to get longer... just wider. Need more longer.
We work on a software called Ansys. The software just pulls whatever resources you throw at it.
A 128core and 1TB RAM machine would be right at home doing our simulations.
It would probably reduce our simulation time by a few hours at the very least, and our simulations wouldn't be bottlenecked by RAM either.
The potential for so many VMs...
I want three, for a cluster, running vSphere. Yes, for my home lab. No, I couldn't even come close to affording three, or making use of them. I still want.
I would love this machine for running World Community Grid, which runs all cores at 100%. I run 5 machines at home doing this. I never noticed the electricity bill as an issue until I added 2 7950X CPUs into the mix.
We are looking at building a TRX system for our structural engineers that they can remote into to run different models. Although 5000 usd is a lot for a home system, it’s very reasonable for a relatively small engineering department 🙏🏻
As @TheDani7973 said I would also enjoy more of the graph behind Jay or whoever is presenting!
My use case would be UE5. I compile code and shaders with UE5, and based on tests I have read, the 7980X compiles UE5 in under 10 minutes (9-something, and that's stock!) while my 5950X does it in around 25 minutes. My other workloads are training neural nets, and for the time being my training can only be done on a CPU.
@ZaHandle Not all algorithms are supported on GPUs; also, when exploring a new kind of NN it's faster to evaluate the architecture on CPU, and once you're done you might be able to optimize it for GPUs or other AI accelerators.
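(A minimal CPU-only training sketch in PyTorch, with a toy model and random data standing in for the real workload; torch.set_num_threads() is where the core count comes into play.)

```python
# Toy CPU-only training loop; model, data and thread count are illustrative.
import torch
import torch.nn as nn

torch.set_num_threads(64)                      # e.g. match physical cores

model = nn.Sequential(nn.Linear(512, 1024), nn.ReLU(), nn.Linear(1024, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4096, 512)                     # toy batch
y = torch.randint(0, 10, (4096,))

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```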
Hey, booting up a computer years ago when I first started building them wasn't all that fast either. Granted, it didn't take 15 minutes, but it sure seems like a long time in retrospect because it never got any faster, and the system had to check the RAM back then too, even if it was only 4 MB. A couple of years later, I built a computer with a much faster processor and put 128 MB of RAM in it. Man, did it take a long time to boot up with all that. I used that computer as a server for my old desktop and the kids' computers (hard-wired ethernet), so I rarely ever shut it down.
When you work for a company that doesn't have a render farm, and you work remotely for them: rendering. Especially for people who need to render ultra-realistic scenes from multiple angles, maybe even walkthroughs, and every time someone asks for a change you have to re-render it all. Each render might take 10 minutes and you have to do 100-300 different renders; if you could cut that time in half and get double the work done, this becomes worth it.
I would use it for real-time terrain generation from satellite data, 3D modeling/rendering and video editing/encoding; it's what I was using my Threadripper 2920X for. Since the 2920X sucks now and my 7950X3D outperforms it in every way, I'm doing all those tasks on the 7950X3D.
If you don't break out the indoor AC unit, cardboard, and duct tape... I'm gonna be really disappointed...XD
Thanks, Jay!!
Love you guys!!
Hi Jay, it's archviz aka architectural visualization. I work from home, currently using a 24-core Threadripper 3960X and lots and lots of hours in 3ds Max and V-Ray. That's it, 1 computer is fine for me, so till now there wasn't much reason for an upgrade, but this new Threadripper looks juicy :-) And of course if it can run Skyrim, Doom Eternal or Cyberpunk well enough then it's perfect.
Finally been waiting to see coverage with these New CPUs
Apple needs this, and AMD or NVIDIA pro GPUs, in the Mac Pro.
Please do a video series with extreme cooling shenanigans, OC/undervolting etc. with Threadripper. HEDT is so overkill and extreme in its nature that it's only fitting to do some crazy "stuff" with it. :)
An avatar Jay in a simulated environment.
Hands down the best impersonation of Doc Brown. I thought he'd made a guest appearance when I first heard that.
I am the market for this lol. Game dev that specifically works in environment creation/simulation. I need something that clocks higher than my dual EPYC workstation but still keeps my core count up.
I started cackling at the 720W. I'd love one of these for my workstation at the office (where I do tons of heavy compiling)
I'm hoping we see some last gen Threadrippers hit the ebay scene with the new stuff coming into the market.
I'm a firmware engineer, and I would love to be able to use one of these. I don't play many games on my PC, but compiling Linux, or other software/firmware where you have hundreds of files to compile, means the cycle time is cut down!!! Way down. I remember when Linux took more than 3 hours to compile. Also, CAD/CAM software, especially something like 3D modeling, would be nice on this CPU.
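(A tiny sketch of the usual approach, assuming a configured kernel or firmware source tree in the current directory: the build system's job count is matched to the core count, which is where a CPU like this pays off.)

```python
# Kick off a parallel build with one make job per available core.
import os
import subprocess

jobs = os.cpu_count() or 1
subprocess.run(["make", f"-j{jobs}"], check=True)
```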
BRAVO! Thank you for a well thought out ENGINEERING analysis!!! This chip could be used for Molecular Dynamics simulations. I would like to see you run LAMMPS, GROMACS and other codes against Threadripper and EPYC chips.