god.... just imagining the fact that just *one* intermediate value in the algorithm could be approx 40GB in size makes my head spin.... *Then* considering that all of that could exist entirely in RAM just makes me collapse.
@@haniyasu8236 There is an entire microcosm of scientific programs that had to be written with the assumption that the users will never ever have enough RAM to hold the intermediate values.
They build supercomputers with these kinds of processors. There are scientific simulations that consume literally thousands (sometimes tens or even hundreds of thousands) of processors and need months to run to completion.
@@jacobfields8111they do cool shit like simulate the detonation of nuclear bombs, supernova, interactions between large celestial bodies, and all kinds of other fun stuff
3:51 How many things should I ACTUALLY check the documentation for with these kinds of things? 7:05 16x slot of PCIe gen 1.1 11:04 Another cool thing to do if you've read the documentation.
Things have come a long way since my first PC, the Amstrad PC 1512 with a 20MB HDD. That thing was so amazing I put 3 guys out of work at the printing press (sorry guys). I believe the DTP software I used was called Jetsetter!? I eventually upgraded to the 1640 - Happy days
We do quite a bit of molecular dynamics and quantum chemistry and these are pretty much unbeatable in terms of performance, especially when coupled with selected GPUs. For Gromacs nerds: 1M atoms, 27 ns/day @ 2 fs timestep, running on 32 cores + 4GPUs.
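A quick unit-conversion sketch of the Gromacs figure above (27 ns/day of simulated time at a 2 fs timestep) - the numbers are the commenter's, the arithmetic is just a sanity check:

```python
# Back-of-the-envelope check of the Gromacs figure above
# (27 ns/day of simulated time at a 2 fs timestep).

NS_PER_DAY = 27          # simulated nanoseconds per wall-clock day
TIMESTEP_FS = 2          # femtoseconds of simulated time per MD step

steps_per_day = (NS_PER_DAY * 1e6) / TIMESTEP_FS   # 1 ns = 1e6 fs
steps_per_second = steps_per_day / 86_400           # seconds in a day

print(f"{steps_per_day:,.0f} MD steps/day")         # 13,500,000
print(f"{steps_per_second:,.0f} MD steps/second")   # ~156
```

At roughly 156 integration steps per second over a million atoms, that works out to on the order of 156 million atom-updates every second.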
I had a thought.. SSDs weren't created because they were the next logical step in storage... they were created because the number of platters Linus kept dropping was making them exponentially more expensive, so they needed a shock-resistant form of storage.
On the John the Ripper benchmark it looks like just one of these processors would beat out 2 of the EPYC 7763s. So 96 Zen 4 cores beating out 128 Zen 3 cores - cool to see the generational uplift being this good.
@@sigmamale4147 The 7900xtx wasn't a fail. Cheaper and smaller. AMD did say they weren't trying to compete with the 4090. Some people need Nvidia but for casual gamers and computer users, the 7xxx GPU series from AMD is the better choice.
I felt the frustration Linus was feeling during the first half of the video, where things weren't planned, everything got delayed by the insane boot times, hyper-threading wasn't enabled, and something as simple as HWinfo64 wasn't even installed. This reminds me of the WAN Show a few weeks back where Linus talked about how his patience for stupidity had gotten lower and lower throughout the years
I wonder just how physically large CPUs can get before they run into issues due to timing (as in, signals having to travel further and the resistance of the internals messing stuff up), pad pressure, heat, or other major problems.
There's the wafer scale chip by Cerebras. 850,000 cores. 20kW. That's probably the ceiling for now, unless someone figures out how to create chips larger than wafers
Just think of the old days when some MMOs (Star Wars Galaxies, released back in June 2003) were run on hundreds of Pentium II CPUs. Now they could run on just one of these bad boys. Pretty amazing. Would love to have one of those Supermicro servers for my home lab virtualization. After I win the lottery, of course.
Can you imagine the amount of confidence you need to screw this thing down? At least 8Gigabytes of confidence. I can't even imagine how much thermal paste you would need to cool this beast.
@@Gatorade69 Are you kidding? The Celeron 300A was a gaming beast because it was so overclockable it could compete against the much more expensive Pentium IIs AND IIIs.
It's the way they "cheated" the transistor size limitations to get more performance. Just stick more of them inside, even if they aren't smaller, easy. You just end up with twice the size and twice the heat and power consumption....
Seeing Linus be so happy about this new CPU was wonderful. I know I'd be that excited to run these benchmarks. I wouldn't be happy to watch that boot time. Maybe a coffee break is in order for every reboot? (:
I love how LTT can pivot from $128 budget builds for price conscious consumers to the most financially inaccessible enterprise solutions available on a moment's notice.
I work in the HPC industry and can confidently say that the EPYC Genoa-X instance types on AWS and Azure are going to be a big hit. All of our major customers have been requesting access to these processors since they are blazing fast and have incredible interconnect speeds. It's a lot of fun to be an early adopter of this amount of processing power.
nice 🙂
Good to know the response is good so far 😊 Not sure what kind of workloads you run but we are working on optimising all of our software to take advantage of the new CPUs so it should be even better soon!
blazing fast... just like their java script with the New framework released an hour ago🔥🔥🔥🔥🔥
It also helps that AWS charges 20% less for AMD-based EC2 instances XD
You're sure about that? Software is still difficult with AMD systems (and many other problems....)
You know a dual CPU system is absurd when it breaks task manager.
No one except linus would be running windows on these computers. These would be linux machines.
Even the 64 core breaks task manager, not even speaking of duals
You'll find task manager in the corner at the local pub wondering where they went wrong in life.
It "broke" Cinebench too, that i never saw lol
The task manager is broken by armless Microsoft monkeys.
Man, we are at the point where we could theoretically install an operating system on the L3 cache alone.
Someone has to do that like NOW.
Agreed, OS's should be embedded.
Well, we should be able to cram DOS onto it
can we run Doom off of just the L1 cache though?
@@avroarchitect1793 asking the real questions here
The production value is so so so great on this channel, shout out the camera and editing crew!
In regard to the DDR5 RDIMM peculiarities you guys noticed in this video -
DDR2/DDR3/DDR4 were all 72-bit ECC. As you noticed, DDR5 is 80-bit ECC due to the DDR5 DIMM having two separate 32-bit subchannels. Each subchannel needs its own parity, and while it only needs 4 bits of ECC per subchannel, there aren't any 4 bit die structures so they get a full 8 bits of parity each. This means, yeah, a non-ECC DIMM has 8 DRAM chips, and an ECC DDR5 DIMM has 10. Previous DDR formats only needed 9 per rank. Since 10 chips is 11% more than 9 chips, there will always be at LEAST an 11% cost premium for DDR5 ECC DIMMs compared to DDR4 even if there is a per-bit price parity between DDR4 and DDR5 DRAM. Also, due to each DDR5 DIMM having an onboard VRM, DDR5 will cost more by structure. Eventually, DDR4 and DDR5 DRAM production volumes will flip, so eventually DDR5 becomes lower cost than DDR4. However, there's always going to be that cost premium baked into the structure.
Also, there will be off-roadmap 72bit DDR5 RDIMMs designed for specific hyperscale customers who do not want to pay the 11% extra bit premium for full 80bit ECC. A 72bit DDR5 ECC DIMM does NOT have full ECC coverage, but companies like AWS who control their entire software stack have written their environment to be aware of this and just deal with it. 72bit DDR5 will not be available to general customers because most people won't understand what 72bit DDR5 is, would buy it expecting ECC support, and have ECC failures in production due to the nature of 72bit ECC in DDR5. To avoid customer fallout, these 72bit modules won't be available to most customers nor will they be advertised on websites or general product roadmaps.
Secondly, you noticed that the registered and unbuffered DIMMs have a different notch. This has always been the case. You were not specifically comparing ECC vs non-ECC DIMMs in this video when you compared the key notch - yes, one had ECC support and the other did not, however, the key difference was one was an Unbuffered DIMM and the other was a Registered DIMM. There are no modern memory controllers which can support both RDIMM and UDIMM memory modules, so they are keyed differently. All registered or RDIMM modules are ECC, but unbuffered or UDIMM modules can be ECC or non-ECC. Consumer processors are all based on UDIMM. So, yes, you can have a CPU support both ECC and Non-ECC memory. Specifically, unbuffered ECC and unbuffered Non-ECC.
Thanks for the explanation. Where can I learn about all these and more? Very interested in DDR technology
Sounds like you know a lot about this.
Very interesting explanation where I could understand most of it even though having almost no idea about this stuff otherwise.
The 72-bit ECC module story sounds interesting. How does one learn about such a story?
Bro break it down. You lost me after i clicked
Fantastic lecture! This man should teach a class on memory topology!
Yes
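For anyone who wants the chip-count arithmetic from the long comment above spelled out, here is a small sketch. The layout assumptions (x8 DRAM devices, two 32-bit subchannels per DDR5 DIMM, 8 ECC bits per subchannel) are taken from that comment, not from a JEDEC datasheet:

```python
# DIMM chip-count arithmetic as described in the comment above.

CHIP_WIDTH = 8  # bits contributed per x8 DRAM chip

def chips_per_rank(data_bits: int, ecc_bits: int) -> int:
    """Number of x8 DRAM chips needed to cover data + ECC bits."""
    return (data_bits + ecc_bits) // CHIP_WIDTH

ddr4_ecc  = chips_per_rank(64, 8)            # 72-bit wide: 9 chips
ddr5_ecc  = chips_per_rank(2 * 32, 2 * 8)    # 80-bit wide: 10 chips
ddr5_base = chips_per_rank(2 * 32, 0)        # 64-bit wide: 8 chips

premium = (ddr5_ecc - ddr4_ecc) / ddr4_ecc   # extra silicon vs a DDR4 ECC DIMM

print(ddr4_ecc, ddr5_ecc, ddr5_base)         # 9 10 8
print(f"{premium:.1%} more DRAM chips")      # 11.1% more DRAM chips
```

That 11.1% is where the "at LEAST an 11% cost premium" figure comes from: one extra DRAM chip on top of the nine a DDR4 ECC DIMM needs.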
Comes in, breaks records, leaves.
Absolute monstrous CPU's.
I can imagine telling my 14 year old self, using Windows 98 on a 600MB hard drive, that there would eventually be a 96 core CPU
cool story bro.
They have come a long way since Rage 3d 8mb....
@@dobermanownerforlife3902 8 millibit?
the real question should be, can we install windows 98 on the L3 cache?
Biggest surprise (for me at least, at the time) would be the "multicore CPU" thing. We didn't even talk about "cores", since a package contained one CPU with, natch, one "core".
My first job working on server BIOS was actually on the AMD Genoa platform for Dell, it’s so cool seeing people work with it now that it’s public
This is the video that finally made me realize how big a 4090 really is. I actually started laughing when that comparison happened wow
For those looking, it's 9:16
@Derick D If you missed it then you must be blind.
It's comically large lol
At 12 inches it's huge, but the 3090 was actually 12.5 inches, so somehow even bigger.
@@Jimmy_Jones or maybe, like me, I didn't feel like watching the whole thing but when I saw the comment it intrigued me. So for others like me, there's your time saved
- “How large is your computer's RAM?”
- "Three Terrabytes."
- "I mean the RAM, not storage! You know nothing about a computer."
- "No, YOU know nothing about EPYC Genoa" *pulls out personal server*
I hope those Terrabytes aren't backed by USTbytes tho, or they would probably generate a lot of crashes
"How much is your cache?"
"4 GB"
"That's very little RAM"
"No, I mean my server's cache is 4 GB."
remember the joke was about 128gb ram and we thought that was insane? yeah
this is like the shit we made up as kids just throwing around numbers when talking about pc hardware, where the numbers were so big it wasn't funny anymore, just dumb. 😂
1/10th of one of these CPUs has more cores than the average Joe's whole PC. Just insane
Was floored at the 4090 comparison - the fact that the card is the size of the power supply and looked like a tenth of the server rack is insane.
Yeah man, that thing is insane. I bet there are ITX cases with less volume than that
@@musguelha14 micro itx, you're probably right!
It's called bad engineering. Boomer tech. I can wait for thin small nice gpu with 75 watt to play 4K 240fps. No cable needed.
Well, er yeah, obviously that would be great. Count me in. Then you just know an AIB partner in 2036 is going to push 800watts through that puppy to power the 80,000 shader units and the GDDR11XX.
Until we hit some kind of ceiling or have some kind of sensible standard this shit is just going to get crazier and crazier.
Unless there's some kind of architecture or engine that renders all that power obsolete. I started to get hyped for the Euclidean thing until I realised it was all voxel... But something.
Just don't knock the people that made it all possible. These guys are legends. They made all this real. Like Dave Haynie and the guys at Commodore are my heroes. The whole 'boomer' shit is just insolent, infant bullshit to piss people off and get attention. Rise above that. You're better than that.
Seeing Linus talk in these server themed videos and go into all the details and his avid interaction back and forth with the viewers and the machines, makes me feel like a parent who's gifted their kids the present they wanted the whole year on Christmas, and the kids enthusiastically explain to me what it is and how it works and everything. lol
Back in my ISP/Datacenter days, we had a long standing joke about slow boots. Whenever something took forever to boot, we would say it was doing a SuperMicro. We would joke around and say that the reason why people buy 4 of their servers at a time is because they took so long to boot that there was a risk that another one would fail before the first one finished booting back up. Good to know that things haven't changed much.
Don't know much about servers but why does it take so long to boot? Do all SuperMicro servers take a long time to boot?
A little off topic, but I fell out of gaming/PC building for a while, and my old AMD Athlon X2 system booted much faster than my current Ryzen 7 system (due to UEFI I would guess?). You would think with faster hardware it wouldn't take so long to boot.
@@Gatorade69 In datacenters we use a special type of RAM with integrated error correction code, or ECC. With the introduction of DDR5, ECC comes to the wider consumer market - ECC technology is part of the default design of every DDR5 module. Now back to the question you asked - compared to the classic desktop setups based on the CPUs you've mentioned, ecosystems using server-grade buffered RAM with ECC have a lot more stuff going on during boot time (as one might have guessed) - with the primary time-consuming one being the so-called "memory training".
Now, that alone is a whole new topic with a ton of settings in BIOS and it prolly deserves its own chapter on Linus TT or Steve's GamersNexus... Anyway, memory training is a one-time event that's performed by the system on its first boot or subsequently after any significant or otherwise unique change within the particular system, during which the system sets up, tweaks and tests the RAM memory and all its advanced features like the ECC so that it works at its best.
That takes time.
Of course, in this video Linus showcases the latest CPU by AMD and also explains that there's a bug present in the microcode that causes the long boot times regardless, which AMD has announced a fix for in the upcoming days, hopefully.
@@MilanPutnik I worked for SoftLayer before IBM bought them for their cloud. We had Supermicro servers and they actually didn't take that long to boot up in comparison to AWS servers. When I worked at an AWS manufacturing plant doing diagnostics, those were by far the worst. Lenovo had some shit boot times too. I actually prefer Supermicro boot times over Lenovo and AWS boot times any day lol. Unless you have one of them 4Us with 2/3TB of RAM, then forget it.
@@Gatorade69 Cause the motherboard needs to check absolutely everything. The ram and the hard disks usually take the longest to check.
Makes sense. Thanks for the answers. I remember old computers used to take a while to check the memory when booting up. I also wouldn't have guessed that it would check the hard drive too. Servers usually have a lot of space and memory so I can see that taking a while to check them on boot.
Competition indeed does breed innovation. Still remember when AMD was the underdog? Do you?
Don't forget that AMD still is the underdog in terms of market cap or revenue
my brother in christ, there are only 2 cpu makers
They still are, people who dont follow tech topics have no idea that intel was worse (in most cases) for 5 years. So yeah, we remember.
@@cyjanek7818 depends on what metric you look at. Performance? In the server space, absolutely not an underdog anymore. Market cap in the consumer market? Yeah, it's still an underdog tale developing.
Only because of Microsoft.
There used to be _so many_ CPU makers.
To those complaining about a server booting in 15 minutes, there were IBM p series (booting AIX) that would take half an hour just for the POST, before loading the OS. With the OS booting and getting everything running, it would take about an hour. I had to deal with 7 of those in a test lab I worked at back in 2010-2016. They were not fun.
Probably built on the assumption that you don't have to reboot it often. Yeah, I too hate long server reboot times while my users keep asking "Is it back up yet as I have work to do?!" 🤣
Huh. Stores I assist still have p615 servers running AIX 5.X and they take a bit less than 15 min to reboot 🧐 But these need to go 😭
@@RobBroderick44 Know what's more fun? Having those in a test lab, where they have to be reformatted and total OS reinstall about every 3-6 months. Wait half an hour for POST, then have a 30 second window to press F11 to get the boot menu to tell it to boot from CD, then another half an hour to start the OS installer. Yeah, I had fun with those.
@@dangingerich2559 One day, if it's ever possible in your case, switching to Linux and kexec'ing into the new kernel on reboot would let you skip the POST time whenever the reason for the reboot isn't to check the hardware.
Also, more and more patches have been landing in the past two years (with more to come) to let Linux boot CPUs in parallel and greatly reduce boot time.
@@naguam-postowl1067 That's nice and all, but I (thankfully) am not in that job anymore, and am not dealing with IBM p series or AIX any longer. I don't plan on applying to any jobs that include such things, either.
At the time, I had to stay with AIX as the OS because we were testing customer specific circumstances, to make sure their OS and software would work with our backup appliance. So, I had little choice in what OS we tested. I believe that customer was US government, too, so they had little choice in the matter, either.
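For the curious, a minimal sketch of the kexec flow mentioned a few replies up - load the currently running kernel into memory and jump straight into it, skipping firmware POST. It assumes a Linux box with kexec-tools installed and root access; the /boot paths are illustrative and vary by distro:

```python
# Minimal sketch of a kexec "warm" reboot: stage the running kernel and
# jump into it, skipping firmware POST. Assumes kexec-tools, run as root.

import os
import subprocess

release = os.uname().release                      # e.g. "5.15.0-91-generic"
kernel  = f"/boot/vmlinuz-{release}"              # illustrative path
initrd  = f"/boot/initrd.img-{release}"           # illustrative path, distro-dependent

# Stage the kernel; --reuse-cmdline keeps the current boot parameters.
subprocess.run(
    ["kexec", "--load", kernel, f"--initrd={initrd}", "--reuse-cmdline"],
    check=True,
)

# Let systemd shut services down cleanly, then jump into the staged kernel.
subprocess.run(["systemctl", "kexec"], check=True)
```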
WHOAAA...!!! THIS OWSOME...!!! REALLY EPIC...!!! MAKE IT MORE..!!! GOOD JOB LINUS..!! BEST CHANNEL EVER..!!!
Almost 20 years ago we were stunned with dual-core CPUs. It's amazing, what AMD is doing.
And that was a game changer.
@@flammablewater1755 Cerebras already sells a computer with 850 thousand cores on one processor. It's basically a CPU that fills the entire silicon wafer.
Yeah. Software, except few special cases, still can't do squat about these cores.
@@SaHaRaSquad and it consumes 15kW, you'll need a 400V 3x20A connection for just one as the whole system with that one CPU consumes 20kW
@@BH4x0r Which makes it incredibly efficient, consuming only 0.023W per core.
The chaos level of those filenames is impressive. I bet LMG has strict workflow processes in place purely to ensure that Linus never gets the chance to name an important file.
[chuckle] Yeah. People generally don't think to make sure their programs handle "filename too long" gracefully. PTS was just trying to soldier on rather than aborting after the first failure.
@@ssokolow "[chuckle]" 🤓
have to be amd processor the great one advance machine german = amd advance micro device AMD EPYC
My naming scheme for personal files is mgdasfkjg, and then for work it is 'DateCreated,Project,Revision#,MaybeExtraDetail'
@@Myrskyukko ""[chuckle]" 🤓" 🤓
You know you have a beefy CPU when your OS's Task Manager window shows CPU cores like it's a defrag.exe from the late 90s 😂
lmao that's actually true
lol they clearly just glued 4 CPUs together
@@pandemicneetbux2110 still beefy
@@pandemicneetbux2110 It's actually 6 16-core chiplets and an IO die on top.
@@pandemicneetbux2110 are you an intel engineer?
You had me at 96 cores....then you throw in that it supports 6 Terabytes of DDR5!!! My mind completely exploded after that!!! Now imagine those numbers on a video card.....when that day comes I'm upgrading.
The comparison between the PSU and the 4090 had me laugh spontaneously.
Is it me or is Nvidia taking it a little too far with size of the 4090?? I wont be able to afford one for many many many many years, but i personally think its outrageously big :P
@@Emell09 it is outrageously big
@@etaashmathamsetty7399 😂😂
@@Emell09 I just built my dream computer (Threadripper Pro 5975WX 32-core and a 4090 Liquid Suprim X GPU) and I used the HAF 700 case. The Liquid Suprim X 4090 looks small in the HAF 700 case.
@@2011blueman And play Minecraft with it, am i right?
The Blender result isn't necessarily another dual-Genoa system. Blender tests have a single-threaded setup period at the beginning. With such a short render time overall, that setup period becomes significant. So if you have, say, 64 total cores with an EPYC F-series chip (the higher frequency models), the setup period will complete faster, allowing the render proper to begin faster. The actual render time could easily take longer, while the total task time is shorter due to a faster single-core speed.
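A toy model of the effect described above - total benchmark time as a fixed single-threaded setup phase plus a render phase that scales with core count. All numbers are made up purely to show the shape of the tradeoff:

```python
# Toy model: total time = single-threaded setup + embarrassingly parallel render.
# The work amounts and per-core speeds below are hypothetical.

def total_time(setup_work, render_work, cores, per_core_speed):
    setup  = setup_work  / per_core_speed            # single-threaded phase
    render = render_work / (cores * per_core_speed)  # scales with core count
    return setup + render

wide = total_time(setup_work=60, render_work=1000, cores=96, per_core_speed=1.0)
fast = total_time(setup_work=60, render_work=1000, cores=64, per_core_speed=1.25)

print(f"96 x 1.00 speed: {wide:.1f} s total")   # ~70.4 s (render phase ~10.4 s)
print(f"64 x 1.25 speed: {fast:.1f} s total")   # ~60.5 s (render phase ~12.5 s)
```

With a big enough setup phase, the 64 faster cores finish the whole task sooner even though their render phase alone takes longer - exactly the scenario the comment describes.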
Windows can't eat all those cores, you need Linux to squeeze all of them.
Poor tim cook he must be salivating
I actually work in purchasing and deploying equipment like this. We've been scratching our heads over how to properly cool Genoa systems. Supermicro's website includes notes that in order to support higher CPU TDPs "special requirements" exist. When we've spoken with Supermicro they've told us that (as of Milan) above 220W requires liquid cooling. But here you are air cooling the 9654. This brings me to a question:
What thermals were you seeing when running the CPU(s) at full load? What CPU socket temps, etc?
Thank you for this video and many more.
Actually, just running one is different from running them in the datacenter. I have to say that Supermicro is very cautious, because you don't want something like this burning up your rack 😂
They can get away with air-cooling it because they only ran one of them.
Datacenters require multiple of these all tucked together, so liquid cooling with high whining airflow is going to be a must
use an air compressor
The answer is just one Wendell @Level1Techs away - check the forum over there!😊
You can cool more than 220W on air, but when you have a bunch of machines in a cabinet, the air flow design becomes important. If it's just a cabinet sitting in an air-conditioned room, that's not good enough. It should be a cabinet where the hot air out the back is collected at the top and ducted away, and all empty slots at the front are covered with blanks. So the only air path entails cool air coming in the front, hot air going out the back, and all that hot air being ducted away through the top.
It's really above 280W that you need to seriously consider water cooling, which means not a typical data center. And to reach the max 400W TDP, you need to know exactly how you're going to cool it. AMD only put the 400W capability in because customers requested it.
What I would have done is let the new server go through a complete reboot and then re-run Cinebench, the reason being that I think the performance might be taking hits from Windows updates and driver installs and such. You may find that if you let the new server do a few full reboots the performance improves a lot.
The problem is even a warm reboot can take a few minutes on these. Source: I have a Supermicro Epyc Rome in my homelab.
I keep forgetting how ridiculously huge the 4090 is until it's compared to other things.
I almost got tempted to buy one today, then I remembered how absurdly large and power hungry it is and I don't want to encourage them.
@fakecubed yeah I'm skipping this generation. If something happens and my 2080 breaks, then I'll get a 3080 or maybe possibly an AMD card, but no way am I touching the 4000 series.
@@AgentJ1314yea i just got a 6950xt for 800 recently on amazon. great steal
The moment I read this Linus started talking about that
The 4080 is also massive
As someone who occasionally works with HPC servers but never a 40 series card, that comparison at 9:19 is wild. The 4090 is too damn big.
But, how else am I going to overcompensate for relatively cheap???
It’s not like I have an extra $100k(+$20) for a lifted truck, w/ flesh-colored Truck Nuts.
Just another reason to never buy one.
the 50 series is going to be 4 full slots. the 40 series is only 3.5. so prepare for them to get even bigger
Not just big but heavy as well. If the 50 series is going to be bigger, very few will have cases big enough to fit it in.
I've got a full size case and the 4090 only just fits in. Had to take some HDD enclosures out to get it in. Mine came with a support rod to hold the weight of it lol. It screws into the end closest to the front of the case and sits on the bottom of the case to give it extra support. Had problems trying to fit the support rod as I have an intake fan on the bottom of the case where the support rod is supposed to go. It's the length that is the problem with them, not how many slots they take up.
No way will you be able to put 2 of them in a case and use SLI with them, because the motherboard would buckle under the weight of them.
You also have to keep an eye on the 40 series cards, as the power cables from the PSU to the GPU can melt and catch fire.
Nvidia had to get new cables made, so the newer cables should be fine, but if you get one of the 1st gen cables that came with the GPU it could melt. It's a 16-pin cable but not the same type other GPUs use: it's 12 pins plus another 4 smaller pins, so an adapter is supplied so it can be connected to a PSU, and it's the adapter that has the fault. The adapter has sockets to fit 3 8-pin cables from the PSU. It's where the adapter plugs into the GPU that it tends to melt and catch fire if it's not inserted correctly or becomes loose, and it's not a very tight fit so it can come loose very easily.
Seems Nvidia are scamming a bit as well - turns out the 4080 is a 3080 Ti rebadged and put in a bigger case
@@cliffbird5016 What you said about Nvidia scamming is baseless bullshit. The 4080 has 4 more gigs of vram than the 3080ti and its boost clock is nearly a gigahertz higher.
It's almost stupid to think that just 30 years ago we couldn't even have one MB of VRAM and we weren't hitting even 200 MHz frequencies, and now we've got 96-core CPUs boosting to very good frequencies
1992 had CPUs that hit 200 MHz; by '95 there were some hitting 500 MHz. Meanwhile I had a 386DX, reading about them. Look up Digital Equipment Corporation's play in early computers.
3 decades? Just 10 years ago Sandy Bridge Xeons had a max core count of 8, and we remember when Intel went from 4 cores to 6 with the 990X. Today the top desktop part is 24 cores
@@idzkk albeit 8 p core
@@Malc180s Speak for yourself bud.. I absolutely need more than 6 cores. I wouldn't get half of my work done on time without the extra compute.
@@Malc180s If you talk strictly about gaming, then yeah sure, 6 cores are optimal. Even so, new games nowadays utilize more than six cores.
that humming was driving me nuts.
You should totally try the software renderer in Crysis again! See how good these CPUs are at being a GPU!
But can it run Crysis? /s
@@RebrandSoon0000 Probably not, these CPUs suck for gaming.
@@Aka_daka He was talking about software renderer just to see how it compares to other CPUs.
Would be good to quickly see it as an aside again in another episode, just to see how it compares. I'm betting it would at least be playable this time. What was it before? Like 12fps?
I just need Ryzen 7700 coming January 10th 2023 to build my 4K 240fps PC. But idk what gpu, all high power earth killers higher than $400 price.
gotta love that AMD sockets are starting to approach a pin count equal to an SD monitor's pixels
As a career sysadmin I'm glad to see LTT doing way more videos on datacenter infrastructure and getting tours of them and fabs.
Also, lmao at the 4090 being used to compare how "absurdly" giant something is.
And obviously size doesn't matter Linus! you guys have like 4 kids.
They should hire someone to make good sysadmin / network engineering content that's entertaining enough for young people to learn and potentially find a career in. Linus kinda bodges all the infra for his company cause he enjoys that, but if they had someone making videos in a way that captivates his current audience while following best practices and all that, it could get a lot more people into our industry
@@tranquil14738 easier said than done. Let's be real, it's very hard to hype up content that's inherently boring, like servers.
@@hueanao Lots of people such as me find servers exciting
@@hueanao I might be a loser, but I find the topics very interesting. I just find the videos online very monotonous and monochrome
@@tranquil14738 I think the topics are very interesting, they just need to be delivered in an interesting manner. and ltt media has managed to make videos on computing compelling to watch, so I'm sure they can do the same to data center infrastructure
9:23 I literally just realized that Linus is the real world Michael Scott! 😂
I want a full episode like 16:18 where Linus and the team only say random sounds like this but still keep the intonation as if they are conveying actual message.
Back in around 2015 when I was still in college, AMD was particular with hiring. They hired those mostly from the best unis and it finally paid off.
@@thomasb282 You apparently forgot that AMD actually got a nice headstart in the GPU market by acquiring ATI. They already proved they were capable and weren't all that small.
You make it sound like AMD cut their tiny little corner shop in half to make GPUs one day.
Reminds me of when I installed 4x EPYC 7742 and 4x of whatever the 8-core one was in new servers. They ate paste like crazy and I had to go back to the store twice to get more. Was an honor.
man I can only imagine the surprised, amazed look you would have had
I use toothpaste instead of that "special" whatever. Never had a CPU burn on me.
@@domainmojo2162 If you think having a good box is "luck" then I feel sorry for you...
@@domainmojo2162 A lot of people are indeed unfortunate. I just wanted to point out that our computers are nothing but boxes. People and real life are the deal!
So, having the "best" PC would be less exciting than meeting a new good friend or finding a relationship, or adopting a child etc.
And keep in mind that I only got to TRULY understand that recently! That's why I'm now thinking about keeping my current PC as long as it lasts. Getting a new one is just a waste of money!
Always wanted to see the best intro of techburners on YouTube. Literally yours always fits and rocks🤖🤖💥💥
imagine how fast this bad boy could print hello world to the screen
this won't really be faster than a normal PC (might even be slower)
depends on the screen
@@CalcProgrammer1 really doesn't. Printing to standard out is just writing to a file
@@hwstar9416 Depends what you mean by screen. You can write to stdout faster than stdout can be rendered to a framebuffer and stdout can be rendered to a framebuffer faster than your screen's refresh rate. However, the original question was explicitly how fast it could write to "the screen" and the screen only goes so fast.
@@hwstar9416 You must be fun at parties
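A quick sketch of the point being argued above: a single core can emit "hello world" lines far faster than any terminal can render them, so "the screen" is the bottleneck rather than the CPU. Redirect stdout to /dev/null to time the write path alone:

```python
# Measure raw "hello world" output throughput. Run as, e.g.:
#   python3 hello_bench.py > /dev/null
# so the terminal's rendering speed doesn't dominate the result.

import sys
import time

N = 1_000_000
start = time.perf_counter()
for _ in range(N):
    sys.stdout.write("hello world\n")
sys.stdout.flush()
elapsed = time.perf_counter() - start

# Report on stderr so it survives the stdout redirection.
print(f"{N / elapsed:,.0f} lines/sec", file=sys.stderr)
```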
A normal motherboard can support ECC and non ECC memory. The different notch is because EPYC and other server platforms REQUIRE registered memory, while platforms like am4 only support unbuffered dimms, which can still be ECC (with an intel W480 motherboard, for example).
AM5 consumer CPUs supports both ECC and non ECC RAM
@@valentj3 yes but only unbuffered/unregistered, EPYC requires buffered/registered ECC RAM.
I think the notch is because DDR5 RDIMM's run off 12V while UDIMM's run off 5V. Why this is, nobody knows, especially with intel pushing 12VO and PSU 5V rails being not that great usually.
@@stephhugnis Many server power supplies are only 12v, even if they're not ATX.
@@Henrik_Holst ahh makes sense. Thanks for the info!
7:14 "I wouldn't marry you if I thought it mattered" -Yvonne probably.
5:30 Linus just casually has a 4090 in the background
Linus, I've just come home from a really terrible day at work, just challenging in so many ways. Your intro alone has honestly calmed me down and put a little smile back on my face. Now let's continue watching to see these world records.
Sorry to hear that, stranger danger brother from somewhere. Hope you have a better day tomorrow. Cheers!
12:12 i love how the lights on the graphics card progressively turn on as the fans ramp up
Damn. This is a crazy time to be a tech head
11:11 I was sitting in my car thinking that my city was running a nuclear sound test warning ⚠️
While the GPU market has been quite disappointing in terms of innovation this year, AMD has been killing it with these performance boosts with EPYC. I am willing to bet that these CPUs will absolutely DOMINATE the high-end server market for the next few years
Well, as long as they can deliver them.
bruh what? Have you seen the 4090?
@@michaelhenlsy55 The 4090 especially is disappointing
@@stocky7134 what? If you wanted something better go work for the government to get their advanced shit in the cia or something. Go use their 8090 Super if they are even that slow.
I doubt it, amd and intel lack vision and are complacent.
If you are benchmarking Ubuntu, you have to set the scaling_governor to "performance" because by default, Ubuntu sets it to "powersave" even on Ubuntu Server.
you sure its not "balanced"?
@@gzqhesflexcl performance won't make all your cores run at max frequency "all the time". Also, RHEL distros use performance by default.
That's fucking bizarre
Ahh that makes sense
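One way to do what the benchmarking comment above suggests - pin every core's cpufreq governor to "performance" before a run. This sketch assumes the usual Linux sysfs cpufreq interface and needs root; the one-liner equivalent is `cpupower frequency-set -g performance`:

```python
# Switch every core's cpufreq governor before benchmarking. Requires root
# and a kernel that exposes the standard sysfs cpufreq interface.

import glob

def set_governor(governor: str = "performance") -> None:
    paths = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")
    for path in paths:
        with open(path, "w") as f:
            f.write(governor)
    print(f"set {len(paths)} cores to '{governor}'")

if __name__ == "__main__":
    set_governor()
```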
AMD and Intel's rivalry is really driving innovation. I'm so glad to see both companies fighting to stay on top.
0:42 a 3 cpu system? that's really weird... I never heard of an odd number socket server til now
No he said three CPUs across two different systems so 1 has 1 and 1 has 2 haha :)
The sound of that server at high speeds reminded me of a subway train standing in a station waiting for passengers. Awesome
Gets annoying after awhile. That's why remote capability is so important. Put the noise elsewhere.
I feel like we may be walking into a tech era where things start to get bigger again. Then we'll try to figure out how to make them smaller again. And I'm glad for that.
I work in animation, specifically CG Rendering, and while we are looking at Gen4 for our renderfarm upgrade - we are held back by the thread limits of our render engine software which at the moment is capped at 256 threads, so we are stuck for now on our decision!
Just disable SMT and run on 192 cores.
Maybe they shouldn't store the thread count in a u8 (2^8 = 256)
@@BurnsRubber Or buy the model with 64 cores.
Can't you render on GPUs?
You can get dual 64-core CPUs and they will run at higher clock speeds.
nice edit there :D that made me happy. :) 13:40
This monster has more cache than my first PC had hard disk space. What a time to be alive o.o
My first PC had a 10MB HDD. Took hours to defrag too.
I used to work at a server motherboard manufacturer, involved in the SMT and DIP processes, and believe me, the more pins a socket had, the more stress it gave me. This straight up gave me Vietnam-like flashbacks. The amount of scrap due to bent pins, where we never even knew when it happened, was amazing. I still don't know how our company made any profit.
Watching him apply thermal paste was like watching someone cement a parking lot.
Excessive thermal paste has been proven to add maybe 1 degree to temps lol
@@BaconSenpai cap
@@uchihasasuke7436 emphasis on "maybe" but really it's "at most"
at 7:13 I remembered to click the like button
0:09 really like the transition here with the impact sound effect
I remember being fascinated by the FX-9590 8-core CPU when I was in high school. All of us nerds were drooling over that. I could have never imagined that there would be 96/192 core/thread CPUs.
We were drooling over the multi-million-dollar 96-node 64-bit supercomputer that got installed at our uni while we were there.
Threadrippers were already eating that up quite some time ago.
@@slash2bot oh wow
The problem is still the speed. For most applications, writing code that can utilize many cores is not possible. I'd rather have high GHz than a high core count.
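That's basically the Amdahl's law argument. A tiny sketch of the standard formula; the serial fractions below are illustrative numbers I picked, not measurements of any real workload:
```python
# Amdahl's law: with serial fraction s, speedup on n cores is 1 / (s + (1 - s) / n).
# Even a small serial fraction caps how much a 192-core part can help.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for s in (0.01, 0.05, 0.25):
    print(f"serial={s:.0%}: 192 cores -> {amdahl_speedup(s, 192):.1f}x speedup")
```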
Same way people in the 90s would not have imagined that we would have 64GB of RAM.
I'll never understand how people "could not have imagined". Technology progresses. That's the whole point...
@@godnyx117 If I told you that there would be a 1-million-core CPU in the future, you would be shocked too. It's not that they thought "oh, 128 MB of RAM is the maximum!", it's that 32 GB is so far ahead of the curve that it's absurd to them
The 256-core limit is because the max value of an 8-bit number is 255, plus core 0, so 256 (tiny worked example at the end of this thread)
When they created the software they thought "meh, we will never reach that anyway" and used the smallest possible variable, which makes sense
Now they need to update that lmao
Funny thing is that using a u8 instead of a u16 or even a u32 can actually make things slower, and for just 1 or 3 bytes saved. Totally not worth the tradeoff.
@@PanduPoluan At the time, memory was more precious. Without a reason to change it, they've left it the same.
And they might as well jump to something extra absurd like a 4096-core limit, because just going from 256 to 512 isn't going to buy that much time.
@@QualityDoggo At no point in the lifetime of Cinema 4D was it important to save a couple bytes of memory.
It's unlikely that they are actually using a single byte to store the number of threads. It's more likely that the thread count has a hard-coded limit for an arbitrary reason.
And isn't that limit due to using an 8-bit number for the core count?
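For what it's worth, nobody outside the vendor knows what type the renderer actually uses; the arithmetic behind the 256 guess is just this, assuming an unsigned 8-bit counter with 0-based thread IDs:
```python
import struct

BITS = 8
print(2 ** BITS)                  # 256 distinct values: thread IDs 0..255
print((255 + 1) % 2 ** BITS)      # incrementing past 255 wraps back to 0

try:
    struct.pack("B", 256)         # "B" = unsigned 8-bit; 256 simply doesn't fit
except struct.error as err:
    print("can't store 256 in one byte:", err)
```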
Can't wait to see the updated video for Epyc 9754
Being able to reallocate PCIe lanes for inter-socket communication on dual-socket servers is cool
With that record on y-cruncher, you could have also absolutely obliterated the 100B decimal digits record, given that doing it entirely in memory is what gives you a tremendous advantage, and with all that cache on top (rough sizing math at the end of this thread)
god.... just imagining the fact that just *one* intermediate value in the algorithm could be approx 40GB in size makes my head spin.... *Then* considering that all of that could exist entirely in RAM just makes me collapse.
@@haniyasu8236 There is an entire microcosm of scientific programs that had to be written with the assumption that the users will never ever have enough RAM to hold the intermediate values.
@@TheBackyardChemist And I'd bet quite a few of them are approaching the point of being wrong.
@@Gabu_ Yeah, well kinda. Having 1 TB or 4 TB of RAM is now possible, but it is still not exactly cheap or common.
@@TheBackyardChemist Yeah, I'm aware. Still astounding anyways
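Rough back-of-the-envelope math on why those intermediate values get so big; this is my own estimate of the final number's binary size for a 100-billion-digit run, not y-cruncher's actual memory accounting, which needs several working buffers on top of this:
```python
import math

# Back-of-the-envelope sizing for a 100-billion-decimal-digit computation.
# This estimates only the final value's binary size, not y-cruncher's actual
# memory accounting, which needs several working buffers on top of this.
decimal_digits = 100_000_000_000
bits = decimal_digits * math.log2(10)   # ~3.32 bits per decimal digit
gigabytes = bits / 8 / 1e9

print(f"{bits:.3e} bits ~= {gigabytes:.1f} GB")   # ~41.5 GB
```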
I realize that many people are eagerly awaiting this sort of power to get their work done, but I can't imagine what I would ever do with it.
Fluid simulations can easily run as many of those cores as they can get at 100% for days and weeks.
They build supercomputers with these kinds of processors. There are scientific simulations that consume literally thousands (sometimes tens or even hundreds of thousands) of processors and need months to run to completion.
@@jacobfields8111 They do cool shit like simulate the detonation of nuclear bombs, supernovae, interactions between large celestial bodies, and all kinds of other fun stuff
European here: Heat my home.
I know what you can do. Simply pack up your server into the new LTT “Server Backpack,” and attend a Minecraft LAN party. Lol
3:51 How many things should I ACTUALLY check the documentation for with these kinds of things?
7:05 16x slot of PCIe gen 1.1
11:04 Another cool thing too, if you've read the documentation.
It's great working for Supermicro and watching this, cuz you'll never know if that's actually one of the systems I built and tested :^)
Things have come a long way since my first PC, that being the Amstrad PC 1512 with a 20MB HDD. That thing was so amazing I put 3 guys out of work at the printing press (sorry guys). I believe the DTP software I used was called Jetsetter!? I eventually upgraded to the 1640 - happy days
I had an Amstrad CPC 464 with a green screen back in the day 💀
That 4090 comparison made me flinch in my chair; that thing is scarily big
Love this video, there's something magical about using these types of machines for the first time
We're living in a special time; we can witness AMD hit the 100-core mark very soon
AMD's Bergamo is just around the corner and it has 128 cores.
We do quite a bit of molecular dynamics and quantum chemistry and these are pretty much unbeatable in terms of performance, especially when coupled with selected GPUs. For Gromacs nerds: 1M atoms, 27 ns/day @ 2 fs timestep, running on 32 cores + 4GPUs.
I had a thought... SSDs weren't created because they were the next logical step in storage; they were created because the number of platters Linus kept dropping was making them exponentially more expensive, so they needed a shock-resistant form of storage.
Legend has it the gyro in Macs that parks the HDD head when a fall is detected was made because of Linus
@Gomam0n certainly not a common feature, specialist drives maybe
My company just recently started using supermicros. Good products.
Wow, I'm so envious of Linus getting to play with ALL THAT AMAZING GEAR!! I live vicariously through you, sir!
Got to love the mad look in Linus's eyes when he gets these amazing products to use
On the John the Ripper benchmark, it looks like just one of these processors would beat out 2 of the EPYC 7763s. So 96 Zen 4 cores beating out 128 Zen 3 cores; cool to see the generational uplift being this good.
Minecraft speedrunners in 2023: 48-instance Minecraft speedrunning
I can see how AI will become a real contender in pretty much every field with hardware like this coming out. Geez
Another AMD W? Splendid.
Yeah, they need some wins after the 7900 XT's massive L
@@sigmamale4147 based sigma male
@@sigmamale4147 The 7900 XT should have been between 600 and 700, methinks
Your comment was copied by a bot btw
@@sigmamale4147 The 7900 XTX wasn't a fail. Cheaper and smaller. AMD did say they weren't trying to compete with the 4090. Some people need Nvidia, but for casual gamers and computer users, the 7000-series GPUs from AMD are the better choice.
Can't wait to see this system sold at 1% of the price 10 years later
By then, another silicon crisis like in 2021 will probably hit the market. Silicon reserves can't last forever, yk😑.
I didn't understand half of what Linus said, but I loved every minute of it, just knowing how absolutely monstrous this stuff is.
Definitely gonna need to get this for my first pc build!
I felt the frustration Linus was feeling during the first half of the video, where things weren't planned, things got delayed by the insane boot times, hyperthreading wasn't enabled, and something as simple as HWinfo64 wasn't even installed.
This reminds me of WAN show a few weeks back where Linus talked about how his patience for stupidity had gotten lower and lower throughout the years
I wonder just how physically large CPUs can get before they experience issues due to timing (as in, signals having to travel further and the resistance of the internals messing stuff up), pad pressure, heat, or other major issues.
There's the wafer-scale chip by Cerebras: 850,000 cores, 20kW. That's probably the ceiling for now, unless someone figures out how to create chips larger than wafers
I'm waiting to see you put out some videos on quantum computing now.
Just think of the old days when some MMOs (Star Wars Galaxies, released back in June 2003) were run on hundreds of Pentium II CPUs. Now they can be run on just one of these bad boys. Pretty amazing. Would love to have one of those Supermicro servers for my home lab virtualization. After I win the lottery, of course.
I love how it just got wider and wider and now is a square.
CXL sounds very interesting. Could really shake up the space if something like that made it into the consumer x86 market.
Man imagine having to manufacture all those tiny pins, truly insane tech
Intel's server and data center teams must be so annoyed with AMD
AMD's chiplet design keeps on giving
Can you imagine the amount of confidence you need to screw this thing down? At least 8 gigabytes of confidence. I can't even imagine how much thermal paste you would need to cool this beast.
Stefan, I don't think you should be in charge of thermal paste XD
Just incredible progress being made in silicon! Blinding speeds
Back in my day, all of the CPUs used to be the gaming CPUs of all time
they are still, to this day, undoubtedly the gaming cpus of all time.
Nah. Celerons existed. They couldn't do anything.
@@Gatorade69 Are you kidding? The Celeron 300A was a gaming beast because it was so overclockable it could compete against the much more expensive Pentium IIs AND IIIs.
@@Gatorade69 there used to be a time where celerons weren’t the garbage they were today, however that was indeed a long time ago
@@Obi-Wan_Kenobi62 Were still *mostly* garbage with a few exceptions.
Now imagine the thread ripper counterpart
its so cool to see Linus have a kid in a candy store moment ❤
Yeah .... he certainly gets enthusiastic !
I have the same setup with this cpu which I use to emulate Nintendo 3DS games. Peak utilisation right here fellas.
That 4090 being as long as that PSU is wild. Guess Nvidia thinks size does matter.
It's the way they "cheated" the transistor size limitations to get more performance. Just stick more of them inside, even if they aren't smaller, easy. You just end up with twice the size and twice the heat and power consumption....
@@Leonhart_93 damn
Seeing Linus be so happy about this new CPU was wonderful.
I know I'd be that excited to run these benchmarks. I wouldn't be happy to watch that boot time. Maybe a coffee break is in order for every reboot? (:
I love how LTT can pivot from $128 budget builds for price conscious consumers to the most financially inaccessible enterprise solutions available on a moment's notice.
This right here is why I love technology.