Find Level1Techs here! ua-cam.com/users/level1techs
Grab a GN Mouse Mat or Coaster Pack on the GN store! 10% of all store revenue goes to the charity and cat shelter Cat Angels until 9/17/22 - store.gamersnexus.net/
Will absolutely watch the server upgrade. Hyper-dense server.
So no Crysis 🤔
Wendell.....LEGEND
He has a point, it's a-peeling.
*almost*
"We don't do that here" - Steve
"It's a-pealin'" - Wendell..
:D Good one Wendell
I do enjoy how Wendell makes every other tech YouTuber sound like a complete tech novice.
Wendell doesn't make Dr Ian Cutress of TechTechPotato sound like a complete tech novice.
@@sammiller6631 Ian is ex-AnandTech. AnandTech makes every non-computer-engineer sound like a complete tech novice.
Most other YouTubers are explaining for the masses, but Wendell has his audience down: the people who want to know the hardcore details.
@@DovahDoVolom Wendell is for people who want to know the hardcore details in a _concise_ way instead of Buildzoid's hour of rambling
@@sammiller6631 Though you also don't have to go to the white paper and read for an hour either. It's normally only 80 to 100 pages long, with diagrams and descriptions that rely on each other, so you need 2 or 3 of them open to understand what you are looking at.
I had the pleasure of seeing part of a supercomputer array used by the late Stephen Hawking for his physics research being fired up. One thing I noticed about it was how it had so many fans running at full speed when starting up, it was almost as loud as a PS4 Pro!
Wow! That's really cool that you got to see that!
@@GamersNexus It was really rather neat. It's actually at a massive collection of vintage computers in North Yorkshire, England. I'm sure you could arrange a visit should you ever find yourselves in the UK c:
💀
I built servers for a living some years back. One server is obnoxious, but when you have 40-80 1U servers in the test room, all running torture tests in a 40°C room, every fan is running at max speed. You do need hearing protection even if all you are doing is setting up a new server for test. Being of the "I'm invincible" mind I often didn't, and today I have pretty bad tinnitus. So use hearing protection when working in a server center, or even when just testing a single server if it's powerful. At worst you're a bit uncomfortable or look like a dork, but at least you can hear what people are saying. Cuts down a lot on the number of times you yell "What?"
Daum
Steve's face and reaction to Wendell telling him the MSRP of the Instinct card at 16:20 is the best. I cracked up and had to watch it a second time.
Came to the comments for this. *tries to act cool* *puts it down*.
@@quantumuninstall Looks around nervously.
Considering how these things can mysteriously die from nothing, it's a legit concern.
pure gold.
I had a similar reaction when I found out how much a broken RTX A6000 I was replacing in a work machine cost 💀
Steve going "we don't do that here" has some SERIOUS daddy energy.
😵
daddy?
daddy? sorry... daddY? sorry.
Sus
@@cmas5854 a-peeling
A 2600W 80+ Titanium PSU in that form factor is insane
Agreed, I remember being limited to a 300W power supply in my 1st and 2nd Shuttle builds. 😪 Don't miss those days at all
That's for each node, and there are 2 nodes per unit.
Imagine the price
You know shit just got real when Steve goes "I'm not normally scared of pulling a GPU, but..."
Loved the way he put the card down when Wendell said the they retailed at about $7K...
Absolutely, he understands the monetary value, and that most of his audience wouldn't spend half the cost of 1 gpu on their entire system.
Such a contrast to another tech tuber who would just rip it out with the attitude of "if I break it I'll just buy it"
I've never seen Steve flustered in front of hardware before
@@markboz3366 who are you alluding to?
@@tasnimulsarwar9189 Who cares?
Crazy how servers that are so densely packed are still repairable, yet Dell can't make their tower PCs with standard parts.
Ironically, Dell and HP make quite competent servers. It's their desktop offerings that rank below a Wish computer.
The budget for designing this hardware is a bit different from that of a mass-market corporate desktop.
Also, as these are practically always sold with a support contract, Dell people are the ones who need to be able to fix them, as quickly and efficiently as possible. Probably worth spending the money and time to get the design right.
@@zodwraith5745 Don't know about HP's desktops, but their Omen laptops are more repairable than other "gaming" laptops. No soldered components, and a repair guide that makes it obvious they were designed to come apart if the end user needed to fix them.
I'm reasonably sure their Omen desktops even use off-the-shelf components; one teardown I saw featured a Cooler Master PSU. They just, for some unknown reason, use a craptastic case.
It's not just Dell, unfortunately. HP, Lenovo, Acer, etc. Basically all of them are guilty of it in some models somewhere in their product stack.
@@ConditionsCloudy For sure, I was just saying that HP has at least one line where they avoid the custom everything horseshit. Dell throws that garbage on everything, Alienware being especially egregious.
Every time Wendell shows up with hot steamy tech it's like TechMoses coming down with another Ten Commandments. I love it.
Which Moses transferred from a cube to a tablet 😘
Wendell is the best 😂
Love the intro 😁
well, Steve's no slouch either!
I didn't expect this video to be as interesting as it was, and now it's one of my all-time favorite videos from you guys. EVERYTHING in that is just so well-planned out and I can't imagine how much engineering it took to make something so compact yet have an unbelievable ease of use.
And in 10 years when hardware is 10x as fast the same performance should cost like 7k
yes it's very a-peeling
@@nutzeeer yup. It'll cost the same to buy as one month's power bill. The bad news is the power bill won't depreciate.
Wendell is quickly becoming "the IT guy" for YouTubers 😂
No no. He's the computer janitor.
@@kojack57 Especially with that collection of toilet lids, right?
Been following Wendell since Tek Linux started. If there's anyone to trust, it's him.
It's crazy seeing Wendell in front of a camera after growing up watching Tek Syndicate
@@JBrinx18 A very particular bit of knowledge. Although the orange soda ...saga during Covid was very amusing.
Steve & Wendell together in a video is always a treat. :D
Can confirm on the double-stacked fan designs that rackmount gear normally uses.
They have two sets of counter-rotating blades because you want the air to actually hit the blades and be propelled by them. The issue is that when air is moved by a fan it picks up some rotation, and when you put another fan in front of that already-spinning air and that fan co-rotates, its effectiveness is reduced significantly; it can actually impede airflow while making the fan even louder than normal.
So to get around this, you have counter-rotating blades. A lot of manufacturers will also change the blade profile of one of the fans to ensure a balance of high airflow and high static pressure. This is especially important in 1U rackmount systems, as their fans can't be any larger than 40mm, so airflow comes at a bit of a premium.
In larger fans as well, such as 60 and 80mm double-stacked units, the fan may also have an additional stator set in the frame designed to remove the vorticity generated by the rotation of the blades. These styles of fan are normally used in 2U servers especially, as removing that vorticity from the airflow makes it easier to split the airflow evenly and accurately throughout the system, reducing the chance of a dead spot in the airflow.
Crucially, those fans aren't really redundant as they require both fans to run to deliver the full performance.
Usually axial fans just have a pressure-airflow curve that is basically a straight line, but by adding a second, counter-rotating fan the assembly can deliver a pressure-airflow curve more like a centrifugal/blower fan, almost doubling the airflow at the operating point.
And you definitely need that if you want to cool 3kW in a 2U server.
I'm kinda sad that they didn't inspect the fan further; you could definitely try it by hooking it up to a normal PSU, or at least show the model number and the actual current figure.
Because 60W isn't really much anymore; something like the Sanyo Denki 9CRB0812P8G001 has a rated power draw of 110W, and four of those draw more than my entire PC consumes.
@@or2kr Yeah, agreed on that. It's pretty damn impressive how much current a lot of server grade cooling fans actually draw. Even the little 40mm jobbies in 1U servers can pull north of a couple amps at full power.
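To put the fan numbers in these comments into perspective, here is a minimal back-of-the-envelope sketch in Python. The 110 W figure is the one quoted above for the Sanyo Denki fan and the four-fan count matches the chassis shown in the video; the 12 V rail voltage is an assumption for illustration only.

```python
# Rough fan-power arithmetic for a dense 2U chassis (illustrative figures only).
FAN_RATED_WATTS = 110        # quoted rating of a Sanyo Denki 9CRB0812P8G001-class fan
FANS_PER_CHASSIS = 4         # dual-rotor fan modules across the 2U chassis
RAIL_VOLTAGE = 12.0          # typical server fan rail voltage (assumption)

total_fan_watts = FAN_RATED_WATTS * FANS_PER_CHASSIS
amps_per_fan = FAN_RATED_WATTS / RAIL_VOLTAGE

print(f"Cooling budget: {total_fan_watts} W across {FANS_PER_CHASSIS} fans")
print(f"Per-fan current at {RAIL_VOLTAGE:.0f} V: {amps_per_fan:.1f} A")
# -> 440 W total and ~9.2 A per fan at full rating, which is why a single
#    dual-rotor module can out-draw a whole midrange desktop under load.
```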
Wendell’s excitement is so pure and contagious. Any time he’s with GN, it’s an absolute treat to watch.
I'm thrilled and amazed at how AMD improved their technology.
Thanks to our superwoman Dr. Lisa Su and her engineers...
Been waiting more than 15 years to see the AMD logo on my PC and finally got it...
I love seeing the two of you together.
Also, Steve's putting down the GPU when Wendell said the price of it was, well, priceless...
As a Data Center Engineer as of recent (3+ months) I’m enjoying this type of content from you guys even more!
Can you please give us a brief roadmap as to how you reach such a position? Knowledge Path and Career Path?
@@ambhaiji Experience and connections. CompTIA Server+ helps, I heard.
I really enjoy your collaborations with Wendell.
If you've never watched Level1 News, I highly recommend it.
Informative and entertaining.
I just came from the LTT video. I don't seem to be able to escape Wendell.
Thanks for the great video!
Wendell travels via telecom rack. You can't escape him.
He's inside your pc
Could you link the LTT video with Wendell?
Searching for "LTT Wendell" or "Linus Tech Tips Wendell" comes up blank, and their thumbnails/titles don't help either.
@Grey-man m.ua-cam.com/video/zcchDu7KoYs/v-deo.html
It's not a collab really, they just use Wendell's Threadripper workstation for a video and give him a shout out.
@@slartibartfast2649 Thank you!
This thing is exactly why Intel is trying to get into GPUs. And exactly why they won't *ever* ditch the GPU market.
Yep, there's a lot of money in data centers.
Awesome, love both of you guys. Great content as always!
Steve's reaction to handling high cost parts: 16:22
Linus' reaction to high cost parts: *tosses card around*
Linus has expensive insurance
Rest of us would treat the hardware like Steve 😂
Steve has to pay for his toys. Linus does not.
I worked in a banking datacentre for 6 years. We had about 1,500 servers like this. One build of 150 servers had a $5 million build price. The internet banking servers were a four-cluster array, each one a quad-CPU board with 256GB of memory; the 4th cluster was a hot-swap cluster, and the entire thing was mirrored at another data centre.
I’m hard coded to play the L1T theme songs in my head when Wendell talks.
I just randomly catch myself humming the tune.
My favorites, Gamers Nexus + Level1Techs in one Video, I love it!
"We don't do that here" 😂😂
Some time ago I had a junior tech job at a company building supercomputers. Supermicro was one of the hardware manufacturers we worked with.
The push for dense systems was insane, and more often than not we had to find solutions for the enormous power consumption and power distribution.
Another big point is the initial power draw on startup. If you have a rack of those beasts and all systems are started simultaneously, it will most likely trip a breaker.
The Supermicro systems are designed really well and the manufacturing is on another level.
I worked with systems scoring in the top 100 and better, systems having more memory than my own PC had disk capacity, single CPUs that cost more than my car, and insanely high-throughput networking (Mellanox 56Gbps at that time).
It was a great time for me, learning a lot each day and getting deep insight into the field.
Thanks for showing the system and creating a nostalgia moment for me.
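The synchronized-startup point is worth a quick illustration. Below is a minimal sketch of the arithmetic; every figure (node count, per-node draw, inrush multiplier, circuit limit) is an assumption chosen for the example, not a number from the video or the comment above.

```python
# Illustrative comparison of simultaneous vs. staggered server power-on.
# All figures are assumptions for the sake of the example, not measured values.
NODES = 16
STEADY_WATTS = 1500          # assumed steady-state draw per node
INRUSH_FACTOR = 2.5          # fans at 100%, PSU caps charging, drives spinning up
CIRCUIT_LIMIT_WATTS = 30_000 # assumed per-circuit power budget

peak_simultaneous = NODES * STEADY_WATTS * INRUSH_FACTOR

# Staggered start: only a couple of nodes are in their inrush window at any
# moment; the rest have already settled to steady-state draw.
nodes_in_inrush = 2
peak_staggered = (nodes_in_inrush * STEADY_WATTS * INRUSH_FACTOR
                  + (NODES - nodes_in_inrush) * STEADY_WATTS)

print(f"Simultaneous start peak: {peak_simultaneous / 1000:.1f} kW "
      f"(limit {CIRCUIT_LIMIT_WATTS / 1000:.0f} kW) -> breaker trips")
print(f"Staggered start peak:    {peak_staggered / 1000:.1f} kW -> within budget")
```

This is why BMCs and rack power controllers typically stagger node power-on by a few seconds each.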
The amount of engineering that went into that server is just mind-blowing. It's insane how much computing and power you can fit into such a tiny space if you have the talent, money and time.
Really wanna see a teardown of one of those MCM GPUs in the future.
collabs with Wendell are fantastic, he's something else)
There are currently very few data centers able to power an entire rack of compute dense nodes.
With the meteoric rise of computing power since 2015, data centers are not able to keep up with power and cooling demands.
Realistically, a standard 42U rack 'can' fit 21 of these 2U nodes but will rarely do so. One, top-of-rack connectivity is still needed.
Two, the vast majority of high-end data centers have a power limit of 30kW per rack (even the case for some of the largest ISPs). This means that most racks will only be half-populated with these dense configurations.
Also, focus is shifting towards liquid cooling in data centers as it is not feasible, realistic or price-effective to keep jamming hotter and denser systems in one rack.
This is why many server manufacturers are coming out with liquid cooling-ready servers.
These dense configurations are mainly popular with hyperscalers and companies with a massive AI/ML/HPC footprint who are capable of cooling it.
Not being able to cool it properly carries a significant performance penalty.
The vast majority still look at performance/$ and will prefer lower specced hardware that is easier to power and cool.
A topic that might be interesting to cover is multi-node connectivity.
These compute clusters are so latency sensitive that once you are scaling out (let's say more than four nodes in a cluster), you cannot rely on Ethernet connectivity.
Ethernet prioritizes delivering the correct packet over delivering the packet as quickly as possible, which can result in up to a 30% performance penalty.
Which is where Mellanox Infiniband (=Nvidia) or Intel OmniPath come into play (=Cornelis Networks). That 100Gb card looks like a Mellanox.
The OCP cards are also pretty cool, essentially a standardized format across server vendors.
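To put rough numbers on the half-populated-rack point, here is a minimal sketch using the 30 kW budget cited in the comment above; the per-chassis draw is an assumption loosely based on the 2.6 kW PSUs mentioned elsewhere in the thread, not a measured figure.

```python
# Back-of-the-envelope rack fill against a 30 kW per-rack power budget.
RACK_UNITS = 42
CHASSIS_HEIGHT_U = 2
RACK_POWER_BUDGET_W = 30_000   # per-rack limit cited above for high-end data centers
CHASSIS_DRAW_W = 2600          # assumed sustained draw per 2U chassis (illustrative)

fit_by_space = RACK_UNITS // CHASSIS_HEIGHT_U        # 21, ignoring top-of-rack switches
fit_by_power = RACK_POWER_BUDGET_W // CHASSIS_DRAW_W # what the power budget actually allows

print(f"Chassis that physically fit:     {fit_by_space}")
print(f"Chassis the power budget allows: {fit_by_power}")
print(f"Rack ends up ~{fit_by_power / fit_by_space:.0%} populated")
```

With those assumed numbers the rack is power-limited to roughly half its physical capacity, which matches the comment's point about half-populated racks and the shift toward liquid cooling.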
I love Wendell so much, such awesome company to be around.
I used to be a Data Center Engineer for HPE - absolutely loved working on ProLiant, Blade, Synergy, and Apollo-based servers. This video brings back good memories.
Wendell is always such a win.
Scary what you could do with a few of these, let alone hundreds
Run Crysis on CPU on a cluster
Damn, good to see Wendell again. I've been out of the loop on tech news for some years now, but it was always a pleasure to hear him, and amazing to hear him again
Is this my day? I was just doing some deep dive into ROCm, and there it is Wendell with MI210 !! And when he said "multiple videos"... Music to my ears, my heart just started singing :DD
Ok, I understood about 30% of what was going on, but that's why I love Steve's and GN's content. Wendell is awesome too, love his channel
Don't worry, even Steve looked confused at some of this haha!
There's so much power in that server it's warping the mod mat. lol
I know absolutely nothing about servers and supercomputers. Honestly, when I saw the title and thumbnail, I thought I probably wouldn't care about this video because I'm only into home PCs...
I'm happy I was wrong and that I watched this. I was glued to my screen and in awe of the density and perfect engineering of this - and all of it cooled by 4 fans! I can't wrap my head around what I just watched!
Also, I loved Steve's shifty eyes when Wendell mentions the price of the Instinct card. 😆
That opener LOL
Oatmeal indeed, but it's got cinnamon & maple syrup!
I just wish we could get the Bar-Talk between Steve, Wendell, Gordon, and Ian after a few drinks...
Or even better, a podcast!
What a great collab two of the best tech channels
Glad to see the Wendell collab, hope we see more in the future.
"It's appealing." Goddamn it, I love this guy so much.
I've been watching some of the older vids from 2 years ago when you were pulling apart servers with Wendell. Super happy to see a new one. Love teardowns of components and devices that I, as a regular consumer, probably will never see in real life.
You *_HAVE_* to use that sound bite of Wendell yelling when you're doing fan review methodology or anything fan review related.
I see Steve and Wendell, I automatically smash like.
Yep, that's a good team.
I love the Wendell and Steve collaborations. Hopefully more of them in the future!
When _Wendell_ says it's the most powerful computer he's ever worked with, you know it's serious stuff.
for that price it better be
Watched that intro over and over, love it. Expression says it all
Loved the video Steve, Wendell and GN! I work in a Data Center each day working as a Network Architect and seeing stuff like this is always awesome! And yeah, those rooms get EXTREMELY loud!
Nice to see you here Wendell! Logan also looks good with his hair grown back up!
Always great to see a Wendell Level1Techs collab
AWS hardware engineer here: I can confirm this server requires hearing protection to operate around, especially if you have a full rack of these...
Yay Wendell! Love seeing him taking these trips to everyone.
5:45 - The sticker says ATI :)
Makes my Canadian heart smile
The humor is getting better and better! That intro and the acting/timing!
It's always fun to see guys like Steve and Linus coming to Wendell when things get really complicated.
Thank You Gamers Nexus, Nice to see a SkyNet Box!
Ngl, I hit the thumbs up as soon as I saw Steve's expression while Wendell was peeling the plastic. 😂
Don't know about anyone else, but it's always nice to reveal brand-new fresh hardware. When I built my new rig 2 years ago, it was my first in 10 years. There's nothing like the smell of new hardware, or the knowledge that it's new hardware, when building.
Level1Techs is such an underrated channel. :-)
Wendell is always an amazing guest star, he could have a whole career just showing up in other tech YouTubers' videos, and yet he has his fantastic channel to boot.
Thanks for having my buddy Wendell on! Besides being an ultra tech wiz, he does the best Fozzi Bear looks!
Nice to see Wendell as a guest on GN. Can't wait for him to get this server running.
I love that the most advanced stuff has gone back to basically a mini version of ribbon cable.
As someone who worked on IBM p-series servers, it's fun watching new server stuff. I kinda miss building racks of them.
Wendell!!! Always great when he is on.
Wendell has soft and steady hands, perfect for handling this sort of thing
I do find it sweet that I see this stuff on GN. I'll probably never get into server stuff, but just getting a general understanding of that world is pretty interesting.
I can listen to Wendell's server talk all day, even though I have not and never will use this grade of equipment.
This video was just great. Thanks for sharing with us and having Wendell on showing us just how awesome this thing really is.
It's comforting to know that Wendell is a proud punster.
Cool to see Wendell. Last time I watched him was way back during Tek Syndicate.
Somebody went crazy making this hyper optimized with space AND thermals.
These servers are stacked on top of each other in a rack.
Pretty standard fare from Supermicro.
The crazy thing about that is that a SINGLE EPYC processor can serve the entire system (vs. needing to run dual processors).
I love your dynamic SOOOO much...
Thank you Tech Jesus and WENDELL! :)
We loves us some Wendell! Always a fun time when he brings his toys for show & tell.
I'm in awe of the specs of that machine. It's crazy how in 10 years this video will go from jaw-dropping to funny because of new technology
Always good to see L1T x GN videos!
It takes guts to even approach it. I would be scared of breaking it by accident. VERY efficient use of space and a lot of engineering going to remove the heat from the server. Wow
Amazing intro
I will take two. For heating this winter :) Thumbs up for all those collabs lately.
Customer: We need more density.
Dell/HP/Lenovo: It's not possible, you cannot break the laws of physics.
Supermicro: It's true, you cannot break the laws of physics... but you can bend them.
In fairness, they are solving a different problem. In most data centers these days, the ultimate issue is heat. When you can already max out the heat budget for a rack, making the system denser is not a win. Also, there is a service cost to higher density. HPE also sells support, and the higher density systems require more effort if the technician needs to replace something.
Big fan of L1T and GN. This is great.
Looking forward to the server build series!
Also, ASUS has tool-less M.2 slots!
Yes more Level 1 + GN videos!
I would love to see you put that fan stack into the fan tester 😂
I know this is not GN's cup of tea, but I absolutely love videos of hardware like this 🤘
Bravo! What an incredible machine! I can't wait to fire up those FP32/64 systems!! Kudos to the crew!! So exciting, what the future holds in store!
Good luck!
Steve is so excited and terrified at the same time. Luv it! Great vid
If there's something strange,
In your server room,
Who you gonna call?
CALL WENDELL!!
But what if it's a Wendell that appeared and is strange?
Wendell Wilson of Level 1 Techs on Gamers Nexus? What is this a crossover episode? You love to see it.
That AMD GPU looks so much better than any AMD gaming Radeon reference card!
Because it's server hardware, where AMD makes no compromises: best silicon, best materials, best design. Best of the best, basically.
This is perhaps the greatest collab on YT since GN and Jay did the XOC-"competition"... holy hell, this is pure awesome!
It's really hard for me to wrap my head around this system... absolutely bananas
Wendell is a gem that needs to be protected at all costs!
Wendell, my go-to server channel. Best on YouTube, I feel. Thank you, Steve and Wendell
I want Wendell to take an I.Q. test, I guarantee it's north of 180
Wendell is amazing, I can't wait to see more over at Level1