1u Servers are DEAD! Long Live 2u Servers! But Why? - Ft. Supermicro AS-2114GT-DNR

  • Published Feb 8, 2025
  • Is this what the future looks like? More like, this is what the future will sound like! At least until they make quieter fans. Join Wendell as he goes over his 2u GamersNexus project! Is it a supercomputer? Pretty much! Is it expensive? Yeah, $70,000! Is Wendell like a little kid in a candy shop? You betcha!
    System Specs: www.supermicro...
    GPU Specs: www.amd.com/en...
    GamersNexus Vid: • 3000W AMD Epyc Server ...
    **********************************
    Check us out online at the following places!
    linktr.ee/leve...
    IMPORTANT: Any email lacking “level1techs.com” should be ignored and immediately reported to Queries@level1techs.com.
    -------------------------------------------------------------------------------------------------------------
    Intro and Outro Music: "Earth Bound" by Slynk
    Other Music: "Lively" by Zeeky Beats
    Edited by Autumn

COMMENTS • 310

  • @SimmiesSchrauberChannel
    @SimmiesSchrauberChannel 2 years ago +357

    One day apart in 2 videos: Linus "I want all my LAN-PCs in 1U, so I don't waste 1 rack-slot" - Wendell "1U is dead cause 2U is more efficient" XD

    • @handlealreadytaken
      @handlealreadytaken 2 years ago +44

      Enterprise server vs bespoke gaming chassis. However, not sure why Linus didn't just get a second rack, move the networking equipment, and do five 4U chassis to avoid the headache. Those are easy to obtain and let him run more common components.

    • @Blustride
      @Blustride 2 years ago +48

      In fairness, Linus isn't using the chassis fans for any significant amount of cooling, so that negates half of the reasons Wendell suggests that 1u is dead.

    • @wiziek
      @wiziek 2 years ago +63

      Linus isn't a technical person.

    • @EminemLovesGrapes
      @EminemLovesGrapes 2 years ago +27

      @@wiziek Nowadays he basically outsources all of the knowledge and throws either his money or his influence at the wall.

    • @Mallchad
      @Mallchad 2 years ago +9

      @@handlealreadytaken His ideas were unsustainable and ended up in
      "I need 1 rack per computer", which pretty quickly devolves into an explosion of racks...
      Prob best not to buy a new rack every time he has a new idea :P

  • @JoshLiechty
    @JoshLiechty 2 years ago +237

    Having spent some time with multi-node chassis-based systems like this, my vote for a collective noun for a group of servers goes to "a cacophony."

    • @MiIIiIIion
      @MiIIiIIion 2 years ago +116

      Alternatively: "A tinnitus of servers".

    • @Level1Techs
      @Level1Techs  2 years ago +69

      I am getting such a kick out of these replies

    • @waterflame321
      @waterflame321 2 years ago +25

      How about a "whatt?!" Because you can't hear anything over the fans

    • @johnmijo
      @johnmijo 2 years ago +5

      A *MULTIPLICITY* of Nodes/Servers ?

    • @jannegrey
      @jannegrey 2 years ago +10

      "Nuisance" or "Pain in the Ass" sound about right for when you have to troubleshoot them. For those rare times when everything is okay? Hairdryers is already taken by some GPUs. And in US English IDK any short word for vacuum cleaner. But when you have a whole rack of them you certainly need some protective platforms, like on aircraft carriers when jets are taking off. When those fans spin up on every unit at the same time, you have the most important building block of a wind tunnel. And yes - there are wind tunnels (or at least wind simulators) that use a lot of PC fans, so that you can control the flow and strength of the wind with good granularity and create uneven wind to simulate, for example, an urban environment.

  • @GeoffSeeley
    @GeoffSeeley 2 years ago +108

    @1:39 the 1U servers aren't dead, they're just huddled together in 2U chassis for warmth.

    • @Jamesaepp
      @Jamesaepp 3 months ago

      In a nutshell: 1U chassis is dead, long live 2U chassis.

  • @UntouchedWagons
    @UntouchedWagons 2 years ago +29

    A gaggle of those servers would certainly murder my power bills, and my ear drums.

  • @jacobnoori
    @jacobnoori 2 years ago +16

    Finally, more server content! Please make them more frequently!

  • @PhoeniXfromNL
    @PhoeniXfromNL 2 years ago +48

    it's always nice when Wendell is excited about something

  • @johntotten4872
    @johntotten4872 2 years ago +5

    Legend has it headphone users ears are still bleeding.
    A scream of servers?

  • @Gilgwathir
    @Gilgwathir 2 years ago +4

    Wendell doing the sillies when he's excited 🙂 Love it! Also the plural of servers should be a sounder of servers (a group of wild boar is called a sounder) because they make such a racket!

  • @TwistedD85
    @TwistedD85 2 years ago +23

    I know I'll probably never get to work with anything like this, but it's still fun and interesting to watch. It's like I'm on a field trip to a data center and the technician is trying to make everything fun and engaging for the students :D

    • @robr4662
      @robr4662 2 years ago +11

      You may not be able to afford this but used enterprise stuff can be had extremely cheap and you can have almost as much fun. ;-)

    • @morosis82
      @morosis82 2 years ago +4

      Some of the older X10 platforms from Supermicro are getting somewhat affordable these days; the Twin family of servers isn't crazy anymore.

  • @wyattarich
    @wyattarich 2 years ago +1

    Every time I see a new upload, I'm excited. I can't say the same about ANY other channels on YT. I love what you're doing Wendell-never stop!

  • @Chloiber
    @Chloiber 2 years ago +7

    We have had a few multi-node chassis from Supermicro running for several years. Mainly 2U QuadNodes (I believe TwinPros).
    While having multiple nodes so densely packed in a single chassis is great, it comes with a major downside:
    The nodes often share a single backplane (which is partitioned). So if you have a failure there, you are screwed. Additionally, if you have an issue with an onboard controller, you are screwed as well: you need to replace the whole node, as you cannot simply install a backup RAID card / HBA.
    While yes, these things are great, you should be aware of the downsides to some of these models. Ours always ran great without any issue until I bricked an onboard controller - after half a day, and many tries, I was able to recover it but it made me very aware of the downsides :-)

    • @Loanshark753
      @Loanshark753 1 year ago

      @Chloiber Do you know if server racks with shared PSUs and cooling fans exist, to centralize components? Maybe one standard-height rack with two nodes per U and three or five shared PSUs. For further energy optimisation the systems could be liquid cooled and the rack could be powered by 400 volt direct current.

    • @jfbeam
      @jfbeam 1 year ago

      Everything is built in these days. You're lucky if you can replace a processor or memory. (and now there's Stupid(tm) to prevent changing the processor.)

  • @keithpetrino
    @keithpetrino 2 years ago +4

    A racket of servers. A reference to the fact that they're in racks but also to the noise.

  • @MrLamrod174
    @MrLamrod174 2 years ago +3

    A serfdom of servers 😅
    Also, I hope you had hearing protection while in your comms room! That node was SUPER loud!

  • @halbouma6720
    @halbouma6720 2 years ago

    I gave up thinking about dense 1U servers myself over a decade ago because I'd run out of power long before rack space in every cabinet. Even in this video you're not able to plug in more than one of these into your circuit lol. So I standardized more on 2U setups for all the reasons you gave: fans for airflow, more room for storage and cards, or GPUs, etc. Plus it's easier to work on than some ultra-dense 2-servers-in-1U setup. Thanks for the video!

  • @survey1010
    @survey1010 2 years ago +15

    Thoughts on doing walk-through of your data center / "server room"? Would be interesting to see what you're running for day-to-day.

  • @dismafuggerhere2753
    @dismafuggerhere2753 2 years ago +8

    a whole restaurant of servers ?
    I'll show myself out

    • @acubley
      @acubley 2 years ago +2

      You got a gen-u-wine laugh out of me!

  • @jackhildebrandt7797
    @jackhildebrandt7797 2 years ago +3

    Dang, I was excited for Wendell to look at one of the Cray EX liquid cooled nodes.

  • @Dan-Simms
    @Dan-Simms 2 years ago

    Clicking the link and commenting here for your engagement. Cheers bud, keep up the great work!

  • @mtothem1337
    @mtothem1337 2 years ago +48

    I get that it's not really your thing. But I think many of us would be interested in seeing builds like these, but which are optimized for energy efficiency / low noise instead.

    • @Blacklands
      @Blacklands 2 years ago +4

      (Is your avatar Lain with a crown of roses??)
      Also yes, I would like to see that. I think a bunch of us (maybe even the majority?) don't have a noise-insulated server room at home!

    • @jmwintenn
      @jmwintenn 2 years ago +4

      the server room is built to contain the sound. they don't care how loud the servers are as long as vibration is controlled.

    • @morosis82
      @morosis82 2 years ago +5

      @@jmwintenn sort of true, but systems that need fans running at full speed constantly spend a lot of power budget on cooling and not computing.

    • @bernds6587
      @bernds6587 2 years ago

      @@morosis82 Well, having the fans at 100% all the time makes no sense, whether for power efficiency or for wear, especially on the bearings. When Wendell entered the server room, you could hear one of the servers constantly cycling back and forth between two fan speeds -> not full fan speed.
      When the "new" one gets turned on, the fans spin up to full speed (PCs do that, too) and then reduce that speed after successful initialization.
      For fan speeds in general: a certain minimum fan speed is necessary so the fans can spin at all. I've never seen a 10k RPM fan able to spin at 1k RPM. (1U server fans can go up to over 20k RPM)
      The combination of density and heat production makes such loud and truly "moving" fans necessary.

    • @im.thatoneguy
      @im.thatoneguy 2 years ago

      @@bernds6587 unfortunately Supermicro doesn't have good fan curve controls... because they don't care.
      I had to write an IPMI hack script which does it on our NVMe server because they offer no customization.
      Their solution is "Oh, it's 1C over threshold? Time for 100% fan until it's cool enough, and then back to 25% for 5 minutes" - way more irritating than keeping the fans a little higher and holding steady.
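
A minimal sketch of what such an IPMI workaround can look like. This is a hypothetical script, not the commenter's actual one: the 0x30 0x70 0x66 raw sequence is the commonly reported Supermicro OEM duty-cycle command, but zone numbering and firmware support vary by board, so verify against your own BMC before using it.

```python
import subprocess

# Supermicro OEM raw command bytes (assumption: X9/X10/X11-era boards;
# widely reported in the field, not officially documented).
RAW_DUTY_CYCLE = ["0x30", "0x70", "0x66", "0x01"]

def duty_cycle_cmd(zone: int, percent: int) -> list:
    """Build the ipmitool invocation that pins one fan zone's duty cycle."""
    pct = max(0, min(100, percent))  # clamp to a sane 0-100% range
    return ["ipmitool", "raw"] + RAW_DUTY_CYCLE + [hex(zone), hex(pct)]

def set_fan_speed(zone: int, percent: int) -> None:
    """Apply the duty cycle. The BMC fan mode must be set to 'Full' first
    (ipmitool raw 0x30 0x45 0x01 0x01), or the BMC will fight the setting."""
    subprocess.run(duty_cycle_cmd(zone, percent), check=True)

# Usage on the server itself (needs ipmitool installed and root):
#   for zone in (0, 1):           # 0 = CPU zone, 1 = peripheral zone
#       set_fan_speed(zone, 50)   # hold both zones at a steady 50%
```

Holding a steady duty cycle this way avoids the 25%-to-100% bounce described above, at the cost of having to watch temperatures yourself.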

  • @MazeFrame
    @MazeFrame 2 years ago +2

    9:42 You can feel the current limiting making the fans start up slowly! Beauty!

  • @velo1337
    @velo1337 2 years ago +2

    it also comes down to whether you are single tenant or multi tenant and how the SLAs are structured. those 1Us are damn cheap, we swap them out like underwear :) they are also very interesting if the stuff you run doesn't need a lot of compute, like webservers and such. for database servers you are usually running 4U servers since you need the PCIe slots

  • @LiLBitsDK
    @LiLBitsDK 2 years ago +3

    watching Wendell booting up a server being blasted by the air is like watching a kid in a giant candy store for the first time in their life :D

  • @t.m.grokas6832
    @t.m.grokas6832 2 years ago

    I paused @7:23 and accidentally discovered your next video's thumbnail. Editor Autumn, you're welcome.

    • @Level1Techs
      @Level1Techs  2 years ago +1

      That was actually one of the contenders for this video lol! Fun fact, all the thumbnails are created with assets from the video it is being made for. ~ Editor Autumn

  • @ajr993
    @ajr993 2 years ago +3

    Both HPE and Dell sell a lot of servers in the 1U form factor. For example, the HPE ProLiant line has a lot of cheaper 1U configurations like the DL325. No, it's not used in a datacenter, but there's a huge use case for racks outside of a data center. Enterprise customers need racks even when they don't have an entire datacenter. 1U is not dead at all in the SMB space.

  • @nukedathlonman
    @nukedathlonman 2 years ago +1

    Big agreement - a 2U chassis with 2U redundant PSUs and a full 2U cooling system combined with doubled 1U internals makes much more sense for space utilization and redundancy.

  • @willcurry6964
    @willcurry6964 2 years ago

    You always have great informative videos. Some are a little too complex for me, a non-IT guy. I now know I need a chassis (not rack mount) server and the server should have E1.S drives... maybe start with 6-7 TB drives... don't know where to buy.

  • @TheClumsySpectre2
    @TheClumsySpectre2 2 years ago +5

    Do you think eventually we'll move to 4U equivalents? That way one power supply failure would still leave 3 PSUs for 4 systems, which would proportionally offer more power per system and offer redundancy even with one unit down. Could also use fans that were larger again.

  • @kevlarandchrome
    @kevlarandchrome 2 years ago +20

    I love how the sound of the fans comes together for a kind of screams of the damned from far away in old horror movies sound, very season appropriate. The hardware's pretty damned dope too.

    • @jimecherry
      @jimecherry 2 years ago +1

      banshee fans

    • @ghostbirdofprey
      @ghostbirdofprey 2 years ago +1

      Suddenly I wonder if there's a supercomputer or other cluster named "Banshee"

  • @wskinnyodden
    @wskinnyodden 2 years ago +1

    So Server Cadres based around 1U Servers are going the way of the Dodo and instead we'll have some sort of Irish based Server Cadre Datacenters around "U2" nodes :P

  • @killerful
    @killerful 2 years ago +1

    "Definitely think you'll find that appealing"
    god fucking dammit😂

  • @Verhagenvictor
    @Verhagenvictor 2 years ago +5

    Wendell, my first thought on this was "huh, that kinda looks like a horizontal blade setup". What are your thoughts on that comparison? Are blades going to make a comeback?

  • @andljoy
    @andljoy 2 years ago +4

    9:41 Sounds you don't want to hear when you are at the back of a messy rack. Happened to me last week when I was trying to clean up some old shit at the back of a rack and all of a sudden our Pure Storage starts sounding like a jet taking off as I knocked a PSU out :D.
    This server just screams VDI at me.

  • @llortaton2834
    @llortaton2834 2 years ago +1

    AHAH, jokes on you wendell, my 4U ATX compliant consumer grade server will NEVER DIE :D

  • @boomerau
    @boomerau 2 years ago

    I've also seen the side-by-side HP Left & Right GPU 4RU servers. Basically this is a change in blade chassis form factor and capital investment.

  • @BigHeadClan
    @BigHeadClan 2 years ago

    One of my past clients consolidated down from about 40 racks to 20 by snagging a few c6000 blade chassis and virtualizing a lot of their older hardware. 16 bays for servers per chassis in 10U of rack space is some pretty solid density. This type of 2-node setup probably makes more sense from an engineering perspective, but I always appreciated how scalable the blade chassis design was.
    If you have a free bay to populate or upgrade one of the blades, you just plop the new one in and away you go. No need to re-rack or fiddle around with rails, re-run cables, etc. That said, it does suffer from the size restrictions of a blade chassis, which is even smaller than a 1U server, so fan pressure and the other issues Wendell raised are still a problem.

    • @jfbeam
      @jfbeam 1 year ago

      His systems are for massing GPUs. This little 2U thing is one of the few ways to do that without having to sell body parts. For you and me, who care about general purpose computing, blades have been the way to go for decades. (but it does often mean settling for vendor lock-in, and once they know you're on the hook, the deep discounts go away.)

  • @MarkRose1337
    @MarkRose1337 2 years ago +12

    1u never made sense to me for the reasons mentioned for going 2u in this video. Take it to its logical extreme though and you're back to blades of some sort!

    • @christopherjackson2157
      @christopherjackson2157 2 years ago +1

      It arguably could have made sense in some extreme circumstances back when Intel was limiting everyone to 4 cores per socket. For customers looking to run a couple of hundred or thousand cores it could save them the cost of building a new physical space. But that was quite a while back now lol.

    • @Cynyr
      @Cynyr 2 years ago +2

      Everything old is new again.

  • @somehow_not_helpfulATcrap
    @somehow_not_helpfulATcrap 2 years ago +3

    What do you hear when you put your ear up next to a 1U server fan?
    Nothing from then on.

  • @nihalrahman7447
    @nihalrahman7447 2 years ago +15

    Wendell and LTT's Anthony should collab. Talk about general server stuff, Linux distros, and how to dominate the world.

    • @joemarais7683
      @joemarais7683 2 years ago +7

      That’ll never happen. The powers that be would never let that much nerd power collect in one room

    • @alexmartinelli6231
      @alexmartinelli6231 2 years ago +3

      That would be EXTREMELY cool. Hope it happens someday

  • @Dexerinos
    @Dexerinos 2 years ago +1

    I saw that!!! You didn't screw in the rail screws :P

  • @Paktosan
    @Paktosan 2 years ago +1

    So this basically is the comeback of the BladeServer just on a smaller scale?
    We still have a six-blade system from Intel in the basement for testing purposes, some features are really cool. Failed node? No worries, the chassis will automatically relocate the virtual drive to a spare blade and boot it back up, almost no downtime.

    • @JaeTLDR1
      @JaeTLDR1 1 year ago +1

      Blades share way more. This is just power and cooling being shared

  • @goblinphreak2132
    @goblinphreak2132 2 years ago

    I just realized the music you use gives me "contraption zack" vibes. if you remember that game from the dos days.

  • @nicholaswoods9066
    @nicholaswoods9066 7 months ago

    Thank you for the informative video,
    Cheers mate

  • @declanmcardle
    @declanmcardle 2 years ago +2

    @8:20 - "it's an older cord, but it checks out..."

  • @TheBitKrieger
    @TheBitKrieger 2 years ago +2

    So we came full circle and blade centers are cool again?

  • @KangoV
    @KangoV 2 years ago

    They are the same cables I have throughout my house :) Cool video :)

  • @majstealth
    @majstealth 1 year ago

    this will be a cramped and warm hot-aisle job, maintaining these

  • @R055LE.1
    @R055LE.1 2 years ago +3

    Haven't blades been following this principle for like.. ever?

  • @zector0
    @zector0 2 years ago +1

    Imagine how his mind will explode the first time he sees a BladeCenter.

  • @bret44
    @bret44 2 years ago +2

    Is there a spot for a fourth GPU? Frontier says it uses 4 GPUs per CPU; is this the same chassis? Also, what is meant by "Frontier has coherent interconnects between CPUs and GPUs" (Wikipedia)? Are these interconnects physical?

  • @leviathanpriim3951
    @leviathanpriim3951 2 years ago

    Wendell and Steve: sit down, nerds, the chosen ones are on screen

  • @losttownstreet3409
    @losttownstreet3409 2 years ago

    Floor space was the limiting factor a long time ago; now you can put a board together with off-the-shelf components, send it to a pick-and-place factory in China, and you'll get your custom board if you are really tight on space. Now power and cooling are the most limiting factors. Think a few years back, when you had to offer each and every customer a full server, as virtualization wasn't a big factor. Now you run 100-400 virtual servers in a 2-4U unit. Before this you put as many FPGAs (those $10,000-$200,000 chips) in one case as you physically could, and if you really wanted huge loads you could always press the real-out button in Xilinx Vivado. Now you have access to virtual cloud F1 instances ($8,000-$50,000 CPUs) and virtual cloud GPUs.

  • @Phynix72
    @Phynix72 2 years ago +1

    Reading your thumbnail, Linus is crying over his recent build. From a far continent I can hear "Why, Wendell? Why?"🤣

  • @markmulder996
    @markmulder996 2 years ago +2

    And here is Linus (LTT) just now building five 1u gaming systems ;)

    • @СузаннаСергеевна
      @СузаннаСергеевна 2 years ago +1

      To be fair a gaming computer doesn’t need redundancy or anywhere near as much cooling, which is what this video is about. Linus outsources the cooling to an external radiator anyway.
      Linus’ new gaming computer is stupid for many reasons, and while the 1U rack case is definitely one of them, a 2U case wouldn’t have been any better. The issue there is insisting on stationary PCs in the first place.
      The premise of the video was that he needed something unobtrusive for his children to game on. Instead of a server closet we know he won’t take proper care of, the solution is to just get them macbooks with thunderbolt docks instead. Plug it in at home and it’s a decent gaming rig, bring it to school and it’s a good study computer. With actually good parental controls. Unless you actually need a full-power workstation, desktop PCs are almost never the right answer today.

    • @markmulder996
      @markmulder996 2 years ago

      @@СузаннаСергеевна I know, the timing is just funny. One day Linus is building five 1U gaming rackmount systems, and the day after there's Wendell saying 1U is dead :)
      But of course it's two entirely different situations, especially since Wendell is talking enterprise, and Linus, as advanced as it may be, is still talking about home usage.

  • @andreas7944
    @andreas7944 2 years ago +1

    If Wendell says it - I believe it. He might be wrong, but do I really care? It comes down to opinion, and his arguments are reasonable. That is all I care about. Please, Wendell, try having as many children as you can. We need more people like you.

  • @AlwaysStaringSkyward
    @AlwaysStaringSkyward 2 years ago

    @Level1Techs serious question: why are we using PSUs in servers? We used to have rack or cage level DC power fed to the servers on DC busses. It was safe, centralised, efficient and could be triple redundant. It left 100% of the space in every server for doing work and every server could be yanked out for maintenance without affecting the others.

  • @tvmcrusher
    @tvmcrusher 1 year ago

    7:41 From here on out you can hear the maddening sound of an SCP being nearby.

  • @JW-uC
    @JW-uC 2 years ago +1

    Isn't it just a cut-down 2U-style "blade server" box? Obviously the blades in this 2U are horizontal and the original blades were vertical (with 8+ blades) and, if I recall, didn't have space for a graphics card... but still.
    That said, I guess if you put the thing on its side and made the "box" square and then had space for multiple "blades" you'd still not get any extra density because you'd still need multiple sets of redundant power supplies. As backplanes are much less of a thing now, with such high speed serial network cards, you'd also not gain much if you used some kind of backplane system either.

  • @solidreactor
    @solidreactor 2 years ago +4

    Is there a benefit to go even further with a "4U 4-Node" configuration? Or are there some diminishing returns after a 2U 2-Node config?

    • @WilReid
      @WilReid 2 years ago +4

      The returns are virtually fully realized with 2U because it gets you 89mm of height for decent-sized fans. 3U would get you 120mm, but servers rely so much more on pressure that going up from 80mm to 120mm fans would see very little benefit. Noise reduction would be most of it, and the industry has already come to terms with noise from racks.
      3U or taller would get you full PCI card height perpendicular to the mainboard, but angle adapters and risers have gotten around that for a decade now.
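
The fan-height figures in this reply fall straight out of the EIA-310 rack-unit standard (1U = 1.75 in = 44.45 mm); a quick sanity check:

```python
# Rack-unit arithmetic behind the fan sizes quoted above (EIA-310: 1U = 1.75 in).
RACK_UNIT_MM = 44.45

def chassis_height_mm(units: int) -> float:
    """Nominal external height of a chassis occupying `units` rack units."""
    return units * RACK_UNIT_MM

# Largest standard square fan that nominally fits each chassis height
# (real clearance is a few mm less once the chassis walls are counted).
for u, fan_mm in ((1, 40), (2, 80), (3, 120)):
    assert chassis_height_mm(u) >= fan_mm
```

So 2U gives ~89 mm (room for 80 mm fans) and 3U gives ~133 mm (room for 120 mm fans), matching the numbers in the reply.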

  • @ETtheOG
    @ETtheOG 2 years ago +2

    A "Banquet of Servers" maybe :o?

  • @probusen
    @probusen 2 years ago +3

    Redundancy is everything; 7x HPE DL360s with dual 800W PSUs have been a lifesaver many times. EPYC 24-core, 512GB of RAM, and 6x 1.92TB of storage in vSAN. No, 1U servers will live a long time. :)

    • @jfbeam
      @jfbeam 1 year ago

      No *modern* 1U server will live a long time. (I have plenty from the long long ago that still work perfectly. But they don't draw more power than my entire neighborhood.)

  • @movax20h
    @movax20h 1 year ago

    The thing is, if you colocate and use a lot of power, it does not really matter if you use 1U or 2U; it is going to cost you almost the same, because the primary cost will be power.
    If you have a colo or DC that allows delivering a lot of power to the rack, then it is not about optimizing cost, but rather just a quest for how many you can put in a single rack or a few close racks, so they are all connected over a very fast network.
    I rent a rack in Germany, and I am limited by power and network. I cannot put in more servers, because I do not have enough power in the rack, or ports in the switches. I even have a few empty units, because I am basically at the limit. I cannot switch everything from 1U to 2U, but if I can cram more into 1U by upgrading to higher density, or replace 2x1U with a 2U that actually is more efficient, I will definitely do it. We use a lot of Kubernetes for compute, Ceph for storage, and a few hosts for virtualization (Proxmox).
    2U dual node is definitely more interesting than blade systems. Blade systems were always too expensive, requiring too much licensing and special setups. A hybrid like this, without an expensive chassis, is perfect.

  • @airman_85uk
    @airman_85uk 2 years ago +1

    Would be nice to know what kind of use cases we could use these servers for in 5/6 years when they get decommissioned and get into the hands of homelabs….

    • @muadeeb
      @muadeeb 2 years ago

      I have an old 4 node system that I use as a Virtualization cluster

  • @technicalfool
    @technicalfool 2 years ago

    Always thought "fleet" was already a thing for servers, though maybe a "flight" given they make so much noise you'd think they're going to take off any moment.

  • @JamieStuff
    @JamieStuff 2 years ago +1

    If rack mount, is it "a scream of servers"???

  • @Technopath47
    @Technopath47 2 years ago

    All I can think is that the Frontier supercomputer shares a name with the worst ISP I've ever had the misfortune of dealing with.

  • @MarkRose1337
    @MarkRose1337 2 years ago +8

    Well, a server is a box, the plural of which is boxen. And two oxen are called a yoke. So that server could be a yoke of boxen. But I suppose for more than two it would be a herd. A herd of boxen.

    • @AndirHon
      @AndirHon 2 years ago

      box·​en | \ ˈbäksən \
      Definition of boxen
      archaic
      : of, like, or relating to boxwood or the box

    • @MarkRose1337
      @MarkRose1337 2 years ago +1

      @@AndirHon I prefer the Jargon file definition:
      boxen: pl.n.
      [very common; by analogy with VAXen] Fanciful plural of box often encountered in the phrase ‘Unix boxen’, used to describe commodity Unix hardware. The connotation is that any two Unix boxen are interchangeable.

  • @silverphinex
    @silverphinex 2 years ago +3

    I can't be the only one who finds the tone of server fans after they come down from full tilt and settle at that lower volume very peaceful. I have fully fallen asleep sitting next to a full rack of servers with their fans at that nice low drone.

    • @raven4k998
      @raven4k998 2 years ago +1

      well, that's why you don't sleep next to that thing, cause all it takes is a heavy workload on that thing to wake you up in the middle of the night🤣🤣

    • @KomradeMikhail
      @KomradeMikhail 2 years ago

      I fell asleep on a helicopter flight....
      You can get used to anything over time.

  • @Skungalunga
    @Skungalunga 2 years ago

    So basically we're moving back to blade chassis?

  • @Maxw3llTheGreat
    @Maxw3llTheGreat 2 years ago

    This one server, in 2u of rack space has more compute power than my entire house with several servers and gaming desktops in it

  • @DMSparky
    @DMSparky 2 years ago +2

    I’m sorry in advance.
    But can it run Crysis?

  • @GooberBrainTrollingCorp
    @GooberBrainTrollingCorp 1 year ago

    7:40 THIS LOOKS AND SOUNDS LIKE AN INTRO TO A HORROR MOVIE

  • @red5standingby419
    @red5standingby419 2 years ago

    Ok but there are different use cases and needs for servers. We aren't just deploying multi-gpu compute units in the data center. I'm sure 1U will continue to be a thing just fine for a very long time to come.

  • @chrisbaker8533
    @chrisbaker8533 2 years ago +2

    I like the compute density, but that backwards mounting is a deal killer for me.
    Given how much of a 'rats nest' the rear of a server rack often is, I really don't think I want to deal with that every time I have a failure or need to do something with it.

  • @SlurP667
    @SlurP667 2 years ago

    *opens server room door* I can hear the children screaming!

  • @jp-ny2pd
    @jp-ny2pd 2 years ago +1

    Personally I'm a fan of the Supermicro MicroCloud servers for our colo. We deploy the 8-node configuration because we like being able to swap the drives without downing the node or running into spacing issues with PDUs in the back of the rack. The 12 and 24 node solutions are nice but a bit more of a pain to do any sort of maintenance on and less tolerant of rack configurations.

  • @jfbeam
    @jfbeam 1 year ago

    2U has always been more efficient... a 2U fan can simply move more air - period. My former employer resisted this almost to their last breath. With two 150W CPUs in the box, their hand was forced. Originally, the only 2U boxes existed because that was the only way to get 2 power supplies, but there are plenty of tiny PSUs these days. (the system shown here _could_ be done in 1U, as there are 1kW 1U PSUs, but air cooling it would be difficult.)
    (To do 1U for our systems would require a load of 15k RPM fans - $30/ea, not $3 - and they'd last a year, not 3-5. And they needed solid copper heatsinks, which were 100x more expensive than aluminum.)

  • @dangerwr
    @dangerwr 2 years ago

    (Australian accent) And here we see a wild Wendell in his natural habitat.

    • @timrattenbury4768
      @timrattenbury4768 2 years ago +1

      Just amazing ain't he

    • @dangerwr
      @dangerwr 2 years ago

      @@timrattenbury4768 He's fucking adorable.

  • @VjSky
    @VjSky 9 months ago

    Isn't this the idea behind blades?

  • @computersales
    @computersales 2 years ago

    With the prevalence of multi-node servers, I agree that 1U servers are a dying breed. I am guessing you already covered the topic, but do blade servers really have a purpose these days?

  • @mhavock
    @mhavock 2 years ago +1

    We've been using 2U for a while. 1U is for hardware and the other is for making grilled cheese sandwiches, and the top is for hot drinks or a hot plate. Boss thinks we are always busy; yeah, we are busy running Prime & disktest so the food cooks faster. LOL 🤣

  • @Cadaverine1990
    @Cadaverine1990 2 years ago

    The 2U is honestly dead too; the datacenter I work with is moving completely to HPE Synergy 12000 Frames. These can be configured with 12 blade modules, each hosting dual 28-core Xeons with up to 4.5TB of RAM and a T4 accelerator card. Thus 10U will hold 24 28-core Xeons, 54TB of RAM, and 12 T4 cards. Everything runs on VMs, and in the networking of the unit everything has zero trust between the internal machines.
    If the size of the datacenter is a concern, though, they should be looking into 52U racks. Just doing this will increase the capacity of your site by around 25%.

    • @jakevanvliet
      @jakevanvliet 2 years ago

      A 1RU Intel server (thinking Dell PowerEdge R650) can have 2x 40-core Xeon Platinums, 8TB RAM, 3x T4s or A2s, and dedicated 4x 25Gb Ethernet. In 10RU, that's 800 cores (40 cores x 2 sockets x 10 servers), 80TB RAM, 30 GPUs, and 100Gb of dedicated networking per node.
      Different scenarios and use cases call for different requirements. 1RU servers are not dead. 2RU servers are not dead. Blades are not dead. None of them should die - having all of them gives you the ability to pick a solution that best fits your environment.
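
[Editor's note: the density figures traded in this exchange are internally consistent. A quick tally, using only the numbers quoted in the two comments above:]

```python
# Per-10RU totals implied by the two configurations quoted in the thread.

# HPE Synergy 12000: 12 blades in a 10U frame,
# each with dual 28-core Xeons and up to 4.5 TB RAM.
synergy_cores = 12 * 2 * 28   # 672 cores
synergy_ram_tb = 12 * 4.5     # 54 TB

# Dell PowerEdge R650 (1U): dual 40-core Xeons and 8 TB RAM per node,
# so ten of them fill the same 10RU.
dell_cores = 10 * 2 * 40      # 800 cores
dell_ram_tb = 10 * 8          # 80 TB

print(f"Synergy 12000 frame: {synergy_cores} cores, {synergy_ram_tb:.0f} TB RAM per 10U")
print(f"10x PowerEdge R650:  {dell_cores} cores, {dell_ram_tb} TB RAM per 10U")
```

So on raw core and RAM density alone the 1U nodes actually win here; the blade frame's case rests on shared power, cooling, and management, as other comments note.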

  • @todayonthebench
    @todayonthebench 2 years ago +1

    In short: the main advantages of blade systems are still relevant -
    shared redundant power and cooling.
    Though blade systems also tend to toss in shared management as well as networking.

  • @casperghst42
    @casperghst42 2 years ago

    Whatever happened to the Dell chassis with 4 nodes in them?

  • @asdkant
    @asdkant 2 years ago +1

    A whole restaurant of servers?

  • @Oil_of_Hope
    @Oil_of_Hope 2 years ago +1

    At start-up it sounds like an air-raid siren from WW2 😃

    • @acubley
      @acubley 2 years ago

      The V-8 powered ones?

    • @Oil_of_Hope
      @Oil_of_Hope 2 years ago

      @@acubley ua-cam.com/video/WgaCNEQzL1Q/v-deo.html

    • @acubley
      @acubley 2 years ago

      @@Oil_of_Hope Ah, ty, thought you might mean ua-cam.com/video/l04qWEEPFEk/v-deo.html

  • @chrsm
    @chrsm 2 years ago

    Sounds like my colleague's laptop with a "couple" of chrome tabs open

  • @prashanthb6521
    @prashanthb6521 2 years ago +2

    4U with silent 120mm fans would be nice.

    • @Blacklands
      @Blacklands 2 years ago +1

      There's a bunch of cases on the market for this now! Some even support liquid cooling. Sliger makes some (expensive though).

  • @GameCyborgCh
    @GameCyborgCh 2 years ago +1

    a full restaurant of servers

  • @DelticEngine
    @DelticEngine 1 month ago

    How about a Squad of Servers?

  • @beauslim
    @beauslim 2 years ago

    This is definitely a "why didn't they think of this before" thing. Fans are why 3U is my favourite form factor for DIY rack-case builds. Unfortunately, 3U is kind of a rarity.

    • @cynicaloutlook
      @cynicaloutlook 2 years ago +1

      They have thought of this before, and at even higher density. Dell's current lineup includes the PowerEdge FX, which has 4 slots (half-width 1U blades), but the concept goes back a few years to the PowerEdge M-series.

  • @deilusi
    @deilusi 2 years ago

    IMHO, 1U servers are a legacy from an era when the CPU and everything else used 150W total, with 24 PCIe lanes tops. Right now, 1U is just left for networking and any nodes that don't have to go full bore, and the biggest ones will move to bigger chassis. IMHO 3U will be the next popular size, as it's a compromise between the two previous systems, packed full of devices, either disks or GPUs. Something like mining racks, but standardized as plug and play.
    Whatever happens, I will raise a toast to the death of those 1U-sized screaming monsters; let them burn in hell.

  • @magnawavezone
    @magnawavezone 2 years ago +2

    I'd agree if you need GPUs in your servers, but that's still a niche use case. Otherwise, not much changes that I can see. People have been cramming super-hot CPUs into 1U for a long time, and they will continue to do so; nothing has really changed. Of course, that's assuming you don't just move to AWS or GCP.

    • @jfbeam
      @jfbeam 1 year ago

      It's not as niche as it used to be.

  • @Deveyus
    @Deveyus 2 years ago +1

    Plural of servers? A Ruckus.

  • @KingTheRat
    @KingTheRat 2 years ago +1

    HP C7000 has entered chat

  • @danmenes3143
    @danmenes3143 2 years ago

    Well, with those fans, a "cacophony" of servers? Maybe a "din" of servers?

  • @Timi7007
    @Timi7007 2 years ago +1

    Blade servers all over again^^

  • @212helpdesk
    @212helpdesk 2 years ago

    Would these be still called "blades" in a chassis?

  • @TheBackyardChemist
    @TheBackyardChemist 2 years ago

    I mean, blade servers have already been a thing since... forever?

  • @djmkrr
    @djmkrr 2 years ago

    I figure 1U spaces will still be used in racks at the very least

  • @uncivil_engineer8013
    @uncivil_engineer8013 2 years ago

    A Butler's Pantry of servers