I'd love to see someone do some smoke testing with different types of servers' airflow baffles to visualize the airflow to different parts of the system. These 1U systems are often so dense with so many tiny high pressure fans...
I am worried that if we did that we would need to use different lids. The lids often have holes, so we would change airflow unless everything was perfectly molded/replicated. The other challenge is that many of these will be liquid cooled within 1-3 years. :-)
@@ServeTheHomeVideo true, true... now I guess I'll have to ask all server vendors to start making transparent lids 🤪
@@JeffGeerling I investigated smoke testing on desktop cases some years back, and found that all of the then-current "smoke" variants contaminate whatever they blow across to the extent that major cleanup is needed after testing. I wonder if there are now compounds that would make this reasonably non-messy?
@@d00dEEE LN2
@@d00dEEE A sacrifice must be made
My doctor suggested I watch STH videos for enthusiasm therapy. :P
You are the best, man! Love your personality.
Ha! Thanks.
Great review, In depth as always.
Glad you liked it!
For a server in "the home", I would love to know noise vs load. HPE has some servers that are great at fan speed management so that a 20 percent load doesn't blow my eardrums. Some choose up go full out, all the time.
Home is the /home/ directory in Linux. IPMI is the way to control fan speeds. You would not want a high-end 1U server in earshot though.
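For anyone who wants to try the IPMI route, here is a minimal sketch of reading and setting fan speeds with ipmitool from Python. The raw fan-control opcodes are the widely shared, unofficial Dell iDRAC ones (they do not work on HPE iLO, and can vary by firmware); the BMC address and credentials are placeholders.

```python
# Minimal sketch: fan monitoring and manual fan control over IPMI.
# The raw opcodes below are the commonly documented but UNOFFICIAL
# Dell iDRAC ones -- vendor- and firmware-dependent, shown as an example.
import subprocess

BMC = ["ipmitool", "-I", "lanplus", "-H", "10.0.0.50",
       "-U", "admin", "-P", "password"]  # placeholder BMC address/creds

def ipmi(*args: str) -> str:
    """Run one ipmitool command against the BMC and return its stdout."""
    result = subprocess.run(BMC + list(args), check=True,
                            capture_output=True, text=True)
    return result.stdout

# Standard IPMI: list fan sensors from the SDR (works on most BMCs).
print(ipmi("sdr", "type", "fan"))

# Dell-specific raw commands: switch to manual fan control, then set all
# fans to ~20% duty cycle (0x14 == 20 decimal). Unofficial interface.
ipmi("raw", "0x30", "0x30", "0x01", "0x00")
ipmi("raw", "0x30", "0x30", "0x02", "0xff", "0x14")
```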
@@ServeTheHomeVideo Heh, I do, right now. My wife doesn't even mind it. HPE DL360 G7. I've been thinking of replacing it.
You shouldn't look at 1U if you care about your eardrums.
Very interesting, thank you!
Man, I'd love to have something like this.
Nice PSU! Wowee, from 20 lbs to 1 lb!! Setting up 4 of these, for sure! Nice rig! Still blown away by the NVIDIA! I just wonder how popular the 20-gallon liquid cooling troughs are?
Nice music choice for the video
All Alex's choice.
I'm impressed the T4 had adequate cooling. This might be a dumb question, but could the difference in power consumption be due to the fans? I imagine the 1U fans have to work a lot harder to cool the same TDP as the 2U fans.
The T4 sips power and runs well even in lower airflow environments. On the fan power consumption, 100%. We used this server versus the 2U version for this piece where we looked at exactly that: ua-cam.com/video/rMJBhYeRIxY/v-deo.html
At 2:36 or so, the high density and regular SFF pictures are switched.
Hey Mark - did you enter our giveaway last week?
2:11 they got the pictures wrong, lol. The 10 and 12 2.5" bay models are transposed.
I run Supermicro E1.L 32x 30TB QLC units for storage at my company.
What's the reason/advantage of the 10 SFF model compared to the 12-drive variant you have shown?
Lower cost and with 10 SFF you can get more airflow to the rear of the system.
I wonder what the heat footprint of a full rack of robustly configured 1U servers would be. 40 servers @ 208V, 1kW each, is about 140 amps of power. How does this heat island affect the racks on either side of it?
40kW/rack is not really shocking density these days. You are right that many racks cannot handle that much power. This year, that will go up significantly. Hopefully next week we will have a piece showing liquid cooling and a 4U unit that can remove 80kW using a garden hose's worth of flow.
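For what it's worth, here is a quick sketch of one plausible way the ~140A figure above works out, assuming three-phase 208V power and the usual 80% continuous-load circuit sizing; on single-phase 208V the same rack draws closer to 192A.

```python
# Back-of-the-envelope rack power math for 40x 1kW servers at 208V.
import math

servers, watts_each, volts = 40, 1000, 208
total_w = servers * watts_each                      # 40,000 W = 40 kW/rack

single_phase_a = total_w / volts                    # ~192 A
three_phase_a = total_w / (math.sqrt(3) * volts)    # ~111 A line current
sized_a = three_phase_a / 0.8                       # ~139 A at 80% continuous

print(f"{total_w / 1000:.0f} kW per rack")
print(f"single-phase 208V: {single_phase_a:.0f} A")
print(f"three-phase 208V:  {three_phase_a:.0f} A")
print(f"3-phase circuit sized for 80% continuous load: {sized_a:.0f} A")
```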
As far as I know you can roast chips and fry BBQ on top of the rack!
13:24 If only it worked that way in consumer hardware... *cough* Dell
You gotta review the 1 million $$$ machine from LTT! That thing looks crazy!
We did several of those 8x A100, including liquid-cooled ones last year. We have also done several of the Redstone 4x A100 platforms, and the 8x PCIe GPU systems.
BTW here is the air and liquid cooled ones: ua-cam.com/video/4Np1HnWiHb4/v-deo.html
And the Inspur version as examples: ua-cam.com/video/E6HJi0iqYbw/v-deo.html
If you've got the 1U and 2U systems around, what's the noise difference between them, if any? It'd be nice to have an idea what might fit nicely into a home setup. Thanks and have an awesome day :)
Nice review, Thank you.
It would be nice to have prebuilt 1U/2U servers with quiet fans in them. Right now, my home office is so noisy because of a Dell PowerEdge R230.
Rack servers are not meant to sit in your home office. If you want/need the horsepower in a quieter system, look at tower form factor servers that are meant more for the small office environment and are generally much quieter.
@@emtboy9 Um, talk about e-waste. Do you think all rack servers stay in racks in DCs...? All that vendors have to do is tune their BIOS to have a better fan curve.
@@emtboy9 Totally agree with the idea. Even though a tower server is quieter, I still prefer rack servers due to weight & space constraints.
That's why I think it would be nice to have a prebuilt rack server with quieter fans. I have custom-built rack servers too, but I don't live in the US, so the newest Xeons and server motherboards are a little hard to come by.
The T4 looks like it is reliant on the host for cooling? (No fans on the card?)
Yes. That is typical for server accelerators.
@@ServeTheHomeVideo IPMI? PLDM? (asking for my accelerator friends...)
You have this video and Linus' video released on the same day. I'm not shitting on this video, but still.
We did that stuff last year: ua-cam.com/video/E6HJi0iqYbw/v-deo.html
Even air cooled v. liquid cooled: ua-cam.com/video/4Np1HnWiHb4/v-deo.html
There's a typo in the title of one of the last charts (performance).
I am a student and I've been learning a lot with Python and ML/DL. Do you think it would be worth it to build a server with some old Xeons and Teslas to mess with neural networks on? Or should I keep them local on my 3090 workstation?
I would probably keep them local. Old Xeons have the challenge that they are so much less efficient than newer chips. Plus saving money is good!
@@ServeTheHomeVideo thank you. I love your channel, your energy and the ability to watch a 10 minute video or go to your website for a more in depth view. Great content
I run an AI startup, so I know a thing or two about this: a current 3090 is a great card for ML/DL; I have one in my personal dev workstation. You can do a lot with smaller models and prototyping, way more than with older Teslas and especially CPUs. As for the bigger stuff, if you have a budget, cloud for a one-time / short-term thing can be a better bet (if you factor in power and time), but if you do this constantly, buying a couple of A100 (or soon H100) systems is the best bet. For learning, though, a 3090 is perfect; no need for anything else, I think.
@@jannikmeissner thank you. I am still learning, but as I get better I may consider some A100s/H100s.
@@peelthebananna9827 Yes, buy those if you get money for your work or have the company you work for buy them. For learning, an RTX 3090 is perfect.
Can the internal M.2 modules be RAIDed via Intel's VROC?
We did not have a VROC key in this (and normally we do not test servers with them), but there is a header.
I would like to get a new server for my home server setup, or rather replace my current one. The one I have right now is from 2011, so over 10 years old now. But new servers are way too expensive for me right now.
I just wonder how much maintenance a 1U takes compared to a 2U with lower fan RPM and lower air speed. How much dust will accumulate, and what is the lifetime of the fans over 3-5 years? Anyone with experience?
Data centers usually filter to ensure most of the dust is removed, though some is still present. Fans in either case rarely fail. I asked, and FB folks said they only kept a few spare fans on hand in their massive DCs.
Where does one even buy these things? I was interested enough to check the price, but it looks like I can only get a quote direct from the vendor? Really??? Also, in searching, I saw they made the DoD ban list for US companies. I guess homelabbers are their only clientele left here in the US...
Ehkm... 12 NVMe SSDs, each one up to 60Gbps; 60 times 12 is around 700Gbps inside the server, and only 2x 25GbE out? It should have 2x 200GbE or 1x 400GbE at least.
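As a quick sketch of that oversubscription math, assuming PCIe Gen4 x4 drives at roughly 64Gbps of raw link bandwidth each (close to the 60Gbps figure in the comment):

```python
# Aggregate NVMe bandwidth vs. network egress for the 12-bay config.
drives = 12
gbps_per_drive = 64          # assumed: PCIe Gen4 x4 link, ~8 GB/s per drive

internal_gbps = drives * gbps_per_drive   # ~768 Gbps inside the chassis
network_gbps = 2 * 25                     # 2x 25GbE = 50 Gbps out

print(f"internal NVMe: ~{internal_gbps} Gbps")
print(f"network out:   {network_gbps} Gbps")
print(f"oversubscription: ~{internal_gbps / network_gbps:.0f}:1")
```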
Epic.
Epyc
@@wewillrockyou1986 Epycn't
That was 2 videos ago :-)
@@ServeTheHomeVideo Not AMD Epyc, but the other epic. :-)
Not bad