Those EDSFF carriers are neat, even for workstations.
Cooling is definitely an issue with these 1U 4-GPU systems.
One of our customers had one built for them and kept having overheating issues. We built them a few 4U 8-GPU systems afterwards at a lower price and with proper performance.
Even with super low temps in the data center, running under full load 24/7 starts taking a toll on those systems.
I feel like this system would cool a ton better if they had used a PCB for power distribution. The DC power cables eat a ton of space side to side, directly in the fan path. Having a flat PCB with only the PCIe data cables on top would clean up the air path significantly. Then just have short cables for each GPU/motherboard.
@@ionstorm66 Direct-to-chip liquid cooling does not have such weaknesses.
Direct-to-chip liquid cooling works best for such systems.
Watercooling /s :) Ohhh the piping nightmare!
@@savagedk We have something very cool coming along those lines that will make you re-think the piping nightmare. It was going to be shown at SC21, but it looks like that will not be in-person, so we are working on finding time to get hands-on (and hopefully get one for the lab).
@@ServeTheHomeVideo 3M Novec? :)
Seeing the box behind Patrick's head made me think of the king from Katamari Damacy.
The box is an Aputure P300c that is used to light products. You can sometimes see two others without softboxes (they have been perpetually on backorder for months) on either side of the frame.
06:59 Say what? Ah haha, great video of course. Greetings from Paraguay.
I can't wait for systems like this to go on eBay for cheap. I'd be hoping for 16-24 EDSFF drives at PCIe x1 speed. I don't need the speed, but I would like the density.
Denvera, Naples is already priced down, which is what Jeff at Craft implements as (I assume) the neighborhood ISP: game server, virtual (engineering) desktop . . . mb
@@mikebruzzone9570 Nice to see you, Mike. I'm sure it's starting to come down in price, but I'm referring to eBay steals like my 2010/2011 Dell PowerEdge C2100 that I got in late 2015 for only $115. 12 cores, 24 threads, 12 LFF bays, and 18 RAM slots (only 4 populated when I bought it). IIRC the original price was somewhere in the neighborhood of $15k with storage and RAM.
@@denvera1g1 Got it, a second-hand 'salvage' system deal. Naples 32C like Jeff showed are sitting on the shelf for 10 weeks straight, and at least one CPU is now offered at $495 in an $800-ish ask environment. My bet is the Naples generation is NOW very negotiable. Whole servers I have no idea about, but I'd also say very negotiable for at least Naples and Xeon Skylake. If you resell any large volumes, 100+, let me know and I'll pencil out a middle-man price offer for the supplier, and you can let me know if they bite on it. mb
Patrick, I'd love to see you get together with Craft Computing to make a video on using the Nvidia (no longer Tesla) A100 for gaming. Craft Computing has already done videos on using older Tesla models for gaming, but I think a video titled something like
"Gaming on Nvidia's secret $10k graphics card" would do well.
You're probably just as capable as Jeff, but I think a collab might bring you to a wider audience and to people who've been looking for a video like this from the people behind the previous videos on the subject. Though a collab with LTT, Paul's Hardware, JayzTwoCents, or Lyle might net more audience.
I had a beer in my fridge for a collab with @craftcomputing for some time (eventually it had to be consumed). Hopefully one day we will get to it.
dude, that would be epyc.
Even better, get that card running on a Pi!!!
@@JeffGeerling I too would like to see this. Heck, take the LTT approach: virtualize the GPUs on the DGX-1 for the equivalent of 10 3090s or over 18 3070s. Use a Turing Pi to host remote gaming clients for the new "7-20 gamers, 1 case" video, play-from-home edition.
@@JeffGeerling I see a video coming soon with an Nvidia A100 running on a Pi.
Really like the new set.
Thanks. Still a work-in-progress, but this is the first server review video that was done exclusively at the new location.
It does have some evil lair vibes though
@@Hellenrosehart The red lighting in the background does not help. Originally it was just going to be white, but then I made a last-second change before filming.
Evil lair vibes can totally be an aesthetic choice for the background. I think it's the red combined with all the metal bars that comes across as sort of aggressive.
@@Hellenrosehart Yeah, but red is very contrasty with the overall blue theme.
Wow, this rack rocks, man. I would use the GPU slots for NVMe RAID, and dang, it would fly.
In my experience, the sole GPU at the back of the case needs to run with significant power limits in order not to thermal throttle. What I would like to see in 1U is 6 single-slot GPUs (T4, A10, etc.) in the front; then the whole case can be normal depth and all 6 GPUs get fresh air from the front. Why is no one offering something like this? Supermicro now has something similar, but even they only do 5 single-slot GPUs per machine and place one at the back...
In this case it looks like the rear GPU gets its air from the front SSDs, so it should™ be better than if the GPUs were back-to-back. The SSDs shouldn't output that much heat.
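On the power-limit point above: here is a minimal sketch of capping a hot card through NVML, assuming the nvidia-ml-py (pynvml) package is installed and the script runs as root. The 150 W target and the 80 °C threshold are illustrative values, not anything from the video.

```python
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # e.g. the rear GPU

# Query the allowed power-limit range first (NVML reports milliwatts).
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
print(f"GPU temp: {temp} C, supported limit range: "
      f"{min_mw // 1000}-{max_mw // 1000} W")

# Hypothetical policy: if the card is running hot, cap it at 150 W,
# clamped to the card's supported minimum. Requires root.
if temp >= 80:
    pynvml.nvmlDeviceSetPowerManagementLimit(handle, max(150_000, min_mw))

pynvml.nvmlShutdown()
```

The same cap can be set with `nvidia-smi -i 0 -pl 150`; the NVML route just makes it easy to script a per-card policy in a chassis like this where only one GPU runs hot.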
I have to give ASRock credit. William is sending me a new BMC firmware module for my X470D4U, which is already out of warranty, because flashing it to 2.32 seems to have bricked it. All he asked was that I send the original back to him so he can reproduce the issue.
Intake temp would be crucial for system and card stability.
I sell water-cooled (rear door heat exchanger) racks that can handle almost 70 kW...
DCs are going to have to adapt; higher-density computing is inevitable due to cost effectiveness and competitive motivation.
A note about PLP (power loss protection) would have been appropriate when talking about adding M.2 drives to servers as anything but boot drives.
I am not sure what you mean by this. You can have non-PLP (read-focused) drives, or there are quite a few M.2 22110 SSDs with PLP if you need that feature.
@@ServeTheHomeVideo Among homelabbers, it is increasingly popular to buy cheap laptop-grade M.2 drives and use them for enterprise workloads without realizing the dangers.
And sadly, I am seeing this more and more in production workloads as well.
For a long time it has been quite "hard" to do this, as it required bifurcation support or expensive PCIe adapters with PCIe switches.
Newer servers have offered M.2 support, but only for boot drives, eliminating the risk of major data loss.
But adding something like this M.2-to-EDSFF adapter just begs for people to use cheap consumer drives in servers, with data loss and performance issues as the result.
People don't seem to understand that there is a reason Samsung EVOs aren't used in production systems.
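For anyone wondering where the performance side of the PLP point shows up: a drive with power loss protection can acknowledge a flush as soon as the data hits its capacitor-backed cache, while a consumer drive has to push every synced write all the way to NAND. Here is a quick, rough probe of that difference (a minimal sketch; the mount point is hypothetical and absolute numbers vary wildly by drive and filesystem):

```python
import os
import time

# Time N small synced writes; sync-write-heavy workloads (databases,
# ZFS SLOG) are exactly where consumer drives without PLP fall apart.
PATH = "/mnt/testdrive/fsync_probe.bin"  # hypothetical mount point
N = 1000
buf = os.urandom(4096)  # one 4 KiB block per write

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
start = time.perf_counter()
for _ in range(N):
    os.write(fd, buf)
    os.fsync(fd)  # force the block to stable storage before continuing
elapsed = time.perf_counter() - start
os.close(fd)
os.unlink(PATH)

print(f"{N / elapsed:.0f} synced 4K writes/s "
      f"({elapsed / N * 1e3:.2f} ms average fsync latency)")
```

Enterprise drives with PLP typically sustain an order of magnitude (or more) better numbers here than consumer drives, which is the gap that surprises people who deploy EVOs in production.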
Wow can't wait to get into the meat of this video! VERY interested.
Apparently the video could not wait either. It went live 2 hours before the main site review was scheduled to go live due to a time zone error :-)
All Epyc by product generation in the WW channel today Milan = 7.52%, Rome = 59.55%, Naples = 32.93%.
All Ice in the channel today is 6.7% of all Epycs and 77% of all Milan.
Cascade Lake + R in the channel today is 885% more, or in units, 985 XCL for every 1 Epyc, all gens.
Here's my cost : price / margin assessment in a comment string today at Seeking Alpha. AMD WW channel last week, Intel WW channel last week, and GPU today, Turing through Ampere, will be reported on briefly.
Intel Xeon Ice Lake average weighted 1K = $2351.18 at MR = MC = Price for a 50% discount off 1K = $1175.58.
Intel Cascade Lakes 1K = $2609.61 / 2 = $1304.81. I estimated Intel could sell XCL in a full-line procurement on depreciated 14/12 nm process for as low as $521.92 but would rather get $913.36, that is 1K < 65% discount. On a combined XCL/XSL full-line procurement I estimated Intel could sell in high volume for as low as $366.81 per unit; see AMD and Intel at my Seeking Alpha Blog Spot titled 'Server Today'.
AMD Milan average weighted 1K = $3440.12; at divide by 3 stakeholder * 1.5 equals OEM at (< 50%) P = MR = MC @ $1702.06.
On divide by 3, TSMC Milan plus package house price to AMD is in the range $1146.70 to $1433.38, taking into account the TSMC 25% price increase over stakeholder / 3.
When TSMC raised price by 20% in Q1, AMD's Q2 full product line price increased by 26%. Did AMD know about the second, Q4 TSMC price increase at the Q1 price increase? I think so.
So if Intel is going to lower Xeon price (note: the SA article I commented on postulated a Xeon price drop), the current discount off Ice 1K may be 27%, which places a full-line procurement of Ice in parity with Milan on a per-unit purchase basis.
At MC = MR = Price occurring before peak Ice production, Ice cost may be around $587. At cost x6, aiming for economic profit, Ice are probably produced on average for $470. Intel will aim for x8 economic profit, where average marginal cost drops to $294.
TSMC 7 nm for Rome full-run marginal cost ranges from $353.89 down to $270.73 on learning, where TSMC price to AMD is $707, perhaps as low as $540 at run end, but this 'profit max' scenario is not an economic profit.
TSMC raised price to secure an economic profit that pays for CapEx; competitive 'profit max' will only pay for some R&D. At the TSMC price increase, Intel entered the foundry business.
Intel wants an economic profit but can sacrifice on margin or bundle deal, which is why 'bundle deal' needs its cost line. Intel will retain the supply advantage.
Intel also has a lot more to trade in terms of cross-product-category volume, which is again why 'bundle deal' needs its cost line.
AMD, at the 5/4/3 nm factor cost increase, is beholden to TSMC pricing. For the mass market I foresee 7 nm legacy product for a long time on depreciated CapEx [equipment], especially for the consumer dGPU market, where system bus width and VRAM (or HBM commercial) are more important for subsystem performance than the GPU itself.
AMD will have difficulty sacrificing margin, has less to trade, and will rely more on bundle deal, which is why 'bundle deal' needs its cost line.
Intel really has no reason to drop price aggressively; however, I can see Intel wanting to price below 'in parity' with AMD on OEM and Business of Compute price-making leverage.
This may result in an Intel price-performance compromise under Milan, say by 27%, which is where it is now: AMD at $1706 and Intel at $1304. If AMD drops to Intel's $1304, they cut their net by 24%, and this then becomes a slippery slope AMD cannot afford to play on foundry pricing.
With Intel wanting margin and needing to increase revenue 2x through the next four years, "dropping price" sounds like fake news or PR-system 'propagandist manipulation' to me.
Mike Bruzzone, Camp Marketing
This video reminds me of a question I have. What exactly is the reason for continuing to push density so hard in data centers? You could simplify this whole system by just letting it be larger. Plus there are the power costs you add by needing to do things like run the VRM MOSFETs outside their optimal efficiency range, use an array of fans that must draw in excess of 40 watts, and maintain a lower ambient temperature.
Because real estate is expensive. Just imagine the price (and lead time) of doubling your datacenter space to use an equivalent 2U system vs. using this 1U system.
Neat! Any chance that a Supermicro 2114GT-DNR will make its way to the STH Lab in the future?
One day I want to do those, but we do not have one in the lab at this point.
Yay! My new Gaming computer! Some people yell HEEYHOOO, I yell VFIOOOOO :)
Haha fps goes brrrrrrr
@@suntzu1409 Running Win 10 on Proxmox with GPU passthrough of a 6900 XT. It runs really well :) Some say 98-99% of bare-metal performance. I wouldn't know, I have not benched the card in a bare-metal config :)
I have 3x R9 290s lying around that might as well be put to good use, creating 3x more Win10 VMs. I can invite 3 people over for a night of World of Tanks or whatever else :) 4 gamers, 1 EPYC.
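For anyone wanting to replicate the passthrough setup above, the core of it on Proxmox is binding the GPU to vfio-pci before any host driver claims it. A minimal sketch, with illustrative Navi 21 device IDs; verify your own with `lspci -nn`, and note that module names can differ slightly across kernel versions:

```
# /etc/modprobe.d/vfio.conf -- hand the GPU and its HDMI audio function
# to vfio-pci at boot (IDs shown are illustrative for a Navi 21 card)
options vfio-pci ids=1002:73bf,1002:ab28

# /etc/modules -- load the VFIO modules early
vfio
vfio_iommu_type1
vfio_pci
```

Then make sure the IOMMU is enabled (a BIOS setting plus `iommu=pt` on the kernel command line is the usual recipe for an EPYC/Ryzen host), rebuild the initramfs, and attach the card to the VM with a `hostpci0:` line in the VM config pointing at the GPU's PCI address.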
With this amount of power, finding a suitable PDU is going to become a major concern.
Hi Patrick! Could y'all do a video on what the point of inferencing and training is?
I am not sure we would do a video just on that subject. Maybe we can go into it a bit more in a future video.
As I do not currently own a business, I do not feel comfortable filling out the form; however, I would like some kind of an estimate as to the price of one of these. They look so cool to have.
Now you can finally run Minecraft ray-traced in a 1U system, and with only 1 CPU.
Wish 2-4U servers weren't so much more than a standard ATX chassis, though.
They missed a trick by not calling their server division ASRack.
Ha!
Ass-rack?
Now I see why rack units have a specific power limit.
Imagine this: 4 kW × 47 = 188 kW max.
Is the final product going to be painted? There is rust all over this thing. A brand-new product shouldn't look like it is from Mad Max, with a tetanus risk... 😞
I always believed the ruler SSDs were much longer.
If you check out the EDSFF video, you are probably thinking about the E1.L drives. There were also the original "Ruler" drives (Intel P4500 Rulers) that were more similar to E1.L but not the same.
@@ServeTheHomeVideo Thank you. Is that longer ruler still going to be a standard?
Your corporate smile is freaking me out
I was using my serious/angry face for this one.
im blue from office sorry i couldnt learn
Looks like I might stick to Supermicro.
Pat is 'stocky' af!!
I have two home servers with ASRock server boards; the IPMI AST2500 is terrible for any task, power commands barely ever work, and installing an OS is painfully slow with encryption of ISOs. Dell iDRAC or HP iLO is miles better. I guess in the end you get what you pay for. Not to mention the stability of these systems.
Looks like a job for a rack air conditioner.
Can this fit a consumer GPU such as the RTX 3080? I need to build a game server, and I am using an RTX 3080.
That is a tough one. Sometimes companies have custom consumer cards made for servers like this, but usually these require passively cooled data center GPUs. It may be worth looking at some of the 2U/4U options.
wow... nice..
Expensive systems, and companies opt to skimp on the air ducts
I just like how the system is rusty.
Yeah, I noticed that too... I was thinking that if I bought a barebones system like this, I'd take it apart and have it powder-coated in some obnoxious orange color or something like that.
Oh wow, that power supply has a C20 socket.
That is pretty common on these higher-end servers. I just see it as somewhat normal now, so I do not mention it, but maybe I should.
@@ServeTheHomeVideo You did mention the wattage requirements and considerations, so I believe it is pretty well covered in general.
Are you s u r e this is for the home?
Rusty-looking drive slots @3:30... eww.