HUGE! 1PB+ 60-bay Storage Server from AIC
- Published 29 Jul 2024
- We check out a 60-bay storage server from AIC, the AIC SB407-TU. This can easily handle over 1PB of storage, multiple NVMe SSDs, lots of memory, and networking. Note we have a bit more detail for this one on the STH main site just due to the timing of filming and when parts arrived.
STH Main Site Article: www.servethehome.com/this-aic...
STH Top 5 Weekly Newsletter: eepurl.com/dryM09
----------------------------------------------------------------------
Become a STH YT Member and Support Us
----------------------------------------------------------------------
Join STH YouTube membership to support the channel: / @servethehomevideo
STH Merch on Spring: the-sth-merch-shop.myteesprin...
----------------------------------------------------------------------
Where to Find STH
----------------------------------------------------------------------
STH Forums: forums.servethehome.com
Follow on Twitter: / servethehome
Follow on LinkedIn: / servethehome-com
Follow on Facebook: / servethehome
Follow on Instagram: / servethehome
----------------------------------------------------------------------
Timestamps
----------------------------------------------------------------------
00:00 Introduction
01:06 AIC SB407-TU Hardware Overview
09:42 OS Setup: TrueNAS Scale, Ubuntu, and RHEL
11:47 Power Consumption
14:21 Key Lessons Learned
16:25 AIC SB407-TU Wrap-up
----------------------------------------------------------------------
Other STH Content Mentioned in this Video
----------------------------------------------------------------------
- NVIDIA T4 Review: www.servethehome.com/nvidia-t...
- NVIDIA T4 Analysis: www.servethehome.com/analysis...
- NVIDIA BlueField-2 DPU: • ZFS without a Server!?...
- Dell XE7100: • Over 1PB of Storage De...
Can you give some pluses/minuses comparing this against something like the 45Drives XL60 chassis? One advantage of the 45Drives unit is the trayless drive design, and I also like the larger, slightly quieter fans up front (though the whole unit is still super loud... it has 40mm server fans in the 1U PSUs). But I also like how this system has 2x2 PSUs and a layout that allows those NVMe drives in the back (huge for cache and other more advanced storage layouts...).
But hardware-wise, are there any major advantages to a chassis design like this versus a more custom / 'bare' 45Drives design?
This is another level. 45Drives is single socket and still PCIe Gen3, so it is very limiting for applications like Ceph where you ideally target 1 core/drive (e.g. you need 60 cores plus a few for the management/control plane). Even small things like having the LED status lights on the front of this one for each drive, so you know exactly the status of each, are nice when dealing with lots of these. Fit and finish is generally better.
Still, the big thing is that this is a different class of device. This is a dual socket PCIe Gen4 platform so from a raw performance perspective, it is not close. That becomes more useful the more you do with the boxes.
I hope you feel better. I thought I showed you this one while you were here :-)
@@ServeTheHomeVideo You mentioned it but it was still inside a box, and a wee bit too heavy to pull out for a quick look :)
The dual socket feature is nice. And since it uses a standard eATX layout, one may be able to upgrade the thing aftermarket, something I plan on doing long-term :)
It does look like this system uses a lot of standard components as well, which I like a lot. For many orgs, having the ability to get another 5 years out of hardware through upgradability is a nice feature you don't get with some of those OEM systems...
@@JeffGeerling
I think that the upgradability aspect of it REALLY depends on the kind of organisation you work for/at.
It's not uncommon, for example, for hyperscalers and HPC installations to basically rip out everything from a bank of racks and completely swap out with the latest and greatest.
And then you have like some small and medium businesses that might prefer going the upgrade route instead of replacing their servers.
And then you have financial institutions that oftentimes might not deploy the latest and greatest due to legacy compatibility issues.
Having worked with AIC for a long time, I am happy to see them getting some exposure. In my experience AIC products are functional, well built, and easy to use. What they don't have is any luxuries. The drive bays are functional but don't feel as solid as, say, HP or Dell counterparts. Even so, they are way more durable than you might first think when handling them. All in all, I think that's a good way to go in equipment like this. You're not trying to impress the technicians with butter-smooth drive insertion. You don't get a complete solution served straight up, but you pay a lot less and the hardware is flexible enough to make what you need out of it.
Great content, Patrick, and I appreciate that you respond to the comments which is a huge value-add for everyone that reads them.
Merry Christmas!
DAMN !! thats a beast !! Good video Patrick !
Heh. And I'm proud, because I bought an 8TB HDD for the price of a 6TB. Still have to mount it though. Last time I did this, HDDs were still on SCSI/IDE - if my memory isn't completely broken. Two decades ago.
I'm confused... you mentioned 8 NVMe drives, but 2.5" form factor. Is that just an SSD but with an NVMe data/power connection instead of SATA?
Yes. If you look up 2.5" U.2 SSDs, that is a fairly well-established form factor. It has been out for over a decade, so we are now seeing EDSFF start to take its place.
Amazing review. Thank you
and here i am wondering if i need/want 6 or 8 bays, just to make future upgrades easier and to have the option for a "quick swap" backup drive.
Very nice.
I know we have racks full of the Cisco 56-bay systems, which work quite nicely as well. They support dual nodes, which is great for being able to split the storage between systems. You can use 40 bays for one node and 16 for the other, or 54 for one and 2 for the other.
Why yes, Patrick, that actually does sound like a lot of fun.
I love how Patrick is always pointing out little quality-of-life features on servers. As someone who recently got into managing on-prem servers, I can say that the little things do actually make a big difference at the end of the day.
So much!
How are the rails? A hard disk goes bad and I need to swap it - do I have to shut down the entire box to do an HDD swap? Can I leave it in the rack while doing this? Thanks!
Side note, while these servers that are large / loud are cool... would you say this keeps to your focus of STH, or is this just exploring to explore?
I watch because I want to dream that maybe one day, a decade from now, I'll be able to pick this stuff up off eBay for dirt cheap, like I do now with decade-old equipment. I just wanted to say I love your enthusiasm. Thanks for the videos and the money you spend to ~~inform~~ entertain us.
Thanks for the kind words. I was pretty tired for this one.
With how small and fragile modern edge-mount VGA connectors are I would actually prefer the PCI slot VGA solution. You don't have to worry about ripping the VGA socket off the motherboard if you don't get the thumbscrews all the way undone. I didn't think they could make VGA connectors more of a PITA, but someone managed to.
Fair point.
Looking good! Thanks for an amazing video, as always. I wonder if there is an older version of something like this with 40-50 drives but quiet and cheap for a home lab? :)
Not 100% sure. At some point, drives being accessed are noisy.
@@ServeTheHomeVideo well not 100% quiet, but something that does not use tiny 5000 RPM fans :)
@myplayhouse is playing with an older HP unit that's capable of 70 SAS drives. Got it: it's an HP D6000 disk array. Connect it via SAS to a quiet server and you have the homelab version. There should be some units on the used market.
@@christophkarliczek2951 Thanks! Is the actual unit pretty quiet?
Similar question as Jeff. Supermicro also has dual-socket 60-bay/90-bay storage servers. Aside from AIC's 8x NVMe bays, there is not much difference between the two.
It has been some time, but we looked at it in the Cascade Lake generation (2020). See: ua-cam.com/video/iwEKC63XqiM/v-deo.html
love it
Disappointed with the bandwidth. I get that it's not SSD, but I suppose even with that many drives it would be hard to get a lot of throughput. Did you include the NVMe cache/log in that 25Gb?
No, and that was off of each bank of drives.
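To put that 25Gb-per-bank figure in perspective, here is a back-of-the-envelope sketch; the per-drive sequential number is an assumption for a modern 7200 RPM nearline drive, not a figure from the video:

```python
# Rough aggregate throughput estimate for a 60-bay HDD server.
# ~260 MB/s per drive is an assumed sequential best case; random
# and mixed workloads land far below this.

DRIVES = 60
MB_S_PER_DRIVE = 260  # assumption: peak sequential MB/s per HDD

total_mb_s = DRIVES * MB_S_PER_DRIVE   # aggregate MB/s across all spindles
total_gbit_s = total_mb_s * 8 / 1000   # convert to Gbit/s

print(f"~{total_mb_s} MB/s aggregate (~{total_gbit_s:.0f} Gbit/s)")
# -> ~15600 MB/s aggregate (~125 Gbit/s)
```

A single 25GbE link carries roughly 3 GB/s, so ~25Gb/s per bank of drives is plausibly network-limited on sequential work, while seek-heavy workloads may never reach even that.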
Thanks, Patrick & STH Team, for the great info. The design is absolutely nice and functional. Can you suggest a similar top-loading case but with 30 drives, other than 45Drives? Also, maybe this is dumb, but I'm wondering, since it has 8 NVMe bays other than the 2 onboard: can you suggest a software solution that supports NVMe-oF so you can make these drives available to an ESXi host? I don't know if TrueNAS Core or Scale supports NVMe-oF?
Thanks again
Probably best to look at a front and rear 36-bay 4U. With only 30 drives as a requirement there is no need to go to a top loading design. We have not tested NVMe-oF with TrueNAS Scale since typically we just use Ubuntu for that.
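For the Ubuntu route mentioned above, a minimal sketch of exporting one drive over NVMe-oF/TCP with the stock Linux kernel target might look like this. It follows the kernel's nvmet configfs layout; the NQN, backing device, and IP address are placeholders, and it assumes root plus the `nvmet` and `nvmet-tcp` modules loaded:

```python
# Minimal NVMe-oF/TCP target sketch via the Linux nvmet configfs
# interface (run `modprobe nvmet nvmet-tcp` first; requires root).
# The NQN, device path, and address below are placeholders.
from pathlib import Path

CFG = Path("/sys/kernel/config/nvmet")
NQN = "nqn.2024-01.com.example:sb407-hdd"   # hypothetical subsystem name

# 1. Create the subsystem; allow any host to connect (lab use only).
subsys = CFG / "subsystems" / NQN
subsys.mkdir(parents=True)
(subsys / "attr_allow_any_host").write_text("1")

# 2. Attach a namespace backed by one of the rear U.2 drives.
ns = subsys / "namespaces" / "1"
ns.mkdir(parents=True)
(ns / "device_path").write_text("/dev/nvme2n1")  # placeholder device
(ns / "enable").write_text("1")

# 3. Create a TCP port and link the subsystem to it.
port = CFG / "ports" / "1"
port.mkdir(parents=True)
(port / "addr_trtype").write_text("tcp")
(port / "addr_adrfam").write_text("ipv4")
(port / "addr_traddr").write_text("192.0.2.10")  # placeholder IP
(port / "addr_trsvcid").write_text("4420")       # standard NVMe-oF port
(port / "subsystems" / NQN).symlink_to(subsys)
```

An initiator can then attach it (ESXi 7+ speaks NVMe/TCP natively; on Linux, `nvme connect -t tcp -a 192.0.2.10 -s 4420 -n <nqn>`). This is a sketch of the generic kernel target, not a TrueNAS feature.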
@@ServeTheHomeVideo The 36-drive case from Supermicro is hard to cool due to the 12 bays in the back; that is why they use 7 fans, which makes it noisy, plus you can't use a full-height expansion card. A 30-bay top-loading case would be quieter if paired with the right fans, you can use full-height cards, and as a bonus it is shorter in depth, which makes it a good option for home labs.
how is the raid 0?
Wow, you could have an impressive VDI platform using GPUs and DPUs. Really cool.
Top loading? So if 1 fails, you gotta pull the entire thing outta the rack?
Yes. Top loading is a huge market these days.
I haven't used this system, but they are usually on ball-bearing sliders with cables long enough that you can slide them out powered up, then hot-swap the failed drive. When I worked in a data center years ago, the constraints were physical space, followed by sufficient power at the rack. As density and power usage go up, you need to amp up cooling.
I have a counter-offer for you, Patrick: Supermicro X10DRi + 2x E5-2699 v4 + DS4246 + DS4486 + ConnectX-4 EDR (100G). Power is ~800W, noise ~56dB, and the price is below $1500 without drives. Oh yeah, I can get ~38Gb/s. Planning on testing NVMe drives in RAID 0 to saturate that 100G.
When will NVMe SSDs become affordable to use as a home NAS for video hobbyists who record lots of GoPro footage?
It is not so bad if you edit on SSDs locally but then use a disk-based NAS for holding old footage.
Would not mind someone making something like this but sacrificing a drive or two per row to give more space for airflow between the drives. The amount of watts used just for fans in these systems is a tad ridiculous (I have no idea how much it is in this particular system, though).
Some of these top loading systems actually have the SAS expanders in cartridges that sit along with the drives.
If you are ever looking for a fun video, it might be interesting to see what the best mass storage solution is for a home game setup. I have tried my hand at PCIe x8 cards that have 3x8 SAS connectors on them... but I could never get them working. Wasted a lot of time and money, wondering if you have some experience to share...
The STH forums are a good place to start for that. Actually, setting up old SAS1 arrays is how STH started in the late 2000s.
@@ServeTheHomeVideo Haha, monster shoes to follow in, then - thanks for the advice, I will take a look.
13:50 => Keeping the compute closer to storage.
Why does the motherboard io shield look rusty?
That is lighting. I checked it before this went live when I saw the photos
@@ServeTheHomeVideo So the text (ID, USB, MGMT, COM) etched in the shield isn't that color? 3:41
Yeah, obv wouldn't keep it that way but wow it looks cool vertical
Hi. Can you make a video on how people make money with these servers? I'm interested in building servers and playing with hardware and have built 4 servers. But how can I make money, online preferably, with this stuff?
Reminds me of Nexsan's SATABeast or ATAbeast.
Gee, that would be too big to run Ceph or other SDS... but probably fine for local filesystems. Maybe even MinIO?
These (and larger) are pretty common in Ceph deployments.
@@ServeTheHomeVideo yes, but not if you use it for block storage - at least not if you have latency requirements. (Two cores per OSD and two OSDs per drive, that would require 240 cores)
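The sizing rules of thumb in this thread can be made concrete with a quick sketch; the multipliers are the ones quoted above, and the handful of management cores is an assumption, not a fixed Ceph requirement:

```python
# Rough CPU sizing for a dense Ceph OSD node, using the rules of
# thumb from this thread; mgmt_cores is an assumed extra allowance.

def ceph_cores(drives, osds_per_drive=1, cores_per_osd=1, mgmt_cores=4):
    """Cores for OSD daemons plus a few for the management/control plane."""
    return drives * osds_per_drive * cores_per_osd + mgmt_cores

# General target of ~1 core per drive on a 60-bay box:
print(ceph_cores(60))  # -> 64

# Latency-sensitive block storage at 2 OSDs/drive and 2 cores/OSD:
print(ceph_cores(60, osds_per_drive=2, cores_per_osd=2))  # -> 244
```

Either way, a 60-bay chassis pushes past what a single older-generation socket offers, which is the point made above about the dual-socket platform.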
So is Optane about to become like Betamax? Did Intel shut down the tech entirely, or did they sell the patents? If I were Apple, I would gobble up every single 3D XPoint patent, since that would not only reduce latency but also remove the issue of constant memory swapping on that infamous 256GB module on the MacBook Air. Apple could use on-die memory, with Optane DIMMs as first-level storage on a potential Mac Pro. Imagine an iPhone 20 with 3D XPoint storage 🤓
The Glorious Complexity of Intel Optane: ua-cam.com/video/dOV3gGncGU8/v-deo.html
This type of tech is going to CXL (albeit without 3D XPoint, since the fab for that is closed).
How does this system compare to the Supermicro 60-drive systems?
I would think that they're about on par (i.e., in the same class) relative to each other, no?
Also, were you ever able to run a benchmark test across ALL 60 drives, regardless of how you set them up (e.g. four 15-drive vdevs or ten 6-drive vdevs in TrueNAS)? I'm not super concerned about the configuration, but rather with how much data you can push to the drives, across all 60 drives, in total.
This is a bit more cost-optimized. We did not do the Ice Lake top loader, but we looked at the previous generation Xeon Scalable one: ua-cam.com/video/iwEKC63XqiM/v-deo.html
@@ServeTheHomeVideo
Thank you!!!
When are you gonna review the 6413 and 6412? It has been like a month...
Alex has the video, just waiting on it to be edited. It is one of the next two he has in queue.
Ma mama raised no fool, that's a CHIA farming rig right here :p
Ha!
I was really excited for a moment when I thought they built the HBA into the backplane and those cables would just be PCIe, but alas, not.
I agree, that would be super cool as well. It took like 4 weeks to get the Broadcom 9500-16i HBAs we used in this.
🎉
Seagate IronWolf Pro :D But seriously, they should send you some Mach.2 drives to test that bandwidth.
That is a decent guess on the single drive used in this video. We have dead Seagate drives we use in the studio since they already do not work. For actually running it, we were using different drives. :-)
@@ServeTheHomeVideo Only one other option then :D
@@dashtesla Well, there are WD and Toshiba.
@@ServeTheHomeVideo 😅
Am I missing something? It seems EPYC has way more PCIe lanes, and they are Gen4. Why do storage vendors keep using Intel? What gives? Oh, EPYC is also cheaper. Every time, I'm like: oh no, another Xeon crap. Sorry, just curious.
Part of this is getting 16 memory channels. Actually, if you saw our QAT piece on Ice Lake, there are some instances where Ice Lake's extra instructions help a lot even without the QAT accelerator. Genoa closes that gap, but those chips are still very hard to get.
@@ServeTheHomeVideo Got homework to do. Again LOL
I am not sure how this fits the "Serve The HOME" channel ;) Not that many houses would require 60 drives + 8 NVMe drives. Not that many small and medium-sized businesses, either ;)
"Home" is the /home/ directory in Linux.
@@ServeTheHomeVideo :) Good point :) I thought, thanks to tiny-miny-micro that the channel is mostly about home networking :) This clarifies a lot!
@@rklauco Yea, the channel is an offshoot of the STH main site, which is probably the largest server review site in the world. TMM was a series started during the pandemic when companies could not ship servers, so we bought 1L PCs on eBay to keep content going.
@@ServeTheHomeVideo Brilliant idea - certainly got me interested.
Ok
I hate expanders; I want one channel per drive. With the prices for all this, having the extra HBAs is not that big of a killer, and the performance gains can be significant.
If only this got the EPYC treatment....
and proper airflow; that front panel doesn't have too much of an air passage ... :(
and then it could have direct access to all of the drives ... so much missed opportunity here
😮
This better not awaken something in me 👀
👍
Lucky you did not break the motherboard and IO by placing it at an angle like an idiot.
Ha! Motherboards and systems are fine like this. They need to be more durable than setting down on the side. They need to be designed for shock forces during shipping, on sides/ angles, even in padded boxes it is more force than we have here. We have done this with much larger systems and this one does not even have huge heatsinks/ cards mounted like with GPU systems.
@@ServeTheHomeVideo It's actually written explicitly on the case how to lift it and how to position it. Nowhere will you find that it should be positioned at an angle or upright, especially loaded. The mere chance that it did not happen to you does not imply it won't. Testing it could lead to injury; showing it is on par with that idiot Linus.
Interesting rig, but it looks geared more towards enthusiasts and not a datacenter.
You can't even run MS Flight Sim on that storage, so weak. It won't even download half of the data. JK, it is a total of 2.6 PB.
At datacenters, for that kind of software, they use professional storage servers from NetApp. Not these cheap toys.
Mostly joking.
But it can sound like one!
@@ernestoditerribile Yeah, NetApp is made of a special hardware sauce, completely and totally different from this one, truly.
60 bays? Imagine stuffing that with 60 of those 100TB 3.5" SATA SSDs that Linus showed off years ago. Meanwhile me: it's niiice, but I can't even get to use 2TB unless I record and document everything I do. :')
I record everything. From the websites I visit (in their entirety, and in an HTML-browsable format on my HDDs), to music, to videos and movies, documents, PDFs, eBooks, pictures, etc. I have around 1.5 to 2 terabytes of data, with 24TB of free space. If we can all record everything, we can effectively decentralize the internet. I need access to stuff in the absence of the internet. We depend on it too much.
@@dieselphiend To be fair, I do most things decentralized too, using Proxmox nowadays, but what I meant by recording everything is keeping logs/video footage of what I did throughout every day. I'm all for decentralisation. It's why I preferably use NewPipe to locally archive YouTube DIY tutorials and such. But 2TB is still plenty of space for me for everything I do.
@@8bitbunny_VR You don't record music, too?
@@dieselphiend I back up CDs I buy. I pre-listen to artists through online services like SoundCloud/YouTube; if I find artists that pop out, I buy the CDs or even vinyl records of said artists from time to time, and archive them in WAV format for personal use.
@@8bitbunny_VR I see.
Remember when this channel used to be about repurposing older enterprise hardware for home use for cheap?
Now it's "let's spend $15,000 on a redonkulous server."
I mean the STH main site is the largest server review site. YT is still very small in comparison, but we have been reviewing new servers for 10+ years on the main site.
First
Congratulations!
I think the technical term for 1 petabyte is a "Linus"
I see drives overheating with this dinosaur - why not split things up and not put all your eggs in one basket - it will break and be a nightmare - guaranteed.
These types of systems have been deployed en masse for years. We have reviewed others with 100 drives.
Unless you've actually SEEN this overheat, why even comment? Sounds like you don't know what you are talking about at all.
@@CLHatch65 everything breaks - for smb this is a non starter - you need 3-4 for redundancy if you value your data so costs are a factor too - just build your own storinator from 45 drives if you need something like this - they have done all the work for you - save money with proven parts and learn some things too #mtbf
@@shephusted2714 You are making no sense. You said "everything breaks". Which means your "better" idea of making your own 45 bay one will break also.
I'll take two thanks...
Petabyte,Petabyte. Petabyte,Petabyte! whew!! 1.only! 2.just get'n this started! give me *SAS* !!!🙃🥳
all those fans! what's shaking? the ground! na.....🤠