Over 1PB of Storage Dell EMC PowerEdge XE7100 Review
- Published Mar 13, 2021
- STH Main Site Article: www.servethehome.com/dell-emc...
STH Merch on (Tee)Spring: the-sth-merch-shop.myteesprin...
STH Top 5 Weekly Newsletter: eepurl.com/dryM09
In our Dell EMC PowerEdge XE7100 review, we see how this 5U system handles 100x 3.5" HDDs, with flexible CPU, GPU, and SSD options - Science & Technology
Here I am putzing around with my 16 hard drives thinking I'm hot stuff...
No, you can't attach 100 drives to a CM4
You are!
Somewhat different concepts though. The RPi w/ RAID was very cool!
@@ServeTheHomeVideo I agree, I'd love to have something like this myself
@@ikkuranus *so far* 🤪
Linus is calling Dell now asking if he can trade in his gold Xbox controller for this fully decked out
New New New Whonnock (Jake: Whonnock 4!)
@@francismendes well, this would be more like a super Petabyte Project... Whonnock is PCIe SSD only
@@francismendes New Petabyte Project...
@@Momi_V you're right... Whonnock is geared towards performance, not raw storage space...
He's already got 1 PB boxes.
The forbidden Plex server
All in 4k
The vibration from this thing has to cause small earthquakes
Hyped to see that server in 4-5 years on the used market 👌👍
@@psori Depending on your place in the world, 15,000 kWh is less than $1,500 USD, which isn't a bad deal if you're using this for business or are a power user who needs the space.
@@ndragon798 €4,500 in Germany, but that's at the consumer rate; high-power users pay less, because "reasons"
@@majstealth I love how you put "reasons" in quotes, because that is exactly how it works in Germany. Companies over everything.
@@psori
Solar arrays are a thing.
I said that 4 years ago, had to install 2 racks of InfiniFlash units @ 500k euro each.
Don't hold your horses, it's still pretty steep :/
Nice addition to my home lab 😁
Damn, finally a server big enough to hold all my por... Projects, all my projects!
P... Plex Media Server
Geez that's enough for all of my P...PDF files.
Ah yes more space for my portable media library
Let's be honest - *what's the difference?*
Cat pictures bro'!
OK, but where's the satisfying sound of this beast starting up? )
Dude, it's been 5 years since I've been in a DC; I haven't touched servers like this in forever. Working for a hyperscaler for 4 years now, significantly abstracted from DC ops, has me so jealous of both you and DC ops in general. I seriously miss this and am very jealous of you getting access to this.
Boss: "We've got a new server delivered, go move it to server room, unwrap and rack it..."
Trainee: o_O
This box looks like an ideal cache/proxy for streaming services: lots of spinning rust, lots of NVMe, and multiple low-profile GPUs to handle transcode offloading.
This is legit just the perfect "I've got it all in one system and can upgrade by just adding another one" instead of having to adjust your structure
@@Alpine_flo92002 yep, as few as possible to maintain a minimum redundancy level is good enough
just get a disk shelf and plug it into the gpu box of doom. it'll be fine
Excellent content, as always!
"Next, get ready for MILAN" OOOH YEA !!!
And thanks Patrick, as a tech writer I know what you mean by "Excellent product let down by marketing".
A number of times I've found myself explaining to PR people their own products, and why they could probably sell 5x more if their marketing actually made sense.
It is a passion thing. People passionate about technology understand it and can communicate it.
@@ServeTheHomeVideo Well then you have lost the passion...
In Intel's case it's excellent marketing let down by Product! 😁
"Recording this at 4am before I had coffee!" Could have fooled me!! :)
From Patrick in 2021, the rear is where the real magic happens.
I remember when I got my 5x Dell EqualLogic PS6100XV with all 600 GB 15.5K SAS drives, which was 120 drives, and I was so excited to populate them. Fast forward 8 years and now they sit in my garage collecting dust because I can't even give them away lol. They had 3x PE910s which are also collecting dust. Crazy how it all becomes obsolete so fast. They would make nice Plex servers, but I no longer live in the cage at the data center and couldn't even power these babies up. I'm sure, like someone else said, in 5 years they'll be barely worth the power it takes to boot them up.
described in the best way ❤️
expensive domino run on the desk there.
It's stressing me out just watching
Another great video! Thanks for creating such technical and informative content!! Its always fun to watch these videos. :)
Again, perfect timing. I need something like that at work. But... agreed, going Intel not EPYC is really disappointing. Dell must be selling these to big boys, so they don't care about marketing. I didn't even know such a system existed. You are such a good source of info!!!
I agree. Intel CPUs are not very Epyc anymore
CERN: That will only last 1 minute until we have filled it with data from one of our LHC test runs...
They can generate something like 1 TB/s...
Thank you for the video 👌
Interesting. I use ceph at home with 24/36-bay supermicro servers. In 5-10 years once these hit the used market, this kind of system might be a good way to increase capacity without increasing the number of nodes in use significantly.
My TrueNAS is feeling inadequate.
This looks well executed. I couldn't fill 100 drives, but I would like something smaller that's this well executed.
Insane... The company I work for, combined with the last company I've worked for could fit all their data on the four 960GB SSDs from one of the controller modules, in RAID10...
I would love to see this filled with the ExaDrive EDDCT100 (SATA) or EDDCS100 (SAS), which are 3.5" form factor drives with 100 TB of space each; too bad they are $40k per drive.
We can probably do that for $400k if you want :-)
This makes me think of the scale-up nodes you could make if you added another U (so 6U) and went with EPYC CPUs here instead.
As an example for a data warehouse: 100 HDDs, a node with 2x 64-core CPUs and 4 TB of RAM, 4x U.2 SSDs in front for system drives and caching, and a secondary node with E3.S/L ruler SSDs connected by, say, 4x16 PCIe 4.0 links (64 lanes in total).
I think this is a foreshadowing of the kind of setups we may see with next-gen interconnects like CXL, where you may have a 4U of HDDs with a link to a 2U control node, which has another link or set of links to SSDs and accelerators.
Keeping compute local to the storage is a great way of gaining efficiency by limiting data movement. I feel there are 2 opposing forces at work in the data centers and clouds: Disaggregation for composability and flexibility (high network infrastructure requirements) VS distributed localized resource groups for efficiency at the cost of some flexibility and up-front knowledge of workloads. Co-localizing resources is also a way to speed up workloads that don't scale out well due to limited parallelism and/or latency sensitivity. I'd be interested to hear Patricks take on that, and what he expects to see. Maybe we will see disaggregation as default for most general workloads and specialized systems for well known and infrequently changing high volume workloads or performance-critical workloads?
Epic review, without an epyc. Very very interesting
You know you are a great YouTuber when you have 50,000 followers, but Dell calls you up to send you a system to review that is most likely in the neighbourhood of €120,000 =D
...and of course ...when Linus calls, invites and references you all over the place =D
Well done, Sir. Way underrated channel.
Do you still have the UBNT Unifi Leaf that you checked (NOT REVIEWED!) and can you tell any updates on how it works in the meanwhile and if there is already an acceptable firmware for it?
Thanks! Last I saw it was removed as a product
@@ServeTheHomeVideo Oh that's too bad, I thought I had seen it only a few days back in the US EA store. A German store still has it listed as 'coming soon'.
Such an inexpensive offer I might have given a try.
A worthy upgrade for my Poweredge C2100LFF
Would love to see some benchmarks
Great stuff Patrick, now we need a new series based on High-Density Storage with Compute ;)
Yes I'm an AMD Fanboy but really a tech fanboy moreso, so even though this has Intel inside it's still a nice piece of kit....
Just wait for tomorrow's Milan.
@@ServeTheHomeVideo yes indeed :)
Can you slide out and open such a system while it's in operation? I would not dare do that to replace failed drives...
Yes, I've seen 4U JBODs at my customers, and they have a 1U server on top of each box.
If you are talking about deep racks, what size do you have in mind? I normally fit out our colo with 1200x600 or 1200x800 mm, 47U racks.
Are those Seagates? You'll need 600 TB of parity for when you transition to Western Digital or HGST.
Does anyone know the part number of the toolless drive bays used in this machine?
This is it, the server I'd buy and put in an old broadcast van to make a pirate TV station, it definitely has space to put enough drives to store enough TV shows to last a lifetime!
How much does everything cost, in this video (the full server, and all the disks and the servers?)
Now fill it with 100 of those 50 TB 3.5" SSDs. Who wouldn't want 5 PB in a single box?
Anybody with a sensible accounting and requisitions department lmao
I wonder how it performs in terms of data throughput. I mean, those are 100 SAS drives, each connected to a backplane with a 12 Gbit link, but they have to be connected to an HBA/RAID controller (multiple HBA/RAID controllers?).
Correct me if I'm wrong, but 12,000 Mbit divided among 100 drives nets 120 Mbit per drive, which is a measly 15 MB/s (120 / 8).
Typically your backplane-to-expander-to-host bandwidth is not a single SAS3 lane. Even the cables are 4x SAS3 each, and then you have multiple cables, HBAs, and expanders, as shown in this system. The bigger limitation is on the networking side, where you have, say, 25GbE in the configuration we tested. That is a big part of why the onboard GPU is interesting. One also has to remember that all of these drives are not being accessed simultaneously, and there is strictly on-node data movement.
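The reply above can be sketched with some back-of-envelope numbers. The topology here is an assumption for illustration (a couple of x4 SAS3 cables per path), not Dell's actual cabling:

```python
# Back-of-envelope SAS bandwidth estimate for a 100-drive chassis.
# The topology below (2 cables per path) is an assumption, not Dell's spec.
SAS3_LANE_GBPS = 12       # raw line rate of one SAS3 lane, in Gbit/s
LANES_PER_CABLE = 4       # a typical SAS cable is a x4 wide port
CABLES = 2                # assumed number of cables from expander to HBA

aggregate_gbps = SAS3_LANE_GBPS * LANES_PER_CABLE * CABLES
per_drive_mb_s = aggregate_gbps * 1000 / 8 / 100   # Gbit/s -> MB/s, split 100 ways

print(f"aggregate: {aggregate_gbps} Gbit/s")       # aggregate: 96 Gbit/s
print(f"per drive: {per_drive_mb_s:.0f} MB/s")     # per drive: 120 MB/s
```

Even this conservative assumption lands at ~120 MB/s per drive with all 100 active at once, well above the single-lane figure in the parent comment, and a 25GbE uplink (~3 GB/s) becomes the limit long before the SAS fabric does.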
Wow, and I was thinking it is just another storage server. Fantastic review. But don't you see the controllers being the main bottleneck for this system? Having 50 hard drives per controller (assuming these 50 drives are shared among many remote servers) sounds like a performance killer to me. What do you think?
It is a bottleneck, but realistically so is the 25GbE that we had. Remember, there are 1U 1 PB storage arrays these days, so disks are basically the slow tier, not dense storage, at this point.
@@ServeTheHomeVideo I see your point and that is why i love this channel. You guys have such fantastic comprehensive view. Thx a ton.
Interesting that they didn't use counter-rotating fans, considering the static pressure benefits they provide with 100 drives choking the airflow lol.
Please tell me that you configured that in RAID-0
I'd be really curious to see if the Xeons would become the bottleneck, as the general rule-of-thumb recommendation now is one CPU core per HDD (just to be able to manage the data coming onto and off of said HDD).
I don't think there's going to be enough RAM, even with the AMD EPYC processors (4 TB of RAM), to be able to run ZFS dedup on this.
One core per hard drive? Never even remotely heard of that before... (And it would likely be news to NetApp, who as of a couple years ago had dual core CPUs in a few of their 28-bay offerings.)
(Have heard of 1 GB of RAM per TB of storage...)
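For the dedup question specifically, a rough estimate can be made from the commonly cited ~320 bytes per deduplication-table entry. The record size and pool fill level below are assumptions for illustration:

```python
# Rough ZFS dedup table (DDT) RAM estimate.
# ~320 bytes per DDT entry is a commonly cited rule of thumb; the 128 KiB
# average record size is an assumption (the ZFS default recordsize).
def ddt_ram_gib(data_tib, avg_record_kib=128, bytes_per_entry=320):
    blocks = data_tib * 2**40 / (avg_record_kib * 1024)  # unique blocks in pool
    return blocks * bytes_per_entry / 2**30              # DDT size in GiB

# 100 x 12 TB drives is roughly 1090 TiB raw; assume the pool is half full.
print(f"{ddt_ram_gib(545):.0f} GiB")
```

At the default recordsize the table (~1.4 TiB) would still fit in 4 TB of RAM, but with small-block workloads (say 16 KiB records) it balloons roughly 8x and easily outgrows it, which is the commenter's point.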
I _REALLY_ want to get one and fill it with 18/20 TB drives
Hey Patrick, can you review the DDN AI7990X next? ;)
How long could you run them in RAID 0 until the first one breaks?
Hey Patrick. Can you suggest a high-core-count 2-4U server with as few bays as possible for 250 TB of storage? It's really hard for me to figure out the right hardware given the limitations.
Spoiler: it's for my home usage. I want to keep my bills low and use less space. 200 TB will go to my vault, and the remaining 50 TB (42-ish maybe) will be running all my machines, with that high core count.
I'm sure the 100x 12 TB drives are not cheap coming from Dell; you'd expect them to be $350-400 each at good pricing, meaning they are probably... $700 each from Dell? :)
Close, $837 so says a quick Google search. So close to $90k just in drives? Do we get a bulk discount?
Hello, I want to ask: what is better for a database server, dual AMD EPYC or dual Xeon Platinum? Thank you.
Maybe a better question for the Milan then Ice Lake launches, given that we are at the very end of the cycle. If you have big in-memory or persistent memory needs, then the 4-socket Cooper Lake system is the best option. Otherwise, a lot is going to change.
The only thing about these vertical drive bay designs is that they're not so hot-swap friendly. Especially if you're running tight cable management and you have to slide the whole server out to pop the lid for a drive swap. Unless the server rails are beefy enough to withstand the 400 pounds while fully extended, but even then I'd be a little scared to stand next to it while servicing.
How do you reasonably get 2 kW to a single UPS/outlet in a home?
Okay, haven't got far into the video yet, but how many forklifts required to lift?
Only one :)
This would be very fun to put together... for me at least
My friend and I combined are struggling to fill a 6 TB HDD on a cloud server. It gets more annoying when you want to back up the files and they're just cluttered on the bare HDD with no proper software.
You should definitely show how difficult the toolless system is, though. Sounds fishy if they pre-did it for you. I've had some toolless systems that were much more work than screwing four screws into a metal bracket.
I did about 10 of them. First one was slow just figuring it out. #2-10 were fast to get in/out so did not go into much detail about it
Samsung has 30 TB 2.5" SSDs available; imagine the system you could build with those in 5U.
can you even pull it from the rack on rails while it's fully loaded without either buckling the cabinet or tipping it over?
Yes. This is basically required to service the top loading bays. Our racks in the data center are bolted to the floor though.
Most likely you're not allowed to mount above the middle of the rack.
I wonder if TrueNAS could run on it, given the RAID controllers in it.
Yes cool hardware, but how did you utilize this hardware? ZFS, FreeNAS, ? How did it perform? What about the management interface? The real stuff we need to know assuming the hardware works.
We covered the management, which is standard iDRAC 9, in the main site review, and showed in the video this setup with ZFS (albeit ZFS on Linux). Also discussed that most installations will use Ceph, Gluster, or another scale-out solution.
Imagine those $40k 100 TB 3.5" SSDs x100 in this chassis...
I would really like to know how a single PERC card can saturate 25GbE. I've seen the bandwidth specs, and I've never seen cards come close to hitting those specs; they must have used unicorn configurations. An array like this would most likely be running RAID 60, unless you don't care about your data. RAID 60 here would be 4 stripes of RAID 6 across 25 drives each; that's a lot of work for the PERC to do. I'm going to assume that's a 16-channel PERC card with 12 Gb/s channels, but maybe it's 8 channels at 12 Gb/s. I've generally been disappointed by RAID controllers: when you lean on them with RAID 6, their throughput numbers go way down. My modest setup went from 16x 2 TB SATA2 drives to 8x 4 TB SAS3 drives running at SAS2 speeds on a 6805T. With RAID 6 on 8 drives and a 4-channel SAS2 uplink, the best the card can do on sequential is 800 MB/s, mixed mode 400 MB/s. Unless there is amazeballs tech in that single PERC (LSI, I'm assuming), I have doubts that it would saturate 25GbE with RAID 60.
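For what it's worth, the capacity side of the layout described above (4 spans of RAID 6, 25 drives each — the commenter's assumption, not a Dell reference design) works out like this:

```python
# Usable capacity of the RAID 60 layout assumed in the comment above:
# 4 RAID 6 spans of 25 drives each, striped together (RAID 60).
def raid60_usable_tb(total_drives, spans, drive_tb):
    drives_per_span = total_drives // spans
    data_drives_per_span = drives_per_span - 2   # RAID 6 costs 2 drives/span to parity
    return spans * data_drives_per_span * drive_tb

print(raid60_usable_tb(100, 4, 12))   # 1104 TB usable out of 1200 TB raw
```

So the parity overhead is only 8%, but every write incurs two parity computations per span, which is exactly where the controller-throughput concern comes from.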
Hello, I just found the DSS FE1 riser card assembly in China, and the riser itself without any drives only costs $80. I've asked the seller, who's within a Dell assembly facility; it works on a normal server. If you can find a way to ship it from the mainland to the States, I'd love to help.
Are racks and the surrounding infrastructure even designed to carry that kind of weight? How do you even work on a machine that heavy? Standard rack height is 42U, so you can have like 7 of these systems in a rack, plus a switch, UPS, and/or a load balancer server or whatever. The vibration on the rack with 700 HDDs must be crazy.
That is a concern with these and is why they are often only installed in half of a rack (plus it is easier for top-loading service). Static loading is often OK, but dynamic loading is an issue if the rack is full and drives are installed.
What is really funny is that two of these assemblies could replace all of Linus's Petabyte Project nodes. XD
Which OS? thx
That system would do for all my storage needs for several years. Unfortunately, cost of the unit aside, plugging it in would blow every breaker in my house, on top of melting the >shudder< aluminum wiring.......
Yeah, but how much does one of those suckers cost (no drives included)? 'Cause I tried to look it up and I can't find it.
I am not sure these are sold without drives.
@@ServeTheHomeVideo You can get most servers barebones. I can't afford 100 HDDs off the bat but could work toward it over time. So spitball a price.
Also, thank you for the reply =3 I'm new to owning a server and I'm so proud of my pile of junk ^...^
So, Unraid will run fine on this, right?
ROTFL... no
Hi Patrick,
Where to buy this server, please?
How much does it cost?
Looking forward to your reply.
Thanks for the awesome video :)
These you would call up your Dell sales rep for.
@@ServeTheHomeVideo Sorry, we don't have any Dell sales rep in Nigeria. Do you have one that can help me with the purchase and shipment? Thanks 👍
Likely need to find a local Dell rep at Dell.com.
I want this… but all I have is 72TB of random drives in a case I cut with a Dremel to put in extra hot swap bays
Giveaway! Giveaway! Giveaway! 🎉😂
Did you record the video in 30fps? Try 60fps.
1.8 PB not including the 1u storage
At the current commercial limit of 8 TB per SSD you would max out at 800 TB; however, you might run into bus speed limitations with Gen3 and Gen4 SSD types.
You max out at ~100 MB/s on 7200 RPM hard drives;
can't comment on 10-15K RPM.
Based on 100 drives @ 18 TB each you'd be looking at 800-900 TB in caching
So when's the Milan video coming out?
Assuming all goes well today, tomorrow www.servethehome.com/amd-epyc-7003-date-set-for-milan/
For just $4M you can get 10 PB raw if you use the 100 TB 3.5" SSDs. Just imagine. Maybe the next Linus Tech Tips project XD.
Those 2.4 kW 80 Plus Platinum PSUs must sound like a jet engine taking off.
Any bad drives?
Lol, now imagine this filled with ExaDrive 100 TB 3.5" SSDs... and using "regular" mdadm RAID rebuilds upon multi-disk failures ;) Were it not for advanced filesystems/volume managers like ZFS, regular block rebuilds in these servers, once fully filled, would take days or weeks ;)
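The days-or-weeks claim is easy to sanity-check. The sustained rate here is an assumed, optimistic figure, since real mdadm rebuilds usually run slower under load:

```python
# Naive full-block rebuild time for one huge drive, assuming the rebuild can
# stream at a fixed sustained rate (optimistic; real arrays throttle under load).
def rebuild_days(drive_tb, sustained_mb_s):
    seconds = drive_tb * 1_000_000 / sustained_mb_s   # TB -> MB, then seconds
    return seconds / 86_400                           # seconds -> days

# A hypothetical 100 TB ExaDrive at an assumed 200 MB/s sustained:
print(f"{rebuild_days(100, 200):.1f} days")   # 5.8 days
```

About six days per failed drive, and mdadm rebuilds the whole block device regardless of how much is allocated, whereas a ZFS resilver only walks live data.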
Better than LINUS TECH TIPS Every day of the week.
Crazy cool server. I bet those fans suck back a ton of power too.
WHAT? i cant hear you over the noise of these fans!
19:16 Warning: "When installing the chassis into the cabinet, there can NOT be HDDs inside."
Patrick: We're going to test that
Google might be eyeing this up? Or Amazon, maybe? Nice design.
Nah, they custom design their own.
I watch your channel with Ad blocker paused
So basically it's a two-unit half-height blade chassis optimized to hold 100 HDDs. These kinds of things have been around for ages, I guess. Nothing really to boast about.
I really wish they had stuffed more controllers in and built HA into it. SOOOOO close....
Highlight of this video: Get Ready For Milan
Launch day review coming up?
Stay tuned tomorrow.
@@ServeTheHomeVideo I will!
One word: HPE, with InfoSight, if you want a truly superior product.
I work in IT, but even I think the amount of power we use in all the datacenters, AND all the data we collect, is getting really unhealthy... It is getting out of control... (I work for a large government institute, and sometimes I think to myself: oh my god, another new software project that collects even more data than we already have. We need to draw the line somewhere.)
Any chance they will avoid price gouging so that a system like that can be affordable for home use?
I am unsure of how to answer this. Even 100 drives at $400/drive is $40,000. Drives fail over time and you would likely want more than one of these systems so my sense is that these are only purchased by businesses.
@@ServeTheHomeVideo I was thinking more of using cheaper drives, and possibly implementing a way for users to expand their storage over time. For example, every year most people will purchase a few hard drives on Black Friday or during other sales, and it would be cool to have a system that they can expand over the course of multiple years, adding more storage to the same system.
@@Razor2048 Look into using Ceph; pair that software with used eBay Supermicros and you can achieve this. Ceph is not designed specifically for single-node use, but it actually works well.
Good job flexing on us :)
410 pounds? Holy shit! How did you get it on the table for this video?
That includes the pallet, boxes, rack rails, foam, power cables, and such. There are handles, but the key with this, and many large multi-node and/or accelerated systems, is to strip the chassis, then insert components while it is in the rack or on the photo table. I have been doing this for years. Pre-pandemic I had a 395 lb deadlift, but I still did this whenever moving a bulky server.
@@ServeTheHomeVideo Interesting experiment would be "deadlift fragile expensive objects". I wonder how much that de-rates a lifter's peak, knowing it needs to be set down gently and not dropped at any point....
My 40TB Plex server is now embarrassed
:P
I'm at 1/10th of a Petabyte - I'm catching up to you! lol
Huge. 👍
LINUS: 2 PB SERVER, 100Gb LAN
ME: 10 TB HOME NAS, 1Gb LAN
My new desktop
Ha!
No vibration damping in the carriers, frame, or interconnects... NO. Back to the drawing board, boys. I have 4-5 drive units that vibrate the entire desk at high frequency (I sit them on old mouse pads to help damp it). 100 drives will shake the welds apart. And the thing looks to be made of tin foil. I have 2U Sun servers made by Fujitsu that could lift a car (the case alone weighs 100 lbs). [I joke they thought it was a cargo ship]
As he said in the video, Dell has surely tested this with clever hardware design and appropriate firmware, since they give a warranty on the drives (and will even come and swap them for you).
Wondering what the Dell marketing/biz/suits said about this video and his, umm, "critique" of their marketing?
They are not happy. Such is life.
Could be cool to go and get one in 5 or 10 years, when they decommission them.
Nice server. I can only imagine the cost $$$$$$$$$$$$$$$
PS: if someone gave me this server I would screw in 2,000 screws :)