Building My ULTIMATE, All-in-One, HomeLab Server
- Published 1 Jun 2024
- Today I built the ultimate, all in one, HomeLab Home Server to handle everything.
Sliger sent this case to me; however, they asked for nothing in return.
Other 4u Cases
- Sliger CX4150a - www.sliger.com/products/rackm...
- SilverStone RM44 4U - amzn.to/3K0wpmk
- RackChoice 4U - amzn.to/3UB8bEf
Other Parts
- Samsung SSDs - amzn.to/3USTxtj
- Corsair Airflow Case (newer) - amzn.to/44BV0HI
- 10g Ethernet adapter - amzn.to/3wkN0hP
- LSI HBA - amzn.to/3UWBuCN
(Affiliate links may be included in this description. I may receive a small commission at no cost to you.)
Video Notes: technotim.live/posts/ultimate...
Support me on Patreon: / technotim
Sponsor me on GitHub: github.com/sponsors/timothyst...
Subscribe on Twitch: / technotim
Become a YouTube member: / @technotim
Merch Shop 🛍️: l.technotim.live/shop
Gear Recommendations: l.technotim.live/gear
Get Help in Our Discord Community: l.technotim.live/discord
Tinkers channel: / @technotimtinkers
00:00 - What I want out of a HomeLab Home Server
01:19 - Selecting a case / chassis
02:23 - Use Old case?
02:58 - New or Reuse?
03:33 - Other Case Options (Zack Morris style)
03:51 - Thinking about Hacking this chassis
04:19 - CPU & Motherboard
05:39 - Disassembling
06:48 - Component layout
08:11 - How to get 15 SSDs in here
08:57 - Maybe print some parts?
09:45 - For now, it's jank
10:24 - Test flight
11:02 - Power usage
11:37 - Testing components with an OS
12:18 - Networking
13:02 - Temperature checks
13:30 - Testing GPU
14:45 - SSDs are here
15:19 - Racking Server
15:56 - Weird Gap
16:20 - Selecting the operating system
Thank you for watching! - Science & Technology
Sorry about the mistake by saying 5.25" drives! While researching and testing, I was trying to figure out how many drives I could fit in the Corsair's 5.25" bays and somehow that got into my script. 🤦♂ In the spirit of mixing things up, let me know what you've mixed up before!
Please try Unraid. It is very different in ways I'd like you to show people. I finished an ITX build on Wednesday and by now (Friday) I have an entire arr stack with multiple instances of certain containers running, even while being pretty busy at work.
i was about to comment on that xD yeah i mixed stuff up too, i can't come up with anything rn though
great video btw
Gave up on floppy disks long ago
leaving mistakes in is a surefire way to drive engagement lol ppl love to tell you when you're wrong hahaha
I hate typing "disk" in front of someone at work and accidentally typing "dick"
I, also, hate using 5.25" hard drives. Such a pain ;)
I know, the Quantum Bigfoot, it's a nightmare; it doesn't even have UltraDMA mode.....
I had 5.25 on my mind because I was trying to see how many drives I could fit in the Corsair case's 5.25 bays when writing this🙃
@@TechnoTim how many can you fit?
Better than using 8” floppies
@@yuan.pingchen3056 LOL I remember those, I only had one in my time... I also remember MFM/Winchester drives from the AT/XT days.
Hey Tim, I’m hard of hearing, but I just want to say thank you for your time and effort in adding subtitles 😊
No problem! I try my best everywhere, even on websites with A11Y!
Your public library might have a 3D printer if you don't want to purchase one. I use my library's printer often.
5 1/4 DRIVES?? They're 3.5 inch!
Oof! What the heck was I thinking when I wrote this. I think this snuck into my brain because I was playing around with my old Corsair case and was trying to figure out how many drives I could fit in the 5.25 drive bays 🤦♂
@TechnoTim we've all been there, lol!
@@TechnoTim I think I still have some old SCSI or maybe RLL 5.25" HDDs. Just in case you need one that has a sum total of 32 MB (megabytes).
He was using Quantum Bigfoot drives ;)
Beat me to it!
4:40: x16 will only run at x8, and x8 runs at x4, if you are using a feature that shares the same lanes (the x16 slot and a Gen4 NVMe slot, for example).
It's important to know your motherboard's limitations, like how many PCIe slots it has and which features share the same lanes.
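The lane-sharing behavior described in the comment above can be sketched as a simple budget check. This is an illustrative toy model, not the wiring of any specific board; the slot names and lane counts below are hypothetical:

```python
# Toy model of CPU PCIe lane budgeting with shared lanes.
# All slot names and lane counts here are hypothetical examples.

CPU_LANES = 20  # e.g. a consumer CPU exposing ~20 usable lanes

def negotiated_widths(m2_populated: bool) -> dict:
    """Link width each slot actually negotiates.

    On many boards the primary x16 slot drops to x8 when an
    NVMe slot (or a second slot) borrows lanes from it.
    """
    if m2_populated:
        # x16 slot bifurcates: 8 lanes remain, 4 go to the M.2,
        # 4 stay reserved for the chipset uplink
        return {"x16_slot": 8, "m2_gen4": 4, "chipset_uplink": 4}
    return {"x16_slot": 16, "chipset_uplink": 4}

widths = negotiated_widths(m2_populated=True)
assert sum(widths.values()) <= CPU_LANES  # the CPU's budget is a hard ceiling
```

On a real Linux box, `sudo lspci -vv` shows the negotiated width per device in the `LnkSta:` line (e.g. `Width x8`), which is the quickest way to confirm what a slot is actually running at.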
Don’t go with the 870 EVOs; I made the same mistake (they are consumer drives and wear out quickly!). I replaced all of them with Samsung SM883s (ZFS pool).
For an ultimate all-in-one homelab server, a hypervisor without even thinking. One solution (nearly) fits all.
200 watts idle. Where I live (the Netherlands) that's about €500/year.
Ouch. A typical USA kWh price is $0.15, which is about $262 a year for a constant 200 watt load.
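The figures in these two comments are just a constant-load energy calculation; a quick sketch (the rates are the commenters' examples, not current prices):

```python
def yearly_cost(watts: float, price_per_kwh: float) -> float:
    """Yearly electricity cost for a load that runs 24/7."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * price_per_kwh

# 200 W idle, around the clock:
usa = yearly_cost(200, 0.15)  # ~262 USD at $0.15/kWh
nl = yearly_cost(200, 0.29)   # ~508 EUR at an assumed ~€0.29/kWh
```

This is why idle power, not peak power, dominates the running cost of an always-on homelab box.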
You can thank the failed sanctions on Russia for that.
Proxmox for the win!
No reason why, I just love it.
Also, thanks for the info on the Sliger cases. You just made me spend more money, they look great and are made in the USA for a reasonable price. My Threadripper platform is getting a new home. :)
I’ve run for years an unRaid server which had a w10 VM with GPU and NVME passthrough that I used for playing games and the rest of the system was used for docker stuff: Plex, arr suite, homeassistant and a large etc.
Just make sure you have a Renesas chip based USB PCIe card passed to the VM so you can plug and unplug peripherals without freezing the VM.
I went bear metal on my home lab server for a while, because it could do everything I wanted. However, things changed and now I've reinstalled everything on Proxmox. The overhead is low and you have the flexibility to do anything in the future. So I would just install a hypervisor of your choosing.
Grizzly, Black or Brown ?
@@Marsh.x Sorry i didn't clarify, black bear metal of course.
Probably too late to the conversation but recommend sticking with Proxmox but using SRIOV to pass through part of the GPU to multiple machines
15:18 When racking servers by myself, I've found that there's usually holes in both the sliding part of the rail in the rack, and the stationary part of the rail (the portion that attaches to the rack itself). Every rail is different, so it always takes some experimentation, but I've found that I can put a spare screw/toothpick/pointy-thing through both holes so that sliding part of the rail doesn't push back while I get things lined up. I do this for both sides but sticking out different amounts so I can line things up one side at a time. Just be sure that the holes you pick in the rail can be reached from the front of the rack.
I moved to a single giant server build a while back from my own giant rack with Dell PowerEdge servers, because it was too much power usage for most use cases. For a single server build, I found Unraid to be the best base OS for me. 5 years later, still rock solid and never had any major issues. Kind of on autopilot and it just works.
Been running unRAID for 🤔 3+ years... I've lost count. 😆 The UI makes it super easy to manage everything.
Yes! Thanks for the change in content
Same here. 😁👍
Taking a play from my corporate sys admin world. Separate storage and compute boxes.
Going to build a TrueNAS Scale box as the central storage for the entire homelab, then use the 2nd unRAID license to build a fresh compute server. Both will have 10GbE until I can swap for fiber.
Awesome stuff, thanks for sharing!
Proxmox or xcpng are what I'd go with. It's nice to not have projects competing for ports or service configs.
Strap a fan to that HBA. It needs airflow. Look up the CFM needed for it and you'll see why, or touch the heatsink after running under load for a while.
Perhaps not the best idea to touch it, particularly when it's installed like that where it's bound to get pretty hot
Hey Tim, great build, can't wait to see how it works out; always looking for projects like this. I ended up with the iStarUSA D-410-DE36 case that allows for 36 drives a while back for the hotswap trays. Paired it with a NORCO RPC-4224 4U which uses 24 3.5in drives. The purpose was to run a flash NAS and back up to a spinning-disk NAS. If you end up making 3D printed drive holders for this project, I would love to see that adventure.
getting a long pcie riser cable and making full use of the CPU you have seems like the best move imo
I don't think this is the ideal project to try Unraid on, but you really need to try it if you haven't. I always have at least a couple of servers in my rack and love changing things out, but the Unraid box is always there
When you were talking about the power supply with those giant connectors you've never seen before, you really proved to me how old I actually am, so thanks for that. LOL. Looking forward to seeing how this turns out.
@5:04 I don't think the CPU lane calculation is as simple as you explain; you should probably check the motherboard info for how many lanes are used for internals like LAN, USB, etc.
This describes EXACTLY what I want to do (in the intro anyways). Perfect workout watch.
I'm excited for the software part! What would you say are the disadvantages of proxmox in a build like this? Doesn't that give you more flexibility?
"I want a machine that does everything" 1 minute later "I already have a NAS" :P
To say something more productive: I prefer buying used enterprise drives because their TBW rating is vastly higher. Even compared to high-end consumer SSDs it is often 3 to 9 times higher, and it gets so much worse if you compare against QLC drives. Considering things like write amplification and RAID, you can run through a lot more written data than you expect. That is probably a trade-off you made intentionally, considering you have a separate NAS that can serve as a backup target, but it is something to keep in mind for others attempting to replicate it.
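The endurance trade-off in the comment above can be put into rough numbers. The TBW ratings, daily write volume, and write-amplification factor below are assumed ballpark values for illustration, not measurements:

```python
def drive_life_years(tbw_rating_tb: float,
                     host_writes_gb_per_day: float,
                     write_amplification: float = 2.0) -> float:
    """Rough years until a drive's TBW endurance rating is exhausted.

    write_amplification covers RAID/ZFS and internal flash overhead;
    2.0 is an assumed ballpark, not a measured value.
    """
    tb_written_per_year = host_writes_gb_per_day * write_amplification * 365 / 1000
    return tbw_rating_tb / tb_written_per_year

# 100 GB/day of host writes:
consumer = drive_life_years(600, 100)     # a ~600 TBW consumer-class rating
enterprise = drive_life_years(1800, 100)  # a 3x higher enterprise-class rating
```

On a live system, `smartctl -a` reports total units written, which gives a real measured write rate to plug in instead of a guess.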
Thanks Tim.
With 'only' PCIE gen 3.0, you can hardly call it the 'Ultimate' HomeLab server 2024.
If you do find a way to convert those 3.5" bays to 5.25", Wendell has shown some interesting enclosures that adapt 5.25" bay to 2.5" or nvme flash backplanes
Sliger offers watercooling options for the Threadripper/Epyc line of CPUs in that chassis. I'm not aware of any constraints that would stop you from using random AIOs as long as they aren't too thick for your GPU clearances. There is a build someone did in that case with a thick 360MM cooler and a 3090ti.
I can’t remember landing the plane in Top Gun on the NES. Well done ! 🎉 As for the OS, I would vote for Proxmox or a regular distribution like Debian, Ubuntu or a RHEL clone. You could use Cockpit if you want to spin up VMs on them.
Tim, great video! Have you considered exploring Proxmox on this server and demonstrating GPU passthrough? It would be valuable to see which remote software is optimal for accessing VMs, and perhaps even conduct a gaming test. Looking forward to your future content! Greetings from Bosnia :)
I'm tempted to Threadripper my next all-in-one homelab build, but this is a good and cheaper alternative I feel.
Nice video!
I rackmounted my PC a week ago. The chassis was less than (the equivalent of) £100, including rails, and fits my 3 chunky radiators in too. My office is so much cooler and incredibly quiet now.
5.25" hard drives?? Sign me up!!
Kewl setup; especially the delivery of the SSDs is funny 😂 they weren’t stolen
Interesting. I am literally building a AMD Threadripper 7960x using a Sliger 4170i case with the ASetek 836SA AIO cooler. Just waiting on the case to be delivered to start the build.
Try Unraid! I love it and is very user friendly
Love the video. Just set up some local AI following NetworkChuck's video and it works awesome. Can't wait for your take.
I'm about to rebuild my server and I'm actually really curious what OS you're gonna run because it may influence my decision. I really didn't enjoy truenas because setting up apps wasn't intuitive at all. Curious about unraid but not thrilled about the price. Proxmox seems logical but I wanna be able to add more storage later on (Ideally) into the same pool. Cool build! Will be looking forward to the part 2!
Switch to a fiber SFP+ transceiver and it will run much cooler. The RJ45 copper ones run really hot and use a lot of power.
DACs are a nice alternative as well.
I have a dual 10 GigE card in my unRAID and noticed firsthand how toasty it is. Now I wish I didn't give away the SFP card 🤦🏻♂️
I'm curious how your SSD pool works out. I have a TrueNAS Scale setup with 6 2TB Samsung EVOs and I've struggled to get decent performance. Literally it's less than a single disk. I tried striped 2 x (3 raidz1), 3 mirrored vdevs, and raidz1 across 5 of them.
Hopefully you can do a follow up video on your SSD pool setup and final performance numbers.
Consumer models' performance will drop, I believe, because of the drive's cache; PRO models address this, and enterprise SSDs will be far superior. At the end of the day, you get what you pay for...
You mention in the script that the AMD has more PCI Express lanes, but in the video you showed 20/16 and then said "same number"; actually the Ryzen series has 28 native PCIe lanes (24 usable).
You mentioned this will not be taking over your NAS role. What about your firewall? I've seen some folks virtualize pfSense or vyos, but I don't think I would be comfortable hosting a firewall/ids/vpn on the same machine in case the virtualization solution has a security issue.
Thanks, Tim. I thought I was over getting traumatized by the carrier landing in Top Gun lol
omg the top gun carrier lmao. I've never once landed it as a kid.
So apparently we are on the same tech wavelength and I just now noticed it, lol. I too just built a few servers to do all the things. I ended up going with an Epyc system for all 128 of its PCIe lanes dumped into a HL15. Another machine made use of the SilverStone RM41-506 4U chassis. I needed the 5.25" bays for the tape drive and an Icy Dock 4x 2.5" SSD hot swap cage, but the GPU just fits :)
I've been eyeing the dual Epyc setups on eBay to eventually migrate everything to. I have absolutely no need for all 128 lanes or cores/threads, but it'll be all I'd ever need and then some.
@@DrDipsh1t Wow, yeah, dual Epyc would be Epic 😂 I am actually making pretty good use of the PCIe lanes on my 7282 16c/32t rig: an x16 PCIe card that holds 4x NVMe drives, a 16-port HBA, a Radian RMS-200 edge card, and a dual-port 10gig SFP+ NIC. I still have 2 more NVMe M.2 slots, 2 OCuLink ports, as well as 2 mini-SAS connectors on the motherboard I could populate. It's an EPYCD8 board from ASRock Rack.
If you didn't have the hardware already, I would've recommended a Threadripper system because it offers more PCIe lanes.
(That's the direction that I'm heading down, except that I'll likely end up with something like an 8U chassis and then using PCIe risers/extensions so that it won't cover the rest of the slots. My other option will be a Supermicro 4U or 5U GPU server, but those are limited to dual-slot-wide GPUs only, which means that my 3090s will be blocking some of the other slots.)
Ooh look at that sliger case! Very pretty! PCI lanes is why I’m still using x299 systems :/
Great video and awesome build Tim. Thank you.
I had the same idea a few months ago (all encompassing build with a lot of PCI), and I ended up going with a barebones Dell Precision T5820. I threw in an Intel Xeon W-2140B and some ECC RAM. Don't sleep on repurposing used workstation hardware!
Love my Sliger cases. Have the CX4712 for my NAS and CX4150a for my desktop. I will probably buy a CX4200a for a GPU upgrade though. The 3090 FTW BARELY fits with a low-profile radiator; it just fits a credit card between them.
I would go with Unraid for a build like this as it's easy to use and has a lot of features, and if you really want to tinker, it's running on Linux and has a terminal for any custom tinkering :)
Question, is that power supply able to handle the power spikes 3090s are infamous for? Genuine question
Lol that saved-by-the-bell timeout
Same, though it took me a few seconds to realize why it looked so familiar.
Thanks! Yeah, a late edit. I actually deleted the background and then added it back. No regrets
15:25 - Interstellar - Docking Scene
LLama3: Endurance Rotation is 37,64RPM. It's not possible
Techno Tim: No. It's necessary
Pretty case :-)
There are lots of makerspaces that have 3D printers. Some public libraries as well.
This is the road I've been going down
Custom watercooling would give you back some more lanes. But I am maybe a bit too much in love with watercooling 😂
100% add this as a worker node to your K8s cluster! No need to have a different management layer!
K8S ALL THE THINGS!!! 🤘
16:55 do it! Unleash that hardware, sir!
So tired of them artificially holding back pcie lanes and also holding back on bifurcation of pcie lanes on consumer cpu's and motherboards.
For the 2.5" drives- print out a bar that goes across the top of the drives and long enough to touch both sides of the case. There would be small ridges printed into the bar at each drive location to keep them in place. That way you don't block air flow. And you could easily lift off if you need access to drives.
If you look at the schematic again, it says right there which slots are which. For example, the first x16 slot says "CPU SLOT6 PCI-E 3.0 x8 (in x16)", so that is an x8 in an x16 connector which is connected to the CPU, and the last x8 says "PCH SLOT1 PCI-E 3.0 x4 (in x8)", so it's PCIe via the PCH and only has 4 lanes. So in any case the GPU will only run on 8 lanes in either of the x16 connector slots, as both are only x8 (in x16).
Thank you! Yes, I actually edited out the part where I explained that, I probably should have left it! When I did the math you can see I adjusted for it (8 x 6 = 48, and the last was 4 in an 8)
There are companies that offer 3D printing services; if you are not going to be printing stuff regularly, it could be a good idea to look into one of those services.
Great video, love your content.
I'm kind of surprised you're waiting for Intel 15th Gen. Right now, that seems to be a really losing strategy based on their power problems and lane management. While you can say "only a few more lanes" on the Ryzen, those lanes can have real impact on what's available to you if you are looking for onboard 10Gb plus multiple PCIe x16. For most users, you might find a Threadripper Pro 3, 5, or now 7 series available (new or used), and you can end up with a WRX80 or WRX90 board and get 128 PCIe lanes, which will cover absolutely anything you've ever thought about. I would put that into high contention on your list.
Do a Harvester install: open-source, Kubernetes-based IaC for containers and VMs with immutable nodes.
Xeon. ZFS. Consumer SSDs (granted they are dirty RZATs). 10gig Ethernet. AST2500. Anything but the RTX 4000 SFF Ada. I see great moments of joy and power efficiency in your future.
I'm just now noticing the shirt. I like it, haha!
I love it!
I'd like to cast my vote for a baremetal setup. I have a Proxmox server myself and decided to turn an old ITX box into a minecraft+calibre server, chose to do baremetal Debian (Crafty Controller setup was problematic for me on Ubuntu). Kind of wondering what all I can run on this, wouldn't mind some inspiration from whatever you come up with. My little setup can't handle much, but I'm dreaming of upgrades to do more in the future.
I'm contemplating a similar build, and here are my observations:
- for proper local AI self-hosting, a Mac Studio with 64GB unified memory is MUCH more suitable than an RTX 3090, which is limited to "only" 24GB VRAM,
- that said, nVidia GPUs utilizing CUDA are often faster than Apple's neural engine,
- all reasonably-priced Xeons are limited to PCIe 3.0, which is not futureproof at all and already a bit of a bottleneck for current GPUs,
- any strong GPU takes a lot of space and covers most PCIe slots on the motherboard, so it's rather difficult to decide on a PCIe expansion layout; preferably the GPU goes in the "lowest" x16 slot, which then needs a bigger-than-ATX case,
- that said, some local AI tools can utilize multiple GPUs or even multiple computers, so there's a delicate (cost/efficiency) balance between running, let's say, 4 GPUs in one rig vs. 4 computers connected via 100GbE,
- the latest Windows Server or plain Windows Pro with WSL (Windows Subsystem for Linux) seems the best OS for the widest range of local AI applications; arguably more manageable via Proxmox but with undesired performance loss,
- to sum it up, one all-in-one server doesn't seem that efficient, depending on what you run on it
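The 24GB vs 64GB point in the first observation comes down to whether a model's weights fit in memory. A back-of-the-envelope sketch (the overhead factor is an assumption, and real requirements vary by runtime and context length):

```python
def model_vram_gb(params_billion: float, bytes_per_param: float,
                  overhead: float = 1.2) -> float:
    """Rough memory needed to run an LLM.

    bytes_per_param: 2.0 for fp16, roughly 0.5 for 4-bit quantization.
    overhead: assumed ~20% extra for KV cache and activations.
    """
    return params_billion * bytes_per_param * overhead

fp16 = model_vram_gb(70, 2.0)      # a 70B model in fp16: way past a 24 GB 3090
four_bit = model_vram_gb(70, 0.5)  # 4-bit quantized: fits in 64 GB unified memory
```

This is why large unified memory can beat a faster GPU with less VRAM for big models, even though CUDA wins on raw throughput once the model fits.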
Tim, ever consider the RTX 4000 Ada? It has 20GB of VRAM and AV1 encoding, all in an SFF single slot with 75W power consumption. Should be boss enough to play most games.
Can I please have the STL files for the SSD cage you showed at 9:07, or at least guide me to the closest model for personal use?
Try out Unraid. It's IMHO the best OS for a single system: file shares, VMs, passthrough, containers, and ZFS support.
"Try" and that's it! Don't get me wrong. Unraid is good but...
ZFS isn't officially supported, only a community plugin, and it's running in userspace! So your SSDs & 10GbE are going to waste...
So many great open source OSes to choose from... if money doesn't mean anything to you, by all means
@@kbng02 ZFS on Unraid is officially supported since 6.12, released in june 2023.
ZFS on Linux exists only as a kernel-level module, so what are you even talking about? Even with the community plugin it was running the same kernel module.
@@kbng02 ZFS is officially supported since last year, and ZFS always runs in kernelspace; you are out to lunch
Tiny subtitling correction at 2:05. I’m pretty sure you just left a gap in speech, rather than starting a new sentence.
Generally though, your subtitles are very good. Thank you for taking the time to do them.
Thank you! All fixed!
I would like to see you give Unraid a shot since you haven't yet; selfishly, I am curious what your thoughts would be about it on a setup like this. I started with a single PC like this a decade ago running Windows Server 2012 R2 and Hyper-V. Over the years I upgraded that server and eventually replaced it with a much smaller Proxmox cluster. I still have that machine and I am not ready to let it go yet. I have thought about Unraid or TrueNAS.
Hi. I would like to know your opinion on how long the SSDs would last compared to regular HDDs on a more intensive writing/reading application. Let's say rtorrent with 2-3TB of data. I have a dedicated HDD just for this purpose that is passed directly to the VM and I am reluctant to change it for an SSD.
Thanks! The way I look at it is that all drives have a lifespan; some fail faster than others, and others outlive their lifespan. Since I look at all drives through the same lens (whether SSDs or spinners), I just make sure that they have a long warranty and keep one spare just in case. These Samsung drives have a 5-year warranty, and out of the 30+ EVO/Pro Samsung drives I have bought over the years, only 2 failed, and Samsung replaced them within a week. For this reason, I typically go with consumer grade.
Supermicro manuals will tell you how the PCIe slots are wired. Even if you don't use a slot that's, say, wired for x4 but sits in an x8 connector, you don't magically move those lanes somewhere else; they are stuck there. I'm willing to bet two of those x8s are actually x4; I haven't looked at the manual for that board, but that's more likely.
Never mind, I just saw in the diagram that the last slot is wired x4, and those x16s are just x8. So your GPU will most likely get a small bottleneck.
Yea try incus! Changed my homelab life up! Run it on Debian
With the price of electricity I like to keep the 24-hours-a-day wattage to a minimum, about 50-60 W including an RTX 2060 Super.
I have a Ryzen 5700G with 4 x 4TB SSDs and 64GB RAM, but no ECC memory and only 20 lanes.
I plan to build a server with the 5700G and 64GB RAM as well. Do you have any numbers on the power usage without the video card? I'd guess it will be around 25-30W, but numbers are hard to find online (at least for me). Would appreciate it if you could help :)
@@pesfreak18 Switch all the power boost modes off; with 2 x 4TB SSDs and 2 x 2TB NVMe drives, a small NAS on a DeskMini X300 just set up as a Samba server is 15 watts at idle according to my power monitor plug.
But my other system on a B550 motherboard, with the GPU and about 10% CPU load all the time, is about 50-60 watts. I think I must be the only person running a 5700G water cooled with a 360mm cooler, but that came from removing my X570 motherboard with a 3950X because I could not get it below 120 watts at idle.
16 cores / 32 threads was a little overkill for what I wanted my home server to do; in fact I think the 5700G is overkill.
@@peteradshead2383 15 watts seems very efficient. Water cooling is a little bit overkill for me too, but it sounds like a cool project. The Ryzen is maybe overkill, but I want to experiment with game servers and maybe AI in the future. I think I can utilize the CPU well enough with these tasks.
So why are you preferring 48 PCIe 3.0 lanes over 20-24 PCIe 5.0 lanes?
I would go bare metal and install Cockpit and cockpit-machines for the VMs, with LXC images.
The last PCIe slot on that motherboard is served by the PCH and it’s a 4x, so it doesn’t “count” for the CPU lanes total. ;)
TT's gone back to the 90's... 5.25" drives
Did I miss it? What Supermicro mobo are you using?
That bottom PCIe slot uses lanes from the chipset, as it’s marked PCH, not CPU.
So where did we end up on power usage?
"I dont want anything to be flopping around"... 15 minutes later: Everything is flopping around
I think you might like 3d printing!
so THIS is what that twitter post was about!
Built a homelab server as well. I ran Unraid for a while, which was in the end a patchwork of tools for functions that a server system should already have integrated. And running from a USB stick is not something I want to rely on.
So I decided to run Proxmox, which is the most flexible system in my eyes. It's lightweight, can run VMs or LXCs, pass through hardware; it's reliable and definitely the more professional choice.
It's basically only the config files stored on the USB stick.
Unraid will load its config into RAM and run from there.
For reference my Unraid system has 19,877 reads and 7341 writes to the USB stick and it's been in operation for 2 years with a lot of changes.
Should my USB stick fail, I can just download the automated backup to a new USB stick, recover my license and I'm up and running again.
@@Hansen999 Thanks for the information! I know the story. Used it for three years.
As an avid Unraid fanboy, don't put Unraid on it lol. IMO Proxmox is the way to go. Can't beat that flexibility!
Great job, can’t wait for part two. Try unraid
Creating a dedicated VM with the GPU is kind of limiting IMO. I find that using the GPU in LXC containers is the best way: you can have multiple containers use it at the same time and even monitor its usage from the host OS.
Or split the card, but that requires finesse and doesn't work with all cards
@@BoraHorzaGobuchul very few cards support that, and generally they're expensive and require licenses
@@WhyDoesNothingWork it can be done even with some consumer cards, and those are not super expensive. Some more expensive pro ones can be found on the second-hand market.
Intel cards will perhaps allow this as well.
@@BoraHorzaGobuchul if you're talking about SR-IOV, only the commercial Intel cards do it, and the chances of finding a consumer card that does it are like winning the lotto
@@WhyDoesNothingWork there are a number of consumer Nvidia cards that work, but it does require RTFM and research, not something I'd love to do now. Then again, at a younger age I'd've already done that just for the lulz...
It's hard to believe that single "my humble homelab" video might be the whole reason this video is being made...
3:12 5.25" drives? Don't you mean 3.5" drives?
A lot of multi-drive bays still occupy 5.25" docks.
@@PhotonHerald Yes, but this isn't a dock; these are just the drives.