ULTIMATE 40Gb Homelab Networking - UNRAID NETWORK SETUP
- Published 15 Oct 2024
- Get 40Gb transfer speeds for the price of 10Gb! If you are doing a new install of a 10Gb home network right now, you might want to evaluate setting up a 40Gb backbone that can provide a lot of additional throughput to really max out any machine's storage subsystem. Perfect for #homelab setups, video editing on super-fast shared storage, and virtual machines that perform lightning fast. Be sure to check out some of the hiccups I hit along the way on my Ryzen 5950X Windows 11 machine. I got it to work in the end! #Unraid, easy.
Use our link to get started with Unraid (affiliate link): unraid.net/pri...
Some of the products featured in this video are from our sponsor: fs.com
👇NETWORKING + HOMELAB GEAR (#ad)👇
RACK - StarTech 42U Rack geni.us/42u_Rack
SERVER RAILS + CABLE MANAGEMENT
APC Server Rails geni.us/APC-SE...
Cable Zip Ties geni.us/Cable_...
Monoprice 1U Cable Mgmt geni.us/Monopr...
Cable Mgmt Tray geni.us/Server...
Dymo Label Maker geni.us/DYMO_L...
RAM
DDR4 RAM geni.us/DDR4_E...
DISK SHELF (JBOD) + CABLE
Netapp ds4246 geni.us/netapp...
Netapp ds4243 geni.us/netapp...
QSFP to 8088 (SAS Cable needed for 4246 & 4243 JBODs) geni.us/mCZCP
SERVER
Dell r720 geni.us/OAJ7Fl
Dell r720xd geni.us/5wG9n6
Dell t620 geni.us/dell_t...
HBA
LSI 9207-8e geni.us/LSI-92...
ENCLOSURE
Leviton 47605-42N geni.us/levito...
SWITCH
Dell 5548 Switch geni.us/Dell_5548
Mellanox sx6036 Switch geni.us/Mellan...
Brocade icx6610 Switch geni.us/Brocad...
UPS
Eaton 9PX6K geni.us/Eaton9...
Eaton 9PX11K geni.us/Eaton9...
Be sure to 👉Subscribe👈 for more content like this!
Join this channel to get Store discounts + more perks / @digitalspaceport
Shop our Store (receive 3% or 5% off unlimited items w/channel membership) shop.digitalsp...
Please share this video to help spread the word and drop a comment below with your thoughts or questions. Thanks for watching!
🛒Shop
Check out Shop.DigitalSp... for great deals on hardware.
Amazon Storefront www.amazon.com...
DSP Website
🌐 digitalspacepo...
Disclaimers: This is not financial advice. Do your own research to make informed decisions about how you mine, farm, invest in and/or trade cryptocurrencies.
*****
As an Amazon Associate I earn from qualifying purchases.
When you click on links to various merchants on this site and make a purchase, this can result in this site earning a commission. Affiliate programs and affiliations include, but are not limited to, the eBay Partner Network.
Other Merchant Affiliate Partners for this site include, but are not limited to, Newegg, Best Buy, Lenovo, Samsung, and LG. I earn a commission if you click on links and make a purchase from the merchant.
*****
#homelab #fs
I run 100 Gbps Infiniband in the basement of my home.
It was mostly used for HPC applications (CFD/FEA -- that sort of thing). Within those IB/RDMA/MPI aware applications, I was getting somewhere between 80-90 Gbps throughput (depends on the problem and the application, and also the size of the problem that it was solving).
Using the IB bandwidth benchmarking tool, I can get up to around 96-97 Gbps.
For me, using said 100 Gbps IB network for storage is really just a fringe benefit as it wasn't deployed with that in mind.
Having said that, I don't have a pool or an array of NVMe SSDs. (In fact, I try to avoid using SSDs, because SSDs are the brake pads of the computer world: the faster they are, the more likely you are to use them, which just leads to wear-out.)
So instead, I use 36 HDDs, and as a result of that, my throughput is limited anyways.
But it is nice that I CAN max out at roughly 24 Gbps with eight HDDs, with the nominal average closer to the 4 Gbps range. My LTO-8 tape drive can really only do about 200 MB/s (~1.6 Gbps sustained), so the fact that I have so much networking bandwidth headroom is, again, just a fringe benefit of the HPC micro-cluster deployment that I had.
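The unit math in those figures is easy to sanity-check. A quick sketch (the per-disk rate below is implied by the 24 Gbps number above, not a measured value):

```python
# Sanity-check the bandwidth figures quoted above.
# 1 MB/s = 8 Mbit/s; decimal units throughout.

def mb_per_s_to_gbps(mb_per_s: float) -> float:
    """Convert a MB/s transfer rate to Gbps."""
    return mb_per_s * 8 / 1000

def gbps_to_mb_per_s(gbps: float) -> float:
    """Convert a Gbps line rate back to MB/s."""
    return gbps * 1000 / 8

# 24 Gbps across eight HDDs implies this per-disk sequential rate:
per_disk_mb_s = gbps_to_mb_per_s(24) / 8   # 375.0 MB/s per drive

# LTO-8 sustained rate quoted above:
lto8_gbps = mb_per_s_to_gbps(200)          # ~1.6 Gbps
```

375 MB/s per drive is on the optimistic end for HDDs (outer tracks, pure sequential), which fits the "roughly" in the comment, and the LTO-8 figure converts to ~1.6 Gbps as stated.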
On a $/Gbps basis, 100 Gbps IB is cheaper than even 10 Gbps, even if the NICs, cables, and switches have a higher absolute cost.
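To illustrate that $/Gbps point, here is a rough sketch; the prices below are placeholder used-market numbers I am assuming, not figures from this thread:

```python
# Cost per Gbps: a higher absolute price can still be cheaper per unit
# of bandwidth. Prices here are hypothetical used-market examples.

def dollars_per_gbps(price_usd: float, speed_gbps: float) -> float:
    return price_usd / speed_gbps

used_10g_nic  = dollars_per_gbps(40.0, 10)    # e.g. a used 10Gb SFP+ NIC
used_100g_nic = dollars_per_gbps(120.0, 100)  # e.g. a used 100Gb IB NIC

# Even at 3x the absolute price, the 100G card wins per Gbps.
assert used_100g_nic < used_10g_nic
```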
(I run it through a Mellanox MSB7890 externally managed 36-port 100 Gbps IB switch.)
I just ran a 10-gig fiber line between my Mikrotik router and switch and it's awesome!
The speed is addicting. Folks tell me 2.5 and I'm like....nahhh go 10!
@@DigitalSpaceport I just set up my first NAS with a 1Gb connection. It's very annoying!
@eazolan Yeah, a 1Gb link isn't fast enough. I'm not sure why some folks say that's fine; people expect a folder preview to load pretty fast in modern times.
@@DigitalSpaceport Not fast enough when I have a 1Gbps fiber connection with multiple people streaming outside of my network and inside at the same time. I am going to be adding a Dell EMC SC420 to my system shortly.
Really sweet tutorial. I didn't realize CPU speed had such a big impact on 10/40 Gbps Ethernet; kinda odd that it doesn't utilize multithreading on the CPU.
With 40Gb core frequency is critical. 10Gb less so. When we get to the shared storage video I'll go over some bottleneck points again.
Something to remember is that these NICs all have ASICs onboard to offload some of the network processing from the CPU to help free up resources.
Yeah I need to dig into tunables but for ETH traffic it still blows the hell out of the processor. Do you have some things I should check in specific?
@@DigitalSpaceport Yeah, 40G is no joke. It's for sure worthwhile to run a server CPU and board; gotta love those lanes. In regards to the settings: it's called a little something different between vendors, but what you're looking for is TCP offload, and then depending on the vendor they have configuration guides for things like RDMA, iSCSI, VXLAN offload, etc. to be offloaded to the super-fast silicon on the NIC. I know for sure Chelsio's cards have native support in BSD-based operating systems for a lot of these features. Check the doc from your vendor. As the cards get newer, like the 100G and up ones, the onboard hardware is faster with each generation. Older 40G NICs and switches have a bit more latency and fewer features according to Wendell from L1, and I believe Patrick from STH did a comparison on generations of connectivity fairly recently.
This is a limitation of iperf3: it is just single-threaded. That's why you can achieve 40Gbit/s with 2 instances! Increasing the MTU size also helps with CPU cycles on larger transfers.
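Since each iperf3 process is single-threaded, the usual workaround is running several server/client pairs on different ports and summing the results. A small sketch that just builds the command lines; the server address is a placeholder:

```python
# iperf3 is single-threaded per process, so to saturate a 40Gb link you
# can run multiple server/client pairs on different ports and add up
# their reported throughput. This only constructs the command lines.

def iperf3_pair_cmds(server_ip: str, base_port: int = 5201, instances: int = 2):
    """Build argv lists for N concurrent iperf3 server/client pairs."""
    servers = [["iperf3", "-s", "-p", str(base_port + i)]
               for i in range(instances)]
    clients = [["iperf3", "-c", server_ip, "-p", str(base_port + i), "-t", "30"]
               for i in range(instances)]
    return servers, clients

servers, clients = iperf3_pair_cmds("192.168.1.50")
```

The pairs run concurrently (servers started first, e.g. backgrounded), and as the comment notes, jumbo frames (MTU 9000) on both ends cut per-packet CPU overhead on large transfers.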
mmmm.... 40Gb. Spicy!!! Yes please, ill take some of that!!
Damn, this is a serious build. You're making even Dave's Garage look slow lol
Great video! Love to see more people getting into "big boy" networking, 40G and up! Two things: have you considered the jump to 100G with Mikrotik's new $999 4-port 100G switch, and do these Brocades need any special licensing or anything for L2.5/L3 features? Looking at the PoE version of what you got. Thanks.
I do love the power behind that ICX6610 for sure. I have been looking at that Mikrotik 100g powerhouse but I thought it was only 100Gb total switching? mikrotik.com/product/crs504_4xq_in Is this the one you are talking about?
I need to get PCIe4 NVME storage arrays to really utilize 100Gb and I don't have those servers in the racks.....yet
For the Brocades, you should check out the STH thread that has massive info on the ICX6610. If you get one that has not been R2'd, it may well have all the original license stuff installed. Mine had the PoE, 80 and 160Gb module (which is what they call all that power in the backside) already. I disabled the PoE on mine, but I have a 5548P that is already set up for all that.
@@DigitalSpaceport Their test data seems way off. I remember seeing it and thought it looked neat for labbers. I wonder if they missed a capital B somewhere. Used 100G iron is starting to come down in price too thankfully.
My network is junk, I need an L3 router that can handle traffic and VLANs. My plan was to join the 10Gb club, but now after watching this video I need to step it up to 40Gb =) waiting for the DIY video/buy sheet
any luck on making a DIY 40G network for noobs and a buy sheet =)
this is so technical - subscribed
Such a cool video. I adore this shit. I'm gonna build something similar with TrueNAS: gaming PCs local, a remote bare-metal gaming PC, and 3 gaming VMs using a shared Tesla and a dedicated RTX 3060, just for the fun of it and to learn. iPerf is fine for testing, but good luck getting anything remotely close from an Unraid array unless you carefully lay out the data across all the disks and then query them consistently. Unraid is slow, even with NVMe, for numerous reasons.
That 40GbE is this cheap is pretty crazy and fun. If you hit FUSE, yeah, it's going to cap out very early. Direct to mounts can be faster tho.
Great info and an idea to think about before going 10Gb only. I know that my Synology can only do 10Gb at the moment, but having the ability to upscale as components increase will allow for future upgrades up to 40Gb. Thanks for the information!!
You should check out the Petabyte content storage follow-up video I have in editing right now. You are likely very good with 10Gbit... but 40Gbit does bring another level of performance if you have all-flash arrays on your tiered storage.
Great video! Were you able to connect this through the Brocade ICX6610? I'm getting only 10Gb (I have dual cards) and I'm not sure if there is a 40Gb license.
Check the licenses section in the GUI to see what you have: System > General, and then in the displayed screen click Config Module on the left-hand side. If slot 2 reads "ICX6610-QSFP 10-port 160G Module" with status ACTIVE, then you have your rear QSFP slots functional. The 2 on the right-hand side are 10Gb capable (1-to-4 breakout or single 10Gb) and the 2 on the left side are the actual 40Gb ports.
New camera looking amazing 🤩
TY ser! Z30 is pretty easy to use.
Some motherboards (the MSI TRX40 Creator, for example) allow you to fix the 2nd PCIe slot at x8 and only drop it down if you explicitly set it to bifurcate...
Yeah that video was before I discovered the Threadripper life.
Looking forward to rdma info with nfs or samba if it’s now available on Linux vs just windows.
Nice video. I bought two MCX354A-FCBT cards but my Unraid is not able to discover them in network settings. Can you suggest how to make them work?
Yeah, you are going to need to install the Mellanox plugin from the app store, and you may need to flash the cards and/or set them to mode 2 (ETH) operation if they are for some reason in IB mode. The plugin gives you the steps to do that.
@@DigitalSpaceport Thank you for the answer. I already installed the plugin, but the MCX354A-FCBT is not in the list for ETH mode. I flashed the latest firmware for my card, but I do not understand how to switch it to Ethernet mode.
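For reference on that last step: ConnectX-3 port types can usually be forced to Ethernet with mlxconfig from the NVIDIA MFT / mstflint tools (LINK_TYPE 1 = InfiniBand, 2 = Ethernet). A hedged sketch that only builds the command line; the device path is a typical example rather than a universal one, so check `mst status` on your own system first:

```python
# Build the mlxconfig invocation that forces both ports of a
# ConnectX-3 (e.g. MCX354A-FCBT) to Ethernet mode.
# LINK_TYPE values: 1 = InfiniBand, 2 = Ethernet.
# The default device path is an example only; verify with `mst status`.

def connectx3_eth_mode_cmd(device: str = "/dev/mst/mt4099_pciconf0"):
    """Return the argv list for switching both ports to ETH mode."""
    return ["mlxconfig", "-d", device, "set",
            "LINK_TYPE_P1=2", "LINK_TYPE_P2=2"]

cmd = connectx3_eth_mode_cmd()
```

A reboot (or driver reload) is typically needed after the change takes effect.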
Would it be correct to say that the Brocade 6610 can only provide 40gbe to one client when connected to Unraid?
If your connection is Desktop > 6610 > Unraid host, then yes. If you are looking for more hosts, check out the Mellanox SX6036 or its smaller-port version.
@DigitalSpaceport I recently bought an SX6036; could you help me configure the switch for Ethernet license generation? I have been researching the ServeTheHome guide, but I am short on time and would like a jump start.
Some switch models require the license; others have it already enabled by default. It depends on the manufacturer. If you can't enable ETH mode/VPI, then you're going to have to use the ServeTheHome forums thread, and there is also an eBay seller who sells the licenses; I'll leave it to you on the eBay one if you trust that. I've had 3 of these switches now, and I think it's the Mellanox-branded ones that need licenses applied. The HP ones seem to just work outta the box.
Can we connect the QSFP orange cable directly between two servers, bypassing the Brocade switch? It should work, I think.
You can direct connect qsfp and hit 40Gbit speeds for sure
@@DigitalSpaceport Thanks for reply
@@shephusted2714 thank you
Great info. Thanks
So cool.
great device! good server rack
ConnectX-3 + ICX6610, solid 1/10/40 combo.
👍
What about Thunderbolt, Unraid to PC? No video about this. People aren't gonna buy that; they will use what the motherboard came with, and the highest is Thunderbolt.
This mobo didn't have Thunderbolt, but I don't think it's a neglected topic really with the new Macs, is it? I have read a lot about it recently and it seems rather decently performant.
What’s the point with Unraid? I have 2 servers running 2 x 10GbE each. Even when I was first copying data where all the disks were being used, I saw a max just over 1.5Gbps. Pointless. Unraid is SO SLOW.
Unraid hits FUSE by default, so yeah, it's slow unless you write to a mount direct... which defeats the purpose in no small measure. On performance, TrueNAS wins hands down.
@@DigitalSpaceport Skipping Fuse is fine for cache running containers, but for the sort of files you’d leverage 40G for, I think you’d want it on the array using Fuse.
Yeah, my current Unraid is running off a 2.5Gb-linked mini NAS... so at least it's not a problem for me? I'm not hitting above 240MB/s ever hehe!
2 questions:
1. I got a PCIe 2.5Gb network card and put it in my machine, but I don't know how to switch my Unraid to use it instead of my main one (1Gb).
2. Dumb question. How do you get that view on your interface, all I got is digits?
1) You need to make sure it is showing up in your interfaces dropdown here: Interfaces > Interface Assignments > WAN and that is where you set it also.
2) Status Dashboard check the upper right side. There is a + icon. Click that and you have a lot of cool widgets to use.