"This was a bad idea and It just keep happening, I don’t know what to tell you." Love it and can certainly relate. I'm glad you can get in trouble so I don't have to.
At work we are putting together two vmware clusters using a pair of Aruba 9300 switches (32 ports of 400G) as the backbone. vSAN storage with all NVME is going to love the bandwidth.
Hey thinking of upgrading my home network, will this switch be enough for 1.2Gb Comcast? I don't want to bottle neck it, I'm paying big money each month.
@@letterspace1letterspace266 AI clusters can run on insane connections exceeding the 400G connections used here, but they can also just run on a single machine instead of as a cluster and still perform shockingly well if optimized.
At my work, we are refreshing a dc and where 400gig fits for us is more in the spine to super spine/core layer. We just don’t have hosts doing that high of speed, but most are moving from multiple 10gig connections to 25s. I would be jealous when there is a need for direct host 400gig!
I love you breakdown a switch in parts. Now I understand how it works. And yes 400gb is crazy. But 20years ago 1Gb in a server was crazy to. Thanks for a peak in the further 😎👍
Just more awesome, cool content from Patrick K & STH. Yes it's nerdy. Of course it is! We learned about Mikrotik's 4 port x 10Gb switch here from STH years ago and other OEM 25Gb & 100 Gb switches, too. Where else can you see an FS 64 Port x 400GB switch under the hood. You can count on Patrick. He will remove tell you how many screws it takes to get inside the device. And PK will remove the Heat sink, the CPU, the Fan Modules & PS Modules every time. And you get discussion, use ideas, performance results, connections parts, etc. It doesn't get better than that. STH just helps get my Nerd Habit filled each week along with the other online tech nerd content. Keep up the good work STH!
What a BEAST! I'm glad that you guys went and showed this off. I got trolled by a guy suggesting that I use infiniband (which I'm already familiar with) at home on one of my recent comments. I also understand that it is a total BEAR to set up and get running at the correct speeds, etc. Totally NOT the sort of solution I'd be considering for home use. I've seen other UA-camrs try to install old used enterprise infiniband gear in their home, and it's always funny to watch them suffer. Too bad you didn't show all the fun bits of yourselves wrestling the bear. Although, it's not very exciting for non-nerds to watch. I'm surpised you could even power this thing. You'd need to build an internal Chernobyl just to spin up the fans, alone! (Exageration intentional for comedic effect) I'm afraid the electric company would send an investigator if I installed this in my home.
Mikrotik tomorrow morning be like: This is our CRS440-6QDD-1G-RM new 400 Gbps switch at just 2000$. Harmful joke, we love our Mikrotik friends and hopefully they love us back
I love videos like this. Absolutely crazy. One thing I've always wondered: We hear about the huge undersea cables with huge bandwidths. How does the data get into these cables? Is it stuff like this? Would be cool to see.
Sure love to see this kind of videoes. As one who works at a school with 1G access and 10G between Spine/Leaf, speeds like this is unoptanium for us right now ;)
Being a network guy, I am struggling with the OSFP acronym used here and i am sure when i start ordering these types of connectors/cables for these applications i will be corrected many times in conversation that overlap the two technologies. My one saving grace, 400Gb switching i will certainly see a lot of, 400Gb routing, I'm not so sure I will see a lot of this before I retire in another 10 years
super presentation Patrick! the server space is really moving fast! I'm glad U didn't show us a server bubbling in a cooling vat! but it sounds like this one gets warm. I hope some1 has some winning cooling strategies out there! good luck Pat..
Something I think would be interesting to see is the cost difference of being bleeding edge back in the 100Gb days vs now with 400Gb. I know $60,000 is a lot of money but at the same time it seems cheap for what you were getting and what you could do with it if you have the ability to utilize it.
Hey Patrick, do you not have a subscription system for the website like Phoronix does, so we can read without ada while contributing? I feel bad having uBlock on, specially since I'm transitioning to 10G (switching) and 40G (direct) at home, and it's mostly thanks to you guys and your awesome reviews and posts.
We do not, but if you see from the screenshots we have an external firm sell all of the desktop inventory and most of the mobile. If you are on the desktop you will only see ads from our servers about server, storage, and networking things. We do not have bidding, auto roll video, moving ads, and etc because I do not like them.
I wish I had found this channel so much earlier. I really find your gear overviews entertaining and informative. It's obvious you love what you do, and that sort of enthusiasm is contagious. Even though i will never be able to justify a 60k switch at home, you've provided a whole ton of ideas and concepts that have been useful both at home and at work. Selfishly seeking out your videos focused on more affordable gear though haha
Wow. 9% difference in throughput between ETH and IB. On 100 Gbps IB, the difference that I measured was only about 3%. $6k for 400 Gbps IB is actually NOT that bad when you think about $/Gbps (for a point-to-point connection). I use 1 GbE as the "management" port these days. Server consolidation meant that I was able to SRHINK my network (rather than to expand it). But on my systems that have the space and is able to support it, I have 100 Gbps going to my Ryzen 9 5950X HPC compute nodes and also a 100 Gbps going to my Core i7 3930K which runs my LTO-8 tape backup system. It takes the "stress" off the 1 GbE layer and moves it completely over and the 100 Gbps can handle a lot more things happening at the same time.
@@ServeTheHomeVideo Yeah. I forget what my point-to-point ETH bandwidth losses were compared to a point-to-point IB bandwidth. I don't remember if I ran my testing on the 100 Gbps IB through my switch or if it was a point-to-point test. I would think that the bandwidth losses should be lower on the point-to-point side (because you aren't going through a switch), but I can't say that definitively. Very cool. 100 Gbps IB is fun. People give you REALLY interesting looks when you tell them that's what you run in the basement of your home.
Can you guys talk about the differences between Ethernet and Infiniband networking at some point for those of us who hear the different names thrown around but don't actually understand what the functional differences are?
Maybe a good summer topic. The easy way to think of it is that Ethernet is what just about everything runs on. InfiniBand started as a storage focused high-performance solution and has morphed over the years into a HPC and AI specific low latency / high bandwidth interconnect.
Do you have a post on the main site about network tuning? I’d love to read through it as we’re going to be deploying 100G clients soon. Thanks for the great content!
Hi Patrick, How were you able to connect the OSFP HCAs to the QSFP-DD ports on the switch for the Ethernet testing? I just re-watched the video, starting at the 12:30 mark, where you're referring to "funky optics", I'm assuming these are QSFP-DD w/ MPO on the Switch Side and OSFP w/ MPO on the HCA side? I also read the Main Site Article, where you also talk about having to navigate different signaling speeds in-between the optics. I then looked in the STH forums and read the 2 threads discussing OSFP, and posted on one of them. Any details you can share would be much appreciated.
Several possible reasons... (1) they don't want to expose it to the heat for soldering - that's a massive 24+ layer board, (2) makes it easier to replace a bad one, (3) makes it easier to use a different (newer, older) chip, (4) extra structure to support the heat sink... (ultimately, it's FS, so who knows.)
Great video. I'm going in as an Spectra7 investor assuming that the demo Cisco DAC cable must be made by them. You speak about heatsink attached on the DAC cable, does this mean it's an active copper cable or ACC?
That's a pretty stupid comment. Where are you going to get 50Tb or 100 Tb Internet connectivity for a DDOS attack from *two!devices*? Think before you comment. 🙁
@@bobbydazzler6990 mate I can surely say you never worked as it network administrator. 😂 Do you know how much bandwidth we can get ? You can even combine bandwidth via load balancing. Learn something before you talk to an network administrator.
@@PR1V4TE 1. I've never had the title "Network Administrator". That title is for putzes who spend 80% of their time messing around adding users or printers in Active Directory. 2. According to my resume and LinkedIn profile, I've had the title "Sr. Network Engineer" for a number of global companies. 3. I've forgotten more about Networking than you will ever know. 4. Don't you have some shitty HP Procurve switches to deploy instead of playing around on UA-cam? 🤣 5. Have a pleasant weekend.
The idea that someone might give FS $55k for their whitelabel of some switch where future software support is basically "????" afaict, and they can't even be assed to align the label around the management port properly, is wild to me
I actually had this running in my home network for a video that we never published. Then again, not everyone has ~1600 fiber strands running through their walls.
Honestly, I can not fathom 400GB of network speed. I can easily conceive that it happens and is the result of everything else being bolstered. But to actually functionally use it... I'm noping out of that one.
Am I the only one that just assumes that most "phone a friend" calls on this level end up going to Wendell @Level1Techs? (Yes, I realize it's unlikely in this specific case)
Put Wendell and I together and it is pretty tiring: ua-cam.com/video/oUugk0INTRU/v-deo.html (that was done in his hotel room late after a big steak dinner.)
"This was a bad idea and It just keep happening, I don’t know what to tell you." Love it and can certainly relate. I'm glad you can get in trouble so I don't have to.
That is kindof the point of some of these.
At work we are putting together two VMware clusters using a pair of Aruba 9300 switches (32 ports of 400G) as the backbone. vSAN storage with all-NVMe is going to love the bandwidth.
Looks like a perfect switch for the pool house
Hey, thinking of upgrading my home network: will this switch be enough for 1.2Gb Comcast? I don't want to bottleneck it; I'm paying big money each month.
You may have to wait for the new 800GbE generation later this year. :-)
@@ServeTheHomeVideo i wonder how much bandwidth an AI cluster needs to flow. Is this for serious carrier backbone?
@@letterspace1letterspace266 AI clusters can run on insane connections exceeding the 400G connections used here, but they can also just run on a single machine instead of as a cluster and still perform shockingly well if optimized.
Haha nice
At my work, we are refreshing a DC, and where 400gig fits for us is more in the spine to super-spine/core layer. We just don't have hosts doing speeds that high, but most are moving from multiple 10gig connections to 25s. I would be jealous if there were a need for direct host 400gig!
I love how you break a switch down into parts. Now I understand how it works. And yes, 400Gb is crazy. But 20 years ago, 1Gb in a server was crazy too. Thanks for a peek into the future 😎👍
Just more awesome, cool content from Patrick K & STH. Yes it's nerdy. Of course it is! We learned about Mikrotik's 4 port x 10Gb switch here from STH years ago, and other OEM 25Gb & 100Gb switches too. Where else can you see an FS 64 port x 400Gb switch under the hood? You can count on Patrick. He will tell you how many screws it takes to get inside the device. And PK will remove the heatsink, the CPU, the fan modules & PS modules every time. And you get discussion, use-case ideas, performance results, connector parts, etc. It doesn't get better than that. STH just helps get my nerd habit filled each week along with the other online tech nerd content. Keep up the good work STH!
What a BEAST! I'm glad that you guys went and showed this off. I got trolled by a guy suggesting that I use InfiniBand (which I'm already familiar with) at home on one of my recent comments. I also understand that it is a total BEAR to set up and get running at the correct speeds, etc. Totally NOT the sort of solution I'd be considering for home use. I've seen other YouTubers try to install old used enterprise InfiniBand gear in their home, and it's always funny to watch them suffer. Too bad you didn't show all the fun bits of yourselves wrestling the bear. Although, it's not very exciting for non-nerds to watch. I'm surprised you could even power this thing. You'd need to build an internal Chernobyl just to spin up the fans alone! (Exaggeration intentional for comedic effect.) I'm afraid the electric company would send an investigator if I installed this in my home.
Mikrotik tomorrow morning be like: This is our CRS440-6QDD-1G-RM, a new 400Gbps switch at just $2,000. A mean joke, but we love our Mikrotik friends and hopefully they love us back.
If only! But then we would know it would be a 100% hot mess.
When that SoC ceases to cost north of $10k, maybe they will.
Finally some reasonable connection for my truenas at home
I love videos like this. Absolutely crazy. One thing I've always wondered: We hear about the huge undersea cables with huge bandwidths. How does the data get into these cables? Is it stuff like this? Would be cool to see.
Noted. Will see what we can do on this one
@@ServeTheHomeVideo Awesome :D
@@ServeTheHomeVideo Serve the *home* plans a video on undersea fiber trunk lines. To, you know, serve the Gameboy Color in the home 😅
Sure love to see this kind of video.
As someone who works at a school with 1G access and 10G between spine/leaf, speeds like this are unobtainium for us right now ;)
Being a network guy, I am struggling with the OSFP acronym used here, and I am sure that when I start ordering these types of connectors/cables for these applications I will be corrected many times in conversations that overlap the two technologies. My one saving grace: 400Gb switching I will certainly see a lot of; 400Gb routing, I'm not so sure I will see much of before I retire in another 10 years.
We're deploying 400Gbit routing on the Nokia platform. It's a step up from giant 100Gbit LACPs to 36-port 400Gbit line cards.
Gotta love these. We just picked up one of their 32x100G switches to play around with.
Linus needs to see this
This is too good for Linus
He needs to quadruple this.
Totally agreed with you
Here we go again...😅
Imagine if he drops the chip while talking about its capabilities
I hope I will live long enough to get a 400Gbps Internet connection as a home user.
love the sharpie marks on the socket screws. Did you find a torque spec for the switch-chip socket or did you YOLOOOOO
Yolo.
@@ServeTheHomeVideo This is the way.
Super presentation, Patrick! The server space is really moving fast! I'm glad you didn't show us a server bubbling in a cooling vat! But it sounds like this one gets warm.
I hope someone has some winning cooling strategies out there! Good luck, Pat.
Something I think would be interesting to see is the cost difference of being bleeding edge back in the 100Gb days vs. now with 400Gb. I know $60,000 is a lot of money, but at the same time it seems cheap for what you were getting and what you could do with it if you have the ability to utilize it.
Well, also hyper-scale pricing. The FB Wedge100 we show sometimes (32x 100G) was well under $3k new when FB was deploying them years ago.
ServeTheHome are you at the Gates' home?
Well "home" is the /home/ directory in Linux.
@@ServeTheHomeVideo lol, home is where my user directory lives
You know, I always thought of ServeTheHome as ServeTheHome(lab)... Could be a good spinoff channel? I never thought about ServeThe/home/ until now.
Hey Patrick, do you not have a subscription system for the website like Phoronix does, so we can read without ads while contributing? I feel bad having uBlock on, especially since I'm transitioning to 10G (switching) and 40G (direct) at home, and it's mostly thanks to you guys and your awesome reviews and posts.
We do not, but as you can see from the screenshots, we have an external firm sell all of the desktop inventory and most of the mobile. If you are on the desktop you will only see ads from our servers about server, storage, and networking things. We do not have bidding, auto-roll video, moving ads, etc., because I do not like them.
@@ServeTheHomeVideo great, thank you!
Well, you could also opt for a PowerPC gen 9 or 10. Try that for size.
Linus be like:
" Ok, I want 4 ! " 🤦🏻♂️😂
I wish I had found this channel so much earlier. I really find your gear overviews entertaining and informative. It's obvious you love what you do, and that sort of enthusiasm is contagious.
Even though I will never be able to justify a $60k switch at home, you've provided a whole ton of ideas and concepts that have been useful both at home and at work.
Selfishly seeking out your videos focused on more affordable gear though haha
:-) thanks. We cover the spectrum and publish at least once a day on the STH main site
Wow. 9% difference in throughput between ETH and IB.
On 100 Gbps IB, the difference that I measured was only about 3%.
$6k for 400 Gbps IB is actually NOT that bad when you think about $/Gbps (for a point-to-point connection).
I use 1 GbE as the "management" port these days.
Server consolidation meant that I was able to SHRINK my network (rather than expand it).
But on my systems that have the space and are able to support it, I have 100 Gbps going to my Ryzen 9 5950X HPC compute nodes and also 100 Gbps going to my Core i7-3930K, which runs my LTO-8 tape backup system.
It takes the "stress" off the 1 GbE layer and moves it completely over, and the 100 Gbps can handle a lot more things happening at the same time.
Well, that is also direct-connect IB versus 400GbE through the switch.
@@ServeTheHomeVideo
Yeah.
I forget what my point-to-point ETH bandwidth losses were compared to a point-to-point IB bandwidth.
I don't remember if I ran my testing on the 100 Gbps IB through my switch or if it was a point-to-point test.
I would think that the bandwidth losses should be lower on the point-to-point side (because you aren't going through a switch), but I can't say that definitively.
Very cool.
100 Gbps IB is fun.
People give you REALLY interesting looks when you tell them that's what you run in the basement of your home.
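To put the $/Gbps point above in numbers, here is a minimal sketch; the prices and the 9%/3% deltas are simply the figures quoted in this thread, not independent benchmarks:

```python
# Back-of-the-envelope math for the figures quoted in this thread.
link_cost_usd = 6_000      # quoted cost of a 400 Gbps IB point-to-point setup
link_speed_gbps = 400

print(f"~${link_cost_usd / link_speed_gbps:.2f} per Gbps")  # ~$15.00 per Gbps

# Ethernet throughput penalty vs. InfiniBand at the same line rate,
# using the deltas mentioned above (9% at 400G, ~3% at 100G).
for line_rate_gbps, eth_penalty in [(400, 0.09), (100, 0.03)]:
    effective = line_rate_gbps * (1 - eth_penalty)
    print(f"{line_rate_gbps}G Ethernet vs IB: ~{effective:.0f} Gbps effective")
```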
Some die shots of the switch chip would be quite interesting too
Great project. Why do we climb a mountain? Because we can!
Oh to have the money, data closet, and power hookups to run this in my house… I can dream.
IT Support back in the cable lender's org: "Hmmm, I wonder why this server is unexpectedly under maintenance?"
This is not ServeTheHome, this is ServeTheDatacenter...
Time to upgrade my home network I guess
1Tb switches come out next year if you wait. 🤪
A slightly noisy piece of equipment...
Pretty Big Switch!
Can you guys talk about the differences between Ethernet and Infiniband networking at some point for those of us who hear the different names thrown around but don't actually understand what the functional differences are?
Maybe a good summer topic. The easy way to think of it is that Ethernet is what just about everything runs on. InfiniBand started as a storage focused high-performance solution and has morphed over the years into a HPC and AI specific low latency / high bandwidth interconnect.
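For readers who want to poke at the difference from userspace, below is a minimal sketch of a one-way TCP throughput probe using only the Python standard library. The point is that both fabrics can present as ordinary IP interfaces (InfiniBand via IPoIB), so the same socket code runs over either. It deliberately does not use RDMA, which is where InfiniBand's real latency/bandwidth advantage lives, and a single Python thread will not saturate even 100G, so treat it as illustrative rather than a benchmark (iperf3 or ib_send_bw are the practical tools):

```python
import socket
import time

CHUNK = 1 << 20            # 1 MiB per send
TOTAL = 10 * (1 << 30)     # push 10 GiB total

def receiver(bind_addr: str, port: int = 5001) -> None:
    """Run on one host; bind_addr is an Ethernet or IPoIB interface IP."""
    with socket.create_server((bind_addr, port)) as srv:
        conn, _ = srv.accept()
        with conn:
            received, start = 0, time.perf_counter()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                received += len(data)
            elapsed = time.perf_counter() - start
            print(f"~{received * 8 / elapsed / 1e9:.2f} Gbps")

def sender(server_addr: str, port: int = 5001) -> None:
    """Run on the other host, pointed at the receiver's address."""
    payload = b"\x00" * CHUNK
    sent = 0
    with socket.create_connection((server_addr, port)) as sock:
        while sent < TOTAL:
            sock.sendall(payload)
            sent += CHUNK
```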
Do you have a post on the main site about network tuning? I’d love to read through it as we’re going to be deploying 100G clients soon. Thanks for the great content!
NVIDIA and AMD have great guides on this.
@@ServeTheHomeVideo I’ll go hunting then!
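While hunting: the knobs those vendor guides touch are mostly the standard Linux ones. A hedged sketch of typical starting points follows; the exact values are illustrative assumptions, not vendor-blessed numbers, so defer to the NVIDIA/AMD tuning guides for your specific NIC:

```python
import subprocess

# Common Linux sysctls that 100G+ tuning guides tend to raise.
# Values below are illustrative starting points only; requires root.
SYSCTLS = {
    "net.core.rmem_max": "268435456",             # max socket receive buffer (256 MiB)
    "net.core.wmem_max": "268435456",             # max socket send buffer (256 MiB)
    "net.ipv4.tcp_rmem": "4096 87380 268435456",  # min/default/max TCP receive buffer
    "net.ipv4.tcp_wmem": "4096 65536 268435456",  # min/default/max TCP send buffer
    "net.core.netdev_max_backlog": "250000",      # RX queue before the stack drops packets
}

for key, value in SYSCTLS.items():
    subprocess.run(["sysctl", "-w", f"{key}={value}"], check=True)

# Jumbo frames are the other usual step, e.g. `ip link set dev <iface> mtu 9000`
# on every hop of the path (switches included).
```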
So one misplaced can of Jolt can take down all the networking for 256 servers at once. Amusing ;-)
Hi Patrick, How were you able to connect the OSFP HCAs to the QSFP-DD ports on the switch for the Ethernet testing? I just re-watched the video, starting at the 12:30 mark, where you're referring to "funky optics", I'm assuming these are QSFP-DD w/ MPO on the Switch Side and OSFP w/ MPO on the HCA side? I also read the Main Site Article, where you also talk about having to navigate different signaling speeds in-between the optics. I then looked in the STH forums and read the 2 threads discussing OSFP, and posted on one of them. Any details you can share would be much appreciated.
It's totally wild that they SOCKETED that switch chip vs. soldering it down in BGA format.
Wonder why
Yea crazy, but I also bought a socketed TH4 for the studio backdrop.
Several possible reasons... (1) they don't want to expose it to the heat for soldering - that's a massive 24+ layer board, (2) makes it easier to replace a bad one, (3) makes it easier to use a different (newer, older) chip, (4) extra structure to support the heat sink... (ultimately, it's FS, so who knows.)
So, the NVIDIA cables you can't use? Kinda confused me on that one. Why would they make a DAC cable that literally can't fit into the sockets?
Why does a direct-attach cable need cooling? Doesn't it connect directly to the switch chip SerDes?
I'm just here for the awesome enthusiasm!
#PatrickTherapy
More like sick Patrick on this one :-/
Great video. I'm going in as a Spectra7 investor, assuming that the demo Cisco DAC cable must be made by them. You speak about a heatsink attached to the DAC cable; does this mean it's an active copper cable, or ACC?
I am confused. I thought 400G optics uses 100G SerDes times 4 lambdas -- so effectively, with a server, you only get one 100G lane on one lambda?
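On the lane question: a 400G port is not a single 400G lambda, but it is also not limited to one lane per host. The NIC's MAC aggregates the parallel lanes, so a server does get the full 400G on one port; alternatively the port can be broken out into independent lower-speed links. A rough sketch of the common lane math (exact details vary by optic type, so treat this as an approximation):

```python
# Common 400G lane arithmetic (QSFP-DD / OSFP era).
electrical_lanes, electrical_rate_g = 8, 50    # 8x 50G PAM4 electrical lanes
optical_lanes_dr4, optical_rate_g = 4, 100     # e.g. 400G-DR4: 4x 100G optical lanes

assert electrical_lanes * electrical_rate_g == 400
assert optical_lanes_dr4 * optical_rate_g == 400

# Typical breakout options for a single 400G switch port:
for count, speed_g in [(1, 400), (2, 200), (4, 100), (8, 50)]:
    print(f"{count} x {speed_g}G")
```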
Nobody is allowed to complain about 10GbE prices anymore! lol
Very very impressive, THX.
Cables that need heatsinks!?
Thanks for that video, but it would be nice to see more use cases.
Lets go!
800G has been running for a few years now
800G switches are going to be more common later this year. I think we will have the announcement of one next week
400Gbps Ethernet would be awesome for hyperconverged storage.
When 8-port home version?
Cool, now do the same with 800Gb :)
We have seen a few 32x800G switches on the STH main site. We really need 800GbE NICs and PCIe Gen6 for that
Very interesting! Thx!
I want that heatsink :)
Don't mind me, just sitting here with my plebeian 10Gb Mikrotik switch and my Cat 6a.
My understanding is InfiniBand can do direct memory access and PCIe over fiber, but when used for Internet traffic, the TCP/IP protocol is emulated.
Try it without the switch; 400G will drop in price, which will be good for clusters. They will have 800G before you know it. #cluster overhead
800GbE will happen but it is hard to go to the node at that speed until PCIe Gen6
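The PCIe arithmetic behind that, as a rough sketch (line-encoding only; real protocol overhead shaves off a bit more):

```python
# Approximate x16 slot bandwidth per PCIe generation, and why 800GbE wants Gen6.
# (GT/s per lane, line-encoding efficiency); Gen6 FLIT overhead ignored here.
GENS = {
    "Gen4": (16.0, 128 / 130),
    "Gen5": (32.0, 128 / 130),
    "Gen6": (64.0, 1.0),
}
LANES = 16

for gen, (gt_per_s, encoding) in GENS.items():
    gbps = gt_per_s * encoding * LANES
    print(f"PCIe {gen} x16: ~{gbps:.0f} Gb/s")
# Gen5 x16 lands around ~504 Gb/s, short of a single 800GbE port;
# Gen6 x16 (~1024 Gb/s) is the first comfortable fit.
```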
wow great!
I love the content you are bringing in. Just think if someone is doing a DDoS with 2 of these devices. 😂😂 100+ terabits of data coming in. Sheesh.
That's a pretty stupid comment. Where are you going to get 50Tb or 100Tb of Internet connectivity for a DDoS attack from *two* devices? Think before you comment. 🙁
@@bobbydazzler6990 mate, I can surely say you never worked as an IT network administrator. 😂 Do you know how much bandwidth we can get? You can even combine bandwidth via load balancing. Learn something before you talk to a network administrator.
@@PR1V4TE 1. I've never had the title "Network Administrator". That title is for putzes who spend 80% of their time messing around adding users or printers in Active Directory.
2. According to my resume and LinkedIn profile, I've had the title "Sr. Network Engineer" for a number of global companies.
3. I've forgotten more about Networking than you will ever know.
4. Don't you have some shitty HP ProCurve switches to deploy instead of playing around on YouTube? 🤣
5. Have a pleasant weekend.
@@bobbydazzler6990 why are you telling us what your day job is lol 😂
@@bobbydazzler6990 "don't you have some HP procurve switches to deploy" man you didn't have to straight up insult him
The idea that someone might give FS $55k for their whitelabel of some switch, where future software support is basically "????" AFAICT, and where they can't even be assed to align the label around the management port properly, is wild to me.
8x 130W blowback fans consume ~1kW (peak) just for cooling, lol. Like every blade-server chassis.
Garbage in at 400GbE, Garbage out at 400GbE. Buy now!
Weed better. Slowdown. Light humor goes faster, eh?
Switches with socketed processors... I am drooling
looks into wallet: "yeah nah, that isn't in the budget"
No one would ever saturate this in a home network.
I actually had this running in my home network for a video that we never published. Then again, not everyone has ~1600 fiber strands running through their walls.
Now put a 400-gig card in the oldest computer you possibly can, using adapters if necessary - ultimate bottleneck! /s
How is this going to serve the home 😄
When is LMG getting this installed?
Often after we do on the high-end stuff.
I’ll take "Things that I could never afford or handle" for $600, Pat.
The entire DE-CIX is probably running on two racks of those XD
Could probably firewall an entire small country with this. Idk.
People would love to have 1Gb upload and download speeds, let alone 400Gb 😁
Can't wait to do this for 500€ in 2034
I pushed 1.12Tbps a few years ago.
400Gb/s is probably going to be the realistic limit for a few decades.
FASTEST Server Networking 64-Port 400GbE Switch Time!
Honestly, I cannot fathom 400Gb of network speed. I can easily conceive that it exists and is the result of everything else being bolstered. But to actually, functionally use it... I'm noping out of that one.
I don't get this channel - I thought it was all about home networks, not massive network equipment like this 🤔
Yea, I'm out. There is nothing about "home" in "servethehome" anymore.
Hm, /home/ in Linux?
Who cares? If you're nerd enough to have a homelab, you should be nerding out over equipment like this.
@@LtdJorge Exactly.
Am I the only one that just assumes that most "phone a friend" calls on this level end up going to Wendell @Level1Techs? (Yes, I realize it's unlikely in this specific case)
If we have a phone a friend it usually goes to someone at a vendor, hyperscaler, or systems integrator.
@@ServeTheHomeVideo That's like telling a kid his Christmas presents came from Amazon instead of Santa.
Put Wendell and me together and it is pretty tiring: ua-cam.com/video/oUugk0INTRU/v-deo.html (that was done in his hotel room late after a big steak dinner.)
My entire country has 100gbit internet
@LinusTechTips
I am working on getting them another switch that we reviewed in 2021 for their setup.
Extremely powerful, sexy, and crazy device! I love it.