I would love to see what STH datacenter looks like and what's it used for. All this fiber and switching has to do something. Electricity bills have to be paid by something. Absolutely enjoy these. Unfortunately I'm stuck with Cisco crap.
Hey Patrick, just FYI, there's a loose screw near the top right wire connectors at 12:28
Ha! Adrian - found that one when I was putting it back together!
Wow! Good eye!!
Loving your videos. Nice to finally see enterprise devices get a tear down. Many I know won't even rack and stack or even touch a brown box. I have always loved tearing things apart to see what makes them tick.
The power supply upgrade is due to the power-hungry SFP modules, since the QSFP28 breakouts on the 1U would almost always be passive cables.
I like how he's so enthusiastic when saying the name. And I'm sitting here like hmm yes. Letters
As a core switch this would be fantastic, multiple 100G between racks then 25G breakout to servers. It's just time before we can have this kind of stuff in home labs.
This is *not* a core switch; it is a top-of-rack switch, though with 96 25Gbps ports it is more like top of a couple of racks. The rear-to-front cooling is a dead giveaway on this front. You would expect a number of 100Gbps uplinks to a pair of 32x100Gbps or 64x100Gbps switches operating in an MC-LAG (or MLAG, VLAG, etc., depending on your switch vendor's naming choice) config. The number of uplinks used depends on the oversubscription ratio you feel is acceptable for your use case.
Thinking about it, you might actually put a couple of these in adjacent racks, do MC-LAG between them, and cross-wire the servers between the two racks with a LAG, with uplinks as before. Nice level of redundancy there, and 48 servers in a rack is a reasonable sweet spot. Currently we do 64 nodes in a rack with four nodes per 2U chassis, but the cabling is a nightmare and seriously impacts the cooling airflow.
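For anyone wanting to sanity-check the uplink count, here is a rough back-of-envelope sketch (just illustrative arithmetic on the port counts mentioned above, not anything from the video):

```python
# Rough oversubscription math for a 96x 25GbE leaf with 100GbE uplinks.
# Illustrative only; pick the uplink count that matches the ratio you can live with.

downlink_ports = 96            # SFP28 server-facing ports
downlink_gbps = 25
uplink_gbps = 100              # QSFP28 uplinks to the MC-LAG pair

downlink_capacity = downlink_ports * downlink_gbps   # 2400 Gbps total

for uplinks in (4, 6, 8, 12):
    ratio = downlink_capacity / (uplinks * uplink_gbps)
    print(f"{uplinks} x 100G uplinks -> {ratio:.1f}:1 oversubscription")
```

So 8x 100G gets you to 3:1 and 12x 100G to 2:1, assuming all 96 downlinks are actually in use.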
Very enthusiastically presented, Bravo!
It's basically the same as the S5232F-ON if you were to use breakouts for 24 of its 100GbE ports. Breakout cables can be better or worse for your use case depending on what cable lengths make sense. It can be nice to have a smaller 1U switch with four cables that consolidate down to one module if everything stays inside a single rack; it is worse if you want a custom length on each of the four cables when distributing to multiple locations.
I ended up with two of the S5232F-ON switches, so I am very familiar with it. I liked the extra 10GbE ports it has.
Yes, the 1U switches scream, especially during boot. I also have to remember to reboot a second time after any firmware upgrade: it forces the fans to 100% and they do not go back down without an extra reboot. (I have forgotten after doing upgrades remotely, and I get that phone call the next day when they can hear it through two layers of walls and a hallway.)
Usually we use 100GbE switches in the lab with breakouts. This was just an experiment to see if it would be any quieter.
@ServeTheHomeVideo Noctua all the things.
Interesting. What we did in the data center is use some of the 40G and 100G Arista switches, with FAPs in our patch panels to break out the 40G ports into 4x10G; three ports on the switch used up one FAP on the patch panel. Same with the 100G ports, which broke out into 4x25G ports.
The only big problem with those is that the optics and breakout cables cost a lot of money, which is why we mostly use DACs with breakout cables when possible. We only use optics for customer racks that require ports at a longer distance. 40G AOC cables work nicely to our top-of-rack switches. :)
Same, but we're switching to dual 100GbE per server (for redundancy). It's nearly the same price as 25GbE, and we're using NVIDIA (Mellanox) switches.
Great review, thank you!
In a few years you'll beat yourself up because you did not install 800G ...
Dang, was that thing installed in a barn? I can't believe the amount of dirt in the port holes, get out the pressure washer and hose it down.
Great video!
I've worked in networking and security for 10-plus years and I've never seen one like this. Must be expensive as hell!
Hey STH, can you send over that switch here to Kenya? I would love to have something like that here.
I'm just building a small internet ISP to improve connectivity.
2U Switch: Where we're going we don't need a stacking cable!
regarding "SFT28 being lower power than QSFP28":
you can get 25G aruba tranceivers that reach 400m on MMF,
but 100G on QSFP28 only reach 100m
Holy smokes how many layers is that PCB? It looks 6mm thick in the B roll!
These all use very thick PCBs.
I saw you pull that baby up, and good thing it wasn't Linus doing it! It would have ended up on the floor behind him! XD
What I really want is a 24x2.5/5/10GBASE-T Access Switch with 4x25GbE SFP28 uplinks and basic management for a reasonable price (
The DXS-1210-28T would almost fit the bill if it had 2.5/5GbE fallback and was a little bit more affordable.
My homelab asked when we can get one... second hand.
I got this one secondhand!
@ServeTheHomeVideo I didn't have the heart to tell it 3k was still out of the question. lol
This network you are working on is gonna fly!
If you actually had a fully populated switch like this in a rack, you would definitely prefer having the out-of-band management and the serial console on the back of the switch. The cabling at the front of a switch like this is a total nightmare; really, it is horrible. It recently took me 10 minutes to get the out-of-band management ethernet cable plugged into a Lenovo G8296 switch, which is very similar to this but 10/40Gbps rather than 25/100Gbps. The serial lead, well, that is a task for a future visit to the data centre. Ports on the back of the switch would have been a total doddle in comparison.
Usually we use PSU-to-port airflow switches, so front management ports are much better. The bend radius that we use in our DCs for fiber means that there is plenty of room to get to the front ports. Conversely, if a switch is at/near the top of a rack and you have to access ports in the middle of a rack, with servers that stick out 0.5m or more past the switch, it can be very hard to get to them. That is why we switched from rear management ports (and even stacking ports) to front ports, save for power, exclusively about five years ago.
@ServeTheHomeVideo Now try again using it as a top of the rack solution with lots and lots of DAC cables coming out the front. For good measure throw in some 40/100Gbps DAC cables. If I posted pictures of our Lenovo G8296 switches I can assure you that you would be changing your view toot sweet. This is a top of the rack switch that will have very few optics installed in the majority of use case scenarios. If you have lots of fibre coming into the switch then that is not how these switches are designed to be used.
Also you of course have some kick step ladders in your data centre don't you? www.laddersukdirect.co.uk/step-ladders/gs-fort-mobile-steps---domed-feet/gse Makes it super easy to get at stuff at the back of the switch 😃 Those along with a pallet lift are essential pieces of equipment for a data centre.
Second comment! Also, that's a lot of bandwidth; it could probably easily handle all the traffic of a small data centre.
Hello world
First!
What a waste of space. In 1U with 32x QSFP28 you can have 128 1-25Gbps ports, so why would you buy such a big switch?
If you have optics, it is quite costly to break out QSFP28 to SFP28, making this form factor much less expensive. If you can use all DACs, you are totally correct.
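As a rough sketch of the module-count side of that trade-off (port counts from the thread; no pricing assumed, since optic prices vary widely):

```python
# Count switch-side modules needed to connect 96 servers at 25GbE two ways.
# Pure arithmetic based on the port counts discussed above.

servers = 96

# Option A: this 2U switch with 96 native SFP28 ports -> one SFP28 optic per port.
sfp28_optics = servers

# Option B: a 1U 32x QSFP28 switch, breaking out ports 4:1 to reach 96x 25G.
qsfp28_breakout_ports = servers // 4      # 24 QSFP28 ports in breakout mode
breakout_assemblies = qsfp28_breakout_ports

print(f"Option A: {sfp28_optics} SFP28 optics on the switch side")
print(f"Option B: {breakout_assemblies} QSFP28 breakout optics on the switch side")
# Fewer modules in Option B, but each QSFP28 breakout optic is far pricier than a
# single SFP28 optic, which is the cost point above; DACs sidestep the optics entirely.
```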
That's like a $30,000 switch. I feel that buying that for a persons home is morally wrong. There are desperate people living on our streets!! What's wrong with you? You're not human.
These were never even close to $30K new. A new 32x 100GbE switch using the same chip has been $10K or less for years.
I really hope this is satire.