A BIG 96x 25GbE and 8x 100GbE port switch! Dell S5296F-ON

  • Published Dec 17, 2024

COMMENTS • 42

  • @dupajasio4801
    @dupajasio4801 3 years ago +16

    I would love to see what the STH datacenter looks like and what it's used for. All this fiber and switching has to do something, and the electricity bills have to be paid by something. Absolutely enjoy these. Unfortunately I'm stuck with Cisco crap.

  • @adriantan8459
    @adriantan8459 3 years ago +4

    Hey Patrick, just FYI, there's a loose screw near the top right wire connectors at 12:28

  • @arXiv76
    @arXiv76 3 years ago +2

    Loving your videos. Nice to finally see enterprise devices get a tear down. Many I know won't even rack and stack or even touch a brown box. I have always loved tearing things apart to see what makes them tick.

  • @cameramaker
    @cameramaker 3 years ago +5

    The power supply upgrade is due to the power-hungry SFP modules, since the QSFP28 breakouts on the 1U would usually just be passive cables.

  • @bryansuh1985
    @bryansuh1985 3 years ago +1

    I like how he's so enthusiastic when saying the name. And I'm sitting here like hmm yes. Letters

  • @johnkeen939
    @johnkeen939 3 years ago +8

    As a core switch this would be fantastic, multiple 100G between racks then 25G breakout to servers. It's just time before we can have this kind of stuff in home labs.

    • @jonathanbuzzard1376
      @jonathanbuzzard1376 3 years ago +4

      This is *not* a core switch, it is a top-of-rack switch, though with 96 25Gbps ports it is more like top of a couple of racks. The rear-to-front cooling is a dead giveaway on this front. You would expect a number of 100Gbps uplinks to a pair of 32x100Gbps or 64x100Gbps switches operating in an MC-LAG (or MLAG, VLAG, etc., depending on your switch vendor's naming choice) config, with the number of uplinks used depending on the oversubscription ratio you feel is acceptable for your use case.
      Thinking about it, you might actually put a couple of these in adjacent racks, do MC-LAG between them, and cross-wire the servers between the two racks with a LAG, with uplinks as before. Nice level of redundancy there, and 48 servers in a rack is a reasonable sweet spot. Currently we do 64 nodes in a rack with four nodes per 2U chassis, but the cabling is a nightmare and seriously impacts the cooling airflow.
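
A quick worked example of the oversubscription math mentioned in the comment above. This is only an illustration using the S5296F-ON's nominal port counts (96x 25GbE down, up to 8x 100GbE up); how many uplinks you actually cable is the design choice being discussed, so both configurations below are assumptions, not a recommendation.

```python
# Illustrative leaf/ToR oversubscription math for the S5296F-ON's nominal
# port counts. The uplink count is a design choice; both calls below are
# just example configurations.

def oversubscription(down_ports, down_gbps, up_ports, up_gbps):
    """Return the downlink:uplink bandwidth ratio."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# All 96 server ports against all 8 uplinks: 2400G / 800G = 3:1
print(oversubscription(96, 25, 8, 100))   # 3.0

# Only 4 uplinks cabled to the spine pair: 2400G / 400G = 6:1
print(oversubscription(96, 25, 4, 100))   # 6.0
```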

  • @youtubecommenter4069
    @youtubecommenter4069 3 years ago +1

    Very enthusiastically presented, Bravo!

  • @UpcraftConsulting
    @UpcraftConsulting 3 years ago +3

    It's basically the same as the S5232F-ON if you were to use breakouts for 24 of its 100-gig ports. Breakout cables can be better or worse for your use case depending on what cable lengths make sense. It can be nice to have a smaller 1U switch with 4 cables that consolidate down to 1 module if you are staying inside a single rack; worse if you want a custom length on each of the 4 cables because you are distributing to multiple locations.
    I ended up with 2 of the S5232F-ON switches, so I'm very familiar with it. I liked the extra 10-gig ports it has.
    Yes, the 1U switches scream, especially during boot. Also I have to remember to reboot a second time after any firmware upgrade; it forces the fans to 100% and they do not go back down without an extra reboot. (I have forgotten after doing upgrades remotely and got that phone call the next day when they could hear it through 2 layers of walls and a hallway.)
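
A minimal sketch of the port math behind that S5232F-ON comparison: splitting 24 of its 32 QSFP28 ports into 4x 25GbE each gives the same 96x 25GbE / 8x 100GbE mix as this 2U switch, which is the point of the comment; the cable-length and cost trade-offs described above are not modeled here.

```python
# Port math behind the S5232F-ON-with-breakouts comparison (illustrative only;
# it ignores the breakout-cable length and cost trade-offs discussed above).

QSFP28_TOTAL = 32            # S5232F-ON front-panel 100GbE ports
BROKEN_OUT = 24              # ports split into 4x 25GbE via breakout cables

sfp28_ports = BROKEN_OUT * 4              # 96x 25GbE server-facing ports
qsfp28_left = QSFP28_TOTAL - BROKEN_OUT   # 8x 100GbE ports remaining

print(f"{sfp28_ports}x 25GbE + {qsfp28_left}x 100GbE")  # matches the S5296F-ON
```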

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago

      Usually we use 100GbE switches in the lab with breakouts. This was just an experiment to see if it would be any quieter.

    • @UpcraftConsulting
      @UpcraftConsulting 3 years ago

      @@ServeTheHomeVideo Noctua all the things.

  • @DrivingWithJake
    @DrivingWithJake 3 years ago +2

    Interesting. What we did in the data center was use some of the 40G and 100G Arista switches, but with FAPs in our patch panels to break the 40G ports out into 4x10G, so three ports on the switch used up one FAP on the patch panel. Same with the 100G ports; those broke out into 4x25G ports.
    The only big problem with those is that the optics and the breakout cables cost a lot of money, which is why we mostly use DACs with breakout cables when possible. The other approach we only use for customer racks that require ports at a longer distance. 40G AOC cables work nicely to our top-of-rack switches. :)

    • @bigbohne
      @bigbohne 3 years ago +1

      Same, but switching to dual 100Gig (redundancy) per server. Nearly the same price as 25Gig, and using NVIDIA (Mellanox) switches.

  • @jfkastner
    @jfkastner 3 years ago +3

    Great review, thank you!
    In a few years you'll beat yourself up because you did not install 800G ...

  • @d00dEEE
    @d00dEEE 3 years ago +3

    Dang, was that thing installed in a barn? I can't believe the amount of dirt in the port holes, get out the pressure washer and hose it down.

  • @LukasHallgren01
    @LukasHallgren01 3 years ago +2

    Great video!

  • @ahabsbane
    @ahabsbane 3 years ago +1

    I've worked in network and security for 10 plus years, I've never seen one like this, must be expensive as hell!

  • @oneito947
    @oneito947 3 years ago +1

    Hey STH, can you send over that switch here to Kenya? I would love to have something like that here.
    Just building a small internet ISP to improve connectivity.

  • @IanBPPK
    @IanBPPK 3 years ago +1

    2U Switch: Where we're going we don't need a stacking cable!

  • @wmopp9100
    @wmopp9100 3 years ago

    regarding "SFT28 being lower power than QSFP28":
    you can get 25G aruba tranceivers that reach 400m on MMF,
    but 100G on QSFP28 only reach 100m

  • @MarkD26
    @MarkD26 3 years ago +1

    Holy smokes how many layers is that PCB? It looks 6mm thick in the B roll!

  • @drtweak87
    @drtweak87 3 years ago +1

    I saw you pull that baby up and good thing it wasn't Linus doing it! It would've ended up on the floor behind him! XD

  • @kwinzman
    @kwinzman 3 years ago

    What I really want is a 24x2.5/5/10GBASE-T Access Switch with 4x25GbE SFP28 uplinks and basic management for a reasonable price (

    • @kwinzman
      @kwinzman 3 years ago +1

      The DXS-1210-28T almost fits the bill if it had 2.5/5GbE fallback and was a little bit more affordable.

  • @oktokt
    @oktokt 3 years ago +1

    My homelab asked when we can get one... second hand.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago

      I got this one secondhand!

    • @oktokt
      @oktokt 3 years ago

      @@ServeTheHomeVideo I didn't have the heart to tell it 3k was still out of the question. lol
      This network you are working on is gonna fly!

  • @jonathanbuzzard1376
    @jonathanbuzzard1376 3 years ago

    If you actually had a fully populated switch like this in a rack, you would definitely prefer having the out-of-band management and the serial console on the back of the switch. The cabling at the front of a switch like this is a total nightmare, really it is horrible. It recently took me 10 minutes to get the out-of-band management ethernet cable plugged into a Lenovo G8296 switch, which is very similar to this but 10/40Gbps rather than 25/100Gbps. The serial lead, well, that is a task for a future visit to the data centre. Ports on the back of the switch would have been a total doddle in comparison.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago

      Usually we use PSU-to-port airflow switches, so front management ports are much better. The bend radius that we use in our DCs for fiber means that there is plenty of room to get to the front ports. Conversely, if a switch is at/near the top of a rack and you have to access ports in the middle of a rack with servers that stick out 0.5m or more past the switch, it can be very hard to get to them. That is why we switched from rear management ports (and even stacking ports) to front ports, save for power, exclusively about five years ago.

    • @jonathanbuzzard1376
      @jonathanbuzzard1376 3 years ago

      @@ServeTheHomeVideo Now try again using it as a top-of-the-rack solution with lots and lots of DAC cables coming out of the front. For good measure throw in some 40/100Gbps DAC cables. If I posted pictures of our Lenovo G8296 switches, I can assure you that you would be changing your view tout de suite. This is a top-of-rack switch that will have very few optics installed in the majority of use cases. If you have lots of fibre coming into the switch, then that is not how these switches are designed to be used.
      Also, you of course have some kick-step ladders in your data centre, don't you? www.laddersukdirect.co.uk/step-ladders/gs-fort-mobile-steps---domed-feet/gse Makes it super easy to get at stuff at the back of the switch 😃 Those, along with a pallet lift, are essential pieces of equipment for a data centre.

  • @arjdroid
    @arjdroid 3 years ago +5

    Second comment! Also, that's a lot of bandwidth, that could probably easily handle all the traffic of a small data centre.

  • @Knightrider159
    @Knightrider159 3 years ago +5

    Hello world

  • @PaCmEn12
    @PaCmEn12 3 years ago +1

    What a waste of space. In 1U with 32x QSFP28 you can have 128 1-25Gbps ports, so why would you buy such a big switch?

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago

      If you have optics, it is quite costly to break out QSFP28 to SFP28, making this form factor much less expensive. If you can use all DACs, you are totally correct.
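
A rough sketch of why the breakout route tends to cost more than native SFP28 ports once optics are involved, as the reply above argues. All prices below are placeholder assumptions purely for illustration, not quotes; the real gap depends entirely on which optics and fiber you buy.

```python
# Rough illustration of the optics-cost argument above. ALL PRICES ARE
# PLACEHOLDER ASSUMPTIONS for the sketch, not real quotes.

servers = 96

# Native SFP28 ports (e.g. this 2U switch): one 25G optic per server-facing port.
price_25g_optic = 40                 # assumed placeholder price
native_cost = servers * price_25g_optic

# Breaking out QSFP28 on a 1U switch with optics: one 100G parallel optic per
# 4 servers, plus an MPO-to-4xLC breakout fiber harness per optic.
price_100g_optic = 200               # assumed placeholder price
price_breakout_harness = 60          # assumed placeholder price
breakout_cost = (servers // 4) * (price_100g_optic + price_breakout_harness)

print(f"native SFP28 optics:    ${native_cost}")    # $3840 with these assumptions
print(f"QSFP28 breakout optics: ${breakout_cost}")  # $6240 with these assumptions
```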

  • @JosiahLuscher
    @JosiahLuscher 3 years ago +1

    That's like a $30,000 switch. I feel that buying that for a persons home is morally wrong. There are desperate people living on our streets!! What's wrong with you? You're not human.

    • @ServeTheHomeVideo
      @ServeTheHomeVideo  3 years ago +2

      These were never even close to $30K new. A new 32x 100GbE switch using the same chip has been $10K or less for years.

    • @kwinzman
      @kwinzman 3 years ago +3

      I really hope this is satire.