The 100gig Adventure

  • Published 21 Nov 2024

COMMENTS • 378

  • @CraftComputing
    @CraftComputing 3 months ago +323

    "I ended up upgrading everything to 100Gb, because that's what you do, right?"
    Yes. That is what you do.

    • @SideSweep-jr1no
      @SideSweep-jr1no 3 months ago

      Yes. That is what you do.

    • @HoangTheBoss
      @HoangTheBoss 3 months ago +1

      @@Prasanna_Shinde did you account for full duplex

    • @gregjensen364
      @gregjensen364 4 days ago

      I had 10G copper and was looking to move to 25G. As soon as I realized most of the pain would be running fiber, I, too, went straight to 100G. Mikrotik for their 16x100G + 4x25G switches, and a few more QNAP 4x100 + 8x25 switches to round it out. 100G at roughly 2x the cost of 10G, and only 20-30% more than 25G.

  • @ryandenotter9064
    @ryandenotter9064 3 months ago +713

    Off to convince my wife of how this will significantly improve our quality of life. Wish me luck! :P

    • @Level1Techs
      @Level1Techs 3 months ago +167

      good luck

    • @piked86
      @piked86 3 months ago

      I just talked my wife into using pfSense to upgrade to 10gig networking with the promise of a VPN into our network for self-hosted AI: Llama and Stable Diffusion.

    • @awarepenguin3376
      @awarepenguin3376 3 months ago +60

      life is short, need 100g

    • @ciroiriarte8804
      @ciroiriarte8804 3 months ago +15

      oh, budget review session

    • @MrMartinSchou
      @MrMartinSchou 3 months ago +25

      I know how you feel, except I'm single. I only have 1 computer and 1 phone. A NAS would be semi-useful, but realistically it'd be more useful to have offsite backup.
      But it's cool tech and damnit, I want it!

  • @gingerman5123
    @gingerman5123 3 months ago +276

    I'm a network engineer at an ISP. We're starting to install 100g internet connections. Pretty wild.

    • @brians8664
      @brians8664 3 months ago +43

      I was just thinking the same thing; we deployed 400Gb backbones years ago. 10Gb WAN connections are a dime a dozen now. We're starting to see dual/quad 100Gb backhauls to cell sites routinely now. Dual is for standard redundancy, quad is for dual-path redundancy.

    • @sznikers
      @sznikers 3 months ago +20

      68Mbps here ... 😂😂😂

    • @Maverick00555
      @Maverick00555 3 months ago +5

      ​@@sznikers 50mbps😅

    • @peq42_
      @peq42_ 3 months ago +2

      judging by how badly affected my ISP was by a DDoS not long ago? I'd say they're still installing 100gbps connections as well (which is funny since they OFFER 1gbps to clients)

    • @brians8664
      @brians8664 3 months ago +8

      Sidenote, I always laugh when I see YouTubers cringe at the price of optics. I always throw up a little when I see the price of a 100 km 100Gb QSFP+ module.
      For most companies, field-side amplification has been gone for many years. It's far cheaper to buy longer-range optics and have multiple links.

  • @lennard9331
    @lennard9331 3 months ago +293

    "...so I ended up upgrading everything to 100Gbps, that's what you do, right?"
    No, Wendell, most of us don't have the kind of equipment to do that 😂

    • @jonathanyang6230
      @jonathanyang6230 3 months ago +22

      i was pretty jazzed when i got myself a 2.5gbe switch

    • @lennard9331
      @lennard9331 3 months ago

      @@jonathanyang6230 I'm not a massive home networking guy, so I make do with what I get from my ISP and what the on-board solution offers.
      That being said, I'm also on 2.5Gbps right now, as French ISPs have started supporting 2.5Gbps, 5Gbps and even 10Gbps modems for end users at affordable prices. The difference it makes is actually insane! I didn't expect moving from gigabit/WiFi 6 to dual 2.5Gbps/WiFi 6E to make such a massive difference, even with the connections between my devices at home.

    • @TheIgor449
      @TheIgor449 3 months ago +9

      Me with my Mel- Nvidia 10Gbps SFP+ card thinking it's overkill for at least 5-10 years

    • @r00tyschannel52
      @r00tyschannel52 3 months ago +3

      I just recently upgraded to 2.5gbe with a 10gbit link between floors. Ah yes, future-proofed for a little while at least.
      "You should all be getting 100gbe, it's old hat by now" the fuuuu. Yeah, I know we're talking enterprise, but still.

    • @hugevibez
      @hugevibez 3 months ago

      It really isn't that far out of reach; you can find a lot of 100gbps switches for the price of a high-end Ubiquiti switch, since the hyperscalers are dumping 100gig en masse. If you are on a 3-node cluster, you can just crossbar them and forego a switch. This means you can spread out buying the NICs and transceivers on the client side, then afterwards buy switches and transceivers on the networking side.
      Edit: I went full balls to the wall, where I upgraded every switch to ONIE/SONiC, and my entire network stack came down to about 3000 Euro. I did this because I wanted to learn SONiC and how to build an overlay network. A more reasonable approach would be to find one or two SN2410 switches for redundancy, with 8x 100gbps and 48x 25gbps ports; this is more than enough connectivity for any homelab in my opinion. You only need something that has RJ45 and PoE+ for clients and APs on the side.

  • @bruce_just_
    @bruce_just_ 3 months ago +61

    Network engineer here. We've been on 100G between Metro POP sites and intercapital links for several months already, and are now standing up multiple 100G link bundles on intracapital core links.
    Also, our colleagues in the internet peering team are running multiple 100G link bundles on our internet borders.

    • @BryanSeitz
      @BryanSeitz 3 months ago

      Why so slow?

    • @alexz1232
      @alexz1232 3 months ago +5

      @@BryanSeitz A lot of the heavy traffic like streaming services and game downloads will have a local cache in major cities. With good management you only need 100G worth of peering bandwidth per 100K clients.

    • @TheMrDrMs
      @TheMrDrMs 3 months ago +1

      Sys eng here, and I dabble in networking with the net eng guys. We got some 400gb switches maybe 6mo ago or so. So wild. Then I saw the Netflix 400Gb-over-PCIe docs, also wild.
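The "100G of peering per 100K clients" rule of thumb in the reply above works out to very little average bandwidth per client, which is why local caches matter so much. A quick sketch (the 90% cache-hit figure is an illustrative assumption, not from the thread):

```python
# Back-of-the-envelope check of "100G of peering per 100K clients".

peering_bps = 100e9        # 100 Gb/s of peering capacity
clients = 100_000

avg_per_client_bps = peering_bps / clients
print(f"Average per client: {avg_per_client_bps / 1e6:.0f} Mbps")  # 1 Mbps

# If local caches absorb, say, 90% of streaming/download traffic, that
# 1 Mbps of peering headroom supports far more apparent demand per client:
cache_hit_ratio = 0.9      # assumed, for illustration
effective_per_client = avg_per_client_bps / (1 - cache_hit_ratio)
print(f"Effective demand supported: {effective_per_client / 1e6:.0f} Mbps per client")  # 10 Mbps
```

One Mbps average sounds tiny, but clients burst rather than stream constantly, and the cache multiplier does the rest.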

  • @floriantthebault521
    @floriantthebault521 3 months ago +95

    Nah, you don't need an ASIC for deep traffic analysis on a 100Gb/s network. At my place, we do DPI at 100Gb/s with only one CPU (32 cores, though) and 64GB of RAM. Full line rate certified by Spirent, at 14 million new sessions per second and 140 Mpps. But to do that, we had to redevelop the drivers for the E810 from scratch in Rust for everything to work in userspace (DPDK is… too limited for that). So it's possible; it took us 3 years of R&D, though ;-)

    • @JeffMcJunkin
      @JeffMcJunkin 3 months ago +6

      Can you share a link or any context? This is intriguing stuff!

    • @quackdoc5074
      @quackdoc5074 3 months ago +2

      out of curiosity, was this done using BPF?

    • @floriantthebault521
      @floriantthebault521 3 months ago

      @@quackdoc5074 Nope. We tried using AF_XDP (because of eBPF), but... it didn't scale enough to reach 100Gbit/s full line rate. It started dropping around 40G and we would have had to throw 40+ cores at it... Too costly. That's why we took the high road and re-developed brand new NIC drivers from scratch for the whole Intel family (from the 500 to the 800 series); it was the only way to achieve true linear scalability.

    • @slidetoc
      @slidetoc 3 months ago

      So when will it be on github?

    • @jfbeam
      @jfbeam 3 months ago +1

      @@quackdoc5074 Doubt it. BPF is too slow. That's why DPDK came about - it's mostly just a NIC driver in userspace, but you're very limited in what you can do in userspace.
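The "140 Mpps" figure above is essentially 100G line rate for minimum-size packets: Ethernet adds an 8-byte preamble and a 12-byte inter-frame gap to every frame, so a 64-byte frame occupies 84 bytes on the wire. A quick check:

```python
# Maximum packets per second on a 100G link, as a function of frame size.
# Each frame carries 8 bytes of preamble + 12 bytes of inter-frame gap
# on the wire in addition to the frame itself.

LINE_RATE = 100e9          # bits per second
PREAMBLE, IFG = 8, 12      # per-frame overhead in bytes

def max_pps(frame_bytes: int) -> float:
    wire_bits = (frame_bytes + PREAMBLE + IFG) * 8
    return LINE_RATE / wire_bits

print(f"64B frames:   {max_pps(64) / 1e6:.1f} Mpps")    # 148.8 Mpps
print(f"1500B frames: {max_pps(1500) / 1e6:.1f} Mpps")  # 8.2 Mpps
```

So 140 Mpps is about 94% of the theoretical 148.8 Mpps worst case, i.e. near line rate under the hardest possible traffic mix.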

  • @funkintonbeardo
    @funkintonbeardo 3 months ago +17

    In a previous job, I worked with 2x400G cards using DPDK. It was glorious to run TRex on the other side of the wire and see 800G flowing through our program.

  • @truckerallikatuk
    @truckerallikatuk 3 months ago +109

    Wendell waves around a ConnectX-5 calling it old... I'm gonna go cuddle my beloved ConnectX-3...

    • @guspaz
      @guspaz 3 months ago

      I recently had to swap out my ConnectX-3 cards for ConnectX-4 cards because Nvidia dropped driver support for the ConnectX-3 after 2020 (so Ubuntu 20.04 is fine, but 22.04 and 24.04 are a no-go), but they still support the latest and greatest distros/kernels with the ConnectX-4. Luckily, 25 gig ConnectX-4 cards are now dirt cheap and are backwards compatible with SFP+, so I could simultaneously fix my driver woes, set myself up for a future 25 gig upgrade, and avoid replacing anything in my network other than the NICs.

    • @samegoi
      @samegoi 3 months ago +1

      I installed X-4 cards today :D

    • @BattousaiHBr
      @BattousaiHBr 3 months ago +4

      bruh i'm on X-2...

    • @Vatharian
      @Vatharian 3 months ago

      I am absolutely happy I got ConnectX-5s at under $100/piece a few years ago. I started from a ConnectX-2. You will get there!

    • @peterpain6625
      @peterpain6625 3 months ago

      Those are still doing fine if you update the firmware. We've got lots of those in production.

  • @dbcooper7326
    @dbcooper7326 3 months ago +36

    My first corporate job had 4Mb/s Token Ring networking. What a leap since.

  • @alc5440
    @alc5440 3 months ago +59

    God, the FEC nightmare is real. I spent days trying to figure out why I couldn't get a Mikrotik router to talk to a Ubiquiti switch at 25 gig, and the answer was FEC.

  • @DavidEsotica
    @DavidEsotica 3 months ago +32

    Fair to say I didn't understand most of what he was talking about but it was fun to listen to.

  • @abavariannormiepleb9470
    @abavariannormiepleb9470 4 months ago +126

    Video/content suggestion: Boot Windows over such a 100 GbE adapter from a ZFS server and how to get the most performance out of it.

    • @Kfdhjgethfdtgh774rvbjs
      @Kfdhjgethfdtgh774rvbjs 3 months ago +2

      +1

    • @BasedPajeet
      @BasedPajeet 3 months ago +7

      You need an iSCSI-boot-capable motherboard; the last time I saw that option it was in an Intel NUC.

    • @fujinshu
      @fujinshu 3 months ago +3

      @@BasedPajeet You mean PXE boot, right?

    • @magfal
      @magfal 3 months ago +4

      ​@@fujinshu PXE is not the same as iSCSI

    • @nadtz
      @nadtz 3 months ago +6

      @@fujinshu No, he meant iSCSI boot, and like he mentioned, it's a boot option in some motherboard BIOSes (the NIC also needs to support it). It can be a little quirky to get working (did it with some Supermicro motherboards once upon a time), but once you get it working it's pretty neat. That said, I don't believe Intel still supports it, but you can do something similar with UEFI boot options on hardware that supports it (and for that you will need PXE).

  • @dbattleaxe
    @dbattleaxe 3 months ago +27

    You should do a review of QNAP's 4x100GbE + 8x25GbE switch. It's reasonably priced and uses current-gen tech, so much lower power/fan noise, and it has more ports than the Mikrotik 100GbE switch. It won't have all the fancy layer 3 capabilities of the used switches, but I'd like to see how it compares for those of us who care about noise.

    • @jttech44
      @jttech44 3 months ago +2

      I'd argue that the Mikrotik cloud routers don't actually have usable L3 features, given that even simple things wind up limiting throughput to ~400mbps

    • @dbattleaxe
      @dbattleaxe 3 months ago +3

      @@jttech44 Yeah, that was in reference to the used Mellanox, Dell, etc. switches, not the Mikrotik ones. These low-cost switches don't really have usable L3 features, but most home labs don't really need those.

  • @makinbacon21
    @makinbacon21 3 months ago +5

    We've got that server deployed as a VM host, actually. Proxmox, ZFS (RAIDZ2 on SAS HDDs + L2ARC & SLOG on NVMe). Wonderful piece of hw, though we're likely only getting to 10 GbE this year. Might future-proof with a 25 GbE-capable switch, but the upstream switch we're linked to only got to 10 GbE recently (low priority; other buildings are up to 25 and 100).

  • @michaelrichardson8467
    @michaelrichardson8467 3 months ago +32

    Ahh yes 100Gbit. The real 10Gig

    • @JireSoftware
      @JireSoftware 3 months ago

      10 gig was rad for 2014! 10 years later and now it's 100 gig!

    •  3 months ago

      nice autism jire @@JireSoftware

  • @q5sys
    @q5sys 3 months ago +59

    "Experimental RDMA capable SMB Server that's floating around out there on the internet" ... GO ON...

    • @fuzzydogdog
      @fuzzydogdog 3 months ago +9

      Likely referring to ksmbd, it's an in-kernel server that got declared stable late last year. There's a couple threads on the forums about it, but Windows seems to have trouble establishing RDMA connections with it.

    • @q5sys
      @q5sys 3 months ago +4

      @@fuzzydogdog I thought of that, but since that has been marked as stable and he said 'experimental' I was thinking maybe he has heard of something else.

    • @rawhide_kobayashi
      @rawhide_kobayashi 3 months ago +2

      @@fuzzydogdog man, I've been fighting linux server > windows client rdma file sharing for years. I tried ksmbd before it was 'stable' (but after rdma was supposed to be supported) and it never worked. but now I don't have any rdma-capable connections between windows and linux machines anymore anyway...

    • @NathanaelNewton
      @NathanaelNewton 3 months ago

      ​@@q5sys oh hi there

  • @PaperReaper
    @PaperReaper 3 months ago +3

    Work at a small cloud provider. We only just upgraded to 100gig in our backbone a year or so ago and are expanding that soonish. A few 100gig switches went through my hands the other day for software updates.

  • @dbattleaxe
    @dbattleaxe 3 months ago +16

    I got used 100GbE-CWDM4 transceivers for $5 each off ebay. Those run with LC terminated duplex single mode fiber, which is much easier to deal with than 8 fiber MPO.

    • @danmerillat
      @danmerillat 3 months ago

      same for 40gbe-lr4 (lite). They're basically paying you to buy them when you save so much in cable costs going from MPO to LC and you don't have to deal with crossover mismatch.

  • @jamesfmilne
    @jamesfmilne 3 months ago +5

    We use Xinnor for NVMe RAID in film post-production, where bandwidth matters more than the data-integrity features of something like ZFS.

  • @RaidOwl
    @RaidOwl 3 months ago +12

    Ahh yes 100gig...my nemesis

    • @seanunderscorepry
      @seanunderscorepry 3 months ago +2

      "and if you made it to the end of the video... You are my RaidOwl comment on a level1techs video."

  • @porklaser
    @porklaser 3 months ago +3

    25 Gig was easy and worked out of the box.. So naturally I had to go the hard route.
    Hehe. Upgraded the home net to a 10gig backbone and I was feelin' pretty good.

  • @keyboard_g
    @keyboard_g 3 months ago +8

    Slaps 100Gbps. You can fit so many YouTube videos in there.

  • @niklasp.5847
    @niklasp.5847 3 months ago +6

    Remember, the Intel cards are 100G full duplex, while the Mellanox could push line rate per port if the PCIe bus didn't limit it. The CX4 is still supported, as it uses the same driver as the CX7. If one does not need the new features like 200G or 400G, the old cards are almost as capable. The same could not, however, be said for 100G cards from QLogic, which are a pain in the ass compared to MLX and Intel.
    I would love to see some stuff with DPDK and VPP. A 100G router on x86 is very cool
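The PCIe limit mentioned above is easy to put numbers on: a dual-port 100G ConnectX-4 sits in a PCIe 3.0 x16 slot, which tops out below 2x100G per direction. A sketch (ignoring PCIe protocol overhead, which lowers the usable figures further):

```python
# Raw PCIe bandwidth per direction vs. what two 100G ports need.

def pcie_gbps(gtps: float, lanes: int, enc_num: int, enc_den: int) -> float:
    """Link bandwidth per direction in Gb/s after line encoding."""
    return gtps * lanes * enc_num / enc_den

# PCIe 3.0/4.0 use 128b/130b encoding at 8 and 16 GT/s per lane.
gen3_x16 = pcie_gbps(8.0, 16, 128, 130)    # ~126 Gb/s
gen4_x16 = pcie_gbps(16.0, 16, 128, 130)   # ~252 Gb/s

print(f"PCIe 3.0 x16: {gen3_x16:.0f} Gb/s vs 2x100G = 200 Gb/s -> bus-limited")
print(f"PCIe 4.0 x16: {gen4_x16:.0f} Gb/s -> both ports can run line rate")
```

So a Gen3 card can saturate one port, or both ports at roughly 60% each, while Gen4-era cards remove the bottleneck.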

  • @ingframin
    @ingframin 3 months ago +3

    I bought 2 cards with the Intel E810 at work. They work like a charm and the driver is open source. You need to compile it yourself for Debian… but otherwise they are basically plug and play. I am very happy with them.

  • @michaelgleason4791
    @michaelgleason4791 3 months ago

    I was going to upgrade to 10gb but went with 25gb, so I get the notion. I just love watching Wendell when he's like a little kid about this stuff. It's so fun and engaging.

  • @vdis
    @vdis 3 months ago +5

    100 Gigabit. Gigabit! Well thanks, now I feel really old. 10 megabit coax old.

  • @literallycanadian
    @literallycanadian 3 months ago

    It's so funny hearing you talk about how the orange OM1 fibre is dinosaur-age. Industrial plants still live on the stuff. Heck, the protective relays that control the breakers that protect our electrical grid are still being installed to this day with OM1 fibre.

  • @ewenchan1239
    @ewenchan1239 3 months ago +3

    I've been running Mellanox ConnectX-4 dual 100 Gbps VPI Infiniband cards in the basement of my home since December 2018.
    Skipped 10G, etc. and went straight from 1 G to 100 Gbps.
    IB has its quirks but it is anywhere between 1-3% faster than 100 GbE off the same card.

  • @jame358
    @jame358 3 months ago +7

    it feels weird working in a bigtech DC and seeing people talk about 100gig, meanwhile I'm regularly working on switches with 32 400gig QSFP ports.

    • @Ex_impius
      @Ex_impius 3 months ago +1

      Same thing I said. 100G is not new. We were installing 100G links in Google DCs in 2017. There were only a few 100G links on the Juniper and Cisco routers in the CNR (campus networking room) then, but we had them.

    • @darklordzqwerty
      @darklordzqwerty 3 months ago +1

      lol I'm working on 800gig, final development phase before the production version. seeing this is funny, there's much more exciting stuff.

  • @tomhollins5303
    @tomhollins5303 3 months ago

    That switch @15:52. Definitely an IT cabling job.

  • @piked86
    @piked86 3 months ago +24

    Wendell is my Mr. Wizard for computers.

    • @vaughn1804
      @vaughn1804 3 months ago

      💯👍

    • @ikirules
      @ikirules 3 months ago

      it's the Crawly of the computers :D! (tiktok reference)

    • @makeshiftsavant
      @makeshiftsavant 3 months ago +1

      In the current year, I don't learn anything from watching anyone else. Love the Wendell evolution.

    • @lilricky2515
      @lilricky2515 3 months ago

      "We're going to need another Timmy..."

  • @newstandardaccount
    @newstandardaccount 3 months ago +4

    I just upgraded part of my home network from 1 gbe to 10 gbe and it was a huge quality of life improvement. Moving large files to/from my NAS is fast! Upgrading to 100 gbe sounds insane to me.

    • @mrmotofy
      @mrmotofy 3 months ago +1

      Need storage speeds to make use of it

    • @newstandardaccount
      @newstandardaccount 3 months ago

      @@mrmotofy yes - in my case I'm using ZFS on rotational media. What I've noticed is that for files that are about 7 gigs or smaller, I can copy them to my server at over 1 GB/sec, but eventually the speed drops to the rotational media speed, about 250 MB/sec.
      My guess is that ZFS is caching the writes in some way but once I blow out the cache, it is forced to write at the slower speeds of the media.
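That guess matches how ZFS buffers asynchronous writes: incoming data accumulates as "dirty" data in RAM (capped by the zfs_dirty_data_max tunable) and is flushed to disk in transaction groups, so bursts up to roughly the cap run at network speed. A toy model, with the cap and rates below as illustrative assumptions rather than measurements from this system:

```python
# Toy model of a write burst into ZFS backed by rotational media:
# RAM absorbs writes at network speed until the dirty-data cap fills,
# then throughput falls to what the disks can drain.
# All three numbers are assumed for illustration.

ingest = 1.1e9          # ~1.1 GB/s arriving over the 10GbE link
drain = 250e6           # ~250 MB/s sustained to the spinning disks
dirty_max = 6e9         # assumed dirty-data cap, ~6 GB

# Dirty data grows at (ingest - drain); time until the cap is hit:
t_fast = dirty_max / (ingest - drain)
data_at_full_speed = ingest * t_fast

print(f"Fast phase lasts ~{t_fast:.1f} s")                       # ~7.1 s
print(f"~{data_at_full_speed / 1e9:.1f} GB copied before dropping to disk speed")  # ~7.8 GB
```

With these assumed numbers the model reproduces the observed behaviour: files up to roughly 7 GB complete at network speed, larger ones fall back to ~250 MB/s.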

  • @nemesis851_
    @nemesis851_ 3 months ago

    Omg 😂 25 years ago I worked in the ILEC CO and we had OC-3, all the way down to 1 Meg (and less).
    Here today, we the general public can have OC3 in our hands at home 😊

  • @sventharfatman
    @sventharfatman 3 months ago +1

    100G has options for DR, which shoots 1310nm to 500 meters, and FR, which shoots 1310nm to 2km. Both are plenty safe for short runs in the same rack or within the datacenter. Even the 10km optics are unlikely to burn out the receiving side these days. Most of them have an RX window starting at or above the top of the TX window, so you should be good to go once you add some loss through connectors.

    • @danmerillat
      @danmerillat 3 months ago +1

      Don't they modulate their optical power? With 40G-LR4 lite I'm transmitting 0.5 dBm, receiving -1, and it's rated for up to 3.5 (TX and RX)
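The RX-window reasoning above is ordinary link-budget arithmetic: the link works when launch power minus fiber and connector loss lands inside the receiver's window. A minimal sketch with assumed power levels (not taken from any specific 100G optic's datasheet):

```python
# Optical link budget sketch. All dBm/dB values are illustrative
# assumptions, roughly in the range the comment above mentions.

tx_power_dbm = 0.5                     # transmit launch power
rx_min_dbm, rx_max_dbm = -10.0, 3.5    # assumed receiver window

def link_ok(length_km: float, connectors: int,
            fiber_db_per_km: float = 0.4, connector_db: float = 0.5) -> bool:
    """True when received power falls inside the receiver's window."""
    loss = length_km * fiber_db_per_km + connectors * connector_db
    rx = tx_power_dbm - loss
    return rx_min_dbm <= rx <= rx_max_dbm

print(link_ok(0.003, 2))   # 3 m patch in the same rack -> True
print(link_ok(10.0, 4))    # 10 km run: depends on the budget above
```

Because the assumed rx_max sits above the TX power, even a near-zero-loss patch cable stays in the window, which is the "won't burn out the receiver on short runs" point.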

  • @m4dizzle
    @m4dizzle 3 months ago +1

    I love when Wendell gets excited. Personally, I'm impressed when Linux does anything at all 😂

  • @tmfmxo
    @tmfmxo 3 months ago

    I like your calm no-nonsense presentation.

  • @ajhieb
    @ajhieb 3 months ago +2

    I upgraded everything in my rack to 40GbE a couple of years ago (and it was pretty dang cheap at the time), and seeing as I don't have any uber-fast Kioxia drives, I don't think the jump to 100GbE is worth the cost for me. Might wait for 400GbE to come down in a few years.
    My core switch is a Mellanox SX6036. It connects to a Dell/Force10 S4810 and a pair of PowerConnect 5548 switches (one with PoE). My NICs are mostly Mellanox ConnectX-3s with a few Chelsio and Intel-based NICs. It all worked together surprisingly well. IIRC, for the NICs, DAC cables and 2 switches I was all in for around $600-$700.

    • @mrmotofy
      @mrmotofy 3 months ago

      Your Corn collection must make you really good money

  • @rmp5s
    @rmp5s 3 months ago +2

    See?...this is how I end up replacing my server, all my storage, all my networking gear, everything...the chase!! Speed is never enough!! lol

  • @jttech44
    @jttech44 3 months ago +3

    Oh mannn yeah, FEC bit me in the ass to the tune of like 4 hours on those XG switches.
    It is fixable in any condition, provided you actually have control over FEC on both ends... it's just a pain, and I'd highly recommend just using what Ubiquiti uses (Base-R aka FEC74 in the rest of the world) and not trying to get the ubiquiti setting to persist.
    I'd also recommend filing a ticket with UI to complain about their nonsensical, nonstandard FEC settings.

  • @jeroen5838
    @jeroen5838 3 months ago +2

    I just upgraded to 10 and 2.5… it required special fumbling with driver versions on windows, because of course intel. But it works! Now I need faster WiFi and fiber internet…

  • @DEJ915
    @DEJ915 3 months ago +3

    My experience moving from 10g to 25g was interesting, since no links would come up just by plugging in. After a long session of messing around with it, I figured out that if I enable FEC and then disable it again, the links activate. I thought maybe the firmware was the issue on my X695s, since Extreme mentioned FEC in update notes, but the same behaviour continued even after updating, so it's just a bit odd. These weren't even LACP ports or anything, just basic ports. (Host side was dual-port 25G Broadcom NICs on VMware 7.)

  • @Chris_miller192
    @Chris_miller192 3 months ago +1

    A guide / video on the experimental RDMA SMB server that you mentioned would be lovely.

  • @guspaz
    @guspaz 3 months ago +15

    From a consumer perspective, 10 gig hardware (be it NICs, transceivers, or switches) is dirt cheap now. You can get any of those for under a hundred dollars. But for 25 gig and 100 gig, while the NICs/transceivers are affordable or even cheap, the switches are still in the high hundreds to low thousands of dollars. And those switches will probably need a secondary switch to connect 1 or 2.5 gig devices to your network; you can't really get an all-in-one switch like you can for mixing 1/2.5 and 10 gig clients. The costs add up fast and your minimum investment ends up in the thousands of dollars.

    • @mrmotofy
      @mrmotofy 3 months ago +1

      Sure, cuz right now I'm looking at an 8-port SFP+ Mikrotik switch for $230

    • @vylbird8014
      @vylbird8014 3 months ago

      I have noticed one small problem though: that 10gig hardware consumes significantly more power than regular gig-eth. Notice the great big heatsink on every PCIe 10gbit interface.

    • @mrmotofy
      @mrmotofy 3 months ago

      @@vylbird8014 Fiber doesn't

  • @edwardallenthree
    @edwardallenthree 3 months ago +4

    I still use old infiniband stuff (40 gig) at home. It's still shockingly fast and low latency. And cheap for all the parts from switches to nics.

  • @marktackman2886
    @marktackman2886 3 months ago +2

    Network Engineers support Wendell's 100g journey. Now let's get a desktop system capable of the throughput without an accelerator card.

  • @locusm
    @locusm 3 months ago

    Keen to see a video looking at Xinnor, having just started down the SPDK route myself.
    Even better if it's done in the context of Proxmox and virtualisation in general.

  • @ReeseRiverson
    @ReeseRiverson 3 months ago +3

    And here I was thinking about going 25gbit for my Dell storage server and my main desktop. Wendell has made 100gbit even more desirable. lol

    • @ewenchan1239
      @ewenchan1239 3 months ago +1

      If you're running a point-to-point connection, it's not so bad.
      If you need a 100 Gbps switch in between, THAT is probably going to be your most expensive piece of hardware. (It depends on how many ports you need, but there are cheaper options in absolute price, albeit at a higher $/Gbps throughput level.)
      I bought my 36-port 100 Gbps Infiniband switch for $3000. $3000 is a lot of money, but that's $3000 / 7.2 Tbps of throughput = $0.4166/Gbps.
      You can have cheaper switches, but the $/Gbps will be higher.
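The $/Gbps arithmetic above generalizes to a one-liner; the 4x100G switch price below is a hypothetical figure for comparison, not a quoted price:

```python
# Price per Gb/s of aggregate switch throughput. Aggregate counts both
# directions (full duplex), matching the 36x100G = 7.2 Tbps figure above.

def dollars_per_gbps(price: float, ports: int, port_gbps: int) -> float:
    aggregate = ports * port_gbps * 2   # x2 for full duplex
    return price / aggregate

big_ib = dollars_per_gbps(3000, 36, 100)   # 36-port 100G: ~$0.417/Gbps
small = dollars_per_gbps(700, 4, 100)      # hypothetical 4x100G switch

print(f"36x100G @ $3000: ${big_ib:.3f}/Gbps")
print(f" 4x100G @ $700:  ${small:.3f}/Gbps")   # cheaper box, higher $/Gbps
```

Which confirms the comment's point: the small switch costs less in absolute terms but roughly double per Gb/s of capacity.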

  • @HyenaEmpyema
    @HyenaEmpyema 3 months ago +4

    I want the 2x 100g port just so I can push 100gb/s back to myself for the thrill of it. Do you have to bolt your chassis down to protect against the inertia of all those bits slamming into the receiver so quickly?

  • @rush2489
    @rush2489 3 months ago +1

    We deployed Pensando DPUs in our latest datacenter refresh (dual 100Gb ports), plus a second ConnectX-6 dual 100Gb card for RDMA support (dual DPUs weren't yet supported when we designed the hardware refresh).
    It's stupid fast.

  • @russell2952
    @russell2952 3 months ago +1

    A few times a year it'd be nice to have that at home, otherwise it's overkill. The most I normally do on my LAN is stream movies.

  • @pixiepaws99
    @pixiepaws99 3 months ago

    Big thing is RDMA/ROCE, and thankfully E810 supports that. You can easily do iSCSI with iSER and NVMeoF with RDMA.

  • @Aliamus_
    @Aliamus_ 3 months ago

    I semi-recently upgraded to 10gbe, and I'm barely keeping it saturated one way (2 single-port ConnectX-3s, actually). I'll be happy with this for a long while; doesn't stop me from drooling over this though.

  • @LinniageX
    @LinniageX 3 months ago

    I just built an iSCSI setup using Hyper-V and Windows Server as a file server and it is actually really performant. I did have to fiddle with the iSCSI and NIC settings to get the optimal throughput but with 2 10G fiber channels per server, I have a very stable very fast fail-over cluster using built in software. Things have really changed.

  • @CubbyTech
    @CubbyTech 3 months ago

    100G 10km / LR optics are super common in my data centers! Used for 3ft to 15 miles! You only have to think about it when you get to ZR optics / 80km reach.
    This is also over single mode fiber.

  • @killroy713
    @killroy713 3 months ago

    Been playing a lot with 100G at work on a few Pure FlashBlade//S systems. It's crazy to see 100-terabyte migrations happen over a lunch break

  • @TechAmbr
    @TechAmbr 3 months ago +5

    Good lord, I just upgraded my home network to 2.5gig and now 100gig is a thing? Wendell, how much speed do you need????? Hopefully the Level1 Supercomputer Cluster will be showing up in the Top100 soon lol

    • @guspaz
      @guspaz 3 months ago +1

      With 10 gig switches being under a hundred bucks now, 2.5 is old hat ;)

  • @burningglory2373
    @burningglory2373 3 months ago +1

    If you do end up using long-haul cards you can get attenuators; I just recommend having a light meter handy so you can see exactly how much you need to attenuate. Not that it makes sense to pay the extra money, but if you have the hardware just sitting around, you make do with what you've got.

  • @HupfderFloh
    @HupfderFloh 3 months ago

    I think you're definitely getting into Level 2 territory here.

  • @LanceThumping
    @LanceThumping 3 months ago

    I'm jealous of all your toys. I don't have near the performance of storage to even come close to being able to use a single 100G lane.
    Hell I don't think I can saturate my 10G lane that I can't even use atm because of reasons.

  • @theforthdoctor7872
    @theforthdoctor7872 3 months ago +1

    For a moment I thought you had discovered a huge lost Infocom text adventure.

  • @wabash9000
    @wabash9000 3 months ago

    I've been thinking about getting 40gb for home just for the heck of it. The used 10gb hardware is actually quite expensive, but because there was a bunch of datacenters that had 40gb and then upgraded to 100gb or 250gb, the used hardware on ebay is CHEAP. $20 for a dual NIC, $150-250 for a 36 port switch. The same stuff in used 10gb is $100 per NIC and $500 per switch.

  • @awarepenguin3376
    @awarepenguin3376 3 months ago +7

    Unfortunately I'm a Mac so we can't have nice 100G. 40G is the max. (cries in speed)

  • @FrenziedManbeast
    @FrenziedManbeast 3 months ago +12

    And I'm sitting over here acting smug with my 2.5 and 10 GbE...I don't think I could saturate 100GbE but the 25 would be nice under certain workloads!

    • @nadtz
      @nadtz 3 months ago

      Yeah I wouldn't need it most of the time but 25 would be nice. Problem I've found is the affordable 25/40gb switches are older, loud and power hungry. Newer stuff that is a lot more power efficient is still pretty expensive. For now I'll stick with 10 but I'm keeping my eye out for an upgrade.

    • @StaceySchroederCanada
      @StaceySchroederCanada 3 months ago

      @@nadtz what is it about the 10 that you don't like?

    • @nadtz
      @nadtz 3 months ago +1

      @@StaceySchroederCanada Never said I didn't like it, price vs. performance 10gb is great because it's so (relatively) cheap. It would occasionally be nice to have a faster connection when I'm dumping large amounts of data across my network is all.

    • @peq42_
      @peq42_ 3 months ago +1

      to be fair 2.5GbE isn't something to be smug about when all current motherboards come with support for it out of the box xD

    • @FrenziedManbeast
      @FrenziedManbeast 3 months ago

      @@peq42_ This is patently false - you can find both AM5 and LGA 1700 mobos that are $160+ with only a single 1GbE port, and that's not even looking at the ghetto chipsets on each platform.
      Even figuring that, I'd reckon most people don't have 2.5 GbE switching/routing infrastructure unless they went out of their way on purpose for it.
      I personally have my workstation and server running on 10GbE, and everything else is 2.5GbE at this point. But I had to sit down and purposely plan/buy all the gear to make the jump possible for my whole house.

  • @marktackman2886
    @marktackman2886 3 months ago

    Network Engineers love Intel NICs; the 5xx, 7xx, and 8xx generations have been so reliable... Snatched 4x X550s from AliExpress. My home lab has 10% of the capabilities but still cost $1k for 4x NICs, a 10g switch, and a 10g router. If you want to know the exact setup, lemme know, I'll post it.

  • @kelownatechkid
    @kelownatechkid 3 months ago +1

    Great for Ceph! At this point I don't trust my important data to anything other than erasure coding in a Ceph cluster of machines running ECC lol

  • @smeezer
    @smeezer 3 months ago

    16:18 Wendell about to get roasted by JayzTwoCents for cable management xD

  • @fuzzydogdog
    @fuzzydogdog 3 months ago +1

    Wow, this was the exact video I needed considering I'm starting to plan my 100GbE buildout! Have you had any success configuring SMB-over-RDMA on a Linux host yet? I know ksmbd *technically* supports it, but I haven't seen any evidence of a successful Linux-Windows transfer.

  • @henlego
    @henlego 3 months ago +2

    Now that you have 100G, may as well set up a Lustre volume

  • @cmkeelDIM
    @cmkeelDIM 2 months ago

    As a Muggle in all of this, I looked at the prices of some of these components. NOPE! Fun to see where the bleeding edge is, but DAMN!!!! DEEEEEEEEEP Pockets required!

  • @cmdr_stretchedguy
    @cmdr_stretchedguy 3 місяці тому +1

I am just starting into the 2.5G realm; I just don't have the need or the equipment to move data that fast on my LAN. 2.5G is a good spot, since I will be getting 2G fiber WAN soon.

  • @bcredeur97
    @bcredeur97 3 місяці тому

We use RDMA at work with 25G connections for a storage cluster. It helps even at that speed!
It's pretty much the only way to get ALL the IOPS, from what I've found.
Wish Ceph would use it :/ There isn't much in the open-source storage clustering world that can go super fast at a small scale

  • @MatthewSmithx
    @MatthewSmithx 3 місяці тому

Technically, InfiniBand is an IBTA technology, and there used to be several vendors who implemented it. Ironically, Intel was one of the founding members of the IBTA but abandoned it for Omni-Path

  • @lolmao500
    @lolmao500 3 місяці тому +6

Me getting 500 Mbps internet. Me after getting the new internet: realizes that my computer, router and Wi-Fi can't handle it and top out at 100 Mbps... goddamn it

    • @ax14pz107
      @ax14pz107 3 місяці тому +8

Most likely your computer and switch can handle it fine, and it's just your router being the choke point, unless you have a seriously old computer and switch.

    • @whohan779
      @whohan779 3 місяці тому +3

​@@ax14pz107 Yeah; even desktop PCs from 2004 with mainboards such as the Gigabyte 915PM-ILR come with Gigabit Ethernet. I'd be seriously embarrassed not to have that as a minimum when such boards regularly end up in scrapyards.

    • @danmerillat
      @danmerillat 3 місяці тому

@@whohan779 He mentioned Wi-Fi, so I'm guessing it's not a hard link.

  • @frankwalder3608
    @frankwalder3608 3 місяці тому

Even on eBay, those Dell 5200 series switches are around $4,000! There is no way I will need that kind of bandwidth. A lot of things on my LAN don't even have 10 gig capacity. So after spending four grand on a switch, and realizing that my local DNS still doesn't work, then what am I going to do?! Just like Patrick from STH, you show a lot of interesting equipment that I can't afford, and would have no use for if I could.

  • @DavidtheSwarfer
    @DavidtheSwarfer 3 місяці тому

So what started as a "2.5Gb workstation to server" direct link ended up as a 5-port 2.5Gb switch and 3 machines linked at 2.5Gb, with a 4th coming soon. The upshot is I have a faster network at home than at work (everything is still 1Gb there). What a wonderful world we live in.

  • @WiihawkPL
    @WiihawkPL 3 місяці тому +1

Having failed to get into even 10gig because of the price, I'm sure this is worth watching

    • @danmerillat
      @danmerillat 3 місяці тому

10G copper is expensive for a variety of reasons: copper is hard to drive, the used market has huge demand since it (mostly) works with existing Cat6 wiring, and high-end boards come with 10GBASE-T connections. It cost me far more for a 10GBASE-T connection using existing wires than for 40G-LR4 where I had to run a new fiber drop from my network closet to my office, including the cost of the fiber, keystones, and patches. If you are in a position to run fiber, it opens up a world of cheap secondhand high-speed networking gear. I spent under $400 total for 40G, and 100G would have only doubled that. (Not exactly comparable: 100G point-to-point to my NAS vs. 40G with a switch included.)

  • @kirksteinklauber260
    @kirksteinklauber260 3 місяці тому +1

You should consider the Mikrotik CRS5xx switch series. They support 100 Gbps, are affordable, and have tons of enterprise features!

  • @AnthonyBrice-jg2ey
    @AnthonyBrice-jg2ey 3 місяці тому

    Thumbnail looks like Wendell's holding a delicious cocktail.

  • @SirDimpls
    @SirDimpls 3 місяці тому +1

I don't know what most of this video is talking about, but the other day I discovered the reason my 2-year-old home file server was slow: I had used a really cheap 1m LAN cable that could only run at 100 Mbps. I swapped it for another cheap LAN cable and magically got a 1,000 Mbps upgrade 😂
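A cheap or damaged cable like this usually shows up as a link that negotiated 100 instead of 1000 Mb/s. A quick way to check, as a minimal sketch assuming a Linux host (the interface name `eth0` is an example, not from the video):

```python
from pathlib import Path

def link_speed_mbps(iface: str, sysfs: str = "/sys/class/net") -> int:
    """Return the negotiated link speed in Mb/s for a Linux interface.

    The kernel exposes the speed (100, 1000, 2500, ...) at
    <sysfs>/<iface>/speed once the link is up; it reads -1 when down.
    """
    speed = int(Path(sysfs, iface, "speed").read_text().strip())
    if speed < 0:
        raise RuntimeError(f"{iface}: link is down")
    return speed

# A bad cable problem often shows up here as 100 on a gigabit port.
# link_speed_mbps("eth0")
```

The same number is what `ethtool` reports as "Speed:"; reading sysfs just avoids parsing its output.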

  • @gammafilter
    @gammafilter 3 місяці тому +1

We all need a 100 gig OCP 3.0 card in our gaming PCs now 😂😂😂. If only the board partners would get their priorities straight 🎉

  • @ghydda
    @ghydda 3 місяці тому

I think an update on the toilet lid collection is overdue, just to balance out these latest tech upgrades.

  • @larslrs7234
    @larslrs7234 3 місяці тому +1

    "Even for Windows systems. You just put a NIC in and it works." Some UDP stats please.

  • @bingo475
    @bingo475 3 місяці тому

At 2:05 you mentioned the fiber optic cable colors; could you do a more detailed video on the colors and their uses? I work at a company that manufactures and tests fiber optic equipment. We use yellow fiber for 100GbE and 800/1,600GbE, and aqua fiber for 400/800GbE, but I have no idea why 400/800GbE gets aqua fiber.

  • @variancewithin
    @variancewithin 3 місяці тому

Your intro has 'the lick' in it lmao. That's hilarious

  • @Marc.Google
    @Marc.Google 3 місяці тому

    Truly a super-janitor.

  • @L0rdLogan
    @L0rdLogan 3 місяці тому

    Here I am considering upgrading to 2.5gig, currently on 1 gig for LAN networking

  • @lemmonsinmyeyes
    @lemmonsinmyeyes 3 місяці тому

The thing with fiber optic tuning/distance setting: I wonder why the onboard controller couldn't just 1) detect that a cable was inserted, 2) have a 'self-tuning' mode where it starts with the lowest signal and gradually increases until the signal is detected on the other end, 3) detect when a cable is unplugged, set the interface to 'reset' mode, and run the self-tuning again, and 4) to make this work when things are plugged/unplugged without power, have a physical switch that is actuated upon insertion.
I am a dumb idiot, so maybe this exists, or there are significant problems with doing the aforementioned.
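The ramp-up idea in that comment amounts to a tiny control loop. A purely hypothetical sketch (no real transceiver exposes exactly this; `set_power` and `link_up` stand in for whatever the optic's firmware would provide):

```python
def tune_tx_power(set_power, link_up, levels):
    """Hypothetical optic auto-tuning: start at the lowest TX power level
    and step up until the far end reports link, so the lowest power that
    works is used and neither side's receiver gets blasted."""
    for level in sorted(levels):
        set_power(level)       # drive the laser at this level
        if link_up():          # poll the far end / local RX for link
            return level       # lowest power that establishes link
    raise RuntimeError("no link at any power level")
```

In practice QSFP optics negotiate RX/TX behavior over their I2C management interface rather than anything this simple, which is part of why mismatched or mis-tuned optics are still a thing.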

  • @AtanasPaunoff
    @AtanasPaunoff 3 місяці тому

I upgraded to 100G last year as well. I see you bought ColorChip model B transceivers, which run a little hotter and have a lower temp threshold than model C, but I hope they will be fine :) I have both B and C but prefer to use C.

  • @JamesHarr
    @JamesHarr 3 місяці тому

I hate to nitpick, but we run a lot of 100G-LR4 for short runs so we don't need multiple kinds of optics, and the whole "burning out the other optic" thing isn't much of an issue anymore for decent-quality transceivers.

  • @wasab1tch
    @wasab1tch 3 місяці тому +1

    upgrading to 100gig networking just because you can. Bruh I needed this in my life

  • @wannabesq
    @wannabesq 3 місяці тому +2

    Meanwhile I'm just happy most desktop motherboards are coming with 2.5G nics

  • @timothyvandyke9511
    @timothyvandyke9511 3 місяці тому

Meanwhile my home router runs our home lab, and my NAS is on 1Gb Ethernet. Thankfully it's the only machine any work happens on, so nothing needs to talk to it from outside for large file use. I just Docker everything and run against the disks ☺️

  • @CaptTerrific
    @CaptTerrific 3 місяці тому

    ...and here I just got my 10g up and running!!

  • @WillFuI
    @WillFuI 3 місяці тому +2

    25G to 100G is just the logical step. It’s like 2.5G to 10G

  • @bentomo
    @bentomo 3 місяці тому

    We don't do this because we can, but because we MUST!

  • @PeterTheTyke
    @PeterTheTyke 3 місяці тому +1

    Have you tried the Qnap QSW-M7308R-4X ??

  • @harshbarj
    @harshbarj 3 місяці тому +1

    And here I am looking to jump up to 10GbE. Feels so slow now.

  • @Carambolero
    @Carambolero 3 місяці тому

Wow. Awesome tech you've delivered.

  • @TechnomancerStream
    @TechnomancerStream 3 місяці тому +2

    Me over here with an old Gbit server and praying to get 2.5

  • @michaelthompson5177
    @michaelthompson5177 3 місяці тому

    There is a familiar-looking Dell monitor in every corner of the world