"I ended up upgrading everything to 100Gb, because that's what you do, right?"
Yes. That is what you do.
Yes. That is what you do.
@@Prasanna_Shinde did you account for full duplex
I had 10G copper and was looking to move to 25G. As soon as I realized most of the pain would be running fiber, I too, went straight to 100G. Mikrotik for their 16x100G l4x25G switches and a few more Qnap 4x100 8x25 switches to round it out. 100G for roughly 2x 10G cost, and only 20-30% more than 25G.
Off to convince my wife of how this will significantly improve our quality of life. Wish me luck! :P
good luck
I just talked my wife into using pfSense to upgrade to 10gig networking by promising a VPN into our network for self-hosted AI: Llama and Stable Diffusion.
life is short, need 100g
oh, budget review session
I know how you feel, except I'm single. I only have 1 computer and 1 phone. A NAS would be semi-useful, but realistically it'd be more useful to have offsite backup.
But it's cool tech and damnit, I want it!
I'm a network engineer at an ISP. We're starting to install 100g internet connections. Pretty wild.
I was just thinking the same thing, we deployed 400Gb backbones years ago. 10Gb wan connections are a dime a dozen now. We’re starting to see dual/quad 100Gb backhauls to cell sites routinely now. Dual is for standard redundancy, quad is for dual-path redundancy.
68Mbps here ... 😂😂😂
@@sznikers 50mbps😅
Judging by how badly my ISP was affected by a DDoS not long ago, I'd say they're still installing 100gbps connections as well (which is funny since they OFFER 1gbps to clients)
Sidenote, I always laugh when I see YouTubers cringe at the price of optics. I always throw up a little when I see the price of a 100 km 100Gb QSFP+ module.
For most companies, field side amplification has been gone for many years. It’s far cheaper to buy longer range optics and have multiple links.
"...so I ended up upgrading everything to 100Gbps, that's what you do, right?"
No, Wendell, most of us don't have the kind of equipment to do that 😂
i was pretty jazzed when i got myself a 2.5gbe switch
@@jonathanyang6230 I'm not a massive home networking guy, so I make do with what I get from my ISP and what on-board solution offers.
That being said, I'm also on 2.5Gbps right now, as French ISPs have started supporting 2.5Gbps, 5Gbps and even 10Gbps modems for end users at affordable prices. The difference it makes is actually insane! I didn't expect moving from gigabit/WiFi 6 to dual 2.5Gbps/WiFi6e to make such a massive difference, even with the connections between my devices at home.
Me with my Mel- Nvidia 10Gbps sfp+ card thinking it's overkill for at least 5-10 years
I just recently upgraded to 2.5gbe with 10gbit link between floors. Ah yes, future proofed for a little while at least.
"You should all be getting 100gbe, it's old hat by now" the fuuuu. Yeah, I know we're talking enterprise, but still.
It really isn't that far out of reach, you can find a lot of 100gbps switches for the price of a high-end Ubiquiti switch, since the hyperscalers are dumping 100gig en masse. If you are on a 3 node cluster, you can just crossbar them and forego a switch. This means you can spread out buying the NICs and transceivers on the client side, then afterwards buying switches and transceivers on the networking side.
Edit: I went full balls to the wall, where I upgraded every switch to ONIE/SONiC and my entire network stack came down to about 3000 Euro. I did this because I wanted to learn SONiC and how to build an overlay network. A more reasonable approach would be to find one or two SN2040 switches for redundancy with 8x 100gbps and 48x 25gbps ports, this is more than enough connectivity for any homelab in my opinion. You only need something that has rj45 and POE+ for clients and APs on the side.
Network engineer here, we’re already on 100G between Metro POP sites and intercapital links for several months already, and now standing up multiple 100G link bundles on intracapital core links.
Also our colleagues in the internet peering team are running multiple 100G link bundles on our internet borders.
Why so slow?
@@BryanSeitz A lot of the heavy traffic like streaming services and game downloads will have a local cache in major cities. With good management you only need 100G worth of peering bandwidth per 100K clients.
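For a rough sense of that rule of thumb, here's a back-of-the-envelope check; the peak-usage figures are illustrative assumptions, not numbers from the comment above:

```python
# Rough peering-capacity sanity check for the "100G per 100K clients" rule of thumb.
# The peak-usage numbers below are illustrative assumptions, not measured data.

clients = 100_000
peering_gbps = 100

avg_mbps_per_client = peering_gbps * 1000 / clients
print(f"Average headroom per client: {avg_mbps_per_client:.1f} Mbps")   # 1.0 Mbps

# Oversubscription view: only a fraction of clients pull traffic at peak,
# and the heavy hitters (video, game downloads) are mostly served from local caches.
peak_active_fraction = 0.05          # assumed 5% of clients busy at peak
peak_stream_mbps = 20                # assumed per-client demand that actually crosses the border
peak_demand_gbps = clients * peak_active_fraction * peak_stream_mbps / 1000
print(f"Peak off-net demand: {peak_demand_gbps:.0f} Gbps vs {peering_gbps} Gbps of peering")
```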
sys eng and I dabble in network with the net eng guys. We got some 400gb switches maybe 6mo ago or so. so wild. Then I saw the netflix 400gb over PCI docs, also wild.
Nah, you don’t need an ASIC for deep traffic analysis on a 100Gb/s network. At my place, we do DPI at 100Gb/s with only one CPU (32 cores, though) and 64GB of RAM. Full line rate certified by Spirent, at 14 million new sessions per second and 140 Mpps. But to do that, we had to redevelop the drivers of the E810 from scratch in Rust for everything to work in userspace (DPDK is… too limited for that). So it’s possible, took us 3 years of R&D, though ;-)
Can you share a link or any context? This is intriguing stuff!
out of curiosity, was this done using BPF?
@@quackdoc5074 Nope. We tried using AF_XDP (because of eBPF), but... it didn't scale enough to reach 100Gbit/s full line rate. It started dropping around 40G and we had to throw 40+ cores at it... Too costly. That's why we took the high road and re-developed brand new NIC drivers from scratch for the whole Intel family (from the 500 to the 800), the only way to achieve true linear scalability.
So when will it be on github?
@@quackdoc5074 Doubt it. BPF is too slow. That's why DPDK came about - it's mostly just a NIC driver in userspace, but you're very limited in what you can do in userspace.
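A rough per-packet time budget makes it clearer why the in-kernel paths ran out of steam at these rates; the arithmetic below uses only the 140 Mpps / 32-core figures quoted above plus standard minimum-frame math:

```python
# Per-packet time budget at 100 GbE line rate, to show why 140 Mpps DPI on one CPU is hard.
# Assumes minimum-size 64-byte frames plus 20 bytes of preamble + inter-frame gap.

line_rate_bps = 100e9
frame_bits = (64 + 20) * 8                 # 672 bits on the wire per minimum-size frame
line_rate_mpps = line_rate_bps / frame_bits / 1e6
print(f"Line rate: {line_rate_mpps:.1f} Mpps")          # ~148.8 Mpps

claimed_mpps = 140
cores = 32                                  # as described in the comment above
ns_per_packet_per_core = cores / (claimed_mpps * 1e6) * 1e9
print(f"Budget: ~{ns_per_packet_per_core:.0f} ns per packet per core")   # ~229 ns
```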
In a previous job, I worked with 2x400G cards using DPDK. It was glorious to run TRex on the other side of the wire and see 800G flowing through our program.
Wendell waves around a ConnectX-5 calling it old... I'm gonna go cuddle my beloved ConnectX-3...
I recently had to swap out my ConnectX-3 cards with ConnectX-4 cards because NVIDIA dropped driver support for ConnectX-3 after 2020 (so Ubuntu 20.04 is fine, but 22.04 and 24.04 are a no-go), but they still support the latest and greatest distros/kernels with the ConnectX-4. Luckily, 25 gig ConnectX-4 cards are now dirt cheap and are backwards compatible with SFP+, so I could simultaneously fix my driver woes, set myself up for a future 25 gig upgrade, and avoid replacing anything in my network other than the NICs.
I installed x-4 cards today :D
bruh i'm on x-2...
I am absolutely happy I got ConnectX-5 cards for under $100/piece a few years ago. I started from ConnectX-2. You will get there!
Those are still doing fine if you update the firmware. We got lots of those in production.
My first corporate job had 4Mb/s Token ring networking. What a leap since.
God the FEC nightmare is real. I spent days trying to figure out why I couldn't get a Mikrotik router to talk to a Ubiquiti switch at 25 gig and the answer was FEC.
FEC my NAS
Fair to say I didn't understand most of what he was talking about but it was fun to listen to.
Video/content suggestion: Boot Windows over such a 100 GbE adapter from a ZFS Server and how to get the most performance out of it.
+1
You need an iSCSI-boot-capable motherboard; the last time I saw that option it was in an Intel NUC.
@@BasedPajeet You mean PXE boot, right?
@@fujinshu PXE is not the same as iSCSI
@@fujinshu No, he meant iSCSI boot, and like he mentioned it's a boot option in some motherboard BIOSes (the NIC also needs to support it). Can be a little quirky to get working (did it with some Supermicro motherboards once upon a time) but once you get it working it's pretty neat. That said, I don't believe Intel still supports it, but you can still do something similar with UEFI boot options on hardware that supports it (and for that you will need PXE).
You should do a review of QNAP's 4x100GbE + 8x25GbE switch. It's reasonably priced and uses current gen tech, so much lower power/fan noise and has more ports than the Mikrotik 100GbE switch. It won't have all the fancy layer 3 capabilities of the used switches, but I'd like to see how it compares for those of us who care about noise.
I'd argue that the Mikrotik cloud routers don't actually have usable L3 features, given that even simple things wind up limiting throughput to ~400 Mbps
@@jttech44 Yeah, that was in reference to the used Mellanox, Dell, etc. switches, not the Mikrotik ones. These low cost switches don't really have usable L3 features, but most home labs don't really need those.
We've got that server deployed as a VM host actually. Proxmox, ZFS (RAIDZ2 on SAS HDDs + L2ARC & SLOG on NVMe). Wonderful piece of hw, though we're likely only getting to 10 GbE this year. Might future proof with a 25 GbE capable switch, but the upstream switch we're linked to only got to 10 GbE recently (low priority, other buildings are up to 25 and 100).
Ahh yes 100Gbit. The real 10Gig
10 gig was rad for 2014! 10 years later and now it's 100 gig!
nice autism jire @@JireSoftware
"Experimental RDMA capable SMB Server that's floating around out there on the internet" ... GO ON...
Likely referring to ksmbd, it's an in-kernel server that got declared stable late last year. There's a couple threads on the forums about it, but Windows seems to have trouble establishing RDMA connections with it.
@@fuzzydogdog I thought of that, but since that has been marked as stable and he said 'experimental' I was thinking maybe he has heard of something else.
@@fuzzydogdog man, I've been fighting linux server > windows client rdma file sharing for years. I tried ksmbd before it was 'stable' (but after rdma was supposed to be supported) and it never worked. but now I don't have any rdma-capable connections between windows and linux machines anymore anyway...
@@q5sys oh hi there
Work at a small cloud provider. We only just upgraded to 100gig in our backbone a year or so ago and are expanding that soonish. A few 100gig switches went through my hands the other day for software updates.
I got used 100GbE-CWDM4 transceivers for $5 each off ebay. Those run with LC terminated duplex single mode fiber, which is much easier to deal with than 8 fiber MPO.
same for 40gbe-lr4 (lite). They're basically paying you to buy them when you save so much in cable costs going from MPO to LC and you don't have to deal with crossover mismatch.
We use Xinnor for NVMe RAID in film post production where bandwidth is more important than data integrity of something like ZFS.
Ahh yes 100gig...my nemesis
"and if you made it to the end of the video... You are my RaidOwl comment on a level1techs video."
25 Gig was easy and worked out of the box.. So naturally I had to go the hard route.
Hehe. Upgraded the home net to a 10gig backbone and I was feelin' pretty good.
Slaps 100Gbps. You can fit so many YouTubes in there.
Remember the Intel cards are 100g full duplex, while the Mellanox could push line rate per port if the PCIe bus didn't limit it. The CX4 is still supported, as it uses the same driver as the CX7. If one does not need the new features like 200g or 400g, the old cards are almost as capable. That could, however, not be said for 100g cards from QLogic, which are a pain in the ass compared to MLX and Intel.
I would love to see some stuff with dpdk and vpp. A 100G router in x86 is very cool
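To put a number on that PCIe limit, a quick sketch; the usable-bandwidth fraction is an assumption, since exact overhead varies with payload size and TLP settings:

```python
# Why a dual-port 100G card can't push both ports at line rate on PCIe 3.0 x16.
# Effective-throughput figures are approximate; real numbers vary with payload size and overhead.

pcie3_x16_gbps = 16 * 8 * (128 / 130)      # 16 lanes * 8 GT/s * 128b/130b encoding ~ 126 Gbps raw
usable_fraction = 0.9                       # assumed loss to TLP headers, flow control, etc.
usable_gbps = pcie3_x16_gbps * usable_fraction

print(f"PCIe 3.0 x16 usable: ~{usable_gbps:.0f} Gbps")   # ~113 Gbps
print("Dual 100G ports would need 200 Gbps each way -> the host bus is the limit, not the NIC.")
```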
I bought 2 cards with Intel e810 at work. They work like a charm and the driver is open source. Although, you need to compile it yourself for Debian… but for the rest they are basically plug and play. I am very happy with them.
I was going to upgrade to 10gb, but went with 25gb, so I get the notion. I just love watching Wendell when he's like a little kid about this stuff. It's so fun and engaging.
100 Gigabit. Gigabit! Well thanks, now I feel really old. 10 megabit coax old.
It's so funny hearing you talk about how the orange OM1 fibre is dinosaur age. Industrial plants still live on the stuff. Heck, the protective relays that control the breakers that protect our electrical grid are still being installed to this day with OM1 fibre.
I've been running Mellanox ConnectX-4 dual 100 Gbps VPI Infiniband cards in the basement of my home since December 2018.
Skipped 10G, etc. and went straight from 1 G to 100 Gbps.
IB has its quirks but it is anywhere between 1-3% faster than 100 GbE off the same card.
it feels weird working in a bigtech DC and seeing people talk about 100gig, meanwhile im regularly working switches with 32 400gig QSFP ports.
Same thing i said. 100G is not new. We were installing 100G links in google DCs in 2017. There were only a few 100G links on the juniper and cisco routers in the CNR(campus networking room) then but we had them.
lol im working on 800gig final development phase before production version. seeing this is funny, there's much more exciting stuff.
That switch @15:52 Definitely an IT cabling job.
Wendell is my Mr. Wizard for computers.
💯👍
it's the Crawly of the computers :D! (tiktok reference)
In the current year, I don't learn anything from watching anyone else. Love the Wendell evolution.
"We're going to need another Timmy..."
I just upgraded part of my home network from 1 gbe to 10 gbe and it was a huge quality of life improvement. Moving large files to/from my NAS is fast! Upgrading to 100 gbe sounds insane to me.
Need storage speeds to make use of it
@@mrmotofy yes - in my case I'm using ZFS on rotational media. What I've noticed is that for files that are about 7 gigs or smaller, I can copy them to my server at over 1 GB/sec, but eventually the speed drops to the rotational media speed, about 250 MB/sec.
My guess is that ZFS is caching the writes in some way but once I blow out the cache, it is forced to write at the slower speeds of the media.
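That behavior is consistent with ZFS buffering async writes in RAM up to its dirty-data limit before throttling to vdev speed. A rough model of it, where the buffer size is an assumed value (it's tunable via the zfs_dirty_data_max module parameter) and the rates are the ones observed above:

```python
# Rough model of the "fast for the first few GB, then drops to disk speed" behavior.
# The dirty-data cap here is an assumption (tunable via zfs_dirty_data_max); rates are from the comment.

dirty_cap_gb = 6.0        # assumed in-RAM async write buffer before ZFS starts throttling
ingest_gb_per_s = 1.0     # ~1 GB/s coming in over 10 GbE
drain_gb_per_s = 0.25     # ~250 MB/s the rotational vdevs can actually sustain

# The buffer fills at (ingest - drain); once full, writes are paced to drain speed.
seconds_until_throttle = dirty_cap_gb / (ingest_gb_per_s - drain_gb_per_s)
data_before_throttle_gb = ingest_gb_per_s * seconds_until_throttle
print(f"Full speed for ~{seconds_until_throttle:.0f} s, i.e. the first ~{data_before_throttle_gb:.0f} GB of a copy")
```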
Omg 😂 25 years ago I worked in the ILEC CO and we had OC-3, all the way down to 1 Meg (and less).
Here today, we the general public can have OC3 in our hands at home 😊
100g has options for DR to shoot 1310nm 500meter and FR to shoot 1310nm 2km. Both are plenty safe for short runs in the same rack or within the datacenter. Even the 10km optics are unlikely to burn out the receiving side these days. Most of them have a RX window starting at or above the top of the TX window. So should be good to go once you add some loss through connectors.
Don't they modulate their optical power? 40g-LR4 lite I'm transmitting 0.5 dBm, receiving -1 and it's rated for up to 3.5 (TX and RX)
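A quick link-budget check shows why short runs with long-reach optics are usually fine; the power and loss figures below are typical/illustrative values, not from any particular module's datasheet:

```python
# Simple optical link-budget check: will a long-reach transceiver overload the far-end receiver
# on a short in-rack run? Power levels below are typical/illustrative, not from a specific datasheet.

tx_power_dbm = 2.0            # assumed per-lane launch power of an LR-class optic
fiber_loss_db_per_km = 0.4    # typical single-mode attenuation around 1310 nm
run_km = 0.003                # 3 m patch inside the rack
connector_loss_db = 0.5       # assumed loss per mated connector pair
connectors = 2

rx_power_dbm = tx_power_dbm - fiber_loss_db_per_km * run_km - connector_loss_db * connectors
rx_overload_dbm = 3.5         # assumed receiver overload threshold

print(f"Received power: {rx_power_dbm:.2f} dBm (overload threshold ~{rx_overload_dbm} dBm)")
print("OK" if rx_power_dbm < rx_overload_dbm else "Add an attenuator")
```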
I love when Wendell gets excited. Personally, I'm impressed when Linux does anything at all 😂
I like your calm no-nonsense presentation.
I upgraded everything in my rack to 40Gbe a couple of years ago (and it was pretty dang cheap at the time) and seeing as I don't have any uber-fast Kioxia drives, I don't think the jump to 100Gbe is worth the cost for me. Might wait for 400Gbe to come down in a few years.
My core switch is a Mellanox SX6036. It connects to a Dell/Force10 S4810 and a pair of PowerConnect 5548 switches (one with PoE). My NICs are mostly Mellanox ConnectX-3s with a few Chelsio and Intel-based NICs. It all worked together surprisingly well. IIRC, for the NICs, DAC cables and 2 switches I was all in for around $600-$700.
Your Corn collection must make you really good money
See?...this is how I end up replacing my server, all my storage, all my networking gear, everything...the chase!! Speed is never enough!! lol
Oh mannn yeah, FEC bit me in the ass to the tune of like 4 hours on those XG switches.
It is fixable in any condition, provided you actually have control over FEC on both ends... it's just a pain, and I'd highly recommend just using what Ubiquiti uses (Base-R aka FEC74 in the rest of the world) and not trying to get the ubiquiti setting to persist.
I'd also recommend filing a ticket with UI to complain about their nonsensical, nonstandard FEC settings.
I just upgraded to 10 and 2.5… it required special fumbling with driver versions on windows, because of course intel. But it works! Now I need faster WiFi and fiber internet…
My experience moving from 10g to 25g was interesting since no links would come up by just plugging in. After a long session of messing around with it for a few hours, I figured out that if I enable FEC then disable it again, the links activate. I thought maybe the firmware was the issue on my X695s since Extreme mentioned FEC in update notes, but the same behaviour was still going on even after updating, so just a bit odd. These weren't even LACP ports or anything either, just basic ports. (Host side was dual-port 25G Broadcom NICs on VMware 7.)
A guide / video on the experimental RDMA SMB server that you mentioned would be lovely.
From a consumer perspective, 10 gig hardware (be it NICs, transceivers, or switches) is dirt cheap now. You can get any of those for under a hundred dollars. But for 25 gig and 100 gig, while the NICs/transceivers are affordable or even cheap, the switches are still in the high hundreds to low thousands of dollars. And those switches will probably need a secondary switch to connect 1 or 2.5 gig devices to your network; you can't really get an all-in-one switch like you can for mixing 1/2.5 and 10 gig clients. The costs add up fast and your minimum investment ends up in the thousands of dollars.
Sure, cuz right now I'm looking at an 8 port SFP+ Mikrotik switch for $230
I have noticed one small problem though: that 10gig hardware consumes significantly more power than regular gig-eth. Notice the great big heatsink on every PCIe 10gbit interface.
@@vylbird8014 Fiber doesn't
I still use old infiniband stuff (40 gig) at home. It's still shockingly fast and low latency. And cheap for all the parts from switches to nics.
NFS over rdma is very very very fast.
40GB is decommissioned stuff. Get them fast.
Network Engineers support Wendell's 100g journey. Now lets get a desktop system capable of the throughput without an accelerator card.
Keen to see a video looking at Xinnor having just started down the SPDK route myself.
Even better if its done in the context of Proxmox and virtualisation in general.
And here I was thinking about going 25gbit for my Dell storage server and my main desktop. Wendell has made 100gbit even more desirable. lol
If you're running a point-to-point connection, it's not so bad.
If you need a 100 Gbps switch in between, THAT is probably going to be your most expensive piece of hardware. (Depends on how many ports you need, but there are cheaper options in absolute price, albeit at a higher $/Gbps throughput level.)
I bought my 36-port 100 Gbps Infiniband switch for $3000. $3000 is a lot of money, but that $3000/7.2 Tbps throughput = $0.416666/Gbps.
You can have cheaper switches, but the $/Gbps will be higher.
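The same arithmetic generalizes to any switch; a small sketch, where every price except the $3000 InfiniBand switch is a hypothetical placeholder for comparison:

```python
# Cost-per-Gbps comparison, same arithmetic as the $3000 / 7.2 Tbps example above.
# All prices except the InfiniBand switch are hypothetical placeholders.

def dollars_per_gbps(price_usd: float, ports: int, port_speed_gbps: int, full_duplex: bool = True) -> float:
    """Price divided by aggregate throughput (counting both directions if full duplex)."""
    aggregate_gbps = ports * port_speed_gbps * (2 if full_duplex else 1)
    return price_usd / aggregate_gbps

print(f"36x100G IB switch @ $3000:          ${dollars_per_gbps(3000, 36, 100):.3f}/Gbps")   # ~$0.417
print(f"Hypothetical 8x10G switch @ $230:   ${dollars_per_gbps(230, 8, 10):.3f}/Gbps")
print(f"Hypothetical 24x1G switch @ $100:   ${dollars_per_gbps(100, 24, 1):.3f}/Gbps")
```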
I want the 2x 100g port just so I can push 100gb/s back to myself for the thrill of it. Do you have to bolt your chassis down to protect against the inertia of all those bits slamming into the receiver so quickly?
We deployed Pensando DPUs in our latest datacenter refresh (dual 100Gb ports)
+ a second ConnectX-6 dual 100Gb card for RDMA support (and dual DPU wasn't yet supported when we designed the hardware refresh).
It's stupid fast.
A few times a year it'd be nice to have that at home, otherwise it's overkill. The most I normally do on my LAN is stream movies.
Big thing is RDMA/ROCE, and thankfully E810 supports that. You can easily do iSCSI with iSER and NVMeoF with RDMA.
I semi recently upgraded to 10gbe, and I'm barely keeping it saturated, one way, 2 single port ConnectX-3's actually. I'll be happy with this for a long while, doesn't stop me from drooling over this though.
I just built an iSCSI setup using Hyper-V and Windows Server as a file server and it is actually really performant. I did have to fiddle with the iSCSI and NIC settings to get the optimal throughput but with 2 10G fiber channels per server, I have a very stable very fast fail-over cluster using built in software. Things have really changed.
100G 10km / LR optics are super common in my data centers! Used for 3ft to 15 miles! You only have to think about it when you get to ZR optics / 80km reach.
This is also over single mode fiber.
Been playing a lot with 100G at work on a few Pure FB //S systems. It's crazy to see 100-terabyte migrations happen over a lunch break.
Good lord, I just upgraded my home network to 2.5gig and now 100gig is a thing? Wendell, how much speed do you need????? Hopefully the Level1 Supercomputer Cluster will be showing up in the Top100 soon lol
With 10 gig switches being under a hundred bucks now, 2.5 is old hat ;)
If you do end up using long haul cards you can get attenuators, I just recommend having a light meter handy so you can see exactly how much you need to attenuate. Not that it makes sense to pay the extra money, but if you have the hardware just sitting around, you make do with what you've got.
I think you're definitely getting into Level 2 territory here.
I'm jealous of all your toys. I don't have near the performance of storage to even come close to being able to use a single 100G lane.
Hell I don't think I can saturate my 10G lane that I can't even use atm because of reasons.
For a moment I thought you had discovered a huge lost infocom text adventure.
I've been thinking about getting 40gb for home just for the heck of it. The used 10gb hardware is actually quite expensive, but because there was a bunch of datacenters that had 40gb and then upgraded to 100gb or 250gb, the used hardware on ebay is CHEAP. $20 for a dual NIC, $150-250 for a 36 port switch. The same stuff in used 10gb is $100 per NIC and $500 per switch.
Unfortunately I'm a Mac so we can't have nice 100G. 40G is the max. (cries in speed)
And I'm sitting over here acting smug with my 2.5 and 10 GbE...I don't think I could saturate 100GbE but the 25 would be nice under certain workloads!
Yeah I wouldn't need it most of the time but 25 would be nice. Problem I've found is the affordable 25/40gb switches are older, loud and power hungry. Newer stuff that is a lot more power efficient is still pretty expensive. For now I'll stick with 10 but I'm keeping my eye out for an upgrade.
@@nadtz what is it about the 10 that you don't like?
@@StaceySchroederCanada Never said I didn't like it, price vs. performance 10gb is great because it's so (relatively) cheap. It would occasionally be nice to have a faster connection when I'm dumping large amounts of data across my network is all.
to be fair 2.5GbE isn't something to be smug about when all current motherboards come with support for it out of the box xD
@@peq42_ This is patently false - you can find both AM5 and LGA 1700 mobos that are $160+ with only a single 1GbE port, and that's not even looking at the ghetto chipsets on each platform.
Even figuring that, I'd reckon most people don't have 2.5 GbE switching/routing infrastructure unless they went out of their way on purpose for it.
I personally have my workstation and server running on 10GbE, and everything else is 2.5GbE at this point. But I had to sit down and purposely plan/buy all the gear to make the jump possible for my whole house.
Network engineers love Intel NICs; the 5xx, 7xx, 8xx generations have been so reliable. Snatched 4x X550 from AliExpress. My home lab has 10% of the capabilities but still cost 1k for 4x NICs, a 10g switch, and a 10g router. If you want to know the exact setup, lemme know, I'll post it.
Great for Ceph! I don't trust my important data on anything else at this point other than with erasure coding in a ceph cluster of machines running ECC lol
16:18 Wendell about to get roasted by JayzTwoCents for cable management xD
Wow, this was the exact video I needed considering I'm starting to plan my 100GbE buildout! Have you had any success configuring SMB-over-RDMA on a Linux host yet? I know ksmbd *technically* supports it, but I haven't seen any evidence of a successful Linux-Windows transfer.
Now that you have 100G, may as well setup a lustre volume
As a Muggle in all of this, I looked at the prices of some of these components. NOPE! Fun to see where the bleeding edge is, but DAMN!!!! DEEEEEEEEEP Pockets required!
I am just starting into the 2.5G realm, just don't have the need or equipment to relay data that fast on my LAN. 2.5G is a good spot since I will be getting 2G fiber WAN soon.
Use RDMA at work with 25G connections for a storage cluster. It helps even for that!
It's pretty much the only way to get ALL the IOPS from what I've found.
Wish Ceph would use it :/ Not much in the open source storage clustering world that can go super fast on a small scale
Technically InfiniBand is an IBTA technology and there used to be several vendors who implemented it. Ironically, Intel was one of the founding members of the IBTA but abandoned it for Omni-Path.
Me getting 500mbps internet. Me after getting the new internet: realizing that my computer, router and WiFi can't handle it and can do at most 100mbps... goddamn it
Most likely your computer and switch can handle it fine and it's just your router being the choke point unless you have a seriously old computer and switch.
@@ax14pz107 Yeah; even Desktop PCs from 2004 having mainboards such as the Gigabyte 915PM-ILR come with Gigabit/s Ethernet. I'd be seriously embarrassed to not have that as a minimum when such boards regularly end up in scrapyards.
@@whohan779 he mentioned wifi, so I'm guessing not a hard link.
Even on eBay, those Dell 5200 series switches are around $4,000! There is no way I will need that kind of bandwidth. A lot of things on my LAN don't even have 10 gig capacity. So after spending four grand on a switch, and realizing that my local DNS still doesn't work, then what am I going to do?! Just like Patrick from STH, you show a lot of interesting equipment that I can't afford, and would have no use for if I could.
so what started as a "2.5Gb workstation to server" direct link ended up as a 5 port 2.5Gb switch and 3 machines linked at 2.5Gb, with a 4th coming soon. Upshot is I have faster network at home than at work (everything still 1Gb there) . What a wonderful world we live in.
having failed to get into even 10gig because of price i'm sure this is worth watching
10g copper is expensive for a variety of reasons: it's hard to drive copper, the used market has huge demand since it works (mostly) with existing Cat6 wiring, and high-end boards come with 10GBASE-T connections. It cost me far more for a 10GBASE-T connection using existing wires than a 40gb-lr4 where I had to run a new fiber drop from my network closet to my office - including the cost of the fiber, keystones, and patches. If you are in a position to run fiber, it opens up a world of cheap secondhand high-speed networking gear. I spent under $400 total for 40g, and 100g would have only doubled that. (Not exactly comparable, 100g point-to-point to my NAS vs 40g with a switch included.)
You should consider Mikrotik CRS5xx switch series. They support 100 Gbps, affordable and tons of enterprise features!
Thumbnail looks like Wendell's holding a delicious cocktail.
Don't know what most of this video is talking about, but the other day I discovered the reason my 2-year-old home file server was slow is because I used a really cheap 1m LAN cable that could only run at 100mbps. I changed it for another cheap LAN cable and magically got a 1000mbps upgrade 😂
We all need a 100 gig OCP 3.0 card in our gaming PCs now 😂😂😂. If only the board partners would get their priorities straight 🎉
I think an update on the toilet lid collection is overdue - just to balance out these latest tech upgrades.
"Even for Windows systems. You just put a NIC in and it works." Some UDP stats please.
At 2:05 you mentioned the fiber optic cable colors, could you do a video in more detail on the colors and their uses? I work at a company that manufactures and tests fiber optic equipment. We use the yellow fiber for 100Gbe and 800/1,600Gbe, and the aqua fiber for 400/800Gbe, but I have no idea why the 400/800Gbe gets aqua fiber.
your intro has 'the lick' in it lmao. that's hilarious
Truly a super-janitor.
Here I am considering upgrading to 2.5gig, currently on 1 gig for LAN networking
The thing with fibre optic tuning/distance settings: I wonder why the onboard controller could not just 1) detect that a cable was inserted; 2) have a 'self-tuning' mode where it starts with the lowest signal and gradually increases until the signal is detected on the other end; 3) detect when a cable is unplugged, set the interface to 'reset' mode and run the self-tuning again; and 4) to have this work when things are plugged/unplugged without power, there could be a physical switch that is manipulated upon insertion.
I am a dumb idiot, so maybe this exists, or there are significant problems with doing the aforementioned.
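For what it's worth, the proposed loop would look roughly like the sketch below; every function here is hypothetical (real modules report power through their DDM/DOM registers, and launch power generally isn't software-tunable), so treat it purely as a sketch of the idea:

```python
# Hypothetical sketch of the "self-tuning" launch-power idea described above.
# None of these functions exist on real hardware; transmit power is generally fixed
# by the optic's class, and modules only *report* power via SFF-8636/8472 DDM registers.

def tune_link(port, min_dbm=-8.0, max_dbm=2.0, step_db=0.5):
    """Ramp TX power from the minimum until the far end reports link, then stop."""
    power = min_dbm
    while power <= max_dbm:
        port.set_tx_power(power)             # hypothetical control knob
        if port.peer_reports_signal():       # hypothetical back-channel from the far end
            return power                     # lowest power that closes the link
        power += step_db
    raise RuntimeError("No link even at maximum allowed power")

# On cable unplug, such an implementation would fall back to the lowest setting
# and re-run tune_link() the next time the module detects RX light.
```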
I upgraded to 100G last year as well. I see you have bought ColorChip model B transceivers, which run a little hotter and have a lower temp threshold than model C, but I hope they will be fine :) I have both B and C but prefer to use C.
I hate to nit pick, but we run a lot of 100G-LR4 for short runs so we don't need multiple kinds of optics and the whole "burning out the other optic" isn't much of an issue anymore for decent quality transceivers.
upgrading to 100gig networking just because you can. Bruh I needed this in my life
Meanwhile I'm just happy most desktop motherboards are coming with 2.5G nics
Meanwhile my home router runs our home lab and my NAS is on 1gb Ethernet. Thankfully it's the only machine any work happens on, so nothing needs to talk outside for large file use. I just use Docker for everything and run against the disks ☺️
...and here I just got my 10g up and running!!
25G to 100G is just the logical step. It’s like 2.5G to 10G
We don't do this because we can, but because we MUST!
Have you tried the Qnap QSW-M7308R-4X ??
And here I am looking to jump up to 10GbE. Feels so slow now.
Wow, awesome tech you've delivered.
Me over here with an old Gbit server and praying to get 2.5
There is a familiar-looking Dell monitor in every corner of the world