This is just too fast! 100 GbE // 100 Gigabit Ethernet!

  • Published 26 Oct 2024

COMMENTS • 408

  • @davidbombal
    @davidbombal  3 роки тому +9

    Menu:
    100GbE network! 0:00
    How long will it take to copy 40Gig of data: 0:22
    Robocopy file copy: 1:08
    Speed results! 1:28
    Windows File copy speeds: 1:59
    iPerf speed testing: 2:30
    iPerf settings: 3:20
    iPerf results: 3:42
    100G Mellanox network cards: 5:14
    Jumbo Packets: 6:04
    Aruba switch: 6:26
    Switch configuration: 7:07
    Back to back DAC 100GbE connection: 8:52
    iPerf testing using DAC cable: 10:05
    Windows File copy speeds: 11:00
    Robocopy test: 11:30
    =========================
    Free Aruba courses on Udemy:
    =========================
    Security: davidbombal.wiki/arubasecurity
    WiFi: davidbombal.wiki/arubamobility
    Networking: davidbombal.wiki/freearubacourse
    ==================================
    Free Aruba courses on davidbombal.com
    ==================================
    Security: davidbombal.wiki/dbarubasecurity
    WiFi: davidbombal.wiki/dbarubamobility
    Networking: davidbombal.wiki/dbarubanetworking
    ======================
    Aruba discounted courses:
    ======================
    View Aruba CX Switching training options here: davidbombal.wiki/arubatraining
    To register with the 50% off discount enter “DaBomb50” in the discount field at checkout.
    The following terms & conditions apply:
    50% off promo ends 10/31/21
    Enter discount code at checkout, credit card payments only (PayPal)
    Cannot be combined with any other discount.
    Discount is for training with Aruba Education Services only and is not applicable with training partners.
    ================
    Connect with me:
    ================
    Discord: discord.com/invite/usKSyzb
    Twitter: twitter.com/davidbombal
    Instagram: instagram.com/davidbombal
    LinkedIn: www.linkedin.com/in/davidbombal
    Facebook: facebook.com/davidbombal.co
    TikTok: tiktok.com/@davidbombal
    UA-cam: ua-cam.com/users/davidbombal
    Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!

    • @rishiboodoo863
      @rishiboodoo863 3 роки тому

      You and Chuck are my inspiration

    • @wyattarich
      @wyattarich 3 роки тому

      I'd love to see if there's a big difference between Windows Explorer transfer speeds and TeraCopy transfer speeds

    • @lennyaltamura2009
      @lennyaltamura2009 3 роки тому

      It's your I/O. The bus on your motherboard and CPU: the data is copying, but it hits the CPU first. What you need to jack up your speed is an enterprise-class RAID controller card with cache (not software RAID). Put a few fast SSDs (NVMe/M.2/U.2/Optane, whatever fast storage media) on there in a RAID 0 config, and you will achieve what you originally expected. The RAID controller will handle the I/O straight to the disk, eliminating the CPU from the equation. NeoQuixotic is almost correct (the lane configuration is part of the problem), but to avoid any could-be's/should-be's, just add the RAID controller with RAID 0 and voilà; problem solved. BTW, I love your education videos. Your edu vids are very, very good. I have one of the cybersecurity ones that you teach. I love cybersecurity.
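
      A rough back-of-the-envelope check of that advice (the per-drive figure below is an assumption, not from the video): 100 Gb/s is 12.5 GB/s, so even behind a hardware controller you would need roughly four fast Gen3 NVMe drives striped just to keep up with the wire.

        # RAID 0 sizing sketch, assuming ~3.5 GB/s sequential throughput per Gen3 NVMe drive
        $targetGBps   = 100 / 8                                        # 100 Gb/s link = 12.5 GB/s
        $perDriveGBps = 3.5
        $drivesNeeded = [math]::Ceiling($targetGBps / $perDriveGBps)   # -> 4
        "Need about $drivesNeeded striped drives to saturate 100 GbE"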

    • @MichaelKnickers
      @MichaelKnickers Рік тому

      @@lennyaltamura2009 which RAID controller models would you recommend?

    • @kailuncheng6912
      @kailuncheng6912 Рік тому

      Moving the network card to the first PCIe slot may solve the problem ^^

  • @NeoQuixotic
    @NeoQuixotic 3 роки тому +131

    I think your bottleneck is your PCI Express bandwidth. I'm assuming you have X570 chipset motherboards in the PCs you are using. You have 24 PCIe 4.0 lanes in total with the 5950x and X570 chipset. It is generally broken down to 16 lanes for the top PCIe slot, 4 lanes for a NVMe, and 4 lanes to the X570 chipset. However, this is all dependent on your exact motherboard, so I'm just assuming currently. Your GPUs are in the top x16 slot so your 100g NICs are in a secondary bottom x16 physical slot. This slot fits a x16 card, but is most likely electrically only capable of x4 speeds that is going through the x4 link of the chipset. Looking at Mellanox's documentation the NICs will auto-negotiate the link speed all the way down to x1 if needed, but at greatly reduced performance.
    This PCIe 4.0 x4 link is capable of 7.877 GB/s at most, or 63.016 Gb/s. As other I/O shares the chipset bandwidth you will never see the max anyway. To hit over 100 Gb/s you would need to be connected to at least a PCIe 4.0 x8 link or a PCIe 3.0 x16 link. There are other factors in what your PCIe bandwidth ends up being, such as whether your motherboard has a PCIe switch on some of its slots. You would want to check with the vendor or other users whether a block diagram exists; a block diagram will break down how everything is interconnected on the motherboard.
    You could try moving the GPUs to the bottom x16 slot and the NICs to the top slot. You could also confirm in the BIOS of each PC that the PCIe slots are set to auto-negotiate, or set the width manually if required, assuming that's an option in your BIOS.
    These NICs are more designed to be used in servers than a consumer end CPU/chipset. The HEDT (High End Desktop) and server CPUs from Intel and AMD have much more PCIe bandwidth to allow for multiple bandwidth heavy expansion cards to be installed and fully utilized.
    Being that I believe iPerf by default is memory to memory copying, you should be able to see close to the max 100Gb/s if you put them in the top slot on both PCs. As far as disk to disk transfers reaching that, you would need a more robust storage solution than what would be practical or even possible in a X570 consumer system.
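
    A quick sanity check of those numbers (using the usual effective per-lane figures of roughly 0.985 GB/s for PCIe 3.0 and 1.969 GB/s for PCIe 4.0 after 128b/130b encoding; protocol overhead is ignored, so treat it as an upper bound):

      # PCIe link throughput estimate -- change $lanes / generation to match the slot in question
      $perLaneGBps = @{ gen3 = 0.985; gen4 = 1.969 }
      $lanes    = 4
      $linkGBps = $perLaneGBps.gen4 * $lanes    # ~7.88 GB/s for a Gen4 x4 chipset link
      $linkGbps = $linkGBps * 8                 # ~63 Gb/s -- roughly the ceiling seen in the iPerf test
      "{0:N2} GB/s = {1:N0} Gb/s" -f $linkGBps, $linkGbps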

    • @paulsccna2964
      @paulsccna2964 3 роки тому +1

      I agree. By moving the video card, or by making sure the Mellanox card gets the highest PCIe slot speed (possibly allocating that speed to the slot manually), he could ensure full-speed performance for that PCIe slot and might be able to get closer to 100 Gb/s.

    • @JzJad
      @JzJad 3 роки тому +4

      Yeah wrong configuration for testing the card.

    • @neothermic1
      @neothermic1 3 роки тому +6

      Looking at the panning of the motherboard at 5:00 this seems to be an Asus ROG Strix X570-F Gaming motherboard. (lack of 2 digit POST readout at the base of the board means it's not the other variants of the Strix X570). The GPU is plugged into PCIEX16_1, and the 100G card into PCIEX16_2 - the documentation suggests that when both these slots are occupied the motherboard goes down to PCIE 4.0 x8 on both slots, so in theory this isn't the issue. The PCIE_X1_2 is occupied by a wifi card, and that, from the documentation, steals lanes from the PCIEX16_3 slot (which runs off the PCIE 3.0 lanes anyway), so that also shouldn't be a problem. I would suggest a trip to the BIOS to ensure that the motherboard is correctly splitting the two PCIE 4.0 x16 slots into two x8s correctly, as your explanation matches up with what might be happening, in that the GPU is negotiating an x8 connection but the card doesn't and gets given a x4 bandwidth; server cards sometimes don't like being forced to negotiate for their slots.

    • @neothermic1
      @neothermic1 3 роки тому

      That said, no idea what the _other_ computer is using, so I wager that one might be the one constricting down to a x4 lane, and that's the right answer.
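
      One way to see what each card actually negotiated, without rebooting into the BIOS, is the NetAdapter hardware-info cmdlet (the adapter name is a placeholder; use Get-NetAdapter to find yours):

        Get-NetAdapterHardwareInfo -Name "Ethernet 3" |
            Format-List Name, PcieLinkSpeed, PcieLinkWidth
        # a ConnectX-4 needs PCIe 3.0 x16 for 100 Gb/s; a negotiated x8 link tops out around 63 Gb/s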

    • @davidbombal
      @davidbombal  3 роки тому +4

      PC Information:
      1 x AlphaSync Gaming Desktop PC, AMD Ryzen 9 5950X 3.4GHz, 32GB DDR4 RGB, 4TB HDD, 1TB SSD M.2, ASUS RX6900XT, WIFI, Mellanox CX416A ConnectX-4 100Gb/s Ethernet Dual QSFP28 MCX416A-CCAT D
      1 x AlphaSync Gaming Desktop PC, AMD Ryzen 9 5900X, 32GB DDR4 RGB, 4TB HDD, 1TB SSD M.2, AMD Radeon RX6800XT, WIFI, Mellanox CX416A ConnectX-4 100Gb/s Ethernet Dual QSFP28 MCX416A-CCAT

  • @Paavy
    @Paavy 3 роки тому +7

    100 Gbps is crazy when I'm still amazed by my 1 Gbps. Love the content, appreciate everything you do :)

  • @simbahunter
    @simbahunter 3 роки тому +35

    Absolutely the best way to predict the future is to create it.

    • @CesarPeron
      @CesarPeron 3 роки тому

      Or know who creates it 🥴

  • @michaelgkellygreen
    @michaelgkellygreen 3 роки тому +1

    Very well explained and interesting video. Technology advances are coming thick and fast. Speeds we didn't even dream of getting 10 years ago are a reality. Keep up the good work David

  • @69purp
    @69purp 2 роки тому

    This was the craziest setup for my Arch. Thank you, David

  • @James-vd3xj
    @James-vd3xj 3 роки тому +22

    I would imagine the limitation also involves the HDD/SSD.
    Please make sure to update us when you solve this as all of us would be interested in upgrading our home networks!
    Thanks for the video, and all your encouraging messages.

    • @davidbombal
      @davidbombal  3 роки тому +8

      Agreed James. I would have to test it using Linux to see where the limitation is.

    • @samadams4582
      @samadams4582 3 роки тому +3

      Yes, a top of the line 970 Evo nvme can only push around 25 gigabits/second. A gen 4 nvme may be able to push closer to 50, but this switch isn't geared for servers, it's geared for a network core with many pcs or servers connected at once.

    • @oildiggerlwd
      @oildiggerlwd 3 роки тому +3

      I think LTT had to use threadrippers and honey badger SSD’s to approach saturating a 100gb link

    • @oildiggerlwd
      @oildiggerlwd 3 роки тому

      ua-cam.com/video/18xtogjz5Ow/v-deo.html

  • @daslolo
    @daslolo Рік тому +5

    The DMI is the bottleneck. Move your NIC to pcie_0, the one linked directly to your CPU and if you can, turn on RDMA.

  • @MangolikRoy
    @MangolikRoy 3 роки тому +1

    This video is enriched with some solid information thank you David 🙏
    You are the only one I can comment to without any hesitation, because you always... I'm lost for words

  • @QuantumBraced
    @QuantumBraced 3 роки тому +5

    Would love a video on how the network was set up physically, what cards, transceivers and cables you used.

  • @georgisharkov9564
    @georgisharkov9564 3 роки тому +4

    Another interesting video. Thank you, David

  • @MihataTV
    @MihataTV 3 роки тому +5

    In the file copy test the limitation is the storage speed; you can check it with local file copies.

    • @norbertopace7580
      @norbertopace7580 3 роки тому

      Maybe the storage is the problem, but Windows uses memory caching for these tasks.

    • @BelowAverageRazzleDazzle
      @BelowAverageRazzleDazzle 3 роки тому

      Doubtful... Storage systems and SSDs are measured in megaBYTES per second, not megaBITS... The problem is the bus speeds between RAM, CPU and the NIC.

  • @danielpelfrey1656
    @danielpelfrey1656 3 роки тому

    Awesome to see you talk about high performance computing networking topics! (We met at Cisco live in San Diego a few years ago, and we talked about Summit supercomputer and Cumulus Linux) The first bottleneck was your disks.
    The second bottleneck is probably your PCIe slot. Is your motherboard PCIe gen 2, 3 or 4? How many lanes do you have on your slot and the card?
    What is your single TCP stream performance? If you remove the -P option, do single stream, then do 2,4,8, and so on.
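
    For anyone repeating that progression, it looks roughly like this with iperf3 from a PowerShell prompt (the server address is a placeholder):

      iperf3 -s                          # on the receiving PC
      iperf3 -c 10.0.0.2 -t 30           # single TCP stream first
      iperf3 -c 10.0.0.2 -t 30 -P 2      # then scale the parallel streams: 2, 4, 8...
      iperf3 -c 10.0.0.2 -t 30 -P 8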

  • @x_quicklyy6033
    @x_quicklyy6033 3 роки тому

    That’s incredibly fast. Thanks for the great video!

    • @x_quicklyy6033
      @x_quicklyy6033 3 роки тому

      @@davidbombal By the way, do you still use Kali? My Kali Linux randomly shut down and now it won’t boot.

  • @nicholassattaur9964
    @nicholassattaur9964 3 роки тому

    Awesome and informative video! Thanks David

  • @lunhamegenogueira1969
    @lunhamegenogueira1969 3 роки тому

    Great video as always DB. It would be nice to see the speeds the switch was actually putting out as well, without ignoring, of course, the fact that the OS showed the card was already running at 100% of its capacity 🧐🧐🧐

  • @napm54
    @napm54 3 роки тому

    The PC is having a hard time processing that big data, that's a big problem :) thanks again David for this hands on video!

  • @uzumakiuchiha7678
    @uzumakiuchiha7678 3 роки тому

    When you are a beginner and things are going over your head, but David explains so clearly that you think, yes, you do know this, then you realize how much effort David himself must have put in to learn this stuff and teach it to us
    That too for free
    Thank you Sir🙏

  • @spuriustadius5034
    @spuriustadius5034 3 роки тому +1

    These NICs are intended for specialized networking applications and not general-purpose usage. There ARE products with servers that can make use of them, but they're typically for network monitoring, things like line-speed TLS decryption for enterprises, or "network appliances" that analyze IP traffic and also record it (usually with some filtering conditions, because even a huge RAID array will fill up quickly at such speeds). It's fun to see what happens if you pop one of these into a relatively normal desktop, however!

    • @davidbombal
      @davidbombal  3 роки тому +1

      Agreed. But got to have some fun with such cool switches :)

  • @gjsatru3383
    @gjsatru3383 3 роки тому

    Great, David, this is very great. I just wanted this because I am in a part of India where I have to worry about the network. Also I am sorry, David, for being so late; I was just looking at some basics of ssi syntax to ss scripting

  • @MiekSr
    @MiekSr 3 роки тому +1

    Maybe do a file transfer between two servers? I'm also interested to see how Intel desktops handle 100Gb network speeds. Cool vid!

  • @bilawaljokhio7738
    @bilawaljokhio7738 3 роки тому

    Sir David, you are such a good teacher. I really enjoy your videos and have learned a lot. From your last video I have started a Python course. Love you, sir

  • @ImagineIfNot
    @ImagineIfNot 3 роки тому

    Now I just wanna watch videos to give you views, and to learn of course, after all the good stuff you're doing for people...
    You're smart. You are building an actual fanbase that lasts.

  • @konstantinosvlitakis
    @konstantinosvlitakis 3 роки тому +2

    Hi David, really interesting speed test you demonstrated. Could you possibly repeat the test between two Linux machines? It is known that the networking stack in *nix-like systems is highly optimized. Top quality video as always! Thank you very much!

  • @xAngryDx
    @xAngryDx 3 роки тому

    Hey David, thank you for the video. Actually, we are using Aruba L3 - 3810M, L2 - 2930F, WLC 7010 and AP 308. As hardware it is very nice, and their prices can compete with Cisco. The only issue is the technical support - might be my area. Thank you again.

  • @naeem8434
    @naeem8434 3 роки тому

    This video is crazy as well as informative.

  • @kungsmechackasher6405
    @kungsmechackasher6405 3 роки тому +2

    David, you're amazing.

  • @ImLearningToTrade
    @ImLearningToTrade 3 роки тому

    Interesting, I got the notification for this video on my work phone, but it took a full two minutes for this video to show up on your channel.

    • @davidbombal
      @davidbombal  3 роки тому

      Not sure why UA-cam did that.... but have been seeing strange stuff happening recently.

  • @heathbezuidenhout2551
    @heathbezuidenhout2551 3 роки тому +2

    The Aruba 8360 switch you're using, if I heard correctly, is an enterprise core switch for data centers and campuses. Not bad at all for a home network.

    • @BelowAverageRazzleDazzle
      @BelowAverageRazzleDazzle 3 роки тому

      No kidding, right... I wish I just had a couple of 20,000 dollar switches lying around too...

  • @vyasG
    @vyasG 3 роки тому +2

    Thank You for this interesting Video. Mouth-watering speeds! I would agree with what James mentioned regarding SSD/HDD. I was thinking how the storage devices will handle such speeds. Do you have an array of storage devices to handle such speeds?

  • @timramich
    @timramich Місяць тому

    The card is a PCIe 3.0 card and it is using 8 lanes due to lanes being split for the slot the GPU is in. So the rated 7.88 GB/s of that x8 3.0 PCIe bus is right in line with the 56 gigabits/s. Also, kind of barking up the wrong tree trying to do this with gaming hardware and Windows. Most people run a high bandwidth from just their server and would use something like 10 or 25 gig to their clients.

  • @komoyek
    @komoyek 3 роки тому +1

    Thanks for this beautiful tutorial

  • @stark6314
    @stark6314 3 роки тому +5

    Really fast ❤️🔥🔥

  • @bahmanhatami2573
    @bahmanhatami2573 3 роки тому +7

    Do these cards support Direct Memory Access (DMA)?
    I've heard that DMA and RDMA are there to solve this sort of problem. As I've never had such a sweet issue, I don't know exactly whether, for example, you should enable it or it is enabled by default, or how exactly you can take advantage of it on Windows machines...
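
    For what it's worth, on Windows the per-adapter RDMA state can be inspected and toggled with the built-in cmdlets; whether it is on by default depends on the driver, and file copies only benefit if the Windows edition supports SMB Direct (the adapter name below is a placeholder):

      Get-NetAdapterRdma                           # shows Enabled true/false per NIC
      Enable-NetAdapterRdma -Name "Ethernet 3"     # turn it on for a specific adapter
      Get-SmbClientNetworkInterface | Format-Table FriendlyName, RdmaCapable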

    • @davidbombal
      @davidbombal  3 роки тому +3

      DMA is enabled on the computers. But, will need to check if that requires a newer version of network card.

    • @James_Knott
      @James_Knott 3 роки тому +2

      I thought DMA was the norm for many years. Not using it would be a real waste of performance even at much lower rates. While I haven't noticed this with NICs, specs for switches often list frames per second, which implies DMA.
      BTW, long distance fibre links often run at 100 Gb per wavelength.

    • @audiencemember1337
      @audiencemember1337 3 роки тому +1

      @@davidbombal doesn't the switch need to be configured for RDMA as well? You shouldn't be able to see the file transfer in Task Manager if this is working correctly, as RDMA bypasses the traditional network stack

    • @giornikitop5373
      @giornikitop5373 3 роки тому +1

      @@davidbombal I believe if RDMA were working on all sides, there would have been no utilization shown on the NICs in Perfmon, as it completely bypasses the CPU and those counters. Also CPU utilization would have been minimal.

    • @giornikitop5373
      @giornikitop5373 3 роки тому

      @@davidbombal also, as others recommended, make sure the NIC is using a full x16 PCIe slot directly to the CPU, not through the chipset. I think even that is barely enough for that 2-port 100GbE card at full duplex.

  • @szabi0112
    @szabi0112 3 роки тому

    David! This is an awesome video. But there is one thing I don't understand. Why was I able to see "don't say to use linux" in the video?
    It's just a question, not taking the piss. I don't understand.
    I wish you and your family all the very best.

  • @rzjo
    @rzjo Рік тому

    Everything in your setup seems fine; you are reaching around 40% of the network's capacity. To reach around 90% you may need to check the PCIe NVMe speed; the drives must be installed as RAID to increase the throughput, and there is a special adapter for this. Hopefully this helps you :)

  • @politron73
    @politron73 3 роки тому

    You could try disabling autotuning and large send offload.
    1)
    netsh interface tcp set global autotuninglevel=disabled
    netsh interface tcp set global rss=disabled
    2) Then --> Device Manager --> NICs properties --> Advanced --> Large send offload (IPv4) --> Disabled
    3) Reboot the PC
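
    If that doesn't help, the current values are easy to inspect and the changes easy to roll back with the same netsh syntax (run from a PowerShell prompt):

      # inspect the current TCP global settings, then revert if needed:
      netsh interface tcp show global
      netsh interface tcp set global autotuninglevel=normal
      netsh interface tcp set global rss=enabled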

  • @fy7589
    @fy7589 3 роки тому +1

    I haven't tried such a crazy thing like you did, so I might be saying something you already tried, but have you tried RAID 0 across multiple PCIe 4.0 SSDs directly through the CPU lanes? Also on a Ryzen 5800X rather than a dual-CCD CPU. Maybe the way the I/O die works is limiting your case: it may be splitting things in two, dedicating a single CCD to the task and leaving the other to everything else, while the I/O die distributes its resources evenly per die. You might want to try it with a RAID 0 config directly on the CPU lanes.

  • @channel-ch2hc
    @channel-ch2hc 3 роки тому

    You are doing a great job. Keep it up

  • @AMPTechGrade
    @AMPTechGrade 3 роки тому

    What I figured: it's not meant for PC to PC, not yet, till silicon can increase bus speeds. It's MORE meant for traffic from multiple PCs between 2 switches, benefiting from the 100Gb uplink between the two. That's one of the things 10gig (& multigigabit) ports are starting to be useful for. Now we just need multi-gig switches for aggregation plus remote switches with at least 1-4 multi-gig ports.

    • @davidbombal
      @davidbombal  3 роки тому +1

      Coming in the next video :)

    • @AMPTechGrade
      @AMPTechGrade 3 роки тому

      @@davidbombal oh I’m subscribing lol. For like when you have a network switch in the attic for upstairs clients & another switch in the basement for the downstairs client, definitely a need.

  • @MK-xc9to
    @MK-xc9to 2 роки тому

    Only Win11, Windows Server and maybe Windows Pro for Workstations support RDMA over Converged Ethernet (RoCE). With Win 11 you may have to enable RDMA manually; activating it only in the driver may not be enough.
    You can use Get-NetAdapterRDMA to check whether RDMA is enabled, and with Enable-NetAdapterRdma -Name "*" you can enable it
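
    A related check, since RDMA traffic hides from the usual graphs: if a copy is really using SMB Direct it largely disappears from Task Manager's Ethernet view and shows up under Performance Monitor's "RDMA Activity" counters instead. From PowerShell (assuming an SMB share copy on a supported edition):

      Get-SmbMultichannelConnection     # the RDMA-capable columns should read True for the session
      Get-NetAdapterRdma                # confirms the NIC-level setting on both ends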

  • @ivosarak959
    @ivosarak959 3 роки тому +6

    The issue is likely with your disks. Make RAM drives and test memory-to-memory transfers instead.

    • @leos4210
      @leos4210 3 роки тому

      M.2 nvme ssd

    • @ivosarak959
      @ivosarak959 3 роки тому +3

      @@leos4210 Even that is likely not performant enough to reach 100Gbps speeds.

    • @derekleclair8787
      @derekleclair8787 3 роки тому +1

      You have to use an NVMe array; try 3 to 4 drives. You should not be using iperf and robocopy, that's silly. Also you have to have an RDMA connection, so make sure that's available using PowerShell. I personally have hit well over 25 gigabytes per second using multiple connections, 3 dual 56 Gb cards, writing and reading from NVMe storage. Memory copy is too slow. Also, I did this 4 years ago, so go back and try this again.

    • @davidbombal
      @davidbombal  3 роки тому

      iPerf in this example is using memory to memory transfer.

  • @Technojunkie3
    @Technojunkie3 3 роки тому +5

    You need PCIe Gen4 to keep up with 100Gbps. The ConnectX-4 NIC is Gen3. The AMD X570 desktop chipset doesn't have enough PCIe lanes to run both your GPU and NIC at full 16 PCIe lanes so the NIC is likely running 8x. You need a more modern NIC and a Threadripper or Epyc board that does PCIe Gen4. Maybe AMD can loan you a prerelease of their next-gen Threadrippers for testing? The current gen will work but...

    • @equilibrium4310
      @equilibrium4310 3 роки тому

      PCI-e 3.0 x 16 is actually capable of 15.754 GB/s or converted to transfer speeds 125.6Gbps

    • @Technojunkie3
      @Technojunkie3 3 роки тому

      @@equilibrium4310 I misremembered. PCIe Gen3 x16 can't sustain both 100Gbps ports on a dual port card. A single port at x16 would work. But he's almost certainly running x8 on that desktop board, so ~62Gbps or about what was shown in the video.
      Now that AMD is merging with Xilinx I think that this is a fine opportunity for AMD to loan out a pair of Threadrippers and Xilinx 100Gbps cards for testing.

    • @James_Knott
      @James_Knott 3 роки тому +2

      When I started working in telecom, way back in the dark ages, some of the equipment I worked on ran at a blazing 45.4 bits/second!

  • @anfxf6513
    @anfxf6513 3 роки тому

    Keep it up Sir, we love your videos,
    especially the ethical hacking related ones 🥰

    • @anfxf6513
      @anfxf6513 3 роки тому

      @@davidbombal Awesome

  • @abdenacerdjerrah
    @abdenacerdjerrah 3 роки тому

    Awesome video sensei 👺

  • @JorisSelsJS
    @JorisSelsJS 3 роки тому +2

    Hey David, as a 20-year-old Belgian entrepreneur who has founded a networking company, your videos are still very helpful to me in all sorts of ways! I want to thank you for all the amazing content and to encourage you to keep doing what you do, because we all love it! Besides that, if you ever want to talk about what we do, or are interested, feel free to contact me anytime! :)

  • @farghamahsan5034
    @farghamahsan5034 3 роки тому

    David, you are awesome for the world. Please make a series of videos on SFP with details.

  • @XaL47
    @XaL47 3 роки тому +1

    Correct me if I'm wrong, but you could've set up a RAM disk on both machines and gone for NIC teaming to push it even further :)

  • @robertmcmahon921
    @robertmcmahon921 3 роки тому

    Speed is defined by latency, not throughput. iperf 2 supports latency or one way delay (OWD) measurements but one has to sync the clocks.

  • @t.b.6880
    @t.b.6880 3 роки тому

    David, the speed limitation might be related to read/write disk operations. Enterprise SSDs might help. Also, check in the BIOS whether any power saving mode is activated. Another bottleneck can be dynamic CPU power allocation...

  • @anfxf6513
    @anfxf6513 3 роки тому

    This Is Awesome Sir
    Such a Speed
    I won't Be able To Test It Any Day😥
    Bcz My Pc is Very Low Performing.

    • @anfxf6513
      @anfxf6513 3 роки тому

      @@davidbombal I Hope So Sir

  • @chrism0lza
    @chrism0lza 3 роки тому

    Great video, thanks for sharing this. While I don't think the AMD Ryzen has integrated graphics (from a quick Google, I don't run AMD anymore), I'd be interested to see what the performance would be like without the GPU attached to the PCIe lanes, for example using an Intel chip with integrated graphics. Would that open up enough PCIe bandwidth to get more throughput?

  • @russlandry995
    @russlandry995 3 роки тому +1

    What drives are you using? Even PCIe 4.0 drives top out at about 7.5 GB/s of R/W speed. I doubt you'll be able to get faster by copying, but you should be able to stream (no clue how you could test at that speed) faster than you are copying.

  • @gregm.6945
    @gregm.6945 3 роки тому +2

    The copying window @ 11:20 shows 2.23GB/s. Doesn't the 2.23GB/s represent 2.23 giga*bytes*/second, not 2.23 giga*bits*/second? (i.e. uppercase B = bytes, lowercase b = bits.) This would mean your throughput for these files is actually 2.23GB/s * 8 = 17.84 giga*bits*/second, or 17.84Gb/s. Sadly, still nowhere near that 55Gb/s from iperf though

  • @ayush_panwar1
    @ayush_panwar1 3 роки тому +1

    That speed is awesome even if it's not the maximum. Also, can you make videos on SOC and blue team career opportunities? 🤗😇

  • @sagegeas9205
    @sagegeas9205 3 роки тому +3

    How ironically fitting is it that the most you can get at that 6:30 mark is 56Gbits per second...
    How far we have come from the simple modest and humble 56k modems... lol

    • @davidbombal
      @davidbombal  3 роки тому +1

      lol... now that is a great comment!

  • @FliesEyes
    @FliesEyes Рік тому

    My thoughts would be the slot the adapter card is using. Consumer motherboards tend to have specific configurations for PCIe lane allocation and bifurcation settings in the BIOS.
    I hope to do some similar testing on Z790 motherboard in the near future.

  • @KenSherman
    @KenSherman 2 роки тому +1

    As we watch this in the early 2020s - the new "Roaring 20's" - we will look back at this in the future and marvel at how far we've come, when these 100G speeds (among other things) are as prevalent in our homes. 😉 Do you agree, David B (and son)?

  • @AshishChandra14
    @AshishChandra14 3 роки тому

    iperf3 is single-threaded; use iperf2 for multi-threaded testing. Or run multiple iperf3 instances on different port numbers, e.g. (server-client) 5201 and 5202, and add up the throughput.
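
    A sketch of the multiple-instance workaround, run from a PowerShell prompt on each side (ports and address are placeholders; start both client runs at the same time and add the results):

      # on the server PC, one listener per port (separate terminals):
      iperf3 -s -p 5201
      iperf3 -s -p 5202
      # on the client PC:
      iperf3 -c 10.0.0.2 -p 5201 -t 30
      iperf3 -c 10.0.0.2 -p 5202 -t 30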

    • @davidbombal
      @davidbombal  3 роки тому

      Good suggestions but I tried that already. Made no difference in my tests. This is the best that I was able to get.

  • @mihumono
    @mihumono 3 роки тому +1

    Is the card PCIe x16? Maybe it is running in x8 mode. Also, how fast is your RAM (subtimings?)?

  • @igazmi
    @igazmi 3 роки тому +1

    My guess would be to check whether all required lanes for the 100G Ethernet card are granted.

  • @carl4992
    @carl4992 3 роки тому

    Hi David, if you haven't already, try turning off interrupt moderation on both adaptors.

  • @samislam2746
    @samislam2746 3 роки тому +1

    Thanks for sharing this

  • @werdna_sir
    @werdna_sir 3 роки тому

    Bloody hell that's faster than the network backbone that I recently built at work.

  • @bobnoob1467
    @bobnoob1467 3 роки тому

    Awesome video. Keep it up.

  • @JeDeXxRioProKing
    @JeDeXxRioProKing 3 роки тому

    Hi David, thanks for the video. Aruba has great networking gear *_*. On the performance side, you will improve a lot if you use a fast drive; this is the main problem. Use an SSD like the Samsung 970 Evo Plus

    • @samadams4582
      @samadams4582 3 роки тому +2

      I have a 970 Evo Plus and can't get anywhere close to 100GbE throughput. There is no SSD that can push around 12.5 gigabytes per second.

    • @JeDeXxRioProKing
      @JeDeXxRioProKing 3 роки тому

      @@samadams4582 Yes, you are right, there is always a limitation, and that limitation depends on your needs too. If you use, for example, 4 SSD drives in RAID 0... then what? Let me tell you that you will get more performance.

    • @samadams4582
      @samadams4582 3 роки тому

      @@JeDeXxRioProKing Check out this video about NVME Raid 0 Performance. These are 2 PCIe 4.0 NVME Drives in RAID 0 on a Ryzen 9 3900x. You can see that the write performance is different than the read performance.
      ua-cam.com/video/Ffxkvf4KOt0/v-deo.html

  • @educastellini
    @educastellini 3 роки тому

    -Great video, teacher David.
    -So for work, and depending on the use, a machine like this is justified, but for gaming it's not worth it yet.
    -What happens if you play using Windows: even if you have the best machine today, the fastest drives on common PCs, which are M.2 SSDs at most, transfer around 2Gb, so even a RAID of these devices still wouldn't get close to the maximum capacity of a 10 Gig network, let alone a 100 Gig network, and that's considering only the disks without even thinking about the PC's bus speed...!?!?!?!?
    -Now if it's a small office with a NAS or a Xeon server, then that justifies it a lot.
    -PCs with Chinese boards, even ones using two Xeon processors, are being sold for gaming, but games are not made for multi-processor systems, so they just end up using one of them and the other goes unused; so for games it doesn't work, but for virtualization, which is our business, this type of board and home-PC architecture with multiple Xeons works, though the bus still doesn't make use of such a network.
    -This level of network is only for servers with the big Xeons, with buses far above home PCs, and even with them at most a 10 Gig network.
    -Right here in my house, with a bunch of Raspberrys, NAS and PCs all on a 10 Gig network, it's a dream; imagine a network like that, 10 times faster.... LOL
    -Thank you for showing the new technologies, Professor David.
    PS: On Linux we would monitor the performance better... LOL

  • @paulsccna2964
    @paulsccna2964 3 роки тому

    Most likely the PCIe bus is the limit on the PC. If possible you might be able to tweak things and ensure the 100 Gb/s Ethernet card is actually running at full speed (for example, whether the slot allows x1, x2, x4 or higher). Also, some motherboards will "steal," or allocate, PCIe slot lanes for other devices, like M.2, and I am assuming you are using an M.2 for this testing. There could be a limit to the data transfer rate on the M.2 chip. These might be good places to start. It might be possible, on a modern AMD motherboard, to specifically allocate or assign PCIe lanes to a specific slot. The downside might be giving up performance on some other aspect of the motherboard that depends on (or steals) PCIe bandwidth. Regardless, 50 to 55 Gb/s is really good. But, as you have demonstrated, there are limiting factors. Many applications might not even be designed to handle these speeds and would only end up buffering. Certainly, for pushing data around, it is neat; perhaps gaming? I look forward to a follow-up on this topic. One more thing: I wonder if there is a way, as with Cisco (not even sure you can turn those features on in a granular way, such as cut-through switching), to rip more speed out of the switch itself? As you mention, most likely the bottleneck is the software and the mobo.

  • @TooliusTech
    @TooliusTech 3 роки тому

    Try enabling RDMA please. Windows workstation / enterprise should get you there ! Would love to see the results :)

    • @davidbombal
      @davidbombal  3 роки тому +1

      Already enabled in this example 😔

    • @TooliusTech
      @TooliusTech 3 роки тому

      @@davidbombal Thank you so much for the response... this is exactly what I am working on too! Building a 100GbE home lab, and the only place I have seen it get close to 100gig speed is on Linux. I'm actually trying to see if I can serve out 5GB/sec per client to 3 clients so that they can all play back and color grade 8K EXR or DPX files over the network from a fast NVMe-based NAS. Following this with very keen interest :)

    • @TooliusTech
      @TooliusTech 3 роки тому

      @@davidbombal Also, another thing I noted was that when I tried Windows Server 2019 on the server and Windows 10 Workstation on the client machine and had RDMA working, it would not show any usage in Task Manager for the Ethernet. The only place I could see RDMA and transfer speeds was under Resource Monitor :)

    • @TooliusTech
      @TooliusTech 3 роки тому

      @@davidbombal Also, I do think, like the others have said, that you might be PCIe bandwidth limited. If you can test the cards at full x16 bandwidth, you might go faster. I have been testing on Threadripper and do not have PCIe limitations :)

  • @soundserie
    @soundserie 3 роки тому

    Super video. Maybe use RDMA technology. RDMA helps bring the CPU from 100% down to 10%.

  • @magneticshrimp7429
    @magneticshrimp7429 3 роки тому

    ~56Gbps is pretty much exactly what is practical to get through 8 lanes of PCIe Gen 3. On Ryzen, 8 lanes is generally the most you will get when you also have a GPU installed (if you have a nice motherboard with the required PCIe switches).
    A newer 100G card with PCIe Gen 4 could achieve full bandwidth using 8 lanes.

    • @sevencolours5014
      @sevencolours5014 3 роки тому

      Which one is that? The card.

    • @RepaireroftheBreach
      @RepaireroftheBreach Рік тому

      @@sevencolours5014 I have a Gen 4 PCIe card. I tried both the qnap CXG-100GSF2-CX6 and the Mellanox 100G (mcx623106an-cdat) card on an Asus Zenith II Extreme Alpha w/ Threadripper, made sure the card was in a x16 slot and functioning at x16 per the BIOS, and still have the same problems as the official video post. I cannot get faster than about 50 Gb/s. Maybe once I saw 55-56 Gb/s, but generally I have the same problem. I wonder if he ever fixed it??

  • @norbertopace7580
    @norbertopace7580 3 роки тому

    With all due respect, the problem is not Windows, or not entirely. The maximum bandwidth cannot be reached because the copy tasks cannot be parallelized: you can run many copy tasks at the same time, but you cannot parallelize a single copy at the processing level. So if you open the CPU performance view and look at the cores, you may notice that one of them is at 100%, and that is why the turbo speed of the processor reaches its maximum peak, since the other cores are not running at full capacity. To solve this problem, the only way is to have a processor with ultra-fast single cores, or for the network card to take care of 100% of the copy traffic itself; then the theoretical maximum would be reached.

  • @nettyvoyager6336
    @nettyvoyager6336 3 роки тому

    your isp will love you lmao im doing 300 gig a month lmao

  • @LuK01974
    @LuK01974 3 роки тому

    Ciao David, the problem needs to be analyzed in depth.
    1st: the speed limitation of your HDD/SSD/NVMe.
    2nd: the driver of your NIC card; use the driver from the vendor.
    For testing full speed, try to use a RAM disk on both of your PCs and copy from RAM disk to RAM disk using robocopy.
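
    A RAM-disk-to-RAM-disk run along those lines might look like this from a PowerShell prompt (drive letter, share name and address are placeholders):

      robocopy R:\testdata \\10.0.0.2\ramdisk /E /MT:32 /J /NFL /NDL
      # /MT:32 = 32 copy threads, /J = unbuffered I/O for large files,
      # /NFL /NDL = no per-file/per-directory logging, so the console isn't the bottleneck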

  • @tikshuv-ccna
    @tikshuv-ccna 3 роки тому

    What about copying more than 1 file at the same time?
    What about copying from Mac, from Linux, from old DOS, Win XP, or Windows Server using Xeon?
    What about FIBER?
    What about downloading a file using FTP, Download manager from a local server?
    So eventually how can we use all Bandwidth? Who can use all of it?

  • @andreavergani7414
    @andreavergani7414 3 роки тому

    I have the same problem in Windows, only with 10GbE. Changing jumbo frames on every node of the network doesn't seem to help.
    Any suggestions?
    I support your great work. Ciao
    PS: I'm so jealous of that Aruba switch ahah :)

  • @ali0ghanem
    @ali0ghanem 3 роки тому

    In my home network I have 4 Mbps ADSL 😂😂😂😂 with no local network, just WiFi from a D-Link modem. Thank you Mr. David

    • @davidbombal
      @davidbombal  3 роки тому +2

      I have known that feeling. Bad Internet is a pain

    • @ali0ghanem
      @ali0ghanem 3 роки тому

      @@davidbombal 😘😘😘😘

  • @dadsview4025
    @dadsview4025 3 роки тому

    Unless you are using a RAM disk you can't assume it's not the I/O to the drive. The fact that the CPU is not at 100% indicates it's an I/O bottleneck. It could also be the PC's Ethernet interface.
    Is the 10GbE built into the motherboard? Are you using physical drives? It would be helpful to give the motherboard and 10GbE interface specs. I would also examine the throughput curve over time, which would reveal any caching delays, i.e. does the performance increase or decrease during the transfer? I did this sort of optimization on networks when 100 Mbit/s was fast ;)
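
    The throughput-over-time curve is easy to get for the network side with iperf3's per-interval reports (address is a placeholder); for the file copy itself, Resource Monitor's Disk tab gives the same over-time view:

      iperf3 -c 10.0.0.2 -t 60 -i 1     # 60-second run with a bandwidth readout every second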

  • @LloydStoltz
    @LloydStoltz 3 роки тому

    Maybe the north bridge and the hard drives are the limiting factors. If you can, try to set up multiple NVMe drives acting as one drive.

  • @Mr.Ankesh725
    @Mr.Ankesh725 3 роки тому

    Good knowledge of the video
    Love for India ❤️❤️

  • @rahultatikonda
    @rahultatikonda 3 роки тому

    WE ENJOY YOUR VIDEOS SIR THANKS FOR YOUR TEACHING SIR

  • @LazyMax902
    @LazyMax902 3 роки тому

    You are using a PCIe Gen 3 SSD; the maximum speed they can achieve is 32 Gbps.
    Upgrade to a PCIe Gen 4 SSD and you can get 64 Gbps. The new SN850 SSD is great. Make sure you have a 500-series chipset to support the full bandwidth.

    • @davidbombal
      @davidbombal  3 роки тому

      PC Information:
      1 x AlphaSync Gaming Desktop PC, AMD Ryzen 9 5950X 3.4GHz, 32GB DDR4 RGB, 4TB HDD, 1TB SSD M.2, ASUS RX6900XT, WIFI, Mellanox CX416A ConnectX-4 100Gb/s Ethernet Dual QSFP28 MCX416A-CCAT D
      1 x AlphaSync Gaming Desktop PC, AMD Ryzen 9 5900X, 32GB DDR4 RGB, 4TB HDD, 1TB SSD M.2, AMD Radeon RX6800XT, WIFI, Mellanox CX416A ConnectX-4 100Gb/s Ethernet Dual QSFP28 MCX416A-CCAT

  • @Piglet6256
    @Piglet6256 3 роки тому

    Could it also be the network cards? Sure, it says 100GbE on the box, but there is a lot of networking hardware on the market not delivering what's promised on the box, as we all know :)
    I'm sure the systems can handle these speeds and process it, since it's x64 architecture, so you should be OK with the bus speeds.

  • @RepaireroftheBreach
    @RepaireroftheBreach Рік тому

    David, were you ever able to fix this limitation? I have the same problem, but I have an Asus PCIe Gen 4 motherboard running a Threadripper on Windows 11 22H2, with newer NICs such as the QNAP CXG-100G2SF-CX6 and the Mellanox (mcx623106an-cdat), with different 100G cables, etc. I also tried switching PCIe slots with the GPU and verifying the NICs are running at x16 in the BIOS, and still can't break the 50-55 Gb/s limit. Did you ever figure this out?

  • @ABUNDANCEandBEYONDATHLETE
    @ABUNDANCEandBEYONDATHLETE 3 роки тому

    I have a 3970X with 128GB RAM. Send it over, I'm a network engineer as well. I'll run these tests with you in SF if you need. 👍🏼😁 (Haven't watched the whole thing yet)

  • @curtbeers1606
    @curtbeers1606 3 роки тому

    Would be interesting to run them both with Linux to see if there is an OS issue. I agree with the comment about the bottleneck possibly being the HDD/SSD or the bus speed of the expansion slot the NIC is installed in.

  • @jaimerosariojusticia
    @jaimerosariojusticia 3 роки тому

    Back in the day, Windows had a couple of registry settings that we could use to speed up network traffic. Starting with Windows XP, Microsoft began to "regulate" network traffic with some system DLL files. I'm afraid we can't make such changes easily anymore. I would suggest getting rid of the Microsoft bloatware and crapware to start with. Also, disabling the native "Windows Defender Firewall" will indeed speed things up.

    • @James_Knott
      @James_Knott 3 роки тому

      Compared to Linux, Windows tends to have poorer performance on the same hardware.

  • @jahilbanda1540
    @jahilbanda1540 3 роки тому +3

    Amazing :o)

  • @davidb_thetruth
    @davidb_thetruth 3 роки тому

    I want that home network David… But in the mean time, if you have any “old/spare” equipment you want to get rid of, just send it my way. I’ll gladly pay the shipping and handling. 😊

  • @Hartley94
    @Hartley94 3 роки тому

    Great content as always, thank you.

  • @woodant1981
    @woodant1981 3 роки тому +2

    The PCI-e slot has never been challenged like this😂

    • @davidbombal
      @davidbombal  3 роки тому +2

      lol... great comment!

    • @17sylargino
      @17sylargino 3 роки тому

      Did he add a special network card?

    • @woodant1981
      @woodant1981 3 роки тому

      @@17sylargino yeah!! 10000+ speed doesn't come standard unless you have a mid-high end enterprise grade machine

  • @Shadowdane
    @Shadowdane 3 роки тому

    It's a limit of your PCIe bus! That NIC is PCIe 3.0, which would require a full x16 connection to achieve 100Gbps. It seems it's dropped to an x8 PCIe connection, which you can likely check in the BIOS. The CPU only has 24 PCIe lanes available, shared between the GPU, storage, network cards and other devices. You could try to set your GPU or other devices to x8 or x4 PCIe mode to leave enough bandwidth for the NIC to get a full PCIe x16 connection.
    PCIe 3.0 x16 connection = 15.754GB/s or ~126Gbps
    PCIe 3.0 x8 connection = 7.877GB/s or ~63Gbps
    PCIe Bandwidth Specs
    en.wikipedia.org/wiki/PCI_Express#History_and_revisions

  • @unknown-sc6if
    @unknown-sc6if 3 роки тому

    It's an HDD/SSD limitation. Unless you have a real NVMe RAID 10, or a huge array of like 50-100 SSDs mounted where it could reach 4x the speed, it won't handle 100Gbit.

  • @yashwantreddyr8286
    @yashwantreddyr8286 3 роки тому

    Woooww...that's awesome🔥🔥🔥

  • @sayedsekandar
    @sayedsekandar 3 роки тому +1

    Today's topic gives the feeling of a data center.

  • @bibhashpodh1074
    @bibhashpodh1074 3 роки тому

    Great video😍

  • @VincentYiu
    @VincentYiu 3 роки тому

    Have you tried messing with network congestion protocols?

  • @giosal8822
    @giosal8822 3 роки тому +1

    Robocopy is faster ... but HOW LONG does it take to figure out the syntax and type that long command? I'd have the Windows copy done in seconds, and still be trying to figure out the Robocopy syntax, haha

    • @davidbombal
      @davidbombal  3 роки тому

      Lol…. Good comment. But you could use scripts and keep the commands in a document. Much easier if you have a lot of different directories to copy 😀

  • @FYDanny
    @FYDanny 3 роки тому

    YES! I love networking!

  • @mranthony1886
    @mranthony1886 2 роки тому

    The limiting factor would be your SSD; a 500GB NVMe drive is about 32 Gbps, so perhaps ZFS may help.