How to size your bandwidth for your storage subsystem | The 2-5-9 Rule

  • Published 13 Oct 2022
  • In this video, I'm going to show you how to size your bandwidth for your storage server build. People often ask me questions like:
    "Do I have enough bandwidth for my storage setup?"
    "If I upgrade to SAS-3 HDDs, do I need to upgrade my LSI HBA SAS controller?"
    I usually answer these questions using the 2-5-9 Rule. This rule comes from the reference performance limits of the 3 types of SAS/SATA storage devices. For spinning HDDs, these typically max out at around 2Gbps. For SATA or SAS-2 SSDs, these typically max out at around 5Gbps. And for SAS-3 SSDs, these typically max out at 9Gbps. Hence, the 2-5-9 Rule. Using this information, I'll walk you through an example with a 24-bay Supermicro 846 SAS-2 expander backplane (BPN-SAS2-846EL1).
    Hopefully this video helps you guys figure out if your hardware is sufficient for your I/O bandwidth needs! :-)
    Timestamps:
    0:50 - Explaining the 2-5-9 Rule
    8:57 - Example with BPN-SAS2-846EL1 backplane
    15:41 - PCIe bandwidth considerations
    Links to videos mentioned:
    Cost of SAS vs SATA Hard drives - • Cost of SAS vs SATA Ha...
    How to understand performance - • How to Understand Perf...
    If you'd like to support this channel, please consider shopping at my eBay store: ebay.to/2ZKBFDM
    eBay Partner Affiliate disclosure:
    The eBay links in this video description are eBay partner affiliate links. By using these links to shop on eBay, you support my channel, at no additional cost to you. Even if you do not buy from the ART OF SERVER eBay store, any purchases you make on eBay via these links, will help support my channel. Please consider using them for your eBay shopping. Thank you for all your support! :-)
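    The backplane walkthrough described above boils down to a little arithmetic. Here is a rough Python sketch of the 2-5-9 math (a sketch only: the per-device numbers are the rule's reference limits from the video, and the 4-lane, 6 Gbps/lane uplink figure assumes a single SFF-8087 link from the BPN-SAS2-846EL1 expander to the HBA):

    ```python
    # 2-5-9 Rule reference limits (Gbps per device), per the video:
    DEVICE_GBPS = {
        "hdd": 2,       # spinning HDD (SAS or SATA)
        "sata_ssd": 5,  # SATA or SAS-2 SSD
        "sas3_ssd": 9,  # SAS-3 SSD
    }

    def aggregate_gbps(bays: dict) -> int:
        """Total potential device bandwidth for a drive mix, e.g. {"hdd": 24}."""
        return sum(DEVICE_GBPS[kind] * count for kind, count in bays.items())

    # 24 spinning HDDs behind the expander backplane:
    demand = aggregate_gbps({"hdd": 24})   # 2 Gbps x 24 = 48 Gbps
    uplink = 4 * 6                         # assumed single 4-lane SAS-2 uplink = 24 Gbps
    print(f"device demand {demand} Gbps vs uplink {uplink} Gbps -> "
          f"{'uplink-limited' if demand > uplink else 'OK'}")
    ```

    With these assumptions, 24 HDDs can in principle ask for twice what one SAS-2 uplink carries, which is the kind of mismatch the rule is meant to surface quickly.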

COMMENTS • 99

  • @jsclayton • 1 year ago • +15

    I've spent so, so many brain cycles making sure I don't have bottlenecks and this is hands down the best explanation I've heard. Thank you!

  • @QrchackOfficial • 1 year ago • +4

    One more point - setting aside the fact that it's sequential transfers, you're also using full duplex SAS connections here. If you're using SATA drives, and going the older enterprise SATA HDD route (like the SATA variant of the HGST Ultrastar 7K4000 that I go for), then the HDD itself caps at 125MB/s sequential, or 1 Gbps per HDD. And in real life (i.e. everything other than moving huge files, 20GB+ each) you'll see more like 30-60MB/s, so 1/4 to 1/2 of that, bringing it down to more like 0.25-0.5Gbps per HDD. At that point, the 48Gbps of a 2-port SAS2008 HBA is easily enough for 96 to 192 drives - and for situations where you really hammer a single drive sequentially, you'll have the spare bandwidth anyway, since the other drives are unlikely to go full tilt sequential at the same time. Not to mention that unless you're doing 10Gb networking, you're limited to 125MB/s over the network anyway. No need to sweat it at all.
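    The drive counts in this comment follow directly from the figures it quotes; a minimal check using the commenter's own numbers (the dictionary labels are just illustrative):

    ```python
    # 2-port SAS2008 HBA: 2 x 4 lanes x 6 Gbps = 48 Gbps (one direction)
    hba_gbps = 2 * 4 * 6

    # Per-drive throughput figures quoted in the comment (Gbps):
    per_drive_gbps = {"sequential": 1.0, "mixed_high": 0.5, "mixed_low": 0.25}

    # How many drives the HBA can feed at each load level:
    drives_supported = {load: int(hba_gbps / gbps)
                        for load, gbps in per_drive_gbps.items()}
    print(drives_supported)  # {'sequential': 48, 'mixed_high': 96, 'mixed_low': 192}
    ```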

  • @soundfire79 • 1 year ago • +6

    I believe the most critical part of sizing your bandwidth is understanding the speed limits of whatever software storage configuration you go with. The calculations you've done here assume that all your drives are getting hit at the same time, separately, at full bandwidth, which almost never happens. Figure out what your theoretical max would be for your unRAID, Windows Storage Spaces, TrueNAS, etc., and also what level of RAID you are using. Also, how much speed do you actually need? How many users do you have, doing what work? Large file transfers are the only type of work that benefits from huge bandwidth. I very much enjoy your videos. I have learned a lot from your channel.

  • @MarkDeSouza78 • 1 year ago • +5

    Couldn't you also have used two controllers (PCIe 2.0) instead of one newer controller (PCIe 3.0)?

  • @jonathanbuzzard1376 • 1 year ago • +2

    What this sizing misses is the connection out the back of the server. No point in having whizz-bang speeds inside the server if you are only on a 1Gbps connection for a storage server. I would at this point note that a dual-port SAS-2 card has 8 lanes of dual-ported 6Gbps, so 96Gbps of throughput to the drives, which is more than good enough for a NAS with a 10Gbps connection.
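    A quick sanity check of the figure in this comment, with the lane counts as the comment states them (full duplex doubling the one-way rate):

    ```python
    lanes = 2 * 4                    # dual-port SAS-2 HBA: two 4-lane ports
    lane_gbps = 6                    # SAS-2 per-lane rate
    one_way_gbps = lanes * lane_gbps     # 48 Gbps per direction
    full_duplex_gbps = 2 * one_way_gbps  # 96 Gbps counting both directions
    network_gbps = 10                    # 10GbE uplink out the back of the NAS

    # Even one direction of the SAS side dwarfs the network link:
    print(f"{one_way_gbps} Gbps to drives vs {network_gbps} Gbps to clients")
    ```

    The design point stands: for a NAS, the network port, not the HBA, is usually the ceiling.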

  • @shephusted2714 • 1 year ago • +1

    This is just plain nonsense - the SMB market wants to go to 100GbE and NVMe; 10G and 2.5G are decent fallbacks. The biggest bottleneck is generally ISP bandwidth. For compute and reverse proxy apps you want as fast as you can manage.

  • @Maxx-qq8lh • 1 year ago • +2

    Interesting to contrast this design limit of SAS-3 at around 10Gbps (really an ENTERPRISE storage concept) with the 32Gbps ceiling of NVMe Gen3 x4 PROSUMER devices (e.g. Samsung SSD 970 Evo Plus, Samsung 970 PRO). And there are PCIe Gen3 x8 AOCs (add-on cards) available, such as the PM-1725, that can reach 64 Gbps. Especially in very conservative business settings, RAID still seems the way to go, but that technology appears to me a dead end, a "ghost from the past", trying to solve yesterday's problems (bad sectors, crashed HDDs, data integrity) with today's technology. Future solutions to these storage persistence and data integrity problems might look entirely different, without having to endure the performance problems and overhead introduced by RAID management. Using flash RAM the way we used magnetic disks increasingly proves to be a rather wasteful effort!

  • @andre_warmeling • 1 year ago • +1

    Then plug it all into an Atom server!!!

  • @binks3371

    How about NVMe?

  • @malexejev • 1 year ago • +2

    An I/O bandwidth topic that interests me is the socket affinity of NVMe drives. We may have enough PCIe lanes for each drive, but the cross-CPU bus (for example, QPI) may have less bandwidth. The question is how to organize the disk-to-socket assignment and the stripes/mirrors in the file system to get the best practical performance out of it. The network is not involved here; I'm talking just about local storage.
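    A back-of-envelope sketch of the cross-socket concern raised here. All numbers are illustrative assumptions, not measurements: roughly 32 Gbps for a PCIe Gen3 x4 NVMe drive, and QPI at 9.6 GT/s giving roughly 19.2 GB/s per direction:

    ```python
    # Assumed figures (illustrative only):
    nvme_gbps = 32                 # PCIe Gen3 x4 NVMe drive, approx. ceiling
    qpi_gbps = 19.2 * 8            # QPI ~19.2 GB/s per direction -> 153.6 Gbps

    # How many remote NVMe drives, read flat-out across sockets,
    # would fill one direction of the inter-socket link:
    remote_drives_to_saturate = qpi_gbps / nvme_gbps
    print(remote_drives_to_saturate)  # 4.8
    ```

    Under these assumptions, around five NVMe drives streaming across sockets saturate the link, which is why pinning drives (and the filesystem stripes on them) to the local socket matters at scale.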

  • @saccharide • 1 year ago • +29

    5:55

  • @NTATchannelNickTaylor

    Very good explanation. I don't need anything at the moment, but I saved your eBay store just in case. Listening to you here, I know that if I have a compatibility question I will get a usable answer.

  • @UnkyjoesPlayhouse • 1 year ago • +2

    I learned a lot from this video, had to rewind a couple of times to wrap my head around what you were saying, but great video!

  • @jimholloway1785 • 1 year ago • +1

    Best explanation of how much bandwidth you need for your drives/SSDs.

  • @Maxx-qq8lh • 1 year ago • +2

    @ArtofServer - big thanks for yet another marvellous tutorial on bandwidth management. You also show how to grapple with systemic side effects in the domain of storage. Systems theory teaches us to expect any bottleneck to MOVE once you address the original stumbling block, and bandwidth management is certainly no exception. Best to you from Vienna/Austria, Maxx_1150

  • @MacroAggressor • 1 year ago • +1

    Excellent video. I'm just diving into the world of enterprise hardware, and this demystified the entire SAS interface for me. Thank you!

  • @prahapivo

    Great video, it helped me out a lot and was much clearer than any other explanation that I've seen. 😊

  • @KrumpetKruncher • 1 year ago • +1

    Awesome breakdown and good reference! Thank you!

  • @lpseem3770 • 1 year ago • +1

    I thought you were done with recording. Glad to see you back.

  • @pavoutsinas • 1 year ago • +1

    Masterclass should really offer you a spot on their app. Great info, great explanations. I learned a lot from all your videos. Your eBay shop is a great resource as well. Thank you!