How to size your bandwidth for your storage subsystem | The 2-5-9 Rule

  • Published Dec 3, 2024

COMMENTS • 107

  • @jsclayton · 1 year ago +22

    I've spent so, so many brain cycles making sure I don't have bottlenecks and this is hands down the best explanation I've heard. Thank you!

    • @ArtofServer · 1 year ago +3

      Glad it helped! Thanks for watching! :-)

    • @thewhitefalcon8539 · 6 months ago +1

      Conversely, I'm okay with bottlenecks to save money because I want capacity, not throughput.

    • @r2db · 5 months ago

      The way that most people use that term makes no sense. There is always going to be a bottleneck in some specific use case, and depending upon what you do with the system that condition may occur frequently or almost never. There will always be a "weakest link" in the system, and whether we are talking about spinning drives or SSDs there will always be rather significant variability with regards to performance depending upon sequential or random, reads or writes - at least once you exhaust the cache or for read cases where the cache guesses incorrectly. If you optimize for the best case scenario (sequential operations on spinning disks or reads on SSDs) you are spending more on the rest of the system, but your storage devices are the weak link for the worst case scenario (random operations on spinning disks or writes on SSDs). If you optimize for the worst performance of the storage devices then you can be reasonably sure that you will be able to maximize the capabilities of the controllers, but in the best case scenario you are leaving some performance of the storage array on the table.
      If you are running a dual processor server, then you may even need to consider which processor PCIe lanes go to which storage controller, and which CPU is running which VM. Is that a real world problem for most people? No. It is typically merely a theoretical problem. But, as before, in specific use cases it may become a real world problem.

  • @saccharide · 2 years ago +33

    5:55 SAS and SATA use 8b/10b encoding, so you multiply or divide by 10, not 8. That actually makes conversion really easy. Examples: 2Gbps = 200MB/s, 6Gbps = 600MB/s, 12Gbps = 1.2GB/s

    • @wayland7150 · 2 years ago +7

      Yes, I always use 10 rather than 8. Bits per second is generally the speed on the wire, whereas bytes per second is the amount of usable data delivered after stripping off the protocol overhead.

    • @uiopuiop3472 · 4 days ago

      (top secret) remember the lenovo secure encapment profile also uses 2 bits for ecc
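The 8b/10b conversion discussed in this thread can be checked with a short sketch (the helper name is made up for illustration):

```python
def line_rate_to_mbps(gbps: float) -> float:
    """Convert a SAS/SATA line rate in Gbps to usable MB/s.

    With 8b/10b encoding, 10 bits on the wire carry one 8-bit byte,
    so usable MB/s = Gbps * 1000 / 10.
    """
    return gbps * 1000 / 10

# The examples from the comment above:
for gbps in (2, 6, 12):
    print(f"{gbps}Gbps -> {line_rate_to_mbps(gbps):.0f}MB/s")
# 2Gbps -> 200MB/s, 6Gbps -> 600MB/s, 12Gbps -> 1200MB/s (1.2GB/s)
```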

  • @Ramsas154 · 4 months ago +4

    I love small channels like this! You saved me so much time learning about storage, it just clicks!! I salute you man!!

  • @soundfire79 · 2 years ago +6

    I believe the most critical part of sizing your bandwidth is understanding the speed limits of whatever software storage configuration you go with. The calculations you've done here assume that all your drives are getting hit at the same time, separately, with full bandwidth, which almost never happens. Figure out what your theoretical max would be for your Unraid, Windows Storage Spaces, TrueNAS, etc., and also what level of RAID you are using. Also, how much speed do you actually need? How many users do you have doing what work? Large file transfers are the only type of work that benefits from huge bandwidth. I very much enjoy your videos. I have learned a lot from your channel.

    • @ArtofServer · 2 years ago +5

      That's also a great point that probably deserves its own video(s). However, I consider "application" performance as something that sits on top of the hardware layers. This was focused mostly on just the hardware. But you are correct that a holistic analysis is important.

  • @jimholloway1785 · 1 year ago +1

    Best explanation of how much bandwidth you need for your drives/SSDs.

  • @UnkyjoesPlayhouse · 2 years ago +3

    I learned a lot from this video, had to rewind a couple of times to wrap my head around what you were saying, but great video!

    • @ArtofServer · 2 years ago

      thanks unkyjoe! glad this was helpful! :-) Sorry, I keep missing your live streams!

  • @NTATchannelNickTaylor · 1 year ago +3

    Very good explanation. I don't need anything at the moment but I saved your EBAY store just in case. Listening to you here I know if I have a compatibility question I will get a usable answer.

  • @MacroAggressor · 1 year ago +1

    Excellent video. I'm just diving into the world of enterprise hardware, and this demystified the entire SAS interface for me. Thank you!

  • @prahapivo · 1 year ago +1

    Great video, it helped me out a lot and was much clearer than any other explanation that I've seen. 😊

  • @lpseem3770 · 2 years ago +1

    I thought You were done with recording. Glad to see You back.

    • @ArtofServer · 2 years ago +2

      Yeah, was just really busy and couldn't find time to do videos. I'm glad to be back. Thanks! :-)

  • @Maxx-qq8lh · 1 year ago +2

    @ArtofServer - big thanks for yet another marvellous tutorial on bandwidth management. You also show how to grapple with systemic side effects in the domain of storage. Systems theory teaches us to expect any bottleneck to MOVE once you address the original stumbling block, and bandwidth management is certainly no exception. Best to you from Vienna, Austria, Maxx_1150

    • @ArtofServer · 1 year ago +1

      Thanks. I hope this was helpful to you. :-)

  • @pavoutsinas · 1 year ago +1

    Masterclass should really offer you a spot on their app. Great info, great explanations. I learned a lot from all your videos. Your eBay shop is a great resource as well. Thank you!

    • @ArtofServer · 1 year ago

      Thank you for your kind words. I'm happy to know my videos have been helpful to you! Thank you for watching and supporting my store! :-)

  • @chrisparkin4989 · 1 year ago +1

    Thank you, such a fantastic video, so accurate and informative. Not much like this on YouTube, so keep it up please.

    • @ArtofServer · 1 year ago

      Thanks! I appreciate the kind words and glad it was helpful to you!

  • @talkenrain842 · 2 months ago +1

    Great video. Very helpful. Thanks !!!

  • @KrumpetKruncher · 2 years ago +1

    Awesome breakdown and good reference! Thank you!

    • @ArtofServer · 2 years ago

      Glad you enjoyed it! Thanks for watching!

  • @VitaliySunny · 2 years ago +4

    Thanks for explanation!

  • @philstreeter9703 · 1 month ago +1

    Outstanding video. Thanks.

  • @malexejev · 2 years ago +2

    An interesting (to me) I/O bandwidth topic is the socket association of NVMe drives. We may have enough PCIe lanes for each drive, but the cross-CPU bus (for example, QPI) may have less bandwidth. The question is how to organize the disk-to-socket assignment and the stripes/mirrors in the file system to get the best practical performance from the filesystem. The network is not involved here; I'm talking just about local storage.

    • @ArtofServer · 2 years ago

      Those are great questions. I don't think there's ever really a "best performance" because the way block storage works currently, it is always tuned to a specific use case with performance degradation as you deviate from the primary use case. You can always tune it to a different use case, but you are just moving the performance objective to a different part of the spectrum. Traditionally, this type of problem has been approached by partitioning - you split your resources and tune to each use case. But there may be other solutions that I'm not aware of, and I'm not really a storage expert.
      thank you for watching! I hope you'll find my videos helpful! :-)

    • @malexejev · 2 years ago

      @@ArtofServer yep, great videos and channel overall. ty!

  • @williamvaughan1218 · 1 year ago +1

    Awesome, so informative. Thank you!

  • @ierosgr · 2 years ago +1

    Very nice video. You could have shown a 3008 SAS adapter to have them all.

  • @rohank9292 · 2 years ago +1

    Very nicely made and informative video about how to connect server controllers and backplanes.

  • @GuillermoPradoObando · 2 years ago +3

    Many thanks for your video, really useful information; unfortunately I could only give one like.

  • @infocentrousmajac · 2 years ago +1

    VERY GOOD INFO. THX

  • @orthodoxNPC · 2 years ago +4

    12:42 Do you have any evidence that two ports increase speed? Every piece of Supermicro literature I've read says those ports are for failover, not throughput; you need a dual expander/daughterboard backplane configuration to double the throughput.

    • @alexhuang2349 · 2 years ago +2

      Yes, I'm hoping he can comment on the nuances of EL1 vs EL2 backplanes.🤞

    • @ArtofServer · 2 years ago

      That's a great question. Probably worth a video on its own for demonstration purposes.
      However, I think what you're talking about is a dual path configuration, requiring dual SAS expanders, SAS drives or SATA drives with interposers, etc. And you need to set up multi-path support in the OS as well. This type of configuration is very different from what was discussed in this video. Dual path configuration can be for redundancy and/or increased bandwidth to the storage device, if the storage device can perform beyond the rated speed of a single link. Some Samsung SAS-2 SSDs had this capability: with dual path configured, they could perform beyond 6Gbps.

    • @dougm275 · 2 years ago

      Yeah, I'm confused now. I thought that, with the SAS-2 standard, two wide ports to a single expander creates a loop, and that's a no-no for SAS-2, but that constraint was removed in SAS-3?

    • @dannydoolhoff7657 · 1 year ago

      It really depends on the specific controller and expander in use.
      My setup does indeed double the bandwidth with two cables.

    • @subagon · 1 year ago

      @@ArtofServer At 13:25 you state that you can connect a second cable to double the bandwidth of the backplane. I read the BPN-SAS2-846EL manual and can't find anything to support that claim. I have a server with a BPN-SAS2-846EL1 and a 2308 HBA (on motherboard X9DRD-7LN4F-JBOD) with a single cable. How do I add a second cable to this configuration and double the bandwidth? Worst case, I could update the backplane to SAS3 and use a single cable, but would rather not spend the ~$300.

  • @FunkyKong · 2 months ago

    SATA SSDs are limited to ~5Gbps not because of the underlying tech but because of the 8b/10b encoding used on SATA. You lose 20% to overhead, hence it comes out to 4.8Gbps, or ~600MB/s, as you've seen.

  • @ragingskittlez · 2 years ago +1

    I just picked up a Supermicro 846 case that had an old DDR2 motherboard in it, which I'm taking out, and I'm putting in my old gaming rig that has a 3090 and a 5950X. It has the slower SAS846EL1 backplane. Would it be cheaper to just bypass the backplane altogether (it's about $180 right now for a SAS2 backplane) and use an HBA card and cables to wire the drives directly? I picked the chassis up for $50. I'm on a budget, so I'm trying to figure out the best way to get this done without a ton of problems. Also, am I going to be limited on PCIe lanes with the GPU in there? Just discovered your channel and am having a blast watching everything and learning all about servers.

    • @ArtofServer · 2 years ago

      Glad you're enjoying my channel. :-) I'll make a video about Supermicro backplanes in the coming weeks. In short, I don't think you want to "hard wire" stuff in that chassis; it'll be quite a mess, and in the end you'd be better off getting a proper backplane. If you account for all the wires, adapters, cables, and SAS controllers needed for a "hard wire" setup, it'll probably add up to the same as just getting the backplane, and you'd end up with a less ideal setup. I think the best approach is a BPN-SAS2-846EL1 with 1 SAS controller.

    • @ragingskittlez · 2 years ago

      @@ArtofServer OK. That's what I came to as well. Spent the last 12 hours or so doing a bunch of research. What about using this chassis and installing 2 GPUs in it and not installing an ATX PSU?
      My only issue is the PDU. I've spent 2-3 hours looking into it and haven't come up with anything useful. I have a 3090, which uses 3 8-pins, and I want to get another GPU that I can pass through to multiple VMs and utilize to almost its full potential in those. So probably another 2 8-pins. Then the HBA card. I know I'm going to be running very thin on PCIe lanes and room for the GPUs to stay cool. Not sure if it's even feasible on the X570 platform. I'd like to do it with what I have now before I go out and sell what I have to build a new rig.
      I do know that I probably won't be able to use CUDA when passing the other card through to other VMs because of Nvidia's will to not let me own my own hardware LOL. Craft Computing did a video on it and explained it.
      I'll be running Unraid or Proxmox. Not sure which one would work best, but I'm leaning towards Unraid as I'm familiar with how it works.

  • @Mr_Meowingtons · 1 year ago +2

    Good video. All my bandwidth is limited to my network speed anyway.
    Wow, I looked at used 4TB SAS drives and I can get 10 for $170; if I get 8 4TB WD Reds I'll be over $800.
    I'm almost willing to take a chance and get 20 4TB SAS drives and weed out the bad ones; even if half are bad, I'm ahead.

    • @ArtofServer · 1 year ago +1

      Indeed, I've talked about using SAS drives here ua-cam.com/video/QtvJA9mHNjw/v-deo.html
      I find them generally more reliable and longer lasting than SATA.

  • @uMalice · 1 year ago +1

    How do you remove ZFS from a drive? I want to recycle some hard drives from a server, but the ZFS pool reappears even after zero-filling and reformatting the drives.

    • @ArtofServer · 1 year ago

      that doesn't sound right. if you've filled the entire drive with zeros, there should be no more ZFS on it.

  • @MarkDeSouza78 · 2 years ago +5

    Couldn't you also have used 2 controllers (PCIe 2) instead of a newer controller (PCIe 3)?

    • @ArtofServer · 2 years ago +4

      That's a good question, and probably worth a video on its own. But what you're suggesting is a dual HBA system. This can provide redundancy, although it's usually configured with dual SAS expander backplanes and requires multi-path configuration, or the drives will show up twice and you could have data corruption if you are not careful. Also, for true "dual path", only SAS drives or SATA drives with interposers are supported. So, in short, it's not that simple.
      Thanks for watching and asking a good question! :-)

  • @tsonglin3890 · 1 year ago +1

    Thanks for your video, very helpful.
    I am going to build a home storage server: Silverstone CS381 case + i3-12100 + MSI B660M. The CS381 has an SFF-8643 connector for each 4-HDD cage, and I want to use a SAS3008 card. Can I put the SAS3008 (PCIe 3.0 x8) into a PCIe 3.0 x4 slot (x16 form factor)? Will the SAS3008 work within the 4000MB/s PCIe 3.0 x4 bandwidth limitation, or not work at all? Any experience?
    Thanks for your help.

    • @ArtofServer · 1 year ago

      It can work with limited PCIe bandwidth. All PCIe cards can work with fewer lanes, as the link width is negotiated during startup.

    • @tsonglin3890 · 1 year ago

      @@ArtofServer thanks for your help.

    • @tsonglin3890 · 1 year ago

      @ArtofServer Thanks. Is there any SAS PCIe x16 card with 4 ports? 2 ports would go to SFF-8643 for 8 HDDs, and another 2 ports would connect 2 Toshiba U.2 drives. Second-hand U.2 prices are cheap now; a 7.68TB drive only costs about USD 350.
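As a rough check on the PCIe 3.0 x4 figure quoted in this thread, a sketch using the standard 8 GT/s line rate and 128b/130b encoding (the function name is made up for illustration):

```python
def pcie_mbps(gt_per_s: float, lanes: int, enc_payload: int = 128, enc_total: int = 130) -> float:
    """Approximate usable PCIe bandwidth in MB/s.

    PCIe 3.0 signals at 8 GT/s per lane with 128b/130b encoding,
    so each lane carries roughly 8 * 128/130 Gbit/s of payload.
    """
    return gt_per_s * 1000 * (enc_payload / enc_total) / 8 * lanes

x4 = pcie_mbps(8, 4)  # PCIe 3.0 x4: ~3938 MB/s, i.e. the ~4000MB/s in the question
x8 = pcie_mbps(8, 8)  # PCIe 3.0 x8: twice that
print(f"x4 ~ {x4:.0f} MB/s, x8 ~ {x8:.0f} MB/s")
```

So a SAS3008 dropped into an x4 slot keeps roughly half its slot-side headroom, consistent with the reply above.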

  • @interactivesage4609 · 2 years ago

    Hey @The Art Of Server, I was wondering if you have any tips on purchasing a RAID controller: which PCIe version I should look for, and whether I should look for one with cache. Also, why is cache important in a RAID controller?

    • @ArtofServer · 2 years ago

      I'm not sure I fully understand your question. If you're not sure what a RAID controller is vs an HBA controller, you might want to checkout this video: ua-cam.com/video/xEbQohy6v8U/v-deo.html
      I don't have any video that explains RAID controllers in detail or how to shop for them. I typically focus on HBA type controllers on this channel, but I'm going to add that to my list of future content.
      Thanks for watching!

  • @redhat4ua · 2 years ago +1

    Can you suggest good used SAS3 SSDs to look for on eBay?

    • @ArtofServer · 2 years ago +1

      I really like the HGST HUSMM series personally.

  • @evlqueen777 · 1 year ago

    I have a truenas server with 8 12tb SAS HDDs and I wonder after watching this video if it would be any use to upgrade my network any further. I'm running a 2.5gbe switch and NICs now. I think the weak link is the PCIe lane because I'm on an AMD 785G motherboard in the server. Do you have any thoughts on this?

  • @binks3371 · 1 year ago +1

    How about NVMe?

    • @ArtofServer · 1 year ago

      good question. NVMe doesn't require a controller (it's a PCIe device) so that's a totally different beast. In that case, you can mostly just consider the benchmarks of the NVMe drive.

  • @Gastell0 · 1 year ago +1

    I just go with a rule of thumb of 250MB/s for HDDs and 600MB/s for SATA SSDs, which works out to the same 2Gb/s.

  • @orthodoxNPC · 2 years ago

    4:35 The outer edge doesn't matter; the data is spread out farther on the edge to maintain consistency, just like optical media.

    • @saccharide · 2 years ago +2

      The outer edge is definitely faster

    • @orthodoxNPC · 2 years ago

      @@saccharide Oh yeah? Let's see how fast it is at 100% capacity; then empty "the outer edge" and empty the "inner edge" and let's compare that to the manufacturer's literature... maybe three partitions would be equivalent, but that is not clear. All drives are faster when empty, yes; let's introduce some nuance into this equation.

    • @saccharide · 2 years ago +1

      @@orthodoxNPC Yes, tracks on the outside have more data than those on the inside and thus have a higher bitrate per revolution. See "Zone bit recording"

    • @orthodoxNPC · 2 years ago

      @@saccharide No, they actually don't; the sectors are spaced out more on the edge.

    • @saccharide · 2 years ago

      @@orthodoxNPC I've already pointed you in the right direction. Why don't you do your own testing and research?

  • @zjimenez2885 · 2 years ago

    What's bandwidth? I have always struggled with that term.

    • @IM_A_BEAR_LOL · 2 years ago +2

      Think about the path the data is taking like a pipe. The wider the pipe the more data can pass. The maximum amount of data that can pass at a given interval is the "bandwidth" of that data path.
      Data paths can take many forms: wireless, electrical, optical, etc, so bandwidth doesn't describe any physical attributes. It describes the maximum potential for data to be signaled across any given medium.

    • @wayland7150 · 2 years ago +1

      In terms of carrying people from A to B the motorbike has greater speed than the bus. But the bus carries 50 passengers and the motorbike carries one. So the bus carries 50 passengers at 60mph and the motorbike carries one passenger at 120mph. You'd need 50 motorbikes to do the job of one bus but the bikes would get the passengers there in half the time. The bus has 25 times the bandwidth of one motorbike.
      It's why a bunch of slow hard drives can move data as fast as an SSD.
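The bus-vs-motorbike analogy above boils down to a simple throughput product; a tiny sketch using the comment's numbers (function and variable names are illustrative):

```python
def throughput(carriers: int, payload_each: int, speed_mph: int) -> int:
    # Passenger-miles per hour, standing in for bytes per second:
    # total capacity in motion is count x payload x speed.
    return carriers * payload_each * speed_mph

bus = throughput(1, 50, 60)    # one bus: 50 passengers at 60 mph
bike = throughput(1, 1, 120)   # one motorbike: 1 passenger at 120 mph
print(bus / bike)  # 25.0 -> the bus has 25x the "bandwidth" of one motorbike
```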

  • @andre_warmeling · 1 year ago +1

    Then plug it all into an Atom server!!!

  • @Maxx-qq8lh · 1 year ago +2

    Interesting to contrast this design limit of SAS-3 of around 10Gbps (actually an ENTERPRISE storage concept) with the 32Gbps ceiling of NVMe Gen3 x4 PROSUMER devices (e.g. Samsung SSD 970 EVO Plus, Samsung 970 PRO). And there are PCIe Gen3 x8 AOCs (add-on cards) available, such as the PM-1725, that can reach 64Gbps. Especially in very conservative business settings, RAID still seems the way to go, but that technology appears to me a dead end, a "ghost from the past", trying to solve yesterday's problems (bad sectors, crashed HDDs, data integrity) with today's technology. Future solutions to these storage persistence and data integrity problems might look entirely different, without having to endure the performance problems and overhead introduced by RAID management. Using flash RAM the way we used magnetic discs increasingly proves to be a rather wasteful effort!

    • @ArtofServer · 1 year ago +1

      thank you for sharing your thoughts! :-)

  • @soundfire79 · 2 years ago

    Wouldn’t 2-4-8 be simpler?

  • @QrchackOfficial · 2 years ago +4

    One more point: setting aside the fact that these are sequential transfers, you're also using full-duplex SAS connections here. If you're using SATA drives, and going the older enterprise SATA HDD route (like the SATA variant of the HGST Ultrastar 7K4000 that I go for), then the HDD itself caps at 125MB/s sequential, or 1Gbps per HDD. And in real life (aka everything other than moving huge files, 20GB+ each) you'll see more like 30-60MB/s, so 1/4 to 1/2 of that, bringing it to more like 0.25-0.5Gbps per HDD. At that point, 48Gbps (a 2-port SAS2008 HBA) is easily enough for 96 to 192 drives - and for situations where you really hammer a single drive sequentially, you're going to have the spare bandwidth anyway, since the other drives are unlikely to go full tilt sequential at the same time. Not to mention that unless you're doing 10Gb networking, you're limited to 125MB/s over the network anyway. No need to sweat it at all.
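The drive-count arithmetic in the comment above can be sketched as follows (the helper is made up; it assumes the 48Gbps figure for a 2-port SAS2008 HBA and the real-world per-drive rates quoted):

```python
def drives_per_hba(hba_gbps: float, per_drive_gbps: float) -> int:
    """How many drives at a given sustained rate an HBA can feed before saturating."""
    return int(hba_gbps // per_drive_gbps)

HBA_GBPS = 48.0  # 2-port SAS2008: 8 lanes x 6Gbps

print(drives_per_hba(HBA_GBPS, 0.25))  # ~30MB/s real-world workloads -> 192 drives
print(drives_per_hba(HBA_GBPS, 0.5))   # ~60MB/s real-world workloads -> 96 drives
print(drives_per_hba(HBA_GBPS, 1.0))   # 125MB/s full sequential      -> 48 drives
```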

  • @jonathanbuzzard1376 · 2 years ago +2

    What you miss in sizing all this is the connection out the back of the server. No point in having whizz-bang speeds in the server if you are only on a 1Gbps connection for a storage server. I would at this point note that a dual-port SAS2 card has 8 lanes of dual-ported 6Gbps, so 96Gbps of throughput to the drives, which is more than good enough for a NAS with a 10Gbps connection.

    • @ArtofServer · 2 years ago +4

      That's a good point that I missed for sure, especially since many storage servers are NAS. However, that said, even if you're serving just a 1Gb connection, and you don't need more than 1Gb performance for the NAS functions (CIFS, NFS, iSCSI), having faster storage internally still has several advantages. For example, when restoring lost redundancy (drive failure, etc.), a faster storage subsystem will allow quicker time to recovery, or when performing data integrity checks (zfs scrub), the checks will complete faster, etc. There are still many "internal I/O" operations that will benefit from a properly sized storage subsystem.
      Thanks for pointing out an important aspect I missed. :-)

    • @jonathanbuzzard1376 · 2 years ago

      @@ArtofServer Yes, having more internal bandwidth than you need does speed up some internal housekeeping, but as I pointed out, a dual-port SAS2 card has 96Gbps of internal bandwidth (presuming you install dm-multipath or equivalent to take advantage of the dual-port nature of SAS), which is nearly 10 times the external bandwidth if you have a 10Gbps network connection. I would note most people don't use dm-multipath and waste half the bandwidth of their system.
      Generally speaking, if you are using spinners (I refuse to use the term "rust" because there have been no iron compounds on hard disk platters for over 20 years now), SAS3 only makes sense when you have lots of external enclosures or are using SAS expanders with lots of disks. My day job involves looking after several storage systems with large numbers of hard drives (like hundreds) in external enclosures, and I have decades of experience in this. Trust me, 99% of people watching this video don't need SAS3.
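The internal-vs-external comparison in this thread, reduced to numbers (a sketch; the 96Gbps figure assumes dual-ported operation via dm-multipath, as the commenter notes):

```python
# Internal SAS bandwidth vs. external network bandwidth for a NAS,
# using the commenter's figures.
lanes, lane_gbps, paths = 8, 6, 2
internal_gbps = lanes * lane_gbps * paths  # 96 Gbps to the drives
network_gbps = 10                          # 10GbE uplink
print(internal_gbps / network_gbps)        # 9.6 -> "nearly 10 times" the external bandwidth
```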

  • @binks3371 · 1 year ago +1

    get that mole checked out

    • @ArtofServer · 1 year ago +1

      thanks for the concern. it's not a mole. when i was a kid, i lived in a place that hated people that look like me. and someone stabbed me with a pencil at school, among many other altercations. I've tried to dig it out with a knife when I was younger, now it's just a battle scar. LOL

    • @binks3371 · 1 year ago

      @@ArtofServer lol, i have one of those too on my palm. It looked like a strange mole, now i know why.

  • @shephusted2714 · 2 years ago +1

    This is just plain nonsense. The SMB market wants to go to 100GbE and NVMe; 10G and 2.5G are decent fallbacks. The biggest bottleneck is generally ISP bandwidth. For compute and reverse proxy apps you want as fast as you can manage.

    • @j_taylor · 2 years ago +3

      This video is about storage. I think you're talking about something else.