How Fast is the Fastest Server We Ever Built?

  • Published 28 Jul 2024
  • Every second week, we release a tech tip video covering various topics related to our storage products.
    In case you haven't heard, the next-generation Stornado (F2) is now available for purchase and begins shipping next week. This is the fastest storage server we have ever released, by a large margin, and it features 32 U.3 NVMe bays in a 2U chassis. How fast is it? To understand speed, we need to look at the system's throughput and IOPS.
    This week, Brett and Mitch are back to talk about the differences between throughput and IOPS so you can truly understand the speed of the next-gen Stornado. What is unique about it? Why is the NVMe Stornado so much more powerful than an HDD system? Check out this week's episode to learn more.
    ---
    Chapters:
    00:00 - Introduction
    00:35 - What is unique about our third-gen NVMe Stornado?
    02:14 - What electronics have changed from the 2nd-gen SATA Stornado?
    04:10 - Understanding speed with throughput versus IOPS
    09:10 - Why is the NVMe Stornado more powerful than an HDD system?
    11:25 - How much performance is in an NVMe Stornado compared to a Storinator Q30?
    15:55 - Outro
    Visit our website: www.45drives.com/
    Learn more about our Stornado line of storage servers: www.45drives.com/products/sto...
    Check out our GitHub: github.com/45drives
    Read our Knowledgebase for technical articles: knowledgebase.45drives.com/
    Check out our blog: www.45drives.com/blog
    #45drives #storinator #stornado #storageserver #serverstorage #singleserver #storagenas #nasstorage #networkattachedstorage #proxmox #virtualization #cephstorage #storageclustering #virtualmachines #cephcluster #storagecluster #ansible #prometheus #samba #cephfs #allflash #ssdstorage #ssdserver #allflashserver #allflashstorage #zfs #ransomwareprotection #linux #linuxtraining #selfencryptingdrives #jbod #justabunchofdisks #justabunchofdrives #serverexpansion #esxi #migratevirtualmachines #ssd #ssdtrim #sed #dataencryption #harddrives #harddrivehandling #seagate #westerndigital
  • Science & Technology

COMMENTS • 25

  • @jimsvideos7201
    @jimsvideos7201 6 months ago +3

    Block size is the size of the plastic bin in your cupboard. An IO is taking a bin down, getting something from it and putting it back. Throughput is the total number of things taken out of the cupboard per unit time.
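    The cupboard analogy maps onto the usual formula: throughput = IOPS × block size. A minimal Python sketch of that relationship, using made-up drive numbers purely for illustration (not measured Stornado figures):

    ```python
    # Throughput (bytes/s) = IOPS * block size (bytes).
    def throughput_mb_s(iops: float, block_size_kib: float) -> float:
        """Convert an IOPS figure at a given block size into MB/s."""
        return iops * block_size_kib * 1024 / 1_000_000

    # Hypothetical numbers, for illustration only:
    for block_kib, iops in [(4, 500_000), (128, 50_000), (1024, 7_000)]:
        print(f"{block_kib:>5} KiB blocks @ {iops:>7} IOPS -> "
              f"{throughput_mb_s(iops, block_kib):8.1f} MB/s")
    ```
    The same drive can look "slow" in IOPS terms at large blocks yet still move far more data per second, which is exactly the throughput-versus-IOPS distinction the video draws.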

  • @ArmoredTech
    @ArmoredTech 6 months ago

    It's reassuring to see that someone has the same handwriting as myself 🙂 -- Good explanation

  • @visheshgupta9100
    @visheshgupta9100 6 months ago +3

    A big fan of 45Drives, you guys are doing a fantastic job. Can you please explain the following:
    1. In the video you talk about NVMe drives being able to handle 4,000 VMs; can you also explain how many VMs a 64-core CPU can handle? Do the VMs share CPU cores?
    2. What is the maximum size of SATA drive available on the market today, versus the maximum size of NVMe drive available?
    3. What is the power draw of a SATA drive versus an NVMe drive?
    4. What is the idle power consumption of the server with 32 NVMe drives installed?

    • @glmchn
      @glmchn 6 months ago +2

      Your VMs can have the compute in a dedicated box and just hit the storage elsewhere as a SAN. Anyway, they said at some point that this is theoretical, just to explain the scale of power of this kind of solution, so no need to be too picky on accuracy.

    • @nadtz
      @nadtz 4 months ago

      On the maximum drive sizes and power draw: SATA is 24 TB and NVMe up to 30 TB right now, last I checked. For power, SATA is about 5 W idle to ~7-10 W under load; NVMe idle is about the same but can hit as high as 18 W depending on the drive. From there, with some math, the idle/max power consumption can be figured out, though you will also have to account for HBAs, network cards, and whatever else as necessary.
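      Following the ballpark wattages in that reply, a rough drives-only power budget for a fully populated 32-bay chassis can be sketched like this (the per-drive numbers are the approximations above, not vendor specs, and CPU/RAM/HBAs/NICs are excluded):

      ```python
      # Rough idle/peak drive power budget for a 32-bay chassis,
      # using the approximate per-drive wattages from the reply above.
      BAYS = 32

      profiles = {
          # name: (idle watts per drive, max watts per drive), approximate
          "SATA drive": (5.0, 10.0),
          "NVMe drive": (5.0, 18.0),
      }

      for name, (idle_w, max_w) in profiles.items():
          print(f"{name}s: ~{BAYS * idle_w:.0f} W idle to ~{BAYS * max_w:.0f} W peak "
                f"(drives only; CPU, RAM, HBAs and NICs come on top)")
      ```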

  • @kingneutron1
    @kingneutron1 6 months ago +2

    Pertinent questions:
    Can it saturate a 100 GbE fiber network? (Probably / assuredly.) If it does this easily, what is the next step beyond that? (See the rough arithmetic sketch after this list.)
    How many simultaneous users for SMB / NFS shares?
    Simultaneous 8K video editing / transcoding benchmarks?
    ZFS benchmarks? (As much of a ZFS fan as I am, my understanding is they still need to tweak the code for NVMe / non-spinning speed.)
    Potential zpool layouts (don't forget ZIL mirror / L2ARC)? Mirrors would obviously be fastest but you lose capacity; how would you recommend setting up RAIDZx vdevs, or possibly dRAID?
    How many SQL transactions per second?
    How many Linux kernel compiles per hour?
    How many $big-open-source-project compiles per hour (Firefox, Gentoo, LibreOffice, etc.)?
    How long would it take to compile the entire Debian distro (all packages) from source?
    Can it potentially replace X model/series of mainframe?
    --TIA, just some stuff to think / brag about :)
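    On the 100 GbE question above, the back-of-the-envelope arithmetic is simple enough to sketch; the per-drive sequential speed below is an assumed PCIe 4.0 x4 figure, not a benchmark of this server:

    ```python
    # Can an all-NVMe box saturate 100 GbE? Back-of-the-envelope check.
    LINK_GBPS = 100                      # network link, gigabits per second
    LINK_GB_S = LINK_GBPS / 8            # ~12.5 GB/s payload ceiling

    DRIVE_SEQ_GB_S = 7.0                 # assumed PCIe 4.0 x4 NVMe sequential read
    DRIVES = 32

    aggregate = DRIVES * DRIVE_SEQ_GB_S  # raw aggregate, ignoring CPU/parity overhead
    print(f"Link ceiling:    {LINK_GB_S:.1f} GB/s")
    print(f"Drive aggregate: {aggregate:.1f} GB/s "
          f"({aggregate / LINK_GB_S:.0f}x the link)")
    ```
    On paper even a handful of drives out-runs a single 100 GbE link; the practical limits become CPU, the sharing protocol (SMB/NFS/iSCSI), and parity or replication overhead.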

    • @MarkRose1337
      @MarkRose1337 6 months ago

      The drives do very little for compiling.
      Netflix cache servers are already serving encrypted video at 400 Gbps using a 32-core 7502P Epyc, a CPU from 2019. They did require some tuning to get there, especially around reducing memory bandwidth usage. I bet they're getting 800+ Gbps in the lab already using a modern 32- or 48-core Epyc, which has 2.25 times the memory bandwidth of the older 7502P.

  • @dj_paultuk7052
    @dj_paultuk7052 6 months ago

    We are using NVMe drive arrays in the data center I work in, far bigger than this. Some of the units we have use 75 NVMe drives. Cooling is an issue, as they generate a ton of heat when packed so densely.

  • @polygambino
    @polygambino 6 months ago

    Good video and good questions. A few minor technical inaccuracies, such as IOPS decreasing as the block size gets larger, but that's nitpicking. While I don't represent 45Drives, the number of VMs you can run on a 64-core system always depends on the use case for the VM, the application requirements, the hypervisor's VM maximum, plus the performance the storage can provide. There are 100 TB SATA drives on the market, but you can get 61 TB NVMe drives. And lastly, NVMe will pull more power when it's being used, simply because it's that much faster and needs more energy. But always check the spec sheets for the devices that you want to buy and use. Then look at the number of VMs you can run for the IO and power it takes if you use NVMe vs. SATA. You will get a better picture of the cost and power for the performance of the VMs.

    • @mitcHELLOworld
      @mitcHELLOworld 5 months ago

      IOPS do actually decrease as the block size increases, but the reason is different for HDDs vs. solid state. When we're talking NVMe or SSD, it is typically simply because of the bandwidth limitations of the connection (SATA / PCIe 3/4 x2/x4). In regards to HDDs, Seagate EXOS drives are rated for 440 read IOPS, but you will have a very hard time getting 440 1 MB read IOPS out of one, I think you'll find! In regards to the number of VMs you can run on a 64-core CPU, of course this is true! However, we may not have done a sufficient job explaining: we are not speaking with the intention of the VMs being run on the storage server, but instead of this server being the storage back-end for dedicated hypervisors. Hope this clears it up! - Sincerely, one of the guys in the video! haha
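      To make that bandwidth-ceiling point concrete: once block size × IOPS reaches the interface limit, IOPS has to fall as blocks grow. A small sketch with rough, assumed interface ceilings (round numbers, not figures from the video):

      ```python
      # Maximum achievable IOPS at a given block size once the interface
      # bandwidth (rather than the drive's command rate) becomes the limit.
      INTERFACES_MB_S = {
          "SATA 6 Gb/s": 550,     # rough practical ceilings, assumed
          "PCIe 3.0 x4": 3_500,
          "PCIe 4.0 x4": 7_000,
      }

      def iops_ceiling(bandwidth_mb_s: float, block_kib: float) -> float:
          return bandwidth_mb_s * 1_000_000 / (block_kib * 1024)

      for name, bw in INTERFACES_MB_S.items():
          for block_kib in (4, 128, 1024):
              print(f"{name}: {block_kib:>5} KiB -> "
                    f"{iops_ceiling(bw, block_kib):>12,.0f} IOPS max")
      ```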

  • @kingneutron1
    @kingneutron1 6 months ago +1

    BTW, you guys are working on some really cool stuff (and I wish this video was monetized 💎)

  • @64vista
    @64vista 6 months ago +1

    Hi Guys!
    Thanks for the video!
    Do you have any plans to show us the real-life capabilities of this storage with VMware VMs over NFS, iSCSI, and NVMe-oF?
    That would be really good :)
    Thanks!

    • @mitcHELLOworld
      @mitcHELLOworld 5 months ago +1

      We definitely do :) We have some great content coming up in the next month! Be sure to tune in.

  • @glmchn
    @glmchn 6 months ago +1

    Those guys are something 😅

  • @sevilnatas
    @sevilnatas 6 months ago

    How does a ZFS read and write cache improve the rust numbers you talked about? Most of my pools are made from M.2 drives on bifurcation cards, followed by SATA SSDs with read/write caches on Optane drives, and finally a few rust drives with the same Optane setup in front of them. Should I be seeing the type of performance you were talking about?

    • @jttech44
      @jttech44 6 months ago

      ZFS doesn't have a write cache.
      I'll say it again, because people are very confused by this, ZFS does not, in any way, have a write cache.
      Read caching is handled by ARC, and by L2ARC if you've got it, and will be as fast as your RAM or the L2ARC devices. Realistically, if you have NVMe storage, you see no benefit from an L2ARC, but you will see a benefit from adding as much RAM as possible, as cache hits are basically guaranteed to run at wire speed.
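      One way to see why "as much RAM as possible" pays off: the average read service time is dominated by the ARC hit ratio. A toy model with illustrative latencies (assumptions, not measurements):

      ```python
      # Toy model: effective average read latency as the ARC (RAM) hit ratio rises.
      RAM_HIT_US = 1       # assumed: read served from ARC, in microseconds
      NVME_MISS_US = 100   # assumed: read that misses and goes to the NVMe pool

      for hit_ratio in (0.50, 0.90, 0.99):
          avg_us = hit_ratio * RAM_HIT_US + (1 - hit_ratio) * NVME_MISS_US
          print(f"ARC hit ratio {hit_ratio:.0%}: ~{avg_us:5.1f} us average read")
      ```
      With spinning disks a miss costs milliseconds instead of microseconds, which is why RAM (and an L2ARC) matters even more there; on an all-NVMe pool the gap is much smaller.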

    • @sevilnatas
      @sevilnatas 6 months ago +1

      @@jttech44 Hmm, OK, no such thing as read cache, got it. jj 🤣 All my NVMe pools have no caching besides RAM, and my SSD pools will have read caching plus RAM (mentioning RAM just to be annoying), but I do have 128 GB on the NAS, with a 24-core EPYC CPU and more PCIe lanes than you can shake a stick at. I will be putting my VMs on the NVMe pools and files on the SSD pools. I have a very specific need for super-fast small-file access to the last few files written. Not as important as VM speed, but close behind. The rest of the SSD space will be just regular file shares. Also, I will probably want to put a REALLY FAST NVMe disk in as a paging disk for RAM overflow, but that might be overkill with 128 GB of RAM.

    • @jttech44
      @jttech44 6 months ago

      ​@@sevilnatas Depending on your working set size, 128GB of RAM may or may not be enough. Also depending on what your read/write mix is, it can make sense to just have SSD's and spend the extra money on an absolutely massive amount of RAM so you can fit your entire working data set into cache.

  • @djordje1999
    @djordje1999 6 months ago

    EDSFF? Long (E1.L)?

  • @kyleallred984
    @kyleallred984 6 months ago

    Send one to LTT so they can test the 4,000 VM stat.

  • @rhb.digital
    @rhb.digital 6 months ago

    just send me one already 🙂

  • @shephusted2714
    @shephusted2714 6 months ago +2

    They really aren't selling anything here other than industry standards, basically. That is fine, but there's no IP here. They have contributed to open source and seem trustworthy. For small biz, the documentation and support is what they are paying for, not the hardware, really.

    • @mitcHELLOworld
      @mitcHELLOworld 6 months ago +1

      This is actually untrue. We developed and built our own firmware for the microcontroller; I explain this a little bit in the previous NVMe Stornado teaser video. Everything else you mentioned, however, is fairly true. That being said, we are very much leading the industry here with a tri-mode UBM backplane with U.3 NVMe. This is a brand-new platform that, in our research, we were the first to release. Finally, 32 NVMe drives in a single 2U form factor is much less common as well.
      Thanks for the comment!

    • @Anonymous______________
      @Anonymous______________ 6 months ago

      I want that smb.conf file lol... I have tried every combo and can never break 700-900 MB/s on a single thread/client with the latest open-source version of Samba.

  • @elmeromero303
    @elmeromero303 6 months ago

    Where does the CPU/RAM performance for 8,000 "high" VMs come from? And how the heck do you want to connect the compute nodes? I also doubt that all the VMs will run in parallel. OK, maybe the storage server can do 8 million IOPS in a single (or a few) threads, but not with 8,000 threads. Too many bottlenecks: network, storage controllers, etc. Not to mention dedup/compression and all the fancy options that "real" enterprise storage must have...
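    For scale, dividing the thread's headline numbers across that many guests gives the per-VM budget this comment is questioning; a quick sketch using the 8 million IOPS and single 100 GbE figures mentioned above (nothing measured):

    ```python
    # Per-VM budget if 8,000 VMs share the headline figures from the thread.
    TOTAL_IOPS = 8_000_000   # aggregate small-block IOPS claimed in the thread
    LINK_GB_S = 100 / 8      # one 100 GbE link, ~12.5 GB/s
    VMS = 8_000

    print(f"Per VM: {TOTAL_IOPS / VMS:,.0f} IOPS and "
          f"{LINK_GB_S / VMS * 1000:.2f} MB/s over a single 100 GbE link")
    ```
    Roughly 1,000 IOPS per VM is plausible for light guests, but a single NIC leaves each VM only about 1.5 MB/s, which is exactly the network-bottleneck point being made here.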