Using dRAID in ZFS for Faster Rebuilds on Large Arrays

  • Published Sep 28, 2024

COMMENTS • 19

  • @Mikesco3
    @Mikesco3 9 months ago +12

    I really like your deep dives into these topics. You're one of the few YouTubers I've seen who actually knows what's being presented...

    • @dominick253
      @dominick253 8 months ago +2

      Apalrd's Adventures is really knowledgeable as well.

    • @andrewjohnston359
      @andrewjohnston359 6 months ago +1

      @@dominick253 true, and Wendell from level one techs

  • @makouille495
    @makouille495 9 months ago +3

    How the hell do you manage to make everything so crystal clear for noobs like me, haha. As always, quality content and quality explanations! Thanks a lot for sharing your knowledge with us! Keep it up! 👍

  • @FredFredTheBurger
    @FredFredTheBurger 10 months ago +2

    Fantastic video. I really appreciate the RaidZ3 9 disk + spare rebuild times - and the mirror rebuild times. Right now I have data striped across mirrors (Two mirrors, 8TB disks) that is starting to fill up and I've been trying to figure out the next progression. Maybe a 15 bay server - 10 bays for a new Z3 + 1 array, leaves enough space to migrate my current data to the new array.

  • @zyghom
    @zyghom 10 months ago +5

    imagine: I only use mirrors and stripes but I am still watching it ;-)

  • @TheExard3k
    @TheExard3k 10 months ago +3

    If I had like 24 drives, I'd certainly use dRAID. Sequential resilver....just great, especially with today's drive capacities.

  • @boneappletee6416
    @boneappletee6416 8 months ago +1

    This was a very interesting video, thank you for the explanation! :)
    Unfortunately I haven't had the chance to really play around with ZFS yet; most of the hardware at work uses hardware RAID controllers. But I'll definitely keep dRAID in mind when looking into ZFS in the future 😊

  • @Spoolingturbo6
    @Spoolingturbo6 6 months ago

    @2:15 can you explain how to set that up, or give a search term to look it up?
    When I installed Proxmox, I split my 256GB NVMe drive up into the following GB sizes (120/40/40/16/16/1/.5) (main, cache, unused, metadata, unused, EFI, BIOS).
    I knew about this, but I'm just now at the stage where I need to use metadata and small files.
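
    The feature asked about here is ZFS's "special" allocation-class vdev, which keeps pool metadata (and optionally small file blocks) on fast storage. A minimal sketch, assuming a hypothetical pool named tank and spare NVMe partitions:

```shell
# Add a special allocation-class vdev for metadata.
# Pool name (tank) and partition paths are hypothetical examples.
# A lone special vdev is a single point of failure for the whole
# pool, so it should normally be mirrored:
zpool add tank special mirror /dev/nvme0n1p4 /dev/nvme1n1p4

# Optionally also route small file blocks (here <= 32K) to the
# special vdev, set per dataset:
zfs set special_small_blocks=32K tank/data
```

    Useful search terms: "ZFS special vdev" and "special_small_blocks".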

  • @inlandchris1
    @inlandchris1 26 days ago

    Why not use a good quality RAID card with…16 ports? Use 8 hard drives in a RAID array, with 8 SSDs wrapped around the spinning drives? That solves the latency problem and really speeds things up.

  • @wecharg
    @wecharg 9 months ago +4

    Thanks for taking my request, that was really cool to see! I ended up going with Ceph, but this is interesting and I might use it in the future! -Josef K

  • @awesomearizona-dino
    @awesomearizona-dino 10 months ago +4

    Upside down construction picture?

    • @ElectronicsWizardry
      @ElectronicsWizardry  10 months ago +4

      I didn't realize the picture looks odd in the video. The part of the picture that is visible in the video is a reflection, and the right-side-up part of the picture is hidden.

  • @Mikesco3
    @Mikesco3 9 months ago +2

    I'm curious if you've looked into ceph

    • @ElectronicsWizardry
      @ElectronicsWizardry  9 months ago +1

      I did a video on a 3-node cluster a while ago and used Ceph for it. I want to do more Ceph videos in the future when I have the hardware to show Ceph and other distributed filesystems in a proper environment.

    • @andrewjohnston359
      @andrewjohnston359 6 months ago +2

      @@ElectronicsWizardry I would love to see that. There are zero videos I can find showing a Proxmox+Ceph cluster that aren't homelabbers in either nested VMs or on very underpowered hardware as a 'proof of concept' - and once it's set up, the video finishes!! I have in the past built a reasonably specced 3-node Proxmox cluster with 10Gb NICs and a mix of SSDs and spinners to run VMs at work. It was really cool - but the VMs' performance was all over the place. A proper benchmark, a deep dive into optimal Ceph settings, and emulating a production environment with a decent handful of VMs running would be amazing to see!

  • @marconwps
    @marconwps 17 days ago

    12 HDDs in my pool; I'll try dRAID as soon as I can. TrueNAS support confirmed?

    • @ElectronicsWizardry
      @ElectronicsWizardry  17 days ago

      I'm pretty sure TrueNAS has dRAID support, as I've seen it as an option when making pools. dRAID makes a good amount of sense with 12 drives.

  • @severgun
    @severgun 8 months ago

    Why are the data sizes so weird? 7, 5, 9? None of them is divisible by 2.
    Why not 8d20c2s?
    Because of the fixed stripe width, I thought it would be better to comply with the 2^n rule. Or am I missing something?
    How does compression work here?
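
For context on the naming in this comment: dRAID vdev names encode parity level, data disks per redundancy group, total children, and distributed spares. A hedged sketch of the layout the commenter proposes (pool and device names are hypothetical):

```shell
# dRAID vdev naming: draid<parity>:<data>d:<children>c:<spares>s
# Double parity, 8 data disks per redundancy group, 20 children,
# 2 distributed spares (needs 20 disks; names are hypothetical):
zpool create tank draid2:8d:20c:2s /dev/sd{a..t}
```

On the compression question: dRAID uses a fixed stripe width, so allocations are padded out to a full stripe. As a result, compressed or small blocks tend to save less space than they would on raidz, which is one reason smaller data widths are sometimes chosen despite not following a 2^n rule.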