Forbidden Arts of ZFS | Episode 3 | ZFS and mixed size drives

  • Published 29 Aug 2024

COMMENTS • 142

  • @ThomasTomchak
    @ThomasTomchak 4 years ago +24

    This series has really opened up my thinking about ZFS in ways I wouldn't have thought of. This is another great one. I love how you used graphics to help visualize it too. This was clearly a lot of work. I feel like a broken record but NICELY DONE. I'm starting to think I might be able to identify your hands in a lineup. :)

  • @fearian
    @fearian 2 months ago +4

    I have a pile of old hard drives in odd sizes. All I want to do is throw together a big ol' NAS to store junk in. I'm not an enterprise-level operation. I'm not making it for anything else. There's nothing critical in this data. It's all duplicated on my desktop (the majority of it is also sitting on Dropbox and Google Drive!)
    But whenever I want to do something that is not the *absolutely correct* way, Home Lab/Home Networking communities will refuse to answer a question, and tell you what you *should* be doing instead. It's honestly insufferable. The warning you made at the start of this video, and the term "forbidden arts" really just says it all. Thanks for treating people like adults.

    • @ArtofServer
      @ArtofServer  1 month ago +1

      I'm glad you found this. I definitely know the feeling and share your sentiments. It's part of the reason why I made this series of videos. Thanks for watching!

  • @joefriday5903
    @joefriday5903 4 years ago +15

    As soon as I realized you were going to use partitions to slice up the drive it hit me what a sneaky so and so you are. Nicely done.

  • @Neolith100
    @Neolith100 1 year ago +4

    This was amazing! I have so many mixed drive sizes... I had no idea I could use them in this way! Thank you!

    • @ArtofServer
      @ArtofServer  1 year ago

      Thanks. Keep in mind, there are drawbacks to using the ideas in this series of videos. It's only meant to show what is possible, not the "best" configuration. Hope you enjoy the channel and thanks for watching!

  • @stokesfamily1047
    @stokesfamily1047 4 years ago +3

    ...AKA AZDNice here! Glad I'm a Marvel junkie... "Always wait for the last credit to roll!" ...otherwise I would have missed the message. Don't thank me! It's the least I can do for the time and videos you create for this self-taught, long-time PC enthusiast who recently made the change to servers while studying for my RHCSA... in the home lab YOU helped me build. In the meantime, while I wait for my H710 mini HBA and other goodies to get here, it's time to play with partitions and ZFS! Thanks again, and I appreciate the knowledge in this series!

  • @nalle475
    @nalle475 1 year ago +1

    I'm using partitions for ZFS but in a much more conservative way. Your videos get me thinking about ZFS more freely. Thanks for making me open up to experimenting with ZFS. Really appreciate your videos.

    • @ArtofServer
      @ArtofServer  1 year ago

      Glad this gave you some new ideas! Thanks for watching!

  • @sharkbytefpv4326
    @sharkbytefpv4326 3 years ago +2

    Most interesting ZFS video I've watched this month!

    • @ArtofServer
      @ArtofServer  3 years ago

      Glad you enjoyed it. Thanks for watching!

  • @Alfamoto8
    @Alfamoto8 3 years ago +4

    OK, thanks to you I finally got what I wanted. I now have my junk PC with a 1.5TB pool made of old rubbish HDDs of different capacities, from 80GB to 300GB...!!!
    I split them into 40GB partitions and got my ZFS pool of one large vdev. Thanks a hundred times!

    • @ArtofServer
      @ArtofServer  3 years ago +5

      Apparently you didn't pay attention to the warning at the beginning of this video, or you are as foolishly brave as me. LOL

    • @Alfamoto8
      @Alfamoto8 3 years ago +4

      @@ArtofServer Oh, I don't know if I qualify, but once I had all my data on a clicking drive for almost a month without backup... LOL... Seriously, now I don't care; it's the junk PC with all the old HDDs and it serves as a 3rd-or-so backup.
      The problem I had is that I wanted all drives to appear as one. It's much more convenient to share one drive for the backups...
      Besides... where is the fun if you always go by the book... right?

    • @ArtofServer
      @ArtofServer  3 years ago +4

      @@Alfamoto8 ha ha... You are welcome to join me on the fringe of the ZFS community lol

  • @madsmith1352
    @madsmith1352 4 years ago +10

    Oh god!!! They're all in the same pool?!! I'm going to go sit in the corner and breathe for a bit.

    • @ArtofServer
      @ArtofServer  4 years ago +1

      Just the reaction I was looking for! LMAO ... take a deep breath.

    • @artlessknave
      @artlessknave 4 years ago

      just split off a new personality to deal with the insanity. amnesiac walls are the only sure way to keep the pool safe.

    • @TiagoJoaoSilva
      @TiagoJoaoSilva 3 years ago +2

      Wouldn't having several partitions from the same spindle share a pool just crash the amount of IOPS you'd get from the array? You've been careful not to put several partitions from the same spindle in the same vdev... Why not put each vdev in its own pool and then try to send different kinds of traffic to each one, so the access patterns overlap less for each spindle?

    • @WorBlux
      @WorBlux 3 years ago +1

      @@TiagoJoaoSilva It depends a lot on how the block layer issues requests and how the drives queue them. If you have a deep queue and re-ordering, it may not work out too badly.
      ZFS will stripe across vdevs in the same pool, but it doesn't do so blindly, and it tends to prefer the faster vdevs, which may mitigate this somewhat.
      Separate pools really only help if you are only hammering one at a time; otherwise your use case gets worse. With the single pool's striping behavior, you're guaranteed to have time somewhere for lagging vdevs to catch up. Separate pools would continually starve each other for IOPS.

    • @Spacefish007
      @Spacefish007 2 years ago

      Why not? :D they are all redundant raidz2s

  • @James-mo5uj
    @James-mo5uj 2 years ago +3

    This guy's a shell wizard.

    • @ArtofServer
      @ArtofServer  2 years ago

      Thanks! And thanks for watching!

  • @awesomearizona-dino
    @awesomearizona-dino 1 year ago +1

    OK, I've just watched the entire FORBIDDEN series. I'm amazed. I like the single ZFS HDD idea. I may implement that.

    • @ArtofServer
      @ArtofServer  1 year ago

      Glad you found this interesting. :-) Thanks for watching!

  • @exid3277
    @exid3277 1 year ago +4

    One more idea for a Forbidden Arts of ZFS episode: using one Intel Optane drive sliced up to serve as SLOG, L2ARC and metadata (special) vdevs simultaneously. Can be useful with a home NAS where there is rarely more than one operation running at the same time.
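
    (A rough sketch of what that could look like, not from the video; the pool name "tank", the device /dev/nvme0n1, and the partition split are all placeholders:)

        # carve the Optane into three slices
        parted -s /dev/nvme0n1 mklabel gpt
        parted -s /dev/nvme0n1 mkpart slog 0% 10%
        parted -s /dev/nvme0n1 mkpart l2arc 10% 50%
        parted -s /dev/nvme0n1 mkpart special 50% 100%
        # attach each slice to the existing pool
        zpool add tank log /dev/nvme0n1p1     # SLOG: only helps synchronous writes
        zpool add tank cache /dev/nvme0n1p2   # L2ARC
        # WARNING: a lone special vdev is a single point of failure for the whole pool;
        # -f is needed because it doesn't match the pool's redundancy level
        zpool add -f tank special /dev/nvme0n1p3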

    • @ArtofServer
      @ArtofServer  1 year ago

      Thanks for the suggestion! And thanks for watching! :-)

    • @AfroJewelz
      @AfroJewelz 3 months ago

      Cool, though while L2ARC and SLOG can be done this way (it won't harm the main data pool if you disconnect those vdevs), a special vdev is so special that once it's bound to the pool it can't be detached. It can only be replaced with a device of the same or larger size, ideally in a mirror or multi-way mirror; otherwise, losing the special vdev means losing all the data in the pool it was bound to. Take care and have fun with those cache vdevs.

  • @RebelPhoton
    @RebelPhoton 4 years ago +6

    Great work but... No benchmarks? :-/ I'm curious to hear the sound this would make on a resilver }:-D

  • @ajshell2
    @ajshell2 3 years ago +7

    This is the most beautiful and most disgusting thing I've ever seen.

    • @ArtofServer
      @ArtofServer  3 years ago +3

      Exactly the reaction I was looking for 🤣

  • @CMDRSweeper
    @CMDRSweeper 2 years ago +2

    I realize I have been breaking a lot of ZFS rules here.
    I use a single disk in my server with ZFS on root; to save on headaches I keep /boot as an ext4 filesystem where the kernel lives, plus GRUB, to make booting easier.
    But the rest of the root filesystem resides in ZFS, all on a single disk.
    It isn't protected, but it does break conventions.
    I do plan in the future to do something equally wacky: /boot on an ext4 mdadm RAID 1 array, and the ZFS partitions in a mirror to ensure uptime.
    Not that it has been a problem for this server's 10 year life so far though.

    • @ArtofServer
      @ArtofServer  2 years ago

      I like thinking "outside the box" ... I don't always recommend it to other people, but if you understand the consequences and are willing to accept the risks, then I think it's fine. I can understand why communities like the TrueNAS folks like to establish "conventions" and "rules of thumb" since they are often working with a lot of newcomers to this type of technology. Those "guard rails" keep people out of trouble they don't know they can get themselves into. But sometimes they get a little too zealous for my tastes.

  • @ishmaelmusgrave
    @ishmaelmusgrave 4 years ago +2

    Mixed-size drives in vdevs are fine; that's what autoexpand is for. Start with the drives you have, and swap in larger ones later as money or deals pop up... I was waiting for the utter trash I/O and bandwidth hit that the multiple RAIDs on the larger drives would take, or to see whether the system would just not allocate to the slower (secondary/tertiary) vdevs until the faster vdevs are full. Nice video though.

    • @ArtofServer
      @ArtofServer  4 years ago

      Since all vdevs are in the same pool, ZFS spreads data proportionately. So, the 16x4TB and 12x4TB raidz2 vdevs will get most of the data since they are the largest. The smaller vdevs will get less of the data. There is definitely a trade-off with performance in this kind of setup; see my response to hescominsoon below.
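
      (If you want to watch that proportional spread on your own pool, something like this shows per-vdev capacity and I/O; "tank" is a placeholder pool name:)

        zpool list -v tank       # size / allocated / free per vdev
        zpool iostat -v tank 5   # per-vdev bandwidth and IOPS every 5 seconds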

  • @DavidStarkers
    @DavidStarkers 1 year ago +1

    I used to do this back in the dark ages, but I still paid the cost of painful upgrades to expand... often I'd simply be stuck with the original combo of disks for ages, once for nearly a decade actually... (No data loss, naturally... because ZFS is baller.)
    Nowadays I have a simpler but less flexible solution: I always pair equal drives together, and I always buy two at a time (at least).
    Then I run mergerfs (FUSE) on all of the zpools to make them logically appear as one...
    Simple, extensible and no waste (except some CPU for FUSE overhead).

    • @ArtofServer
      @ArtofServer  1 year ago +1

      Thanks for sharing your thoughts.

  • @eltoniozamora2898
    @eltoniozamora2898 4 years ago +1

    Well done.... WELL DONE 👏👏👏👏

  • @Elrevisor2k
    @Elrevisor2k 11 months ago +1

    Do we get any performance issues doing this? Some drives could be busy writing to one pool while another app or VM is using a different pool that lives on another partition of the same disk.

    • @ArtofServer
      @ArtofServer  11 months ago +1

      of course. that's one of the reasons I caution against these ideas at the beginning. there are consequences and trade-offs to using these methods. you need to understand how that affects your own use case and decide accordingly.

  • @stevelionheart
    @stevelionheart 3 months ago +1

    This is awesome. I've got only one question, about replacing drives. If I created such a pool, and later I replaced the first four 4TB drives with four 8TB drives, is there any way to "merge" vdev#1 and vdev#2, or am I stuck with both?

    • @ArtofServer
      @ArtofServer  3 months ago +1

      I don't think that is possible yet.

  • @jinxtdc6477
    @jinxtdc6477 4 years ago +4

    Nice vid! Would be scary to lose both 12TB drives. Replacing those and having to resilver it all, muchos IO, no? Thinking out loud here.
    Honestly, I'm ZFS-curious, wanting to use it on the NAS I'm gathering the parts for. Would this be a really bad thing?
    How about the IOPS impact, as @hescominsoon mentioned? Or the fact it's one single pool?

    • @ArtofServer
      @ArtofServer  4 years ago +2

      Yeah... see my response to hescominsoon... it's a very good point, and as with anything, there's a trade-off being made here.
      I'll have to do a video on resilvering one of these days... but you can always stagger the resilvering. Say you replace one of the 12TB drives (4TB+4TB+2TB+2TB): you can start by resilvering only the 1st 4TB partition... then the 2nd 4TB, then the 3rd (2TB), and finally the last 2TB. Or, you can replace with a different size disk; since you're using partitions, you can carve out the size you need from another disk. The partitioning allows more flexibility, so there are creative ways to go about it.
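
      (Purely as a sketch of that staggered approach; the device names and partition layout below are hypothetical:)

        # old 12TB drive was sliced as sdX1/sdX2 (4TB each) and sdX3/sdX4 (2TB each);
        # partition the new 12TB drive sdY the same way with parted first.
        zpool replace tank /dev/sdX1 /dev/sdY1   # resilver the 1st vdev
        zpool status tank                        # wait for the resilver to finish
        zpool replace tank /dev/sdX2 /dev/sdY2   # then the 2nd
        zpool replace tank /dev/sdX3 /dev/sdY3   # then the 3rd
        zpool replace tank /dev/sdX4 /dev/sdY4   # and the last one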

    • @artlessknave
      @artlessknave 4 years ago +4

      No, just take the storage hit with mixed drives and make them all the same type of vdev... just because you can do this doesn't mean you should. And you shouldn't; it adds a ton of complexity for no real benefit. Dump the drives and replace the smallest when you can. Every write to this pool would write to the 12TB drives 4 times, and I would think you could thus expect the 12TB drives to die 4 times as fast. Any time they died, you would have to replace them with more 12TB drives instead of upgrading a smaller drive, or else fuck around with trying to move partitions around and risk destroying the whole thing with a bad command. You would have ~66TB of storage with 8 mirrors; mirrors are easier to manage, and have way more IOPS (depends on use case).

  • @billycroan2336
    @billycroan2336 9 months ago +2

    I did this with RAID 6 before ZFS was viable on GNU+Linux. Each layer was a RAID 6 array except for the highest, which had to be RAID 1, and I think the second highest I made RAID 5 due to the low number of drives. Then every layer became an LVM PV. Didn't run that fast. But it was on IDE drives at the time too, on custom right-angle PATA cables. Brings a tear to my eye. I'm kind of disappointed in ZFS that it can't just handle this natively; it really should be able to work it out during allocation in 2023.

    • @ArtofServer
      @ArtofServer  9 months ago +1

      That's cool! ZFS is continuing to evolve.

  • @mattiashedman8845
    @mattiashedman8845 2 years ago +2

    This really gave me an idea for my next storage server (on-prem backup) with older disks in it: eight 2TB and four 4TB. I am thinking of going with mirrors, creating 4 vdevs out of the 2TB drives and a vdev of the 4TB drives. That should give me circa 32 TB. That's because the 2TB drives are the oldest and a few really need replacing (8 years spinning...).

    • @ArtofServer
      @ArtofServer  2 years ago

      Happy to hear this inspired some ideas!

  • @RingZero
    @RingZero 2 years ago +1

    Great video. Thanks 🤟👍🏼

  • @JaimeZX
    @JaimeZX 3 years ago +2

    Man, now I'd love to see you bust out a video featuring the just-released DRAID. :D

    • @ArtofServer
      @ArtofServer  3 years ago

      Wait, what? DRAID is now stable in a production release of OpenZFS? I didn't hear the news... I've been following DRAID for a while now...

    • @JaimeZX
      @JaimeZX 3 years ago

      I understand they're planning to formally incorporate it into ZFS v2.1, but DRAID is available on Github as of last week; I don't have an array/pool big enough to take advantage yet so I haven't looked into the actual implementation details thus far.
      github.com/openzfs/zfs/commit/b2255edcc0099e62ad46a3dd9d64537663c6aee3

  • @battmarn
    @battmarn 1 year ago +2

    I'm really new to NAS, DAS and ZFS. If one of the drives failed but redundancy was set up as you have it, would all the vdevs be able to rebuild at once if you replaced the failed drive with a new drive of equal or greater size?

    • @ArtofServer
      @ArtofServer  1 year ago +1

      If one of the drives failed and was replaced, then you only need to rebuild *that* vdev that was affected by the fault; no need to rebuild all vdevs. Rebuild usually requires manual intervention, although it is possible to configure automatic rebuilds.
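
      (A minimal sketch of that replacement flow, with placeholder pool and device names:)

        zpool status -x                        # find the faulted device
        zpool replace tank /dev/sdX /dev/sdY   # resilvers only the affected vdev
        # optional: auto-replace when a new disk appears in the same slot
        zpool set autoreplace=on tank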

  • @brofids_
    @brofids_ 5 months ago +1

    I currently have 3x 10TB drives in raidz1.
    If I split each of the drives into two partitions, group them into two raidz1 vdevs, and then stripe those into one pool, will I get better IOPS but worse sequential performance?

    • @ArtofServer
      @ArtofServer  5 months ago +1

      Ultimately, your IOPS is determined by the capability of the hardware. Software layers might add more latency if they are not efficient. So, I definitely do not think you will get higher IOPS. How it will affect various use cases would require testing each of those use cases. The type of configuration demonstrated in this video is not geared towards better I/O performance, but perhaps a possible way towards more efficient space utilization.

  • @yuwuxiong1165
    @yuwuxiong1165 3 years ago +1

    Interesting video, thanks!

  • @dougm275
    @dougm275 2 years ago +2

    This reduces potential e-waste for people who have mixed consumer drives. It just takes a bit of thought.

    • @ArtofServer
      @ArtofServer  2 years ago +1

      That's a good point. That said, a lot of people build huge data stores and only occupy less than 50% of the space.

    • @dougm275
      @dougm275 2 years ago

      @@ArtofServer At least those new-gen drives will be useful for a long time if they're cared for. My WD Greens and an AMD E350 board are from 2012-ish, and they'll make for a perfect little storage box. Wouldn't be saying the same type of thing in 2009, though.

  • @PelDaddy
    @PelDaddy 2 years ago +1

    Awesome. Thanks.

  • @phildegruy9295
    @phildegruy9295 3 years ago +1

    Opened my thinking. Thought: could one follow the same idea, creating partitions as described from a few drives to build raidz2 vdevs, then later actually replace the partitions that make up the vdevs with real drives, either the same size as the partition or, preferably, larger? It seems that ZFS does not really know whether the 'drive' is a partition or a real drive.

    • @ArtofServer
      @ArtofServer  3 years ago +1

      Well, ZFS does check to see if you're giving it a whole disk or just a partition or a file. I believe this affects whether it tries to change the I/O scheduler for the drive (obviously, if not a whole disk, it will not do that). But, besides that, ZFS doesn't care what you give it... so yes, you can replace a partition with a new drive, or vice versa.
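
      (For example, with hypothetical pool and device names, either direction is just a replace:)

        zpool replace tank /dev/sdc3 /dev/sde    # partition out, whole disk in
        zpool replace tank /dev/sde /dev/sdf2    # or whole disk out, partition in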

  • @thomasschneider5983
    @thomasschneider5983 2 years ago +2

    Hi, I have a NAS with mixed size drives:
    4x HDD 4000GB
    1x HDD 1500GB
    1x HDD 1000GB
    1x NVMe SSD 1000GB
    Based on this tutorial I created partitions on the HDD and then a single ZFS pool Raid-Z2 that includes all partitions.
    Now my question is:
    How could I make use of the NVMe drive for cache or SLOG?
    What do you recommend?

    • @ArtofServer
      @ArtofServer  2 years ago

      I wouldn't run a single raidz2 vdev using partitions from the same drive; if that drive fails, your redundancy won't survive it.
      Adding L2ARC and SLOG is a separate matter; there are ZFS tutorials all over the internet for that... but keep in mind SLOG will have no effect if you don't have synchronous I/O requirements.

    • @thomasschneider5983
      @thomasschneider5983 2 years ago

      @@ArtofServer Isn't your tutorial showing how to create a single ZFS pool with RAID-Z2?
      And with regards to SLOG, I was interested in your opinion on using it efficiently, meaning I'm hesitant to use 1TB for cache or SLOG only.

  • @MarkDeSouza78
    @MarkDeSouza78 4 years ago +2

    Once again, nice video. Really enjoying the series. However, you are not magically getting more capacity by doing this. You are just trading operational resiliency for capacity, i.e. 4 drives in RAID-Z2 is not the same as 40 drives in RAID-Z2. Yes, you still have 2 parity drives, but the chance of a drive failure within the vdev has now increased 10x.

    • @yuwuxiong1165
      @yuwuxiong1165 3 years ago

      Say I have four 4TB HDDs. With RAID-Z2 I get 8TB of space and two parity "disks". If I partition each HDD into two 2TB partitions and build RAID-Z2 with eight 2TB partitions, then I have 12TB of space and two parity "partitions". My question is: is the latter any better than RAID-Z1 with 4 HDDs?

    • @NdxtremePro
      @NdxtremePro 3 years ago +1

      @@yuwuxiong1165 It doesn't work like that. If you lose one drive in your setup, you have lost 2 partitions. So yes, you just reinvented raidz. He showed setting up multiple vdevs, not partitioning into one vdev.

    • @yuwuxiong1165
      @yuwuxiong1165 3 years ago

      @@NdxtremePro Got it, thanks!

  • @Jagosix
    @Jagosix 2 years ago +1

    Art of Server - Very unique and informative video. Question: what OS are you using to utilize said drive options? Also, what would be the correct command to evenly partition the same drive? For example, a 2TB drive that needs to be split 4 ways.

    • @ArtofServer
      @ArtofServer  2 years ago

      Thanks for watching! :-) I'm using CentOS 7.x with ZFS on Linux in these videos. The ZFS-related commands should be the same on any ZFS-supported platform.
      As for the partitioning, repeat the same command I used in this video and do 0%-500GB, 500GB-1000GB, etc. Or, I think you can also use 0%-25%, 25%-50%, etc. To confirm, look up the man page for the 'parted' command I used.
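
      (For example, a rough sketch for a 2TB disk at /dev/sdb; the device name and partition labels are placeholders:)

        parted -s /dev/sdb mklabel gpt
        parted -s -a optimal /dev/sdb mkpart zfs1 0% 25%
        parted -s -a optimal /dev/sdb mkpart zfs2 25% 50%
        parted -s -a optimal /dev/sdb mkpart zfs3 50% 75%
        parted -s -a optimal /dev/sdb mkpart zfs4 75% 100%
        parted /dev/sdb print    # verify the four ~500GB partitions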

  • @oso2k
    @oso2k 3 years ago +1

    So... what is the total number of drive failures this pool can sustain before it's impacted? Is it any 4? One drive of each size? Or, as us consultants say, "It depends..." Having a bit of a time trying to understand the fault tolerance. But so awesome! I've been wanting to set up a couple of Dell SC200 trays and I'm considering mixed configurations. This is giving me ideas.

    • @Spacefish007
      @Spacefish007 2 years ago +2

      Two drives can fail at least; afterwards it depends which drive fails. If the first two belonged to the same vdev (had a partition of that vdev in this case) and a third one in that vdev fails, the whole pool is lost. If another drive fails, no problem ;). But resilvering will be really nerve-wracking if you have 3-4 failed drives, as at that point any additional failure leads to catastrophic data loss.
      Also keep in mind that drives from the same batch tend to fail at approximately the same time. If you put a lot of load on the remaining ones, they might fail during the resilver.

    • @devinbarry
      @devinbarry 8 months ago

      Timo is unfortunately incorrect. This pool can sustain a maximum of two drive failures. The 16-wide vdev (in blue in the video) is on every single physical disk and it is RAID-Z2. If any two disks of any size are lost, this vdev will have no remaining parity. There is a reason for the warnings at the beginning of the video: this is a dangerous configuration. Imagine the situation where both 12TB drives died. Losing those drives would mean that every single vdev had lost the maximum number of parity drives without data loss. ZFS would report that all vdevs were degraded. To rebuild this you would need to use 12TB drives, partition them again into the weird configuration, and rebuild all 4 vdevs at once. Imagine the crazy reads and writes happening in the pool to do this. It's such a nightmare scenario if you have data you care about. If you built a pool using only full disks (not partitions) and the pool was made of multiple RAID-Z2 vdevs, then it is possible for more than two drives to fail without losing data. For example, if your pool is made of 2x RAID-Z2 vdevs, you could have 2 drives in each vdev fail and still have all your data.

  • @ZionMainframe
    @ZionMainframe 4 years ago

    Cheers!

  • @TradersTradingEdge
    @TradersTradingEdge 3 years ago +1

    awesom :)
    Thanks so much.

  • @matthiasdiehl4305
    @matthiasdiehl4305 4 years ago +1

    One more time: Thanks!

  • @topstarnec
    @topstarnec 1 year ago +1

    Why put all the vdevs in one pool? What’s the benefit?

  • @makermatrix9815
    @makermatrix9815 3 years ago +1

    I have wondered about all these things before (basically, ways you might utilize partitions to do unconventional things). How about taking one larger NVMe SSD and carving it into both a read and write cache for TrueNAS? I assume those are ZFS functions under the hood? Similarly, squeezing metadata out of the same flash in TrueNAS 12+.

    • @ArtofServer
      @ArtofServer  3 years ago +1

      ZFS doesn't really have the ability to use an SSD for write caching. If you're thinking about SLOG, it is not really a write cache at all. Be careful with the special vdev stuff, as the loss of it can destroy all data, so you want to consider putting the special vdev on a redundant vdev type.
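
      (If you do go down that road, a mirrored special vdev might look something like this; the pool, dataset, and device names are placeholders:)

        zpool add tank special mirror /dev/nvme0n1p3 /dev/nvme1n1p3
        # optionally also steer small blocks to it, per dataset
        zfs set special_small_blocks=32K tank/somedataset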

    • @makermatrix9815
      @makermatrix9815 3 years ago +1

      @@ArtofServer Yeah, SLOG I guess. It gets variously called a write cache, and is recommended for NFS-heavy work. I don't know exactly what it does yet. Guess I had better look.
      Good point about the metadata redundancy. Alternatively, backups. Getting into snapshot replications. The ability to back up every block that changed without walking the filesystem is brilliant.

    • @ArtofServer
      @ArtofServer  3 years ago +1

      @@makermatrix9815 Yeah, SLOG is not a write cache. It is a log: basically written once and then erased. It helps when you have synchronous writes, which happens with NFS and some other applications.
      Snapshots and incremental backups are awesome. The ideas have been around forever; you could do that even back in the day with LVM2 on Linux, but it was just a bit more cumbersome. ZFS brought those ideas together in a simpler way.

  • @newbinkin77
    @newbinkin77 2 years ago +1

    Is this similar to how Synology's SHR works? Cool video BTW.

    • @ArtofServer
      @ArtofServer  2 years ago

      yes, I think so... someone else made that comment somewhere on this video. thanks for watching!

  • @karloa7194
    @karloa7194 2 years ago +1

    Can you add more disks with this setup assuming you have free bays?

    • @ArtofServer
      @ArtofServer  2 years ago

      With ZFS, you can add more drives to a pool by adding another vdev. As of this time, I do not believe you can expand an existing vdev with more drives. It may be a new feature in the future, though.
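
      (For example, growing a pool by adding one more raidz2 vdev; the pool and device names are placeholders:)

        zpool add tank raidz2 /dev/sdk /dev/sdl /dev/sdm /dev/sdn /dev/sdo /dev/sdp
        zpool list -v tank   # the new vdev appears alongside the existing ones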

  • @VirendraBG
    @VirendraBG 3 months ago +1

    What OS do you use as a daily driver?

    • @ArtofServer
      @ArtofServer  3 months ago +1

      On servers, used to be CentOS, but currently migrating to Alma Linux. For desktop, Fedora workstation, currently running version 39 and 40 on various machines.

    • @VirendraBG
      @VirendraBG 3 months ago

      @@ArtofServer
      Wow.
      I use Fedora & Rocky Linux.
      What is your opinion on Rocky Linux as a web hosting server for PHP / Laravel / MySQL websites?

  • @NetBandit70
    @NetBandit70 3 years ago +2

    I can hear the jimmies being rustled

    • @ArtofServer
      @ArtofServer  3 years ago

      gotta shake things up sometimes... :-P LOL

  • @mithubopensourcelab482
    @mithubopensourcelab482 3 years ago +2

    You are a master of ZFS.
    But, sir, sorry, this looks scary. For 3% extra space, and providing raidz2 even for two HDDs by slicing, why would someone go this way? There could be an odd use case, but definitely not an ordinary situation.

    • @ArtofServer
      @ArtofServer  3 years ago

      None of the configurations in this series of videos is for an "ordinary" situation. Like I said at the beginning of the video, it's not recommended for most people. The point of these videos is just to show what is possible, because so many people say you can't do this or can't do that. I wanted to show what is possible, but that does not mean it is recommended.

  • @sheldonkupa9120
    @sheldonkupa9120 2 years ago +1

    I thought this would work fine only when there is not a lot of workload on all vdevs simultaneously. Won't ZFS overburden one hard disk when the several vdevs on it have to operate simultaneously?! Three pools using three vdevs on one HDD get 1/3 of the speed and transfer capacity, basically, or even worse due to head repositioning. That's why such a config isn't used in enterprise IT. I use a similar setup at home, but I know that I won't have a multi-user workload. And it's just fun to play with ZFS 😀 I like your channel btw!

    • @ArtofServer
      @ArtofServer  2 years ago

      yes, as mentioned at the beginning, there are possible reasons not to use such a configuration. but this was just to demonstrate the possibilities.

  • @gunnerjoe53
    @gunnerjoe53 3 years ago +1

    Hope you're still breathing, LOL!

  • @fengchouli4895
    @fengchouli4895 3 years ago +2

    I'd like to see how to replace bad disks once needed.

    • @stevejones2697
      @stevejones2697 3 years ago

      Yes... my thought exactly. Also, what if a 4TB drive died and you replaced it with a 12TB drive? Could you dynamically add the extra space as partitions in the other groups? I'm a ZFS novice, but I've been theorizing this should be possible, and my goal is to basically replicate how unRAID lets you expand your NAS array one drive at a time without a full rebuild.

    • @ArtofServer
      @ArtofServer  3 years ago

      @Steve Jones Yes, it's possible, and that's kind of the point of this video. The use of partitions, although it adds a bit of complexity, affords ZFS a lot more flexibility.

  • @Michael-DK
    @Michael-DK 4 years ago +1

    What speeds do you get in r/w? 😀 Nice video

    • @jk-mm5to
      @jk-mm5to 4 years ago

      It would be difficult to benchmark this setup under all real world scenarios.

    • @ArtofServer
      @ArtofServer  4 years ago +2

      See my response to hescominsoon below, but yes, there's definitely a performance trade-off in this type of setup. I probably should have done a benchmark segment in this video; apparently this is a common question. I was thinking it's obvious this is not a high-performance setup and no one would be interested in the benchmark, as it will obviously be compromised, as I explained to hescominsoon below. But in rough numbers it was something like ~600MB/s sequential writes and 400~500MB/s sequential reads. I didn't do any kind of random I/O testing, but with HDDs and raidz I'm expecting that to be very low anyway, and on top of that we have the trade-off being made here.
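
      (For anyone who wants to reproduce rough sequential numbers like these, a simple fio sketch; the paths are placeholders, and keep in mind ARC caching and compression can inflate results on ZFS:)

        mkdir -p /tank/bench
        fio --name=seqwrite --directory=/tank/bench --rw=write --bs=1M --size=8G \
            --ioengine=psync --numjobs=1 --group_reporting
        fio --name=seqread --directory=/tank/bench --rw=read --bs=1M --size=8G \
            --ioengine=psync --numjobs=1 --group_reporting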

  • @dfitzy
    @dfitzy 2 years ago +1

    This looks a bit like how BeyondRAID lays out data in Drobos. Thinking about that, a video idea occurred to me: adding space to a pool in an "incorrect" way.

    • @ArtofServer
      @ArtofServer  2 years ago

      I don't know much about drobo. Years ago, it looked like they were doing interesting things, but I never dug into it. It seems the company has gone downhill since.
      thanks for the video idea! It is noted! :-)

    • @dfitzy
      @dfitzy 2 years ago

      @@ArtofServer I only know what I know about Drobo from Wikipedia's non-standard RAID article. I almost bought one until I discovered FreeNAS; very happy I didn't waste my money on a Drobo.

  • @user-fp3mn3dw7x
    @user-fp3mn3dw7x 3 years ago

    Is it possible to do this in TrueNAS (formerly freenas)?

    • @ArtofServer
      @ArtofServer  3 years ago +1

      I'm not 100% sure, but I think so. Probably not via the GUI, but I'm sure it's possible to do this kind of madness via the CLI.

  • @martixy2
    @martixy2 1 year ago

    This can be a cool optimization problem. Maybe you just did this for demonstration purposes, but the fact that you didn't even hint at finding a better layout is mildly infuriating.
    Below is the answer for the lazy (spoiler):
    12x8TB + 6x4TB (4x4TB + 2x12TB remainder) + 6x2TB.
    1. Only 3 vdevs and a more even split of drives. 2. More usable space (104TB). 3. No device is part of more than 2 vdevs (this is the really important part), unlike the naive split where losing a 12TB drive degrades ALL 4 vdevs.

  • @artlessknave
    @artlessknave 4 years ago

    OK, that... is not how a conventional distribution of those drives would be... the convention is to never mix vdev types...

  • @hescominsoon
    @hescominsoon 4 years ago +1

    While this is a way to maximize space usage, you are going to take an IOPS hit due to the partitions... but it's a valid way to maximize your usage. :)

    • @ArtofServer
      @ArtofServer  4 years ago +1

      Yes, very true. The IOPS hit comes not just from the multiple seeks due to the partitioning, but also from the fact that vdevs are limited by the slowest device, and the partitions on the inner tracks of the platter are going to be slower than the outer tracks; so in essence, you may be limiting yourself to the worst performance of the HDDs. This is all a trade-off of course... as is true with anything. But if the benefit outweighs the disadvantages, it may be worth considering.
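
      (You can see the outer- vs inner-track difference yourself with a read-only test; /dev/sdX and the end-of-disk offset are placeholders for a roughly 4TB drive:)

        dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct                 # outer tracks (start of disk)
        dd if=/dev/sdX of=/dev/null bs=1M count=1024 skip=3500000 iflag=direct    # inner tracks (near the end)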

  • @whatevah666
    @whatevah666 3 years ago

    Excellent vid. So I have a small project where I'll start with 8 disks and will gradually have to add more of unknown size to the pool/vdev. Is there some clever way of doing that without "wasting" space by creating parity disks on new vdevs? Just adding disks to a vdev would be nice. If there isn't, I think I'll have to suffer and go SnapRAID + mergerfs :\

    • @ArtofServer
      @ArtofServer  3 years ago

      I don't know of a way to dynamically change the geometry of a vdev. I believe there's development on such a feature for the future, but it's not available yet.

  • @jacj2490
    @jacj2490 4 years ago

    What kind of sequential performance should we expect from this kind of mixed-size vdev setup?

    • @artlessknave
      @artlessknave 4 years ago +1

      Mixed, probably. Anything that uses the split-to-hell 12TB would probably be slow as hell.

  • @erendiz79
    @erendiz79 7 months ago +1

    Great video as usual, but the music you are using is even greater. I was wondering what music this is? Shazam can't find it.

    • @ArtofServer
      @ArtofServer  7 months ago +1

      It's from the YT audio library. ua-cam.com/video/CF1-osNaVYE/v-deo.html

    • @erendiz79
      @erendiz79 7 months ago

      @@ArtofServer Thanks a lot! You are doing wonderful, wish you the best.

  • @RangerDK21
    @RangerDK21 4 years ago +1

    First

  • @ewenchan1239
    @ewenchan1239 1 year ago +1

    What would be some of the downsides to setting up the ZFS pool this way?
    I'd imagine that with the partitions, performance would take quite a substantial hit, no?

    • @ArtofServer
      @ArtofServer  1 year ago

      The partitions by themselves would not cause a performance hit. By default ZoL uses partitions anyway. The performance hit would come from a ZFS-level I/O request amplifying into multiple I/Os to the partitions, some of which are on the same drive. These would cause a necessary seek from one partition to another, which will add latency. Now, if you don't put all those vdevs based on those partitions under the same pool, you might avoid that. However, if you have processes accessing both pools, that's going to cause a seek between the partitions and increase latency. This is, of course, no different from the seeks resulting from fragmentation. So, if your data set is naturally going to be heavily fragmented anyway, then it doesn't matter. On the other hand, if you're doing this on NAND, the impact would be much less, since you have constant access time and any IOPS amplification might just get absorbed by the high IOPS capacity of NAND drives.

    • @ewenchan1239
      @ewenchan1239 1 year ago

      @@ArtofServer
      "Now, if you don't put all those vdevs based on those partitions under the same pool, you might avoid that. However, if you have processes accessing both pools, that's going to cause a seek between the partitions and increase latency."
      Well...that's basically my point and a part of the issue.
      If your I/O pattern is significantly "singular" (i.e. queue depth = 1), then I would agree with you, you might not have a problem with the partitions UNLESS sequential data is written to the different partitions on the same physical disk, because ZFS doesn't KNOW (or isn't handling) the fact that multiple partitions are on the same disk.
      In all other scenarios/situations, you are going to run into performance degradation as a result of this, as you've mentioned.
      re: fragmentation
      Yes and no.
      I remember reading the Solaris ZFS Administration Guide probably about 10 years ago by now, and the basic conclusion was the ZFS, by virtue of its copy-on-write nature, will ALWAYS create fragmentation, eventually, of all of your data, if the data is constantly changing.
      And of course, the problem with using ZFS on SSDs is that all NAND flash-based SSDs have a finite write endurance limit.
      Therefore, what you ideally want is actually Intel Optane Persistent Memory, except that in terms of $/GB it's really, really expensive. But that would be the best of both worlds, where you have higher capacity than straight RAM, it's non-volatile, and it can handle a high number of random IOPS resulting from ZFS' copy-on-write nature.
      If you use rotating HDDs, if the data is always changing, then the probability that it will become fragmented increases.
      Imagine what the latencies will be and what the performance hits will be if you did this with a SMR drive. Yikes!!!

    • @ewenchan1239
      @ewenchan1239 1 year ago

      @@ArtofServer
      So I just implemented this on my Proxmox test server where I only had two HGST 3 TB SATA HDDs available (rather than having more, which is what ZFS would prefer for a raidz1 setup).
      As such, I partitioned the two 3 TB drives into four partitions of 1.5 TB each, and then put them into a raidz1 ZFS pool so that the data I was about to put on it would have SOME level of redundancy (although, really, if a drive failed, it would take down two partitions at a time, so that probably didn't work the way I thought it would - but that's beside the point for this comment).
      Getting back to my question earlier about it taking a substantial performance hit:
      A single-drive ZFS pool was capable of writes of up to around 236 MB/s, I think. Something like that. Most of the time, it averaged around maybe 140-150 MB/s -- somewhere in that range.
      With this partitioned setup, running raidz1, I am currently writing to the disk at around 70 MB/s, so depending on how many drives you have and how the drives have been partitioned, the performance hit, as I alluded to in my question, can be quite substantial. (1/3 to 1/2 of the native drive write speeds.)
      (The task that I've got running right now in Proxmox, to get these numbers is I created a Windows 10 VM, got that all set up. And then converted that VM into a template. And now I am doing a full clone of that to spin up a new Windows 10 VM and it is this full clone task that is currently writing at ~70 MB/s.)