ZFS Essentials: Reformat (ZFS) and/or Upgrade Cache Pools in Unraid

  • Published 22 Dec 2024
  • This video is a comprehensive guide showing how to enhance your Unraid setup by either upgrading your cache drive to a larger capacity or switching over to the robust ZFS file system. This guide is the ultimate solution whether you're aiming for more space or a better hosting environment for your appdata on a ZFS Zpool!
    Please, if you can and want to support the channel, you can donate via PayPal here goo.gl/dw6MLW or check my Patreon page / spaceinvaderone
    Bitcoin donations 3LMxDzcwPdjXQmmeBPzfvUgjYFDTqDAgQF
    ----------------------------------------------------------------------------------------------------------------
    Need a VPN?
    PIA is popular with Unraid users as it's easy to set up with various VPN download containers - www.privateint...
    Torguard is also an excellent VPN, with both OpenVPN and WireGuard protocols supported.
    Get 50% off for life using code spaceinvaderone torguard.net/a...
    ----------------------------------------------------------------------------------------------------------------
    Need a cheap Windows 10 license for around $10?
    consogame.com/...
    ----------------------------------------------------------------------------------------------------------------
    Need to buy something from amazon? Then please use my link to help the channel :)
    USA - amzn.to/3kCikfU
    UK - amzn.to/2UsYb1f
    USA link - USB HDD Docking station amzn.to/3v754WG
    UK Link - USB HDD Docking station amzn.to/3hLenYp
    HighPoint RocketStor 6414S amzn.to/3fiXv9s USA
    Mini SAS 26-Pin SFF-8088 Male to Mini SAS 26-Pin SFF-8088
    amzn.to/2V4x9kT USA
    amzn.to/3xfkxEl UK
    ----------------------------------------------------------------------------------------------------------------
    A big thank you to Lime Technology for all the great work that they put into always improving Unraid OS.

COMMENTS • 173

  • @mrsittingmongoose
    @mrsittingmongoose 1 year ago +178

    It would be super nice to see you redo you’re old getting started, tips and tricks, setting up reverse proxy type videos. They are super old and not only has Unraid changed a lot, but swag, all the apps, and the plugins have all changed a lot.

    • @phileeny
      @phileeny 1 year ago +8

      I used the videos to set up reverse proxy etc but now I use Cloudflare tunnels; a video on that would be nice

    • @massgrave8x
      @massgrave8x 1 year ago +3

      @@phileeny When I discovered Cloudflare Tunnel it was a game changer!

    • @eug1
      @eug1 1 year ago

      I agree. I am going to set up my first Unraid box this week in an HP MicroServer Gen8 and would like to use ZFS

    • @NeptuneSega
      @NeptuneSega 1 year ago +2

      Your* For a second I thought you were calling them old since you used "You are" haha

    • @mrsittingmongoose
      @mrsittingmongoose 1 year ago +1

      @@NeptuneSega nope, just iOS autocorrect shenanigans…sigh

  • @dgaynor1
    @dgaynor1 1 year ago +31

    Looking forward to more ZFS tutorials from you SIO. I probably wouldn't even try a full ZFS array setup until I can get my head around it from the videos you upload. Keep up the good work.

    • @NarcoSarco
      @NarcoSarco 1 year ago +2

      I'm also looking forward to that, these videos have been a great help in the past! :)

  • @tommyjackson3811
    @tommyjackson3811 1 year ago +6

    I'm in absolute awe of your generosity Spaceinvader One!!!
    I have a fair idea just how much time you give to help others in your detailed videos...... We are all blessed...... Thank you for all that you do!!!

  • @moonstomper68
    @moonstomper68 10 months ago +1

    Brilliant, as a total newbie to unraid these videos are invaluable in my learning experience. Very clear and detailed, many thanks.

  • @TomFoolery9001
    @TomFoolery9001 1 year ago +5

    I can't wait for the ZFS array disk formatting and snapshotting videos! Thanks for all the easy to follow videos, they make Unraid SO much more usable and help explain a lot of its quirks and features.

  • @pd3471
    @pd3471 1 year ago +3

    Thank you so much for this. Moving from cache to array took roughly 11 hours for 400GB of appdata, the format was fast, and the move and compress from array to cache took about 6 hours, ending at 180GB of appdata. I think I need to get clever with the Plex metadata. Looking forward to more videos like this. I have learnt so much from you over the years.

    • @sean7949
      @sean7949 1 year ago

      Yea, the Plex metadata/database and supporting files become very large very quickly. This is something I'm investigating as a solution to delay upgrading my 1TB SSD to a larger size. One thing though: the transfer time for 400GB of data to the array seems very long. I can flush my whole download pool at about 1TB in a much shorter time. Are you running Unraid on an older system? Maybe this is just because the Plex appdata has so many small files?

  • @BLKMGK4
    @BLKMGK4 1 year ago

    Perfect timing, just bought a new larger NVME for my system! Got to remember how the heck I encrypted the last one!

  • @davidusi27
    @davidusi27 1 year ago +2

    THIS is what I was waiting for! Thanks my dude!

    • @SpaceinvaderOne
      @SpaceinvaderOne 1 year ago +2

      Thanks for watching David :)

    • @davidusi27
      @davidusi27 1 year ago +3

      @@SpaceinvaderOne Can I do a future video request? Differences between ZFS cache (ZIL/SLOG/ARC/L2ARC) and how to implement it via GUI/CLI on 6.12.X. I get the gist of it but when you explain it, everything becomes crystal clear lol

  • @AlyredV2
    @AlyredV2 1 year ago +1

    Can't wait for more information. Thanks for these clear and complete instructions; I'm wanting to swap out my 3x1tb NVMe BTRFS to a 3x2tb NVMe ZFS once I get time to upgrade to 6.12.

  • @patrickmurphy5389
    @patrickmurphy5389 7 months ago

    PLEASE MORE ON ZFS. Love it!! I have been meaning to get into it and this might be my way in!

  • @ndandan1369
    @ndandan1369 4 months ago

    🎯 Key points for quick navigation:
    00:00:21 *🔄 This video covers upgrading a cache pool or cache drive by changing its format to ZFS in Unraid 6.12.*
    00:00:35 *🛠️ The process for physically swapping or reformatting the cache drive is the same.*
    00:00:48 *📦 In-place reformatting while retaining data isn't possible; data must be temporarily moved elsewhere.*
    00:01:16 *📁 It's easy to temporarily move data and reformat a cache pool or drive to another file system like ZFS.*
    00:01:31 *🚀 The video will demonstrate reformatting a cache pool, with future videos covering reformatting drives without damaging parity.*
    00:01:43 *⚙️ Reformatting to ZFS allows using advanced features like snapshots, cloning, and replication.*
    00:01:56 *🔄 Replicating data between ZFS pools can enhance backup and recovery.*
    00:02:24 *🔧 The demonstration will show how to reformat the cache drive from XFS or BTRFS to ZFS.*
    00:04:00 *🔍 Secondary storage should be enabled to move data from the cache to the array before reformatting.*
    00:05:51 *🛑 Stop Docker and VM services to ensure no files are in use during the data move.*
    00:06:45 *📤 Move data using the Mover tool; wait until it's complete before proceeding.*
    00:07:28 *✔️ Once the data is moved, stop the array and proceed to reformat the cache drive.*
    00:08:11 *🔄 Reformat the cache drive, select ZFS, and start the array.*
    00:09:48 *💾 Reformatting a single disk cache pool is straightforward; reformatting mirrored or multi-disk pools requires additional steps.*
    00:12:04 *🛠️ Add additional drives to create a RAID-Z1 pool for increased storage efficiency.*
    00:13:10 *🔄 Reverse the data move process to transfer data back to the reformatted cache drive.*
    00:14:10 *🚀 Start Docker and VM services after the data is moved back to the cache drive.*
    00:15:07 *📅 Upcoming videos will cover reformatting array disks to ZFS, auto-converting top-level folders into datasets, and ZFS replication.*
    Made with HARPA AI
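    The GUI walkthrough summarized above can be sanity-checked from the Unraid terminal once the array is back up. A minimal sketch, assuming the pool is named `cache` (substitute your own pool name):

    ```shell
    # Confirm the reformatted pool imported cleanly and is healthy
    zpool status cache

    # List the datasets in the pool and their space usage
    zfs list -r cache

    # Show the pool's capacity and fragmentation at a glance
    zpool list cache
    ```

    If `zpool status` reports ONLINE with no errors after the mover has finished, the reformat and move-back completed cleanly.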

  • @Shadow.Dragon
    @Shadow.Dragon 1 year ago

    Thanks for the video! I'm still weighing the pros/cons of moving my Unraid server to ZFS. I'll watch your future videos on ZFS before I make a decision!

  • @acrusso1
    @acrusso1 1 year ago

    Great Tutorial - replaced my 970 with a 980 pro while doing this - no issues thanks!

  • @K1LLA_KING_KONG
    @K1LLA_KING_KONG 1 year ago +3

    What are the benefits / negatives of using compression with ZFS? How does it affect read/write speed?
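    On the compression question above: lz4 is generally cheap enough that writes are rarely CPU-bound, and compressible data can even read and write faster because fewer blocks touch the disk. A hedged sketch, assuming a pool named `cache` (compression only applies to blocks written after it is enabled):

    ```shell
    # Enable lz4 compression on the whole pool (inherited by child datasets)
    zfs set compression=lz4 cache

    # Check the current setting and the ratio achieved on data written so far
    zfs get compression cache
    zfs get compressratio cache
    ```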

  • @traxsongaming15
    @traxsongaming15 1 year ago +2

    Excellent as always!

  • @Ricofizz
    @Ricofizz 1 year ago +1

    My cache drive wasn't being emptied because I had manually assigned some paths in Docker to /mnt/cache/ ; I had to move these manually before formatting. Keep an eye on that in case you did something similar!

  • @rallygallery
    @rallygallery 1 year ago

    Awesome Ed. Will be following the series of zfs videos and converting my servers.

  • @csmith1095
    @csmith1095 1 year ago

    This was very timely as I just went to add some drives to my array and make a cache pool today. Thanks!

  • @stefanlasser3872
    @stefanlasser3872 1 year ago +12

    Hi! You forgot the very last step after moving everything back to cache: You should disable the secondary storage for each of the "cache-only" shares

    • @pd3471
      @pd3471 1 year ago

      Is this recommended, required or just good practice?

    • @CampRusso
      @CampRusso 1 year ago

      Would it be a good idea to leave those shares in the "cache preferred" setup? Just in case something happens and the share grows to beyond the size of the cache disk. It can spill over to the likely larger array.

    • @sean7949
      @sean7949 1 year ago

      @@pd3471 Depends on how much data you have on your cache pools. If you don't expect the cache pools to grow larger than their maximum size then you do not need to allow them to spill over to the array. If you do expect them to grow beyond the size of the cache pool then you should leave the cache -> array mover enabled.

  • @person98453
    @person98453 1 year ago

    Thank you, I needed to swap my cache setup from a combined HDD raid to a mirrored raid for safety. Also changed to ZFS while I was at it.

  • @jmlauranson1891
    @jmlauranson1891 1 year ago

    Huge thanks for the video! interested in the ZFS and making maximum use of UNRAID and disks!
    Catch you in the next videos

  • @jkenefake
    @jkenefake 1 year ago +2

    Wish I had seen this 1 month ago when I upgraded my cache drive. I stopped the dockers and VMs but didn't stop their services. So after reboot I lost my VMs and dockers. I was able to restore everything with appdata backups thankfully, but caused me a bit of stress for awhile.

  • @mrq332
    @mrq332 1 year ago +1

    you're the best, keep up the good work, greetings from Belgium

  • @acidmexx
    @acidmexx 1 year ago

    Awesome as always - I really was kinda afraid of updating to 6.12, as I wasn't sure how to properly "convert" to ZFS... you made it easy for me as always!!! Thank you sir - br from Austria

  • @Gragorg
    @Gragorg 1 year ago +5

    In your opinion is ZFS the preferred file system for the regular array over XFS?

    • @sean7949
      @sean7949 1 year ago

      No, the reason he suggested adding a ZFS disk to the main array was so you could do ZFS snapshot replication to the disk from the cache pools that are also ZFS. This would allow you to quickly backup and restore given some kind of failure. For instance this disk in the array that you format to ZFS would probably not be allowed to host your bulk data share (it would be excluded), but would host the snapshots of your cache pools. This would give it the ability to be redundant due to the main unraid array and give your cache pool a true backup.

  • @Dex.456
    @Dex.456 1 year ago +2

    Would like to get your opinion on who should use ZFS. There's a lot of hype around ZFS right now. But I (and maybe many others) just use the Unraid pools and only need to decide, for the single drives, whether they should go with ZFS, XFS or btrfs. For me, it's important to be power efficient (no unnecessary spinups), so I will stick with the Unraid pool. I just need to decide what file system I should use for the single disks. I would really appreciate your opinion.

    • @sean7949
      @sean7949 1 year ago

      Depends on how you are backing up your pools. If you are using the appdata backup plugin then there is no real need to switch to ZFS, except for the snapshot restore functionality. It would be better than the appdata backup plugin and more reliable. Also you should know that ZFS is hyped, but very old, well, by today's standards. It was created in 2001 and is considered one of the most reliable filesystems in existence. I'm really glad Unraid implemented ZFS in place of btrfs. btrfs has too many issues. I've tested btrfs on my system and it was a pretty horrible experience.

  • @39zack
    @39zack 1 year ago +1

    Really looking forward to the Appdata on ZFS video.

  • @gswhite
    @gswhite 1 year ago +2

    Brilliant video as always, but what is the benefit of ZFS in the cache pools / drives?

    • @sean7949
      @sean7949 1 year ago

      The benefit for the cache is that these are more reliable than btrfs and support raidz which is the equivalent to raid5 (one drive failure). They also support mirror = raid1. As for the array drives you can format one as ZFS which would allow it to receive ZFS snapshots from the cache pool. Then this snapshot can be protected by the Unraid array parity.

  • @EvoTG
    @EvoTG 1 year ago

    Those are the videos I've been waiting for 😍😍

  • @Seven-ks6rk
    @Seven-ks6rk 11 months ago

    Problem with that pool at 11:45 where you add 3 drives: any change to the pool means you have to reformat the entire pool. If a drive fails in the pool and you want to change an HDD you have to reformat the entire pool; same if you swap drive positions in that pool (they will not be recognised), and same if you want to go from 3 drives back to 2. I had a 2-drive cache pool and lost all the data because I wanted to change an HDD, so I am not sure how safe this is if a drive fails in that pool.

  • @IEnjoyCreatingVideos
    @IEnjoyCreatingVideos 1 year ago

    Great video as always Ed! Thanks for sharing it with us and have an awesome weekend!💖👍😎JP

  • @phileeny
    @phileeny 1 year ago +1

    I have an extra pool on my Unraid server of 6x1tb drives in btrfs and was waiting for the official upgrade to convert this pool to ZFS, so thanks for the video. But I want to know what happens when a drive dies in a ZFS pool; will you be doing a video on what happens to the data when a drive dies and how to fix it?

  • @glasshalfempty1984
    @glasshalfempty1984 11 months ago

    You didn't change the shares on the cache pool to cache only. Does that matter? Is there any significant difference from cache only, versus cache with the array as a secondary storage and the mover set to move from array to cache?

  • @emsbas1
    @emsbas1 1 year ago +2

    Is there really a benefit to swapping the cache from XFS to ZFS? Especially if it is just a 1-4 drive cache config?

    • @sean7949
      @sean7949 1 year ago

      If you want redundancy in your cache pool it is much better than btrfs. If you are running your cache drives independently no raid then formatting them all independently as zfs would only allow you to send the zfs snapshots to another zfs drive as backup.

  • @ab999852
    @ab999852 1 year ago +3

    Thanks for this (and dozens of other videos), they are really inspiring. With ZFS now available and its immense feature set, is there still a place for the Unraid array?

    • @SpaceinvaderOne
      @SpaceinvaderOne 1 year ago +11

      Yes, I think so for sure. The great advantage of the Unraid array is being able to add a disk when you need to, and using mixed size drives without losing any capacity. Now that we can put a ZFS disk in the Unraid array, this is a really cool thing. So we can have full native Zpools and a hybrid Unraid array. I should have a whole bunch of videos on using ZFS on Unraid over the next couple of weeks. :)

    • @ab999852
      @ab999852 1 year ago +3

      @@SpaceinvaderOne True.. heck I don't even have a single identical drive in my system of 8 disks. but I'm definitely considering replacing the smaller disks to make them more uniform in size.
      I heard zfs needs more RAM to run, that means less for our VMs and dockers? Maybe you'll address that in your next videos. Can't wait to see them :)

    • @got2liv4him
      @got2liv4him 1 year ago

      @@ab999852 zfs has an option for deduplication that requires a lot of ram. other than that (to my knowledge) it's not too memory hungry if you don't use deduplication.

    • @sean7949
      @sean7949 1 year ago

      @@ab999852 Its not actually that bad. Over on the FreeNAS forums the general guidance is 1GB of RAM per 1TB of ZFS storage, but there is a limit to this scaling and honestly isn't really that big an issue. They also say you should use ECC memory which to be fair is always a good idea, but ZFS even without ECC is still better than other filesystems given you have a true raid array of ZFS drives. E.g. raidz

  • @JHACbiz
    @JHACbiz 10 months ago

    Maybe it was in the video, but when I swapped out to a larger cache drive this am it wouldn't let me just swap out the cache and start the array. I ended up having to delete the old pool and add a new one; then I could format.

  • @pd3471
    @pd3471 1 year ago

    Ok, having run with the ZFS format for three separate NVMe drives (cache for appdata - Downloads for, well, downloads - Media for writing movies/TV/Music) I have found that when running Plex the ZFS memory hits 100% and playback stutters. I checked and ZFS is using 1/8 of the total memory, so roughly 12GB out of 96GB - the Unraid release notes say to create a custom zfs.conf file, but provide no notes on how to do that. I have tried many different ways and none have worked. I eventually found your old video that showed how to change the GO file to set the max ARC size. After doing this the memory allocation successfully changed to what I set it at, but all that happened was that the ZFS memory usage just took longer to get to 100%. I may have to reverse this all out and wait until more of your videos come out.
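    For reference, the zfs.conf approach the release notes mention looks roughly like this; the 8 GiB cap is only an example value, and the path follows Unraid's convention of keeping boot-time config on the flash drive:

    ```shell
    # /boot/config/modprobe.d/zfs.conf -- applied when the zfs module loads at boot
    # zfs_arc_max is specified in bytes (here 8 GiB = 8 * 1024^3)
    options zfs zfs_arc_max=8589934592
    ```

    The same limit can be tried live, without a reboot, by writing to the module parameter:

    ```shell
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
    ```

    Worth noting: a full-looking ARC isn't by itself a problem, since ARC is cache and is meant to shrink under memory pressure; stutter at 100% usually points at something else competing for RAM.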

  • @DannyStammers
    @DannyStammers 1 year ago

    A week or 2 ago I got a new HDD for my array that I precleared and formatted as ZFS. Now I'd like to use this moment to empty another drive onto the new one and reformat that drive as ZFS. Since the new drive is to replace a smaller one, this is sort of the only chance I have without making things annoying.
    I've moved all of the data off the XFS drive to the new drive using unbalance (took forever, since the copy didn't go faster than around 70MB/sec due to what I understand to be a bug). I'm planning to use the 'zero-drive' script to zero my old drive, keeping parity in check. After that I can reformat the drive to ZFS. Maybe there are easier and less time consuming methods, but this should work.

    • @sean7949
      @sean7949 1 year ago

      Does the mover respect excluded drives from shares when moving data? If so maybe you could have used the mover without having to zero the drive?

  • @matthewh.
    @matthewh. 1 year ago +1

    I started this with my cache drive. I am moving the appdata and system shares from cache to the array. Appdata appears to be successfully on the array. The system share is taking a really long time and I'm not seeing any reads or writes occurring on the array. Mover is grayed out and shows (Disabled - Mover is running.) It's been a couple hours. Is this normal?

    • @47koR
      @47koR 1 year ago

      Got the same problem here and found no solution yet. My cache was btrfs before and it seems mover hangs on the btrfs subvolumes.

  • @magicmanj32
    @magicmanj32 1 year ago

    great video Ed as per usual

  • @Paxtiny
    @Paxtiny 10 months ago

    I am going to set up an Unraid server soon. Should I set up the cache drives as ZFS from the get go?

  • @gotmonkey70
    @gotmonkey70 3 months ago

    Great guide. I wanted to change my Cache RAID-1 BTRFS to a single drive RAID-0 ZFS. That all went fine. Having an issue getting mover to invoke and transfer the files back onto the new Cache. Any idea?

    • @gotmonkey70
      @gotmonkey70 3 months ago

      Looks like Mover Tuning was the problem. Once I removed it, I was able to invoke mover for moving files back from the array to the cache.

  • @andretorrico
    @andretorrico 11 months ago

    Great video! I followed all the steps and now have a mirrored ZFS cache pool! Thanks! If, at a later date, I want to expand this pool, is there any way to convert it to Raidz? Or do I have to move all the data to the array again and create a new pool?

  •  1 year ago +1

    Great job as always!
    The idea for the next video: how to set up Nvidia vGPU on Unraid
    so you are able to use it at the same time in Dockers and VMs.
    I've read that this is possible, but I'm not an all-mighty wizard like you, able to figure it out by myself...
    Pretty please :)

  • @AquaRelliux
    @AquaRelliux 1 year ago +4

    I don't get why you would want ZFS over the unraid disk array. The biggest advantage is that you can add disks to the array but you can't with ZFS. Granted it has many benefits but my problem is that my array is constantly growing so for me ZFS will just be a hassle

    • @sean7949
      @sean7949 1 year ago

      ZFS isn't supposed to be a replacement for the array, but a replacement for the cache pools arrays. Previously the only raid option for the cache pools was btrfs which was terrible when I tried it. ZFS is well supported and very reliable for data restores and snapshot rollbacks. It is also great if you have a catastrophic situation where the host dies somehow and you need to send your drives off to a data restore company. A good example of this is when Linus' server Whonnock died and Wendel was able to recover most of the data even though they had improperly configured ZFS. (no scrubs I think). Basically if it can still be salvageable after being completely configured wrong you know it has at least some merit.

  • @sean7949
    @sean7949 1 year ago

    I'd like to see the recovery process on Unraid when a ZFS drive fails. For instance, I have a raidz pool with three 1TB SSDs and one fails. How do you resilver the array in Unraid? This is something I need to see before I'm willing to switch to ZFS on my cache pool. I do need to move to ZFS for the compression though, so I can minimize the appdata and Plex appdata pools. Obviously not the media... just the database and stuff. Currently my Plex db and supporting files are too large for me to use the appdata backup plugin on Plex, but I'd like to be able to add another larger SSD around 4-8TB that would host the snapshots of my production pools as a backup. And also for another video idea: is it possible to send ZFS snapshots to another Unraid system with a large ZFS pool (think 10-20TB SSDs) as a secondary backup location that isn't part of the main server? I can see this also being set up to run over a site-to-site VPN so that the snapshot is geographically separated. And one more: is it possible to send the ZFS snapshot to S3 compatible storage or another cloud storage provider?
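    On the resilver question above: in a native Zpool, replacing a failed member is done with `zpool replace`, after which ZFS resilvers automatically. A hedged sketch with a hypothetical pool name `tank` and placeholder disk IDs (`/dev/disk/by-id/` paths are safer than sdX letters, which can change between boots):

    ```shell
    # See which vdev member is FAULTED/UNAVAIL
    zpool status tank

    # Swap the failed disk for the new one; resilvering starts automatically
    zpool replace tank /dev/disk/by-id/ata-OLD_DISK /dev/disk/by-id/ata-NEW_DISK

    # Watch resilver progress and confirm the pool returns to ONLINE
    zpool status -v tank
    ```

    And on the off-site idea: ZFS replication is just a byte stream, so it can cross SSH (and therefore a site-to-site VPN) to any host with a receiving pool; hypothetical names again:

    ```shell
    zfs snapshot tank/appdata@nightly
    zfs send tank/appdata@nightly | ssh backup-server zfs receive -u backuppool/appdata
    ```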

  • @CharlesCai2008
    @CharlesCai2008 1 year ago

    Just had one of my new 22TB Ultrastar disks formatted with ZFS, but now copying files to and from it is extremely slow (50M write, 80M read) vs XFS's 280-290M raw speed! Did a bit of searching in the Unraid forum; people are having similar issues... moving back to XFS... any suggestion so a single ZFS disk in the Unraid array can still enjoy the raw read/write speed? thx. 🥴

  • @vaggeto
    @vaggeto 1 year ago

    @Spaceinvader One, any thoughts on the upsides and downsides of doing this cache pool ZFS process but encrypting the cache pool? I'm worried about issues running VMs from the cache pool, and also about speed/encryption overhead. (And whether cache drive speed may slow down if there is still no TRIM support for encrypted drives?)

  • @carlmbereko4329
    @carlmbereko4329 1 year ago

    From reading the forums, it would appear the mover takes forever to move files from the cache pool to the array, how long would it take to move 170GB of files? And is there an alternative like using rsync or krusader? I am looking to upgrade my cache pool...

  • @bruceparks5601
    @bruceparks5601 1 year ago

    Not sure if it's just me, but I've been checking his channel every day, sometimes twice, for these next videos. LOL, impatient much? 😊 I did this reformat on the cache immediately!! and am super interested in what else ZFS can do. I can't find anything useful anywhere in the forums on the topics these next videos will cover. Swapping out an array disk, making it ZFS without a parity rebuild, and appdata in its own datasets with snapshots are definitely something I can't wait to see.. Help us Spaceinvader.. you're our only hope.. Thanks for all the great videos through the years.. I very much appreciate you Sir

  • @squeak751
    @squeak751 9 months ago

    OK, I have 1TB ZFS cache pools. I bought 2TBs to replace them but I can't get it to work. Any advice or help or another video? Every time I go to switch it out I lose all data for my dockers and containers.

  • @Vogy45
    @Vogy45 1 year ago +1

    Hope the reformat video comes out soon

  • @sarf9069
    @sarf9069 1 year ago

    I tried ZFS from the 6.12 RC series and it is so damn fast... Really looking forward to changing my main server over :)

  • @OptimusSpongo
    @OptimusSpongo 1 year ago

    I did convert successfully to ZFS. Thank you. If I delete a container in Docker Manager the dataset is still present. It's empty, and if I try to destroy the dataset it gives me an error: dataset is busy. Only stopping the docker service, destroying the dataset and restarting the docker service helps. Is there any way around this issue?

  • @AirzGamingTTV
    @AirzGamingTTV 9 months ago

    I had to swap multiple disks for resilver in my zfs pool in unraid. Despite assigning the ATA drive#s when making the pool the pool became degraded after unraid randomly reassigned device designations (sda sdb sdc etc). Lost everything

  • @inderveerjohal7218
    @inderveerjohal7218 1 year ago

    As a first time UNRAID user I’d love you to do a video on how to set it up first time with 6.12.X with ZFS. mostly interest on how to get it as fast as possible for read/writes.

  • @resolutepixel
    @resolutepixel 1 year ago

    I would like to get some direction with Plex. The docker.img file is a btrfs vdisk file that doesn't seem to like ZFS. The Plex container starts but the GUI won't load; the logs show an sqlite3 access error

  • @spik330
    @spik330 1 year ago

    Okay, but how do I fix the problem of ZFS being "Unmountable: Unsupported or no file system"? Fixed it; it turns out some pool names are reserved when using ZFS but not the other Unraid file systems

  • @joeshmoe346
    @joeshmoe346 1 year ago

    Why did you say 'definitely not going to do raid 0' on your cache pool? Was it only because you were planning on doing a 3 drive raid-z pool?

  • @koolkiwikat
    @koolkiwikat 1 year ago

    Are there any caveats with ZFS cache and how it handles dockers? I'm having issues installing dockers since changing over to ZFS; I cannot install any new dockers at all, it throws errors

  • @GriffonWalker
    @GriffonWalker 1 year ago

    What would be the point of single drive zfs cache pool? I mean what would be the advantage?

  • @darylnuera4914
    @darylnuera4914 1 year ago

    Following this was awesome. If you're like me and on a docker directory rather than a vdisk, you'll have to recreate all your containers through the template

    • @bruceparks5601
      @bruceparks5601 1 year ago +1

      I am with you. I converted to docker directory just after this video. It's working great

    • @sean7949
      @sean7949 1 year ago

      What is docker directory? Never heard of it. Is this something to do with docker volumes?

  • @cameronsettle3555
    @cameronsettle3555 1 year ago

    I wonder if a simpler way to keep the server up is to copy the data rather than move it, or use the appdata backup utility to back it up to another drive, format, then restore

  • @Kasjo87
    @Kasjo87 1 year ago

    Well, very nice video. I tried this a few times now. In my case I have a RAID0 cache pool, but if I format that as ZFS RAID0 I can't write anything to the cache pool anymore because it says there is not enough memory left. It's formatted and shows up in the UI as a ready cache pool, but it isn't working so far. Did I do something wrong?

  • @dopeytree
    @dopeytree 1 year ago

    Would be very interested in seeing a test: a ZFS cache drive sending snapshots to a 2nd ZFS drive.. would this then mean the 1st drive can run at full speed? Currently my 2nd SSD is bottlenecked by the motherboard; if using a PCIe slot it goes from gen3 x4 to gen3 x2, so from 3940MB/s to 1970MB/s, whereas the other SSD is gen4 x4 and can go up to 7880MB/s.. even so it's all way faster than my 10Gb network can do

    • @sean7949
      @sean7949 1 year ago

      No, during the snapshot you would still be reading from the source drive. But if your first drive is PCIe 4.0 it probably wouldn't hit performance that much.

  • @transparency1
    @transparency1 1 year ago

    Having issues with destroying datasets now. This has happened with multiple containers. I have to stop the array to delete. Error is Unable to destroy dataset cache_nvme/appdata/xxx
    Output: 'cannot destroy '\''cache_nvme/appdata/xxx'\'': dataset is busy destroy cache_nvme/appdata/xxx' Do you have any suggestions?
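    A "dataset is busy" error usually means something still has the dataset mounted or has files open in it, often a leftover Docker bind/overlay mount. A hedged sketch for hunting down the holder before retrying the destroy (keeping the same dataset path from the error above):

    ```shell
    # Is the dataset still mounted, and where?
    zfs get mounted,mountpoint cache_nvme/appdata/xxx

    # Which processes still hold files open under the mountpoint?
    fuser -vm /mnt/cache_nvme/appdata/xxx

    # Any lingering bind/overlay mounts from Docker referencing it?
    grep 'appdata/xxx' /proc/mounts

    # Once nothing holds it, the destroy should go through
    zfs destroy cache_nvme/appdata/xxx
    ```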

  • @jag5cof
    @jag5cof 9 months ago

    I have 3 × 1TB SSDs installed on my server. Is it best to create a ZFS pool or use RAID-1?

  • @marioguerra2618
    @marioguerra2618 1 year ago +1

    Is anybody having the same issue as me? I did every step in the video and for some reason the mover is not moving the appdata/motioneye/config folder nor the system folder. Help would be appreciated.

    • @Ricofizz
      @Ricofizz 1 year ago

      If you're like me you might've manually put in a cache path in a docker config, what I did was while docker was stopped, manually moved the folders to the location, used the mover, then after formatting moved them back. Worked for me, look into it!

    • @CampRusso
      @CampRusso 1 year ago

      Mine was moving great. Now the counters/speed have stopped and the system share seems to be the only thing left. In the libvirt folder the libvirt.img is still on the cache. There is also a docker folder in system on the cache with a bunch of folders. Not sure if I should wait or try to interrupt it. =\

  • @Dr-AK
    @Dr-AK 1 year ago

    If I have 1 SSD ZFS drive and want to add two more, do I need to move everything off the original ZFS SSD drive then add all 3 together?
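    For the single-drive-to-mirror case, ZFS can do this in place: `zpool attach` adds a device to an existing vdev and resilvers the data onto it, no emptying required. Going to raidz, however, traditionally means destroying and recreating the pool, so the data would have to move off first. A sketch with a hypothetical pool name `cache` and placeholder device IDs:

    ```shell
    # Attach a second disk to the existing single-disk vdev -> 2-way mirror
    zpool attach cache /dev/disk/by-id/ata-EXISTING_DISK /dev/disk/by-id/ata-NEW_DISK

    # Repeat with a third disk for a 3-way mirror; then watch the resilver
    zpool status cache
    ```

    (In Unraid it is generally safer to change pool membership through the GUI so the pool config stays in sync; treat the CLI here as illustrative.)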

  • @Narueschke
    @Narueschke 1 year ago

    How do we increase the ZFS memory (RAM) limit in Unraid? I tried to find an option but wasn't successful, and I have lots of available free RAM and am using 90% on ZFS atm

  • @peet431com
    @peet431com 1 year ago

    Hi,
    I was wondering, at the end of the whole process, do you need to change the mover action to cache to array? My logic is that if my cache drive is full it moves the new data to the array.
    Thanks

    • @sean7949
      @sean7949 1 year ago +1

      The mover is set on a schedule. It will move the data from the cache to the array even if there is sufficient space for the cache to hold the data. If you want the data to stay on the cache drive you have to disable the secondary storage.
      There might be a way to get it to move items to the array after the cache is filled up with the minimum space setting, but i've never tested it and don't know for sure.

  • @ironman-cq2dn
    @ironman-cq2dn a year ago

    How can I go from a cache drive to a Zpool?

  • @volodesi0000
    @volodesi0000 a year ago +1

    So hard trying to decide whether I want to switch over from btrfs or not. Not knowledgeable enough. Any time I look up comparisons between the two, the site always ends with "decide for yourself". I am only hosting an Emby server, which uses a NAS on our network for video files. I store my music locally on my Unraid and use it to host dedicated servers for whatever games I am currently playing.

    • @chadblows
      @chadblows a year ago +1

      As far as I understand, ZFS is more for data integrity, and there wouldn't be much of a gain, if any, by going to ZFS. If you were hosting important documents or a database, it's probably worth it.

    • @joespurlock4628
      @joespurlock4628 a year ago +2

      Agreed, there's not a huge benefit if you're in a fairly narrow use case with data that's not "irreplaceable" and/or easy to back up off a single cache drive, etc. Obviously no disk format will overcome a single copy of data if something goes wrong with the disk. ZFS snapshotting tools are arguably more mature, and ZFS is probably a little more protected against write corruption (say, from a power failure while writing data). So most would say technically superior, but in your case, probably a wash.
      Now if you're using a cache pool (>1 disk), then ZFS is just way more flexible and uber bulletproof, and probably still worth doing. This video makes doing it a snap! The 6.12 mover enhancement is very cool.

    • @CampRusso
      @CampRusso a year ago

      @@joespurlock4628 🤔 Sounds like I have given myself more flexibility and bulletproof-ness. 😁 My appdata/domain/system shares used to live on a single NVMe. While I am using the plugin to back those up, I felt uneasy relying only on that should the drive fail. Since I also have two SSDs connected to the mobo, I moved my caches around. Now the two SSDs are in a raid1 and formatted as ZFS. If an SSD fails, emby/nginx/etc stay up and I can replace it with only a reboot as downtime.

  • @FlyingSucuk
    @FlyingSucuk a year ago

    Thx for showing us newbies how to do this :D I love your content!

  • @CyberAlejo17
    @CyberAlejo17 a year ago

    Can I create the datasets for all the dockers (appdata/plex, for example) before executing the mover action that moves the files from the array to the new ZFS cache volume?

    • @CyberAlejo17
      @CyberAlejo17 a year ago

      Too late. I forgot to install the ZFS Master plugin before starting the whole process. Now I can't create datasets :(

  • @BobSmith-wu2ll
    @BobSmith-wu2ll a year ago

    I was going through this guide step by step and I got an "unmountable disk present" for all the disks in my main pool (not including the 2 parity disks). I was expecting to only see it on my cache pool drives (I'm switching from 2 NVMe drives running btrfs). I'm searching the forums too, but no luck yet. Any thoughts?

    • @BobSmith-wu2ll
      @BobSmith-wu2ll a year ago +2

      I rebooted my system and it all went away. YIKES! A close one, but I'm going to have to look into it.

  • @allanjones4283
    @allanjones4283 a year ago +1

    Another great vid from a great community (unRAID).
    Q/. ECC RAM: is it critical to use with ZFS? I have G.Skill F3-12800CL10D-16GBXL 32GB 1600MHz DDR3 because my MSI mobo does not take ECC RAM, and I really don't want to upgrade the mobo and RAM at the moment --- Thank you 🙂

  • @guidosaur7506
    @guidosaur7506 a month ago

    I moved everything from the cache to the array (even backed up onto my personal PC), installed a bigger M.2 for the cache drive, did the formatting, and moved everything back from the array to the cache. Now, even though all my data is there, the dockers and VMs I had are gone? What gives?

  •  a year ago

    I'm running my VMs on the cache (btrfs). Should I switch to ZFS? Will I gain or lose performance?

  • @jonh6671
    @jonh6671 a year ago

    Would this work for converting data drives to XFS?

  • @koolkiwikat
    @koolkiwikat a year ago

    What's the advantage of a 3-drive ZFS cache pool vs a 2-drive cache mirror?

    • @sean7949
      @sean7949 a year ago

      In a 3-drive ZFS cache pool (raidz) you only lose one disk's worth of storage. This is equivalent to RAID 5. In mirror mode all drives have the same data, so if you had 3 drives in your cache pool that were each 1TB and you formatted them as ZFS in mirror mode, you would only have 1TB of storage. This is different from raidz, where you would have 2TB of storage and one drive could fail before the entire pool fails.
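
The capacity arithmetic above can be sketched in a couple of lines of shell; the drive count and size are just the 3 x 1TB example from the comment:

```shell
# 3 drives of 1 TB each, as in the example above.
N=3
SIZE_TB=1

# raidz1: one drive's worth of space goes to parity.
RAIDZ1_TB=$(( (N - 1) * SIZE_TB ))

# mirror: every drive holds an identical copy, so usable space is one drive.
MIRROR_TB=$SIZE_TB

echo "raidz1 usable: ${RAIDZ1_TB} TB"
echo "mirror usable: ${MIRROR_TB} TB"
```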

  • @jesper1010
    @jesper1010 a year ago

    Is it possible to encrypt the ZFS cache pool?

  • @theFPVgeek
    @theFPVgeek a year ago

    Yesterday I converted my single SSD cache drive to ZFS using your video as a guide. Everything worked successfully, but now I'm thinking I may have caused an unforeseen issue. I was thinking about upgrading my hardware (motherboard, CPU, RAM), as I'm running on an old 4-core i7-3770. Now that my cache is on ZFS, am I going to have an issue swapping everything over? Should I revert to XFS or BTRFS before upgrading, maybe move the cache data over to the array first, just leave the cache as ZFS, or is there a better option?

    • @sean7949
      @sean7949 a year ago

      You should not have an issue so long as your new motherboard has the necessary connections to host all the different drives you have. But you can always be safe and move the data off the ZFS cache pool.

  • @theshadowduke
    @theshadowduke a year ago

    I followed the instructions and had to reinstall my dockers. Not sure why, but everything except the Jellyfin data was retained.

  • @ashtonk1130
    @ashtonk1130 a year ago

    Could you please do a video for syncthing and seafile?

  • @Dr-AK
    @Dr-AK a year ago

    Is ZFS better than btrfs for cache? Also, if setting up a new Unraid, should I just go with ZFS for all my disks, array and cache? Thank you for all your videos.

    • @sean7949
      @sean7949 a year ago +1

      In my experience btrfs was terrible. I was running 4 1TB NVMe SSDs on PCIe 4.0 and had issues where the Plex metadata/appdata drive would drop out. This never happened on XFS. Also, I've run ZFS on FreeNAS/TrueNAS and it was rock solid. That was a long time ago, but I can't imagine it has gotten worse, only better. Heck, I've still got the logs from when the drive dropped out in Elastic, just because the retention time hasn't passed.

  • @zthemoney
    @zthemoney a year ago

    Hi, can you do a video on folder permissions in Unraid from the terminal? I'm having an issue with the files in the Media folder not being seen by Plex Server.

    • @sean7949
      @sean7949 a year ago

      You should check the user id and group id of the Plex processes. If these are running as a different user than the files are set to, you will have those issues. To find out, left-click on the container, open the console, then enter the command cat /etc/passwd. This will list all users in the container. Copy this to a notepad. Next, in the console, type ps aux. This will list all running processes with their user names. Then compare the results. For instance, in my container the Plex processes are running as the user nobody. This tells you the user id and group id of the running processes. Once you have the uid and gid, you can set the files via the command chown -R. Be very careful with this command: make sure you only affect the files that are needed. Once the proper uid and gid have been set by chown, you should be able to see all your media files.
      Alternatively, you can set your container's processes to run as your user's uid and gid, which are likely the uid and gid of the files. You will have to check whether your container supports this parameter, though. All the linuxserver.io containers support it.
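
The uid/gid comparison described above can be exercised generically. A minimal sketch follows; the /path/to/media in the chown hint is a placeholder, not a real path:

```shell
# Create a throwaway file and compare its owner to the current process,
# the same check you'd do between your media files and the Plex process.
f=$(mktemp)
file_uid=$(stat -c '%u' "$f")
proc_uid=$(id -u)

if [ "$file_uid" -eq "$proc_uid" ]; then
    echo "uid match: this process can own these files"
else
    # /path/to/media is a placeholder; point chown at your actual share.
    echo "uid mismatch: consider chown -R ${proc_uid}:$(id -g) /path/to/media"
fi

rm -f "$f"
```

Since mktemp creates the file as the current user, the sketch prints the "uid match" branch; on a real mismatch you would run the chown shown in the other branch against your media share.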

  • @hermanjones8734
    @hermanjones8734 a year ago

    Can you use NVMe SSDs as the array drives and eliminate the cache drive, all using ZFS?

    • @sean7949
      @sean7949 a year ago

      No. SSDs require TRIM, and this is incompatible with the style of parity the main array uses. Do not include SSDs in the main array. You can, of course, create large cache pools and use them as if they were the array, though! Think 4 8TB SSDs!

  • @Opa_on_Tour
    @Opa_on_Tour a year ago

    Hello,
    can I create a ZFS mirror as an array? Or only as a pool?
    And can I import an old ZFS pool as an array, or only as a pool?
    I ask because with an array I can use an NVMe as primary and the array as secondary storage (I want to use 10 Gbit at home, and that only works with an NVMe as the "cache" drive).
    So big files first go at 1 GB/sec from the PC to the 1TB NVMe on Unraid, and later Unraid moves them to the 16TB mirror.
    On my old Unraid I did this with user scripts, mount bind, etc. That was a lot of console work.

    • @sean7949
      @sean7949 a year ago

      Only as a pool.
      I don't know about importing the old ZFS pool as an array. You shouldn't do this; you'd be better off copying the data over NFS/SMB before attempting that.
      I wouldn't push this idea; ZFS on Unraid is still relatively new, at least the native setup. You would be better off conforming to the normal flow of things.

  • @soana65
    @soana65 a year ago

    The mover option seems like the safest thing for this task, but you could be waiting quite a while for the mover to finish if you have a big media library managed by plex/photoprism/paperless dockers. This is due to the way the mover works: checking every file to see if it's in use, etc. I'm assuming one could just move the data from the cache disk(s) to the array using the unBalance plugin. Does anyone see a risk in doing it this way?

    • @oko2708
      @oko2708 a year ago +1

      I had the same problem. I ended up making a backup using the 'Backup Appdata' plugin and just restored from the backup after reformatting the cache as ZFS.

    • @soana65
      @soana65 a year ago

      @oko2708 Thanks for your feedback. As it turns out, the photoprism docker folder was 250GB, and that took a while to move. On the other hand, the plex folder was only 51GB, but with a million files in a million subfolders. In the end, I decided to wait. It took about three days to move to the array and back to the ZFS cache drive.

    • @oko2708
      @oko2708 a year ago

      @@soana65 Yeah, that wasn't really an option for me since Home Assistant runs on my server and controls my lights. But glad to hear it worked out for you.

  • @boriss282
    @boriss282 a year ago

    Thanks for the great video, as always. Waiting for more ZFS videos: whether it is worth it for array disks, "Exclusive shares", and most of all the problem of "Call traces related to macvlan".

  • @simonrussell4986
    @simonrussell4986 a year ago +1

    If I'm honest, I wasn't that interested in the 6.12 update - Unraid seems to be getting too complicated for me now - the release announcement mostly went over my head, and because of the largely unannounced 2FA system, I've managed to lock myself out of the forums for now. It's a NAS and app set-and-forget server for a lot of people, and I'm not in that bleeding edge group that I think is assumed we all are.
    Long story short, thank you for this (and the 6.12 preview) - it's helped me understand some of this release.

  • @gregfrplockone1466
    @gregfrplockone1466 a year ago

    It would be super nice if you did redos of some of your old videos that are outdated.

  • @mikerufty1307
    @mikerufty1307 a year ago

    Note to self: when converting to ZFS, don't create a domains folder until you have backed up everything in that existing folder... It creates the dataset for you and happily empties the folder for you. Second note to self: change backups from every 2 weeks to nightly... I should have watched your video sooner :)

  • @erodz1892
    @erodz1892 a year ago

    Do all the disks have to be the same size for ZFS to work?

    • @diedrichg
      @diedrichg a year ago +1

      The pool will default to the size of the smallest drive. 8TB+8TB+4TB would actually use the space as though the disks were 4TB+4TB+4TB.
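
A quick sketch of that sizing rule, assuming a raidz1 layout; the 8TB + 8TB + 4TB mix is the example from the comment:

```shell
# ZFS sizes a vdev to its smallest member: 8TB + 8TB + 4TB acts like 3 x 4TB.
N=3
SMALLEST_TB=4

# raidz1 usable space across the effective 3 x 4TB vdev.
USABLE_TB=$(( (N - 1) * SMALLEST_TB ))

# Space given up on the two 8TB drives: 4TB unused on each.
WASTED_TB=$(( (8 - SMALLEST_TB) * 2 ))

echo "usable: ${USABLE_TB} TB, wasted: ${WASTED_TB} TB"
```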

    • @39zack
      @39zack a year ago +2

      If they are not the same size, you waste space.

    • @erodz1892
      @erodz1892 a year ago +2

      @@diedrichg So it's more like a FreeNAS type of scenario, I assume?

    • @diedrichg
      @diedrichg a year ago

      @@erodz1892 Yep, exactly that. I've been running FreeNAS for 10? years. I loved it all the way up to Scale. I +HATE+ Scale. That's why I decided to try unRAID. I LOVE it! I love it because of the ease of Docker, the built-in WireGuard, and JBOD. I'm in the process of migrating box #2 from Scale to Unraid.

  • @CPR9969
    @CPR9969 a year ago

    Isn't the cache drive btrfs and not ZFS?

  • @oko2708
    @oko2708 a year ago

    You're better off just creating a backup of your appdata using the 'Appdata Backup' plugin. Using the mover is incredibly slow if you have lots of small files. My 10GB Plex appdata folder would've taken over 6 hours to move (one way). I just erased the cache drive and restored from the backup; it took about 15 mins.

    • @sean7949
      @sean7949 a year ago

      That is an interesting approach, but not everyone has the appdata backup plugin.

  • @kingdomofsaudiarabia6671
    @kingdomofsaudiarabia6671 a year ago +4

    First comment, Big thanks from Saudi Arabia

    • @SpaceinvaderOne
      @SpaceinvaderOne  a year ago +1

      So nice of you. Thanks for watching and commenting first :)

  • @TaldrenDR
    @TaldrenDR a year ago

    I am really hoping for a video that shows us how to migrate a ZFS pool we created under your original videos in 6.11 to 6.12.0. Like, I'm sort of getting anxious about this eventual upgrade, as redoing this from scratch would be a nightmare for me.

    • @sean7949
      @sean7949 a year ago

      You could always push all your data off those cache pools, remove the cache pools, do the upgrade, and then add them back in on the new system. Avoid all the potential issues.

  • @ryansavenkoff9233
    @ryansavenkoff9233 a year ago

    Thank you!!!!