
Don't use local Docker Volumes

  • Published 18 Aug 2024
  • How to avoid using local Docker Volumes and connect them to a remote NFS Storage Server like QNAP, Synology, etc.? I will show you how to create NFS Docker Volumes in Portainer and connect them to my TrueNAS server. #Docker #Linux #Portainer
    Teleport Tutorial: • How I secure my Server...
    Teleport-*: goteleport.com/...
    Follow me:
    TWITTER: / christianlempa
    INSTAGRAM: / christianlempa
    DISCORD: / discord
    GITHUB: github.com/chr...
    PATREON: / christianlempa
    MY EQUIPMENT: kit.co/christi...
    Timestamps:
    00:00 - Introduction
    01:17 - Why not store Docker Volumes locally?
    03:03 - What is an NFS Server
    03:30 - Advantages of NFS Servers
    04:29 - What to configure on your NAS?
    06:02 - Advertisement-*
    06:35 - Create NFS Docker Volumes
    11:02 - Migrate existing Volumes to NFS
    ________________
    All links with "*" are affiliate links.
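As a rough sketch of what the video walks through in Portainer: the same kind of NFS-backed volume can also be created with the Docker CLI's local driver and NFS options. The server address, export path, and volume names below are placeholders, not values from the video, and the commands need a running Docker daemon plus a reachable NFS export.

```shell
# Hypothetical values: replace 192.168.1.10 and /mnt/pool/docker
# with your NAS address and export path.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=192.168.1.10,rw,nfsvers=4 \
  --opt device=:/mnt/pool/docker/myapp \
  myapp-data

# The volume then mounts like any other:
docker run -d --name myapp -v myapp-data:/data nginx
```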

COMMENTS • 263

  • @NathanFernandes
    @NathanFernandes 2 years ago +9

    Brilliant! I was just looking at something like this today, but instead I was trying to mount my Synology as a CIFS volume. NFS is so much easier, and this video helped me set it up in under 5 minutes. Thank you!

    • @christianlempa
      @christianlempa  2 years ago +1

      Thanks! :) You're welcome

    • @therus000
      @therus000 1 year ago +1

      Did it work for you?
      I just followed the instructions, but I cannot create subfolders as volumes.
      I get error 500. I tried mapping root to admin, but got the same problem (error 500). If I create the volume in the root of the NFS folder it works, but I'm not comfortable with that.
      Any help? I'm on DSM 7.

    • @a5pin
      @a5pin 1 year ago

      @@therus000 Hey I have the same problem, did you manage to resolve this?

  • @HoshPak
    @HoshPak 2 years ago +25

    Some food for thought...
    Most NAS systems have two network interfaces. You could attach the NAS directly to the server on a separate VLAN and optimize that network for I/O, enabling jumbo frames, etc. That basically makes it a SAN without the redundancy.
    Using iSCSI instead of NFS is also an option and might be preferred for database workloads I assume.

    • @kamilkroliszewski689
      @kamilkroliszewski689 2 years ago +1

      Exactly, we used NFS as storage for databases and it didn't handle it very well. Sometimes you get locks, etc.

    • @christianlempa
      @christianlempa  2 years ago +6

      Thanks for sharing your experience, guys, I have just a little experience with it. FYI, I was playing around with jumbo frames on a direct connection between my PC and TrueNAS, and it worked pretty well. VLANs are a topic for a future video as well :D So stay tuned!

    • @macntech4703
      @macntech4703 2 years ago +3

      I also think that iSCSI might be the better choice compared to NFS or SMB, at least in a homelab environment.

    • @HoshPak
      @HoshPak 2 years ago

      @@christianlempa I've been through that endeavor just recently. When I searched for tagged VLANs on Linux, the documentation was hopelessly outdated, referring to deprecated tools.
      My advice: disable anything that messes with net devices even remotely (i.e. NetworkManager) and go straight for systemd-networkd. I've built a VLAN-aware bridge which functions just like a managed switch. Virtual NICs from KVM attach to it automatically, as do Docker containers. This is also a pretty good way to use tagged VLANs inside VMs, which is otherwise hard to do on KVM.
      If you'd like some help at the beginning, let me know. We will get you started. :)

    • @gordslater
      @gordslater 2 years ago

      Last time I did this I had the best results using ATAoE. It's not used very much nowadays, but it is very fast for local links (it's non-routable, but ideal for single-rack use).

  • @MichaelWDietrich
    @MichaelWDietrich 3 months ago +1

    Great walkthrough and howto, thanks for that. Nevertheless, a small criticism at this point: NFS volumes and snapshot backups on the target NAS IMHO do not replace an application-based backup. Of course (for example in the event of a power failure, but also due to other technical and organizational problems) the volumes on the NFS can be destroyed and become inconsistent in the same way as those on the local machine. This is even more likely because the writing process over NFS relies on more technical components. That's why I also do, and highly recommend, application-based backups with at least the same frequency. If the application's backup algorithm is written sensibly, it will only complete the backup after a consistency check, and then it is clear that at least this backup is not corrupted and can be restored without data loss.

  • @ColoredBytes
    @ColoredBytes 1 year ago +2

    I followed this for my Windows NAS to share to Docker, and this is so much easier than doing host-level mounts. Thank you so much!

    • @christianlempa
      @christianlempa  1 year ago

      You’re welcome :)

    • @ColoredBytes
      @ColoredBytes 7 months ago

      @@christianlempa Hello again! I was wondering if by chance you knew how to specify this in a docker-compose file? I'm trying to make a template, and this seems to be my only stopping point atm.

  • @anthonyjhicks
    @anthonyjhicks 2 years ago +4

    This was awesome. Perfect timing as exactly the next step I wanted to make with my volumes on Portainer.

  • @ninadpchaudhari
    @ninadpchaudhari 2 years ago +26

    Btw, one point I need to add here: this doesn't mean you don't need backups. Having redundant storage over NFS is nice, but do ensure you still have restorable backups in addition to this.
    There are many things that can go wrong here: your FS might get corrupted if there was a network problem while writing, the RAID might fail, etc.

    • @christianlempa
      @christianlempa  2 years ago +4

      Absolutely! Great point, and I might explain this in future videos :)

    • @gullijons9135
      @gullijons9135 2 years ago +3

      Good point, "RAID is not backup" is something that really needs to be hammered home!

  • @blairhickman3614
    @blairhickman3614 2 years ago +1

    I wish I had found this video last week. It would have saved me hours trying to mount an NFS share on my Ubuntu server. I also ran into the user permission issue, and it took a lot of searching to find the answer.

  • @fx_313
    @fx_313 2 years ago +7

    Hey! Thank you very much for this.
    Can you also give an example of how to use an NFS mount in docker-compose / Stacks in Portainer? I've spent the last couple of hours trying and googling, but wasn't able to find a real answer or consistent examples of how to do it.

    • @ColoredBytes
      @ColoredBytes 7 months ago

      I'm having the same issue lol, did you by chance find a way to do this?
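Since several commenters are asking: docker-compose can declare the same kind of NFS volume via `driver_opts`. A minimal sketch, assuming a hypothetical NAS at 192.168.1.10 exporting /mnt/pool/docker (adjust address, export path, and NFS version for your setup):

```yaml
services:
  app:
    image: nginx
    volumes:
      - app-data:/usr/share/nginx/html

volumes:
  app-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=192.168.1.10,rw,nfsvers=4"
      device: ":/mnt/pool/docker/app"
```

Note that compose does not update an existing volume's options; if you change `driver_opts` later, you have to remove and recreate the volume.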

  • @steverhysjenks
    @steverhysjenks 1 year ago +4

    How do you replicate this in docker compose? I get these steps, and it's great for me to understand them manually. I'm not convinced my compose is working correctly for NFS. I'd love to see this explanation converted into a docker compose example.

  • @Muhammad_Hannan9914
    @Muhammad_Hannan9914 2 years ago +3

    I read somewhere that NFS is not secure when used with containers, as it also opens access for container processes to get into the main OS when we expose the server file system to a Docker container. What are your thoughts on this? Anyone?

  • @ercanyilmaz8108
    @ercanyilmaz8108 1 year ago

    You can also run Docker containers on TrueNAS itself and connect to the NFS locally. If I'm right, writes can then be handled synchronously, which guarantees data integrity.

  • @jmatya
    @jmatya 5 months ago

    Many of the big datacenters also use Fibre Channel based storage; network-attached storage can be slow and subject to TCP congestion and packet loss, whereas FC has guaranteed delivery.

  • @marcoroose9973
    @marcoroose9973 2 years ago +2

    When migrating, cp -ar may be better as it copies permissions, too. Nice video! Need to figure out how to do this in docker-compose.

    • @christianlempa
      @christianlempa  2 years ago +1

      Oh great, thank you ;)

    • @ansred
      @ansred 2 years ago +1

      Would be appreciated if you figured it out and shared how to use docker-compose on Portainer. That would be really handy!
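A sketch of the migration idea discussed in this thread, using a throwaway container that mounts both volumes (the volume names are placeholders; cp -a preserves ownership, permissions, and timestamps):

```shell
# Stop the app first so files aren't changing mid-copy,
# then copy from the old local volume into the new NFS volume.
docker stop myapp
docker run --rm \
  -v myapp-local:/from \
  -v myapp-nfs:/to \
  alpine sh -c "cp -a /from/. /to/"
```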

  • @Grand_Alchemist
    @Grand_Alchemist 4 months ago +1

    Even with the "wheel" user added to TrueNAS, NFS refused to work for Deluge / Sonarr / Radarr (CentOS using docker compose). I ended up making an SMB share (yes, Microsoft, blasphemy!) and it works perfectly. So much less of a headache than NFS, PLUS it's actually secure (authenticated with a password and ACLs). So, yeah. Unexpected, but I would just recommend making a fricking SMB share.

  • @iggienator
    @iggienator 5 days ago

    Thanks, this helped me out 👍

  • @Billyfelicianojp
    @Billyfelicianojp 1 year ago +1

    I am having issues installing a stack in a volume. The volume is already added in Portainer, and I can see it and have tested it, but in my yml I can't figure out how to add Nextcloud to the volume I want.

  • @haxwithaxe
    @haxwithaxe 2 years ago +4

    Glusterfs works really well for small files or small file transfers as well.
    Edit: I've since been told databases don't scale well on glusterfs. IDK how much experience that person has with glusterfs but they have enough experience with k8s for me to accept it until I can test it. Works great for the really small stuff I do in my home lab though.

    • @christianlempa
      @christianlempa  2 years ago

      Oh yeah that's an interesting topic

    • @NicoDeclerckBelgium
      @NicoDeclerckBelgium 3 months ago

      True, but you have Galera Cluster for MySQL/Mariadb or just replication in PostgreSQL.
      But I get that the real problem is the 'other' dozens of databases.
      They can still be on a centralised storage, but as you say they don't scale well ...

  • @G00SEISL00SE
    @G00SEISL00SE 1 year ago

    I had been stuck for weeks with an NFS volume not mounting right inside my containers. First I couldn't edit the created files; then I could, but couldn't create new ones. This fixed both my issues, thanks. Going to set up my shares like this moving forward.

  • @kanarie93
    @kanarie93 1 year ago +3

    Isn't it better to mount the NFS volume at /mnt/NFS on the host running Docker, so you have one connection open instead of hundreds, with every container opening its own connection? Or is that not possible when you go Docker Swarm?

    • @KilSwitch0
      @KilSwitch0 7 months ago

      I have this exact question. This is the way Unraid handles it. I think I will duplicate Unraid's approach.
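One way to get the single-connection behavior asked about here, sketched with a placeholder address and paths: mount the export once on the host via /etc/fstab, then bind-mount subdirectories into containers. The trade-off is that the mount is no longer managed by Docker, so the host must mount it before containers start.

```shell
# /etc/fstab line (one NFS mount shared by all containers):
#   192.168.1.10:/mnt/pool/docker  /mnt/nfs  nfs  rw,nfsvers=4  0  0
sudo mount /mnt/nfs

# Containers then use plain bind mounts into that path:
docker run -d --name myapp -v /mnt/nfs/myapp:/data nginx
```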

  • @Dough296
    @Dough296 2 years ago +2

    What if you need to perform an update on TrueNAS that requires rebooting the NFS service or the TrueNAS system?
    Will the Docker container wait for the NFS service to come back without having trouble with data consistency?
    I have that setup myself, but when I want to restart my NAS it's a real pain to stop everything that depends on it...

  • @ilco31
    @ilco31 2 years ago +1

    This is great to know. I am currently looking into how to back up my more sensitive Docker containers, like Vaultwarden or Nextcloud. Great video!

    • @Clarence-Homelab
      @Clarence-Homelab 2 years ago

      I mainly use bind mounts for my persistent Docker storage (had MAJOR issues with Docker databases over CIFS or NFS) and an awesome Docker image for volume backups: offen/docker-volume-backup.
      It stops the desired containers before a backup, creates a tarball, sends it to an S3 bucket on my TrueNAS server, spins the stopped containers back up, and lastly a cloud sync task on my TrueNAS encrypts the data before pushing the backup to the cloud.

    • @christianlempa
      @christianlempa  2 years ago

      Glad it was helpful!

    • @christianlempa
      @christianlempa  2 years ago

      What were the issues with the DBs?

  • @rino19ny
    @rino19ny 2 years ago

    It's the same thing. A storage server can also crash, and you have a more complicated setup with NAS storage. Whichever method you select, a proper backup is best.

  • @ronaldronald8819
    @ronaldronald8819 2 years ago +1

    I am triggered into high-gear learning mode by all of this. The aim is to set up a Home Assistant server. HA runs in Docker and stores its data in local volumes. I am no fan of having my data all over the place, so this video solves that problem. The next step is to get my hands dirty and hope I don't get too many errors that exceed my domain of knowledge. Thanks!!

  • @Steve3dot1416
    @Steve3dot1416 1 year ago +1

    Some containers don't like shares, those that use SQLite as a database for example. I had big performance issues with Lidarr that were caused by database lock issues, because SQLite does not work well on a share. I understood this has something to do with file links. I had to fall back to local volumes because of this.

  • @streambarhoum4464
    @streambarhoum4464 1 year ago +1

    Christian, how do you do a disaster recovery of the entire system? Could you simulate an example?

  • @MarkJay
    @MarkJay 8 months ago

    What happens when you need to reboot or shut off the storage server? Can the Docker containers stay running, or do they need to be stopped first?

  • @huboz0r
    @huboz0r 1 year ago +1

    Btw, one point I need to add here: why share all of your files as root? You could just make a new group and user(s) specifically for accessing your files, and map your NFS shares to them. It is belt and suspenders, since you only expose NFS to a specific IP; however, not using root whenever possible is the way forward. Probably why it got removed as a default in the new release.

    • @christianlempa
      @christianlempa  1 year ago +2

      Yep that’s something I need to get fixed in the future
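On a plain Linux NFS server, the non-root mapping suggested above looks like the /etc/exports sketch below; TrueNAS and Synology expose equivalent "maproot"/"squash" settings in their UIs. The client IP, export path, and UID/GID are illustrative only.

```shell
# /etc/exports: restrict the export to one client and squash all
# access to a dedicated unprivileged user (uid/gid 3000 are placeholders):
#   /mnt/pool/docker 192.168.1.20(rw,sync,all_squash,anonuid=3000,anongid=3000)

# Re-export after editing:
sudo exportfs -ra
```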

  • @Telmosampaio
    @Telmosampaio 2 years ago

    I usually set up a cron job to copy volumes and database exports to AWS S3, and then another cron job to delete files older than 1 month!
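That rotation scheme can be sketched as crontab entries, assuming the AWS CLI is configured; the database, paths, and bucket name are hypothetical:

```shell
# crontab -e (illustrative):
# 02:00 nightly database export, 02:30 sync to S3, weekly cleanup of
# local dumps older than 30 days ('%' must be escaped in crontab).
0 2 * * *  pg_dump mydb | gzip > /backups/mydb-$(date +\%F).sql.gz
30 2 * * * aws s3 sync /backups s3://my-backup-bucket/
0 3 * * 0  find /backups -name '*.sql.gz' -mtime +30 -delete
```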

  • @black87c4
    @black87c4 2 years ago +8

    NFS stands for "Not For Servers" ;-) I've been a sysadmin for about 30 years now, and always cringed when I got a request for NFS on any app. For home use I'd be OK with it, though.

    • @dragonsage6909
      @dragonsage6909 2 years ago +1

      What would you suggest for a production system instead!?

    • @samuelbrohaugh9539
      @samuelbrohaugh9539 2 years ago +1

      Great question, please tell us what you recommend instead

    • @dragonsage6909
      @dragonsage6909 2 years ago

      @@samuelbrohaugh9539 I'm asking you because I'm using NFS.. not being sarcastic, trying to learn the most secure best method.. if you know one.. thanks in advance! :)

    • @black87c4
      @black87c4 2 years ago +2

      @@dragonsage6909 Using a cluster filesystem is typically much better, but I understand doing clusters is typically not cheap either. I guess it depends on what you're talking about app-wise. Is it a high-availability app where downtime costs money, or something simple that can handle user downtime? NFS will fail at some point: the mount becomes stale, network issues lock up apps, etc. It's simple to implement and easy to use, but easy to fail. For home use it's rarely an issue; for real production use, make sure expectations are met and understood. Just experience talking.

    • @dragonsage6909
      @dragonsage6909 2 years ago

      @@black87c4 awesome answer, thank you. I'm looking at some other options now, think I've got it.. will update asap

  • @pedro_8240
    @pedro_8240 4 months ago

    And how do I use NFS to mount the initial portainer data volume, before configuring portainer?

  • @yangbrandon301
    @yangbrandon301 1 year ago

    Thank you. A great tutorial for NFS.

  • @wuggyfoot
    @wuggyfoot 1 year ago

    Wow dude, when you typed root in that box you solved all of my problems.
    Crazy how you're the best source of information I have found on all of the internet.

  • @desibanjankri5646
    @desibanjankri5646 2 years ago

    LOL, I spent a week figuring this exact thing out. I wanted Photoprism to use pictures from my backup server rather than import files into Docker. Just got it working last night. 😂

  • @tuttocrafting
    @tuttocrafting 2 years ago

    I have a docker user and group on both the Docker and NAS machines. I use the same UID and GID in the container, set via env variables.

  • @AJMandourah
    @AJMandourah 2 years ago +1

    I have read some people complaining about database corruption using NFS as their cluster storage. I haven't tried it personally, and I am currently using CIFS mounts for my Docker Swarm. I was wondering if you have tried GlusterFS, as it seems to be recommended for cluster volumes in general.

    • @christianlempa
      @christianlempa  2 years ago +2

      I've heard that a couple of times, but never found any resources or details on why this should be the case. Could you kindly share some insights? Thanks

  • @nixxblikka
    @nixxblikka 2 years ago

    Nice video, looking forward to true nas content!

  • @Dyllon2012
    @Dyllon2012 2 years ago +3

    For databases, I feel you'd be better off just taking backups and keeping a read replica or two. You'll almost certainly get better performance plus you'll be able to recover faster with the replica.
    If your app isn't a database, it should probably not be saving important data directly to disk unless you're doing some ad hoc operation (like running tests) where a local volume is fine.
    The NAS is probably more convenient for transferring files, I'll give it that.

    • @christianlempa
      @christianlempa  2 years ago +1

      I've heard that a couple of times, but never found any resources or details on why this should be the case. Could you kindly share some insights? Thanks

  • @reppich1
    @reppich1 3 months ago

    You are so deep in the weeds here that it's not accessible to people who are still finding their way... the level this is at is for people who don't actually need this.

  • @FunctionGermany
    @FunctionGermany 2 years ago +4

    Don't you need higher-tier networking to make sure you're not suffering any performance penalties? I can imagine that the additional latencies can make certain applications run slower when all file system access has to go through 2 network stacks and the network itself.

    • @charlescc1000
      @charlescc1000 2 years ago

      I have been running some basic self-hosted containers on my servers in a similar configuration to the one laid out in this video (Ubuntu server with Portainer & Docker connected to TrueNAS for storage). My TrueNAS is set up with mirrored pairs instead of raidz/raidz2, but it's still over a 1GbE LAN.
      It's been fine. Yes, I'm sure 10GbE would improve it, but it's plenty usable for most containers.
      I originally set it up as a test environment before buying 10GbE hardware, and then it worked so well that I decided not to bother with 10GbE (yet).
      It's not great with a VM that has a desktop environment, but it's been fine with server VMs. Not fantastic, but fine.

    • @christianlempa
      @christianlempa  2 years ago +1

      Thanks for sharing your experience! I'm running it with a 10GbE connection, but I highly doubt this makes a huge difference in this case. For VM disks this might be totally different, of course, but for Docker volumes 1GbE should be fine.

    • @ElliotWeishaar
      @ElliotWeishaar 2 years ago

      You are correct. The approach outlined in the video is great! And I would recommend this approach to pretty much anyone starting with docker. As you continue your journey you will have to adapt to the requirements of what you're hosting. You are correct in saying that some applications don't play well with storage over the network. Plex is a big one. I tried hosting my plex library (the metadata, not the media) on NFS, and the performance was atrocious. The application was unusable, and I had to switch to storing the data locally. I suspect it has to do with SQLite performing tons of IOps which NFS couldn't handle. This was with a dedicated point to point 10GBe connection as well. I was using bind mounts instead of docker volumes but I don't think that created the issue (could be wrong). I have other applications that have experienced this as well. I've resorted to having all of my data be local on the machine, and then just create backups using autorestic.

    • @nevoyu
      @nevoyu 2 years ago

      You're not serving hundreds of connections at a time, so you really don't need as much performance as you think. I run a 1Gb connection to my homelab with a 4-disk RAID 10 array; I can't saturate the full bandwidth of the connection, but I have no issues with performance watching 1080p (since I don't have a single display that 4K makes sense on).

    • @robmcneill3641
      @robmcneill3641 2 years ago +1

      @@ElliotWeishaar I ran into the same issues with Plex. I did end up storing the media remotely but could never get the library data to work reliably.

  • @MadMike78
    @MadMike78 1 year ago

    Love your videos. Question: how would I use Portainer to add a new volume to an existing container? I found how to add the volume, but after that I don't know if anything needs to be copied over.

  • @yotuberrable
    @yotuberrable 1 year ago +1

    In this case I assume the NAS server must always be started before the Docker server, and shut down in reverse order; otherwise I assume containers will just fail to start. How do you guys handle this?

    • @MarkJay
      @MarkJay 8 months ago

      I also would like to know how to handle this.

  • @GundamExia88
    @GundamExia88 2 years ago +1

    Great video! I have a question: if I have mounted the volume to the NFS in /etc/fstab, do I still need to create the NFS volume? Couldn't I just point to the mounted NFS on the host? What's the advantage of creating a new NFS volume in Portainer? Is it just to make it easier to migrate from NFS to NFS? Thx!

    • @christianlempa
      @christianlempa  2 years ago

      It's just for easier management. If you already have NFS mounted on the host, that is totally fine

  • @rw-xf4cb
    @rw-xf4cb 2 years ago

    Could use iSCSI targets perhaps; that worked well with VMware ESX years ago when I didn't have a SAN.

  • @maxime_vhw
    @maxime_vhw 13 days ago

    The NFS share mounts fine to my test Ubuntu container, but I can't access it. Permissions issue.

  • @xD3adp0olx
    @xD3adp0olx 24 days ago

    Is it possible with samba/cifs as well?

  • @markobrkusanin4745
    @markobrkusanin4745 2 years ago +1

    GlusterFS would be an even better option to manage data inside Docker Swarm.

    • @christianlempa
      @christianlempa  2 years ago

      I'm so interested in these filesystems; once I finish my projects I'll start looking at them.

  • @Sama_09
    @Sama_09 1 year ago

    Once I installed nfs-common, things just worked!! nfs-common was the missing piece.

  • @RiffyDevine
    @RiffyDevine 10 months ago

    The folder/volume I created inside the container is owned by user 568, so I can't access the /nfs folder in my container. Why did it use that user ID instead of root?

  • @D76-de
    @D76-de 8 months ago

    First of all, I don't have much knowledge about infrastructure... T,T
    Can the NFS from a TrueNAS VM be delivered to a container volume without a 1Gb network bottleneck?
    Both TrueNAS and the container (Ubuntu VM) run on Proxmox.

  • @gunnsheridan2162
    @gunnsheridan2162 8 months ago

    Hi Christian, thanks for the informative video. I have two questions though:
    1. What is the correct way of setting up a user with the same user ID and group ID on the NFS server and client? I have an RPi with user 1000:1000. Such a user doesn't exist on my Synology. Should I add a new user to the Synology? Or should I pick one of the Synology user IDs and create such a user on the Raspberry Pi? If so, how do you create a user with specific IDs?
    2. What about file locking over NFS? I had issues with network-stored (Samba/CIFS) data containing an SQLite database, for example Home Assistant and Baikal. I couldn't network-store MariaDB or MySQL either, due to some "file locking issues".
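On question 1, one common approach (a generic Linux sketch, not Synology-specific): pick a numeric UID/GID that the NAS already maps, and either create a matching user on the client or simply run the container as that ID. The IDs and names below are placeholders.

```shell
# Create a group/user with explicit numeric IDs on the client
# (3000:3000 are placeholders; match whatever your NAS uses):
sudo groupadd -g 3000 nasusers
sudo useradd -u 3000 -g 3000 -M -s /usr/sbin/nologin nasuser

# Or skip the client-side user entirely and run the container as that uid:gid:
docker run -d --user 3000:3000 -v myapp-nfs:/data nginx
```

NFS matches on numeric IDs, not names or passwords, which is why only the UID/GID needs to line up.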

  • @StephenJames2027
    @StephenJames2027 1 year ago

    7:48 After many hours I finally figured out that I needed NFSv4 enabled on TrueNAS to get this to work on my setup. I kept getting Error 500 from Portainer when attempting this with the default NFS / NFSv3. 😅

  • @Ogorodovd
    @Ogorodovd 11 months ago

    @christianlempa Thanks Christian! Could I install Portainer on a Debian VM within TrueNAS Scale, and then communicate with that? Or are you using an entirely separate machine for your Portainer server?

  • @macenkajan
    @macenkajan 2 years ago

    Hi Christian, great videos. I would love to see how to use this NFS (or maybe iSCSI) setup with a Kubernetes cluster. This is what I am trying to set up right now ;-)

    • @christianlempa
      @christianlempa  2 years ago

      Thanks mate! Great suggestion, maybe I'll do that in the future.

  • @gonzaloamadorhernandez7020
    @gonzaloamadorhernandez7020 2 years ago

    Oh my gosh!!! You are a crack !!! Thank you very much, mater

  • @anthonycoppet8788
    @anthonycoppet8788 8 months ago

    Hi. I created an NFS volume on the server and can connect with the Synology, but in Portainer the NFS volume appears empty. The server volume is nobody:nogroup.

  • @zaberchann
    @zaberchann 2 years ago +1

    One concern with using NFS in a home lab is that NFS needs the user to ensure the local network is safe; otherwise security is compromised, since the only auth is the IP address (of course you can use Kerberos, but it's too hard to configure). Besides, a malicious Docker container could connect to the NFS by using the host IP.

  • @manutech156
    @manutech156 2 years ago

    Any plans to do a tutorial for Kubernetes Persistent Volume to TrueNAS NFS ?

  • @myeniad
    @myeniad 1 year ago

    Great explanation. Thanks!

  • @ettoreatalan8303
    @ettoreatalan8303 9 months ago

    On my NAS, CIFS is enabled for the Windows computers connected to it. NFS is disabled.
    Is there any reason not to use CIFS instead of NFS for storing Docker volumes on my NAS?

  • @area51xi
    @area51xi 8 months ago

    When I try to deploy I keep getting a "Request failed with status code 500." error message.

  • @rodrigocsouza8619
    @rodrigocsouza8619 10 months ago

    @christianlempa Is the activation of NFSv4 really that simple?? I've tried exactly what you did, and the mount always fails returning "permission denied". I tried to dig into the subject, and it looks like NFSv4 requires a lot of effort to get working.

  • @scockman
    @scockman 2 years ago

    Another AWESOME video!! But I saw in the video that you have a portainer_data volume on the NFS share. How was that done? I have been trying to get this to work, but I get a Docker error while trying to mount the volume.

    • @christianlempa
      @christianlempa  2 years ago

      Thanks! You need to do it outside of the gui with docker cli commands unfortunately.

  • @Got99Cookies
    @Got99Cookies 2 years ago +1

    Wouldn't raidz2 be safer, especially with a 12-drive array? Good video though! Docker volume management is something very important, and you made some very good points!

    • @christianlempa
      @christianlempa  2 years ago +1

      Yeah, it would be. You could argue whether that would be a better option; I still think it's unlikely that more than one hard drive fails at the same time, but hey... people have undoubtedly seen this in the wild. That's why offsite backups are important.

    • @devinbuhl
      @devinbuhl 2 years ago +2

      You would be surprised at how easy it is for another drive to die while your pool is resilvering from a 1 disk failure.

  • @HelloHelloXD
    @HelloHelloXD 2 years ago +1

    Great topic. One question. What is going to happen to the docker container if the connection between the NFS server and docker server is lost?

    • @christianlempa
      @christianlempa  2 years ago

      The container will fail to start

    • @HelloHelloXD
      @HelloHelloXD 2 years ago +1

      @@christianlempa what if the container was already running and the connection was lost?

    • @vladduh3164
      @vladduh3164 2 years ago +1

      @@HelloHelloXD It seems the container just keeps running, but it may not be able to do anything. I just tested this with Sonarr, as I had the /config folder in the NFS volume, and it seemed to work as long as it didn't need anything from that folder. When I clicked on a series it just showed me a loading screen until I reconnected it. I suppose the answer is: it depends entirely on what folders you put in that volume and how gracefully the application handles losing access to those files.

    • @HelloHelloXD
      @HelloHelloXD 2 years ago +1

      @@vladduh3164 thank you.

  • @schrank392
    @schrank392 5 months ago

    How do you draw this stuff, like at 2:30?

  • @226cenk9
    @226cenk9 1 year ago

    This is nice, but is there a way to use a local directory on the host instead? I have Docker installed on my Ubuntu 22.04 and it would be nice to use local directories.

  • @vmdcortes
    @vmdcortes 1 year ago

    Awesome!!
    Is this a good solution for sharing a Docker Swarm volume between the different nodes?

    • @christianlempa
      @christianlempa  1 year ago

      Thx! I'm not sure about that, I think NFS is still the easiest for my setup.

  • @TheManuforest
    @TheManuforest 1 year ago

    Hello guys... I had a power failure and all my Docker volumes are gone. Is this predictable behavior? Are they still there on disk? Thanks

  • @devinbuhl
    @devinbuhl 2 years ago +10

    A couple of misleading things in this video: 1. Snapshots and RAID are not a backup. 2. Not all workloads can use a network file share for their persistent data. Anything that relies on file locking is best put on block storage; that includes Postgres and SQLite-with-WAL databases. If you choose to use NFS for these, you are risking data corruption.
    It's important to know the difference between file, object and block storage and when to use each one. Educate yourselves!

    • @ebeltran
      @ebeltran 2 years ago +1

      Any recommended videos on the subject?

    • @christianlempa
      @christianlempa  2 years ago +2

      Thanks for giving me another video topic :)

  • @KR1ML0N
    @KR1ML0N 2 years ago +2

    I use all my docker volumes via NFS. It works pretty well.

    • @christianlempa
      @christianlempa  2 years ago

      Thanks for sharing! It also works great for me ;)

  • @tl1897
    @tl1897 2 years ago

    I tried this some time ago. Sadly, my Pi 4 with 3 HDDs in RAID 5 using mdadm was not fast enough.
    So I decided to keep my deployment files on the NFS, but volumes locally.
    And I wrote backup scripts for the rest.

  • @macpclinux1
    @macpclinux1 2 years ago

    If I ever go crazy and have to set up a dirty Docker system, I'll try to remember this. It seems really helpful, and IMO any possible improvement is a godsend with Docker (I really hate it, ngl; also not a big fan of Kubernetes).

  • @solverz4078
    @solverz4078 1 year ago

    What about storing Portainer's volumes on an NFS share too?

  • @esra_erimez
    @esra_erimez 2 years ago

    Would you please do a video about using Ceph Docker RBD volume plugin?

    • @christianlempa
      @christianlempa  2 years ago

      Hmmm I need to look that up, sounds interesting

  • @ViktorKrejcir
    @ViktorKrejcir 2 years ago +1

    Next level: Longhorn :)

  • @nevoyu
    @nevoyu 2 years ago

    You don't need a "NAS operating system"; any operating system can act as a NAS as long as it supports some form of network file sharing (SSH, NFS, SMB, iSCSI, etc.).

  • @alexanderbradley5009
    @alexanderbradley5009 2 years ago +1

    Rook Ceph could also be an alternative to NFS

    • @christianlempa
      @christianlempa  2 years ago +1

      I'm so interested in these filesystems, once I finish my projects I start looking at them

  • @vidiokupret
    @vidiokupret 2 years ago

    Thank you, it really helps

  • @j4nch
    @j4nch 1 year ago

    I'm far from an expert on Linux, and there's something I'm missing about permissions: when you say that we need the same user with the same permissions on the NFS server and in the Docker image, how does that work? I thought that just having the same user ID or the same username isn't enough, no? I mean, they could have different passwords?
    Also, what about the performance implications? I'm thinking of moving my Plex server into a Docker container with its storage on an NFS volume; could this be an issue?
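    On the permission question: with the default NFS security flavor (AUTH_SYS), the client simply sends numeric UID/GID values; usernames and passwords are never checked, so only the numeric IDs have to match on both sides. A sketch, assuming UID/GID 1000 owns the files on the export:

    ```yaml
    services:
      plex:
        image: lscr.io/linuxserver/plex  # example image; linuxserver images honor PUID/PGID
        environment:
          PUID: "1000"  # must match the numeric UID that owns the files on the NFS export
          PGID: "1000"  # must match the numeric GID on the export
    # For images without PUID/PGID support, the Compose `user:` key does the same job:
    #   user: "1000:1000"
    ```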

  • @Felixls
    @Felixls 20 days ago

    Yeah, well, any application using an SQLite database will get corrupted sooner or later on a volume backed by a network share.
    I've lost so many hours and tried so many different solutions (CIFS, NFS, GlusterFS, you name it); it doesn't work, and it never will.
    The only solution is to use a local directory or a Docker volume (which is local).

  • @Spydaw
    @Spydaw 2 years ago

    Awesome video, thank you for explaining this. I am doing the exact same with all my pods in k3s ;)

    • @christianlempa
      @christianlempa  2 years ago +1

      Oh, that is cool! I'm planning that as well in my k3s cluster I'm currently building ;)

    • @Spydaw
      @Spydaw 2 years ago

      @@christianlempa Feel free to ping me if you have any questions ;)

  • @GSGWillSmith
    @GSGWillSmith 1 year ago

    I don't think this is working anymore. It used to work, but now on TrueNAS 13 I keep getting this error with new volumes I create (both via the stack editor and in Portainer):
    failed to copy file info for /var/lib/docker/volumes/watchyourlan_wyl-data/_data: failed to chown /var/lib/docker/volumes/watchyourlan_wyl-data/_data: lchown /var/lib/docker/volumes/watchyourlan_wyl-data/_data: invalid argument

  • @Photograaf11
    @Photograaf11 2 years ago +1

    Hi!
    Would it also be possible to do something similar for the "stacks" that are created with Portainer?
    Or maybe this is stupid; images are not downloaded over and over again when there is an update, for example (like a local cache system).
    Great video, as usual!

    • @christianlempa
      @christianlempa  2 years ago +1

      Thanks! I guess that's working as well, but I haven't looked into compose yet

  • @ninji4182
    @ninji4182 8 months ago

    How do I do this with WSL 2 and a Synology NAS?

  • @raylab77
    @raylab77 2 years ago

    Will this work with a backup solution such as pCloud?

  • @jonath1235
    @jonath1235 1 year ago

    I can't seem to create a volume from my QNAP in Docker. Can you help?

    • @jonath1235
      @jonath1235 1 year ago

      These are my exports: "/share/CACHEDEV1_DATA/Dockerdata" *(sec=sys,rw,async,wdelay,insecure,no_subtree_check,no_root_squash,fsid=9e50b469aef8f8a22013f16b7d3f69f9)
      "/share/NFSv=4" *(no_subtree_check,no_root_squash,insecure,fsid=0)
      "/share/NFSv=4/Dockerdata"

  • @mydogsbutler
    @mydogsbutler 8 months ago

    Have to disagree with the thesis that it's better to have the DB on a remote server. It doesn't follow the 3-2-1 strategy for backups; from a disaster-recovery standpoint you are better off having two copies rather than one. It's also slower, as data has to make a round trip to the remote server instead of staying local. That said, it might have a use case for a local lab or test server (e.g., the remote share could be ZFS, whereas the Docker instance is on a cheap single-drive mini PC partitioned with ext4).

  • @ailton.duarte
    @ailton.duarte 11 months ago

    Is it possible to use ZFS pools?

  • @gjermundification
    @gjermundification 2 years ago

    I run my local storage via lofs across several zpools. Not sure why anyone would do anything as complicated as Docker when there are OpenSolaris zones on ZFS. In essence, I run the server application part on a zpool that is in RAM and NVMe, and storage in RAM and spinning drives. ZIL, L2ARC, and all...
    I use NFS between the Mac and the media servers.

    • @christianlempa
      @christianlempa  2 years ago

      There are a couple of reasons why Docker is useful ;)

  • @martinzipfel7843
    @martinzipfel7843 1 year ago

    I've been trying to do this for hours now and always run into permission issues. My users on the Docker host and on the NAS are exactly the same (same username, password, UID, GID), and I get "permission denied" when I just try to cd into the NAS folder from the Ubuntu test container. Anyone have an idea?

  • @RileySalm
    @RileySalm 2 years ago

    I did this a few days ago and it corrupted several of my containers. Please be careful and have backups if you do this.

  • @crckdns
    @crckdns 1 year ago

    I'm not running Docker because I couldn't manage to run one single "package" on my QNAP in Container Station (had some network problems).
    That's why I'm using natively running installations all the way, without any voodoo in a package.

  • @florianlauer7591
    @florianlauer7591 2 years ago

    Hi!
    What tool are you using for drawing and marking directly on the screen with the mouse?

    • @christianlempa
      @christianlempa  2 years ago +1

      Hi, I'm using EpicPen and my Galaxy Tab as a drawing screen.

  • @SebastianSchuhmann
    @SebastianSchuhmann 1 year ago

    Did you experience problems with containers using NFS mounts after a reboot?
    Until now I used NFS only by mounting it on the host and bind-mounting Docker volumes to those host paths.
    Since I switched to the "direct mount" of NFS on the Docker host, specified in the stack code, all these containers fail after rebooting my CoreOS server.
    After restarting them, they start fine.
    It seems like the NFS service is not yet available at boot time, when the containers try to start but cannot mount their volumes.
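    One possible fix, assuming the host runs systemd (CoreOS does): delay the Docker daemon until the network is actually online, and give the containers a restart policy so Docker retries any that failed their first mount. The unit and option names are standard systemd, but treat this as a sketch:

    ```
    # /etc/systemd/system/docker.service.d/wait-for-network.conf
    # Start the Docker daemon only after the network is up, so NFS volume
    # mounts at container start have a reachable server.
    [Unit]
    Wants=network-online.target
    After=network-online.target
    ```

    Combined with `restart: unless-stopped` on the containers, a mount that fails on the first attempt is retried automatically.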

    • @christianlempa
      @christianlempa  1 year ago

      I usually reboot both of my servers, the NAS server and the Proxmox server, and then it works fine.

  • @denniskluytmans
    @denniskluytmans 2 years ago

    I'm running my Docker inside an LXC on Proxmox, which has an MP (mount point) to the host, which has NFS mounts to the storage server. I'm using bind mounts inside Portainer; is that wrong?

    • @christianlempa
      @christianlempa  2 years ago

      I'm not entirely sure, because I haven't used LXC.

  • @mistakek
    @mistakek 2 years ago +1

    Do you stop your containers when you back up your storage server?

    • @jp_baril
      @jp_baril 2 years ago +2

      Good question, because the video stated that backing up the local volume directory was not ideal for databases, yet it was never explained whether doing snapshots on the NFS server overcomes those mentioned potential issues.

    • @mistakek
      @mistakek 2 years ago

      @@jp_baril Exactly why I asked.

    • @christianlempa
      @christianlempa  2 years ago +1

      Great question! It depends on the storage server's file system and how you do the backup. If the backup server just "copies" the files away, then the container should be stopped. If you're using ZFS with a snapshot, it shouldn't be a problem. I haven't had any scenario where this resulted in a consistency issue with the DB. However, if you do a rollback, you should of course stop the container, restore the snapshot, and then start the container again.
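      As a concrete sketch of that workflow (pool, dataset, and container names are placeholders): a ZFS snapshot is atomic, so it can run on a schedule without stopping the containers, while a rollback should be done with the affected container stopped.

      ```
      # crontab fragment: hourly atomic snapshot of the dataset backing the NFS export
      # (in crontab, % must be escaped as \%)
      0 * * * * zfs snapshot tank/docker@auto-$(date +\%Y\%m\%d-\%H\%M)

      # manual rollback, run with the affected container stopped:
      #   docker stop myapp
      #   zfs rollback tank/docker@auto-20240101-0300
      #   docker start myapp
      ```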

    • @mistakek
      @mistakek 2 years ago

      @@christianlempa Now I think I should have my TrueNAS as my main NAS instead of my Synology.

  • @kevinhilton8683
    @kevinhilton8683 2 years ago

    Hmm, based on the comments, it seems iSCSI (block storage) might be the way to go vs. NFS (file storage). I don't know, however; I do know that when I've had two Linux systems sharing via NFS, the NFS connection has crapped out in the past, causing problems. I'm not sure this is a better option than keeping bind-mounted volumes and having a backup solution that periodically backs up the volumes to a remote target. Lastly, I'm wondering if you run an LDAP server, since that would synchronize users across the VMs and the NAS. I'm curious whether you would still get NFS errors in that scenario.

    • @christianlempa
      @christianlempa  2 years ago

      Currently I don't have LDAP, but I'm planning to set up an AD at home.

  • @R055LE.1
    @R055LE.1 1 year ago

    One correction here: NFSv4 does support authentication (via Kerberos), and you should be using that much more secure method instead.
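    For reference, Kerberos-secured NFS is enabled per export via the `sec=` option (this assumes a working KDC and keytabs on both the server and the clients; the path and network below are placeholders):

    ```
    # /etc/exports: require Kerberos authentication with integrity and encryption
    /mnt/tank/docker  192.168.0.0/24(rw,no_subtree_check,sec=krb5p)
    ```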

  • @mrk131324
    @mrk131324 1 year ago

    How about volumes where performance matters, like tmp or cache folders, or source files in local development?

  • @wstrater
    @wstrater 2 years ago

    How about HACS without OS?