Don't use local Docker Volumes

  • Published 23 Jan 2025

COMMENTS • 271

  • @NathanFernandes
    @NathanFernandes 2 years ago +9

    Brilliant! I was just looking at something like this today, but instead I was trying to mount my Synology as a CIFS volume. NFS is so much easier, and this video helped me set it up in under 5 mins. Thank you!

    • @christianlempa
      @christianlempa  2 years ago +1

      Thanks! :) You're welcome

    • @therus000
      @therus000 2 years ago +1

      Did it work for you?
      I just followed the instructions, but I can't create subfolders as volumes.
      I get error 500. I tried mapping root to admin, but got the same problem, error 500. If I make the volume in the root of the NFS folder it works, but I'm not comfortable with that.
      Any help? I'm on DSM 7.

    • @a5pin
      @a5pin 1 year ago

      @@therus000 Hey I have the same problem, did you manage to resolve this?

  • @JershBytes
    @JershBytes 1 year ago +2

    I followed this for my Windows NAS to share to Docker, and this is so much easier than doing host-level mounts. Thank you so much!

    • @christianlempa
      @christianlempa  1 year ago

      You’re welcome :)

    • @JershBytes
      @JershBytes 1 year ago

      @@christianlempa Hello again! I was wondering if by chance you knew how to specify this in a docker-compose file? I'm trying to make a template, and this seems to be my only stopping point atm.
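For anyone stuck on the same question, a minimal docker-compose sketch of an NFS-backed named volume (the server address, export path, and image are placeholder assumptions, not from the video):

```yaml
# Hypothetical example: 192.168.1.10 and /mnt/tank/docker/app stand in for your NAS.
services:
  app:
    image: nginx
    volumes:
      - nfs-data:/usr/share/nginx/html

volumes:
  nfs-data:
    driver: local
    driver_opts:
      type: nfs
      o: addr=192.168.1.10,nfsvers=4,rw
      device: ":/mnt/tank/docker/app"
```

The leading colon in `device` is required by the local driver; the options mirror what Portainer's volume dialog fills in for you.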

  • @anthonyjhicks
    @anthonyjhicks 2 years ago +4

    This was awesome. Perfect timing as exactly the next step I wanted to make with my volumes on Portainer.

  • @HoshPak
    @HoshPak 2 years ago +26

    Some food for thought...
    Most NAS systems have two network interfaces. You could attach the NAS directly to the server on a separate VLAN and optimize that network for I/O, enabling jumbo frames, etc. That basically makes it a SAN without the redundancy.
    Using iSCSI instead of NFS is also an option and might be preferred for database workloads, I assume.

    • @kamilkroliszewski689
      @kamilkroliszewski689 2 years ago +1

      Exactly. We used NFS as storage for databases and it didn't handle it very well. Sometimes you get locks, etc.

    • @christianlempa
      @christianlempa  2 years ago +7

      Thanks for sharing your experience, guys, I have just a little experience with it. FYI, I was playing around with jumbo frames on a direct connection between my PC and TrueNAS, and it worked pretty well. VLANs are a topic for a future video as well :D So stay tuned!

    • @macntech4703
      @macntech4703 2 years ago +3

      I also think that iSCSI might be the better choice compared to NFS or SMB, at least in a homelab environment.

    • @HoshPak
      @HoshPak 2 years ago

      @@christianlempa I've been through that endeavor just recently. When I searched for tagged VLANs on Linux, the documentation was hopelessly outdated, referring to deprecated tools.
      My advice: disable anything that messes with net devices even remotely (i.e. NetworkManager) and go straight for systemd-networkd. I've built a VLAN-aware bridge which functions just like a managed switch. Virtual NICs from KVM attach to it automatically, as do Docker containers. This is also a pretty good way to use tagged VLANs inside VMs, which is otherwise hard to do on KVM.
      If you'd like some help at the beginning, let me know. We will get you started. :)

    • @gordslater
      @gordslater 2 years ago

      Last time I did this I had the best results using ATAoE. It's not used much nowadays, but it is very fast for local links (it's non-routable, but ideal for single-rack use).

  • @blairhickman3614
    @blairhickman3614 2 years ago +1

    I wish I had found this video last week. It would have saved me hours trying to mount an NFS share on my Ubuntu server. I ran into the user permission issue as well, and it took a lot of searching to find the answer.

  • @MichaelWDietrich
    @MichaelWDietrich 8 months ago +1

    Great walkthrough and how-to, thanks for that. Nevertheless, some small criticism at this point: NFS volumes and snapshot backups on the target NAS IMHO do not replace an application-based backup. Of course (for example in the event of a power failure, but also due to other technical and organizational problems) the volumes on the NFS can be destroyed and become inconsistent in the same way as those on the local machine. This is even more likely because the writing process to the NFS relies on more technical components. That's why I also do, and highly recommend, application-based backups with at least the same frequency. If the application's backup algorithm is written sensibly, it will only complete the backup after a consistency check, and then it is clear that at least this backup is not corrupted and can be restored without data loss.
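As an illustration of such an application-level backup alongside snapshots, a nightly database dump (a sketch; the container name "db", database name "app", and the backup path are assumptions):

```
# crontab fragment: dump the database inside the container every night at 03:00;
# pg_dump only writes a complete, consistent dump, so it is restorable on its own
0 3 * * * docker exec db pg_dump -U postgres app | gzip > /backups/app-$(date +\%F).sql.gz
```

The `%` is escaped because `%` has special meaning in crontab lines.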

  • @Billyfelicianojp
    @Billyfelicianojp 2 years ago +1

    I am having issues installing a stack in a volume. The volume is already added in Portainer, and I can see it and have tested it, but in my YAML I can't figure out how to point Nextcloud at the volume I want.

  • @fx_313
    @fx_313 2 years ago +7

    Hey! Thank you very much for this.
    Can you also give an example of how to use an NFS mount in docker-compose / Stacks in Portainer? I've spent the last couple of hours trying and googling, but wasn't able to find a real answer or consistent examples of how to do it.

    • @JershBytes
      @JershBytes 1 year ago

      I'm having the same issue lol, did you by chance find a way to do this?

  • @SthreeH-y4z
    @SthreeH-y4z 2 years ago +3

    I read somewhere that NFS is not secure when used with containers, as exposing the server's file system to a Docker container can also give container processes access to the main OS. What are your thoughts on this? Anyone?

  • @kanarie93
    @kanarie93 1 year ago +3

    Isn't it better to mount the NFS volume at /mnt/NFS on the host running Docker, so you have one connection open instead of hundreds, with every container opening its own connection? Or is that not possible when you go Docker Swarm?

    • @KilSwitch0
      @KilSwitch0 1 year ago

      I have this exact question. This is the way Unraid handles it. I think I will duplicate Unraid's approach.
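For reference, the host-level variant described here would look something like this (a sketch; server name and paths are assumptions): one NFS mount in /etc/fstab on the Docker host, with containers using plain bind mounts on top of it:

```
# /etc/fstab on the Docker host -- one NFS mount shared by all containers
nas.local:/mnt/tank/docker  /mnt/nfs  nfs4  defaults,_netdev  0  0
```

Containers then bind-mount subdirectories, e.g. `-v /mnt/nfs/myapp:/data`; the host, not Docker, owns the NFS connection.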

  • @Dough296
    @Dough296 2 years ago +2

    What if you need to perform an update on TrueNAS that requires rebooting the NFS service or the TrueNAS system?
    Will the Docker container wait for the NFS service to come back without running into data consistency trouble?
    I have that setup myself, but when I want to restart my NAS it's a real pain to stop everything that depends on it...

  • @ercanyilmaz8108
    @ercanyilmaz8108 1 year ago

    You can also run Docker containers on TrueNAS itself and connect to the NFS locally. If I'm right, the writes can then be handled synchronously, which guarantees data integrity.

  • @MarkJay
    @MarkJay 1 year ago

    What happens when you need to reboot or shut off the storage server? Can the Docker containers stay running, or do they need to be stopped first?

  • @steverhysjenks
    @steverhysjenks 2 years ago +4

    How do you replicate this in docker-compose? I get these steps, and it's great for me to understand them manually, but I'm not convinced my compose file is working correctly for NFS. I'd love to see this explanation converted into a docker-compose example.

  • @Theeporkchopexpress
    @Theeporkchopexpress 5 months ago

    Exactly what I was looking for! Great video! Would love to see it done all via CLI also

  • @ninadpchaudhari
    @ninadpchaudhari 2 years ago +26

    Btw, one point I need to add here: this doesn't mean you don't need backups. Having redundant storage over NFS is nice, but do ensure you still have restorable backups in addition to this.
    There are many things that can go wrong here: your FS might get corrupted if there was a network problem while writing, the RAID might fail, etc.

    • @christianlempa
      @christianlempa  2 years ago +4

      Absolutely! Great point, and I might explain this in future videos :)

    • @gullijons9135
      @gullijons9135 2 years ago +3

      Good point, "RAID is not backup" is something that really needs to be hammered home!

    • @l4rkdono
      @l4rkdono 1 month ago

      So it's better to run containers locally anyway...
      With two separate machines you're essentially doubling the risk of something going bad, or you risk database corruption because the NFS share decided to betray you for some reason. Why make things simple and reliable when you can make them complicated and error-prone?

  • @ilco31
    @ilco31 2 years ago +1

    This is great to know. I am currently looking into how to back up my more sensitive Docker containers, like Vaultwarden or Nextcloud. Great video!

    • @Clarence-Homelab
      @Clarence-Homelab 2 years ago

      I mainly use bind mounts for my persistent Docker storage (I had MAJOR issues with Docker databases over CIFS or NFS) and an awesome Docker image for volume backups: offen docker-volume-backup.
      It stops the desired containers before a backup, creates a tarball, sends it to an S3 bucket on my TrueNAS server, spins the stopped containers back up, and lastly a cloud sync task on my TrueNAS encrypts the data before pushing the backup to the cloud.

    • @christianlempa
      @christianlempa  2 years ago

      Glad it was helpful!

    • @christianlempa
      @christianlempa  2 years ago

      What were the issues with the DBs?

  • @marcoroose9973
    @marcoroose9973 2 years ago +2

    When migrating, cp -ar may be better, as it copies permissions too. Nice video! Need to figure out how to do this in docker-compose.

    • @christianlempa
      @christianlempa  2 years ago +1

      Oh great, thank you ;)

    • @ansred
      @ansred 2 years ago +1

      It would be appreciated if you figured it out and shared how to use docker-compose in Portainer. That would be really handy!

  • @jmatya
    @jmatya 10 months ago

    Many of the big datacenters also use Fibre Channel-based storage; network-attached storage can be slow and subject to TCP congestion and packet loss, whereas FC offers guaranteed delivery.

  • @RiffyDevine
    @RiffyDevine 1 year ago

    The folder/volume I created inside the container is owned by user 568, so I can't access the /nfs folder in my container. Why did it use that user ID instead of root?

  • @G00SEISL00SE
    @G00SEISL00SE 2 years ago

    I have been stuck for weeks with an NFS volume not mounting right inside my containers. First I couldn't edit the created files, then I could but couldn't create new ones. This fixed both of my issues, thanks. I'm going to set up my shares like this moving forward.

  • @rodrigocsouza8619
    @rodrigocsouza8619 1 year ago +1

    @christianlempa Is the activation of NFSv4 really that simple? I've tried exactly what you did and the mount always fails, returning "permission denied". I tried to dig into the subject, and it looks like NFSv4 requires a lot of effort to get working.
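"Permission denied" in this situation is often the server squashing root. On a plain Linux NFS server, the export would need something like the following (/etc/exports syntax; the path and client IP are assumptions — TrueNAS exposes the same idea through its "Maproot" setting):

```
# /etc/exports: export only to the Docker host, without mapping root to nobody
/mnt/tank/docker  192.168.1.20(rw,sync,no_subtree_check,no_root_squash)
```

After editing, `exportfs -ra` reloads the export table without restarting the server.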

  • @anthonycoppet8788
    @anthonycoppet8788 1 year ago

    Hi. I created the NFS volume on the server and can connect from the Synology, but in Portainer the NFS volume appears empty. The server volume is owned by nobody:nogroup.

  • @area51xi
    @area51xi 1 year ago

    When I try to deploy, I keep getting a "Request failed with status code 500" error message.

  • @TheManuforest
    @TheManuforest 1 year ago

    Hello guys... I had a power failure, and all my Docker volumes are gone. Is this expected behavior? Are they still on the disk? Thanks

  • @schrank392
    @schrank392 10 months ago

    How do you draw this stuff, like at 2:30?

  • @haxwithaxe
    @haxwithaxe 2 years ago +6

    GlusterFS works really well for small files or small file transfers as well.
    Edit: I've since been told databases don't scale well on GlusterFS. IDK how much experience that person has with GlusterFS, but they have enough experience with k8s for me to accept it until I can test it. It works great for the really small stuff I do in my home lab, though.

    • @christianlempa
      @christianlempa  2 years ago

      Oh yeah that's an interesting topic

    • @NicoDeclerckBelgium
      @NicoDeclerckBelgium 8 months ago

      True, but you have Galera Cluster for MySQL/MariaDB, or just replication in PostgreSQL.
      But I get that the real problem is the 'other' dozens of databases.
      They can still be on centralised storage, but as you say, they don't scale well...

  • @ettoreatalan8303
    @ettoreatalan8303 1 year ago

    On my NAS, CIFS is enabled for the Windows computers connected to it; NFS is disabled.
    Is there any reason not to use CIFS instead of NFS for storing Docker volumes on my NAS?

  • @pedro_8240
    @pedro_8240 9 months ago

    And how do I use NFS to mount the initial portainer data volume, before configuring portainer?

  • @wuggyfoot
    @wuggyfoot 1 year ago

    Wow dude, when you typed root in that box you solved all of my problems.
    Crazy how you're the best source of information I have found on the entire internet.

  • @FunctionGermany
    @FunctionGermany 2 years ago +5

    Don't you need higher-tier networking to make sure you're not suffering any performance penalties? I can imagine that the additional latency can make certain applications run slower when all file system access has to go through two network stacks and the network itself.

    • @charlescc1000
      @charlescc1000 2 years ago

      I have been running some basic self-hosted containers on my servers in a similar configuration to the one laid out in this video (Ubuntu server with Portainer and Docker, connected to TrueNAS for the storage). My TrueNAS is set up with mirrored pairs instead of RAIDZ/RAIDZ2, but it's still over a 1GbE LAN.
      It's been fine. Yes, I'm sure 10GbE would improve it, but it's plenty usable for most containers.
      I originally set it up as a test environment before buying 10GbE hardware, and then it worked so well that I decided not to bother with 10GbE (yet).
      It's not great with a VM that has a desktop environment, but it's been fine with server VMs. Not fantastic, but fine.

    • @christianlempa
      @christianlempa  2 years ago +1

      Thanks for sharing your experience! I'm running it with a 10GbE connection, but I highly doubt this makes a huge difference in this case. For VM disks this might be totally different, of course, but for Docker volumes 1GbE should be fine.

    • @ElliotWeishaar
      @ElliotWeishaar 2 years ago

      You are correct. The approach outlined in the video is great, and I would recommend it to pretty much anyone starting with Docker. As you continue your journey, you will have to adapt to the requirements of what you're hosting. You are correct in saying that some applications don't play well with storage over the network. Plex is a big one. I tried hosting my Plex library (the metadata, not the media) on NFS, and the performance was atrocious. The application was unusable, and I had to switch to storing the data locally. I suspect it has to do with SQLite performing tons of IOPS, which NFS couldn't handle. This was with a dedicated point-to-point 10GbE connection as well. I was using bind mounts instead of Docker volumes, but I don't think that created the issue (could be wrong). I have other applications that have experienced this as well. I've resorted to keeping all of my data local on the machine and then just creating backups using autorestic.

    • @nevoyu
      @nevoyu 2 years ago

      You're not serving hundreds of connections at a time, so you really don't need as much performance as you think. I run a 1Gb connection to my homelab with a 4-disk RAID 10 array; I can't tap the full bandwidth of the connection, but I have no performance issues watching 1080p (since I don't have a single display that 4K makes sense on).

    • @robmcneill3641
      @robmcneill3641 2 years ago +1

      @@ElliotWeishaar I ran into the same issues with Plex. I did end up storing the media remotely, but could never get the library data to work reliably.

  • @a5pin
    @a5pin 2 years ago

    Can someone help me figure out where I'm going wrong? I've created the volume, but when trying to save the volume in the container, I always get a "request failed with status code 500" error when clicking deploy.

    • @christianlempa
      @christianlempa  2 years ago

      Most likely there is a network connection error or permission error.

    • @old-school-cool
      @old-school-cool 2 years ago

      Getting the same, and I've gone over everything I can find. I can only imagine something has broken in TrueNAS Core 13.

  • @martinzipfel7843
    @martinzipfel7843 2 years ago

    I've been trying to do this for hours now and always run into permission issues. My user on the Docker host and on the NAS are exactly the same (same username, pw, UID, GID), and I get permission denied when I just try to cd into the NAS folder from the Ubuntu test container. Anyone have an idea?

  • @iggienator
    @iggienator 5 months ago

    Thanks, this helped me 👍

  • @maxime_vhw
    @maxime_vhw 5 months ago

    The NFS share mounts fine in my test Ubuntu container, but I can't access it. Permissions issue.

  • @denniskluytmans
    @denniskluytmans 2 years ago

    I'm running Docker inside an LXC on Proxmox, which has an MP (mount point) to the host, which has NFS mounts to the storage server. I'm using bind mounts inside of Portainer; is that wrong?

    • @christianlempa
      @christianlempa  2 years ago

      I'm not entirely sure, because I haven't used LXC.

  • @GundamExia88
    @GundamExia88 2 years ago +1

    Great video! I have a question: if I have already mounted the NFS share in /etc/fstab, do I still need to create the NFS volume? Couldn't I just point to the mounted NFS path on the host? What's the advantage of creating a new NFS volume in Portainer? Is it just to make migrating from NFS to NFS easier? Thx!

    • @christianlempa
      @christianlempa  2 years ago

      It's just for easier management. If you already have NFS mounted on the host, that is totally fine.

  • @nixxblikka
    @nixxblikka 2 years ago

    Nice video, looking forward to TrueNAS content!

  • @rino19ny
    @rino19ny 2 years ago

    It's the same thing: a storage server can also crash, and you have a more complicated setup with NAS storage. Whichever method you select, a proper backup is best.

  • @D76-de
    @D76-de 1 year ago

    First of all, I don't have much knowledge about infrastructure... T,T
    Can the NFS of a TrueNAS VM be delivered to a container volume without a 1Gb network bottleneck?
    Both TrueNAS and the container (Ubuntu VM) run on Proxmox.

  • @jorgegomez374
    @jorgegomez374 2 years ago

    I have 3 RPis in a Docker Swarm. One of them is my NFS server, doing exactly this. But I worry about my Docker drive dying, so any ideas on making backups?

    • @christianlempa
      @christianlempa  2 years ago +1

      Hm, I would try to back up the Raspberry Pis' file systems with rsync or similar backup software for Linux.

    • @jorgegomez374
      @jorgegomez374 2 years ago

      @@christianlempa thanks

  • @tuttocrafting
    @tuttocrafting 2 years ago

    I have a docker user and group on both the Docker and NAS machines. I use the same UID and GID in the container via env variables.

  • @xD3adp0olx
    @xD3adp0olx 6 months ago

    Is it possible with samba/cifs as well?

  • @shinwadone
    @shinwadone 2 years ago

    There was a permission problem when I started the container. The user and group exist on both server and client, but when executing the chown command in the Dockerfile it throws a permissions error; maybe I have to use the root user instead. Is there any other way to work around using the root user?

  • @yangbrandon301
    @yangbrandon301 1 year ago

    Thank you. A great tutorial for NFS.

  • @GSGWillSmith
    @GSGWillSmith 2 years ago

    I don't think this works anymore. It used to, but now on TrueNAS 13 I keep getting this error with new volumes I create (both via the stack editor and in Portainer):
    failed to copy file info for /var/lib/docker/volumes/watchyourlan_wyl-data/_data: failed to chown /var/lib/docker/volumes/watchyourlan_wyl-data/_data: lchown /var/lib/docker/volumes/watchyourlan_wyl-data/_data: invalid argument

  • @solverz4078
    @solverz4078 2 years ago

    What about storing Portainer's volumes on an NFS share too?

  • @ronaldronald8819
    @ronaldronald8819 2 years ago +1

    I am triggered into high-gear learning mode by all of this. The aim is to set up a Home Assistant server. HA runs in Docker and stores its data in local volumes. I am no fan of having my data all over the place, so this video solves that problem. The next step is to get my hands dirty and hope I don't hit too many errors that exceed my domain of knowledge. Thanks!!

  • @martinzipfel7843
    @martinzipfel7843 1 year ago

    Every time I try to bind my NAS volume to a container, the container doesn't deploy, with error code 500 (it deploys fine without binding the volume, so I'm sure that is the issue). I've tried it with 2 different TrueNAS SCALE instances now with the same result. Anyone got an idea what I'm doing wrong?

    • @martinzipfel7843
      @martinzipfel7843 1 year ago

      I figured it out. My Docker hosts are running in Proxmox containers, and those don't allow NFS if they're not run privileged.

  • @jonath1235
    @jonath1235 1 year ago

    I can't seem to create a volume from my QNAP to Docker. Can you help?

    • @jonath1235
      @jonath1235 1 year ago

      this is my export: "/share/CACHEDEV1_DATA/Dockerdata" *(sec=sys,rw,async,wdelay,insecure,no_subtree_check,no_root_squash,fsid=9e50b469aef8f8a22013f16b7d3f69f9)
      "/share/NFSv=4" *(no_subtree_check,no_root_squash,insecure,fsid=0)
      "/share/NFSv=4/Dockerdata"

  • @yotuberrable
    @yotuberrable 1 year ago +1

    In this case I assume the NAS server must always be started before the Docker server, and shut down in reverse order; otherwise I assume the containers will just fail to start. How do you guys handle this?

    • @MarkJay
      @MarkJay 1 year ago

      I also would like to know how to handle this.
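One common way to soften this ordering problem (a sketch; hostname and paths are assumptions): let systemd mount the share on demand rather than at boot, so the Docker host doesn't block on the NAS, and give containers a restart policy so they retry until the share is reachable:

```
# /etc/fstab on the Docker host: the automount unit waits for the network and
# mounts on first access instead of during boot
nas.local:/mnt/tank/docker  /mnt/nfs  nfs4  _netdev,x-systemd.automount,noauto  0  0
```

Combined with `restart: unless-stopped` on the containers, a NAS reboot then mostly means stalled I/O rather than containers failing to start.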

  • @AJMandourah
    @AJMandourah 2 years ago +1

    I have read some people complaining about database corruption when using NFS as their cluster storage. I didn't try it personally, and I am currently using CIFS mounts for my Docker Swarm. I was wondering if you have tried GlusterFS, as it seems to be recommended for cluster volumes in general.

    • @christianlempa
      @christianlempa  2 years ago +2

      I've heard that a couple of times, but I never found any resources or details on why this should be the case. Could you kindly share some insights? Thanks

  • @SebastianSchuhmann
    @SebastianSchuhmann 1 year ago

    Did you experience problems with containers using NFS mounts after a reboot?
    Until now I used NFS only by mounting it on the host and bind-mounting Docker volumes to the host.
    Since I switched to the "direct mount" of NFS on the Docker host, specified in the stack code, all these containers fail after rebooting my CoreOS server.
    After restarting them, they start fine.
    Seems like the NFS service is not available at boot time: the containers try to start but can't be mounted yet.

    • @christianlempa
      @christianlempa  1 year ago

      I mostly reboot both of my servers, the NAS server and the Proxmox server, and then it works fine.

  • @streambarhoum4464
    @streambarhoum4464 2 years ago +1

    Christian, how do you do a disaster recovery of the entire system? Could you simulate an example?

  • @Telmosampaio
    @Telmosampaio 2 years ago

    I usually set up a cron job to copy volumes and database exports to an AWS S3 bucket, and then another cron job to delete files older than 1 month!

  • @ninji4182
    @ninji4182 1 year ago

    How do I do this with WSL2 and a Synology NAS?

  • @j4nch
    @j4nch 2 years ago

    I'm far from an expert on Linux, and there is something I'm missing about permissions: when you say that we need to have the same user with the same permissions on the NFS server and in the Docker image, how does that work? I thought that just having the same user ID or the same user name isn't enough, no? I mean, they could have different passwords?
    Also, what about the performance implications? I'm thinking of moving my Plex server into a Docker container, with its storage on an NFS volume; could this be an issue?

  • @mistakek
    @mistakek 2 years ago +1

    Do you stop your containers when you backup your storage server?

    • @jp_baril
      @jp_baril 2 years ago +2

      Good question, because the video stated that backing up the local volume directory was not ideal for databases, yet it never explained whether doing snapshots on the NFS server overcomes those potential issues.

    • @mistakek
      @mistakek 2 years ago

      @@jp_baril Exactly why I asked.

    • @christianlempa
      @christianlempa  2 years ago +1

      Great question! It depends on the storage server's file system and how you do the backup. If the backup server just "copies" the files away, then the container should be stopped. If you're using ZFS with a snapshot, it shouldn't be a problem. I haven't had any scenario where this resulted in an inconsistency issue with the DB. However, if you do a rollback, you should of course stop the container, restore the snapshot, and then start the container again.

    • @mistakek
      @mistakek 2 years ago

      @@christianlempa Now I think I should have my TrueNAS as my main NAS, instead of my Synology.

  • @ailton.duarte
    @ailton.duarte 1 year ago

    Is it possible to use ZFS pools?

  • @sasab7584
    @sasab7584 2 years ago

    Is the same thing possible using CIFS/SMB mounts originating in Windows, or does it have to be NFS?

  • @226cenk9
    @226cenk9 1 year ago

    This is nice, but is there a way to use a local directory on the host instead? I have Docker installed on my Ubuntu 22.04, and it would be nice to use local directories.

  • @MadMike78
    @MadMike78 1 year ago

    Love your videos. Question: how would I use Portainer to add a new volume to an existing container? I found how to add the volume, but after that I don't know if anything needs to be copied over.

  • @mrk131324
    @mrk131324 1 year ago

    What about volumes where performance matters, like tmp or cache folders, or source files in local development?

  • @procheeseburger_2
    @procheeseburger_2 2 years ago

    Have you seen any issues with DBs, specifically SQLite? I tried to move my containers to an NFS share... some work just fine, but anything using SQL seems to just break.

    • @christianlempa
      @christianlempa  2 years ago

      I personally haven't. I've heard it doesn't work great for databases; that's why I used NFSv4, as it was improved to work better with that. If you still have problems, you might just switch your workflow for your databases to something else, I'd say.

    • @procheeseburger_2
      @procheeseburger_2 2 years ago

      @@christianlempa Yeah, I'm also using v4, and DBs just didn't work. Currently looking for a solution, as I don't like having all of my containers use the local storage of the VM.

    • @chris.taylor
      @chris.taylor 1 year ago +1

      @@procheeseburger_2 Hey, did you find a solution? I am also finding that SQLite won't play nice with network shares.

    • @procheeseburger_2
      @procheeseburger_2 1 year ago

      @@chris.taylor I just use local storage.

  • @raylab77
    @raylab77 2 years ago

    Will this work with a backup solution like pCloud?

  • @myeniad
    @myeniad 1 year ago

    Great explanation. Thanks!

  • @huboz0r
    @huboz0r 1 year ago +1

    Btw, one point I need to add here: why share all of your files as root? You could just make a new group and user(s) specifically for accessing your files and map your NFS shares to them. It is belt and suspenders, since you only expose NFS to a specific IP; however, not using root whenever possible is the way forward. Probably why it got removed as a default in the new release.
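On a plain Linux NFS server, that non-root mapping could look like this (/etc/exports syntax; the UID/GID, path, and client IP are assumptions — on TrueNAS the equivalent is the "Mapall User/Group" setting):

```
# /etc/exports: map every client user to a dedicated unprivileged account
/mnt/tank/docker  192.168.1.20(rw,sync,no_subtree_check,all_squash,anonuid=3000,anongid=3000)
```

The exported directory on the server then needs to be owned by 3000:3000, and everything the containers write arrives as that account rather than as root.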

    • @christianlempa
      @christianlempa  1 year ago +2

      Yep that’s something I need to get fixed in the future

  • @Jayknightfr
    @Jayknightfr 2 years ago

    Hey, thanks for the video. Unfortunately I get a "500 request failed" error when trying to deploy the container.
    I have no issues adding the NFS share on other machines, but with containers it unfortunately doesn't work.

    • @christianlempa
      @christianlempa  2 years ago

      That's likely a problem with the NFS connection. Check the IP, path, user settings, and permissions.

    • @pWAVE86
      @pWAVE86 2 years ago

      @@christianlempa Same issue... I already checked and entered all possible IPs. Also set "mapall" to root in TrueNAS... no success. :(

  • @danielcastrorodriguez3934
    @danielcastrorodriguez3934 2 months ago

    Thanks Christian, so interesting!

  • @marcg1043
    @marcg1043 2 months ago

    Good idea to separate the storage server, but in practice not that useful for homelabs. The network speeds you have at home, 1Gbit or 2.5Gbit max, create a major bottleneck, which you avoid when you have direct local access.

  • @realabzhussain
    @realabzhussain 2 years ago

    Could you use CIFS/Samba to do the same thing?

  • @rw-xf4cb
    @rw-xf4cb 2 years ago

    You could use iSCSI targets, perhaps; that worked well with VMware ESX years ago when I didn't have a SAN.

  • @jp_baril
    @jp_baril 2 years ago

    Could we just have mounted an NFS share on the local Docker volumes directory?
    I suppose that because such a Docker-native NFS mechanism exists, the answer would be no, but I'm curious why.

    • @christianlempa
      @christianlempa  2 years ago

      I guess that should also work, but in that case the Linux host would be responsible for the NFS connection management, not Docker.

    • @a.x.w
      @a.x.w 2 years ago

      That's what I do in my (older) setup. For some reason I couldn't get ACLs to work when mounted through Docker (I also tried docker-volume-netshare).
      I mount my NFS shares to a separate location on the host and symlink the volumes' _data directories to that, though.

  • @shetuamin
    @shetuamin 2 years ago

    I have to reboot the Docker host if the NFS server hangs. Maybe I need a more stable FreeNAS server.

  • @wstrater
    @wstrater 2 years ago

    How about HACS without OS?

  • @Grand_Alchemist
    @Grand_Alchemist 9 months ago +1

    Even with the "wheel" user added to TrueNAS, NFS refused to work for Deluge/Sonarr/Radarr (CentOS using docker-compose). I ended up making an SMB share (yes, Microsoft, blasphemy!) and it works perfectly. So much less of a headache than NFS, PLUS it's actually secure (authenticated with a password and ACLs (access control)). So, yeah. Unexpected, but I would just recommend making a fricking SMB share.

  • @vmdcortes
    @vmdcortes 1 year ago

    Awesome!!
    Is this a good solution for Docker Swarm volume sharing between the different nodes?

    • @christianlempa
      @christianlempa  1 year ago

      Thx! I'm not sure about that; I think NFS is still the easiest for my setup.

  • @manutech156
    @manutech156 2 years ago

    Any plans to do a tutorial on Kubernetes Persistent Volumes with TrueNAS NFS?

  • @desibanjankri5646
    @desibanjankri5646 2 years ago

    LOL, I spent a week figuring this exact thing out. I wanted Photoprism to use pictures from the backup server rather than import files into Docker. Just got it working last night. 😂

  • @markobrkusanin4745
    @markobrkusanin4745 2 years ago +1

    GlusterFS would be an even better option to manage data inside Docker Swarm.

    • @christianlempa
      @christianlempa  2 years ago

      I'm so interested in these filesystems; once I finish my projects, I'll start looking at them.

  • @esra_erimez
    @esra_erimez 2 years ago

    Would you please do a video about using the Ceph Docker RBD volume plugin?

    • @christianlempa
      @christianlempa  2 years ago

      Hmmm I need to look that up, sounds interesting

  • @TheTyphoon365
    @TheTyphoon365 2 years ago

    I'm about to set up an Unraid server for hosting my NAS and many Docker containers. I can't use NFS on my NAS in Unraid though, right? I'm watching the video now ...

    • @christianlempa
      @christianlempa  2 years ago

      I'm not sure, I haven't used Unraid, but I'm pretty sure it does NFS

  • @StephenJames2027
    @StephenJames2027 1 year ago

    7:48 After many hours I finally figured out that I needed NFSv4 enabled on TrueNAS to get this to work on my setup. I kept getting Error 500 from Portainer when attempting this with the default NFS / NFSv3. 😅

  • @gunnsheridan2162
    @gunnsheridan2162 1 year ago

    Hi Christian, thanks for the informative video. I have two questions though:
    1. What is the correct way of setting up a user with the same user ID and group ID on the NFS server and client? I have an RPi with user 1000:1000. Such a user doesn't exist on my Synology. Should I add a new user to the Synology? Or should I pick one of the Synology user IDs and create such a user on the Raspberry Pi? If so, how do you create a user with specific IDs?
    2. What about file locking through NFS? I had issues with network-stored (Samba/CIFS) data containing an SQLite database, for example Home Assistant and Baikal. I couldn't network-store MariaDB or MySQL either, due to some "file locking issues".
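
    Regarding the first question, on a plain Linux NFS server you can pin the IDs when creating the user. This is a sketch for standard Linux hosts (the user and group names are examples); Synology's DSM manages users through its own UI, so the exact steps differ there:

    ```shell
    # Create a group and user with explicit IDs matching the client (1000:1000)
    sudo groupadd --gid 1000 mediauser
    sudo useradd --uid 1000 --gid 1000 --no-create-home mediauser

    # Verify the IDs on both machines; they should match exactly
    id mediauser
    ```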

  • @macenkajan
    @macenkajan 2 years ago

    Hi Christian, great videos. I would love to see how to use this NFS (or maybe iSCSI) setup with a Kubernetes cluster. This is what I am trying to set up right now ;-)

    • @christianlempa
      @christianlempa  2 years ago

      Thanks mate! Great suggestion, maybe I'll do that in the future.

  • @scockman
    @scockman 2 years ago

    Another AWESOME video!! But I saw in the video that you have a portainer_data volume on the NFS share; how was that done? I have been trying to get this to work but keep getting a Docker error while trying to mount the volume.

    • @christianlempa
      @christianlempa  2 years ago

      Thanks! You need to do it outside of the GUI with Docker CLI commands, unfortunately.
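
      A sketch of what that CLI command could look like for an NFS-backed portainer_data volume (the NFS server address and export path are placeholders):

      ```shell
      # Create a named volume backed by an NFS export
      docker volume create \
        --driver local \
        --opt type=nfs \
        --opt o=addr=192.168.1.10,rw,nfsvers=4 \
        --opt device=:/mnt/pool/portainer_data \
        portainer_data

      # Then start Portainer with that volume
      docker run -d -p 9443:9443 --name portainer \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v portainer_data:/data \
        portainer/portainer-ce:latest
      ```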

  • @gonzaloamadorhernandez7020
    @gonzaloamadorhernandez7020 2 years ago

    Oh my gosh!!! You are a genius!!! Thank you very much, mate

  • @Earendur08
    @Earendur08 2 years ago

    What's that split console you are using? Is it screen?

    • @christianlempa
      @christianlempa  2 years ago +1

      It's Windows Terminal

    • @Earendur08
      @Earendur08 2 years ago

      @@christianlempa Is it really? Must be a Windows 11 thing. I've never seen a split window like that other than when I've used screen on Linux.
      Very cool though. I like it.

  • @HelloHelloXD
    @HelloHelloXD 2 years ago +1

    Great topic. One question: what is going to happen to the Docker container if the connection between the NFS server and the Docker server is lost?

    • @christianlempa
      @christianlempa  2 years ago

      The container will fail to start

    • @HelloHelloXD
      @HelloHelloXD 2 years ago +1

      @@christianlempa what if the container was already running and the connection was lost?

    • @vladduh3164
      @vladduh3164 2 years ago +1

      @@HelloHelloXD It seems that the container just keeps running, though it may not be able to do anything. I just tested this with Sonarr, as I had the /config folder in the NFS volume, and it seemed to work as long as it didn't need anything from that folder. When I clicked on each series it just showed me a loading screen until I reconnected it. I suppose the answer is: it depends entirely on what folders you put in that volume and how gracefully the application handles losing access to those files.

    • @HelloHelloXD
      @HelloHelloXD 2 years ago +1

      @@vladduh3164 thank you.

  • @zaberchann
    @zaberchann 2 years ago +1

    One concern with using NFS in a home lab is that the user needs to ensure the local network is safe, otherwise security is compromised, since the only authentication is the IP address (of course you can use Kerberos, but it's too hard to configure). Besides, a malicious Docker container could connect to the NFS server by using the host IP.
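
    One practical mitigation is to restrict the export to the Docker host's IP. On a plain Linux NFS server this could look like the following (TrueNAS and Synology expose equivalent settings in their UIs; the path and addresses are examples):

    ```shell
    # /etc/exports on the NFS server: only the Docker host may mount,
    # and remote root is mapped to an unprivileged user (root_squash)
    echo '/mnt/pool/docker-volumes 192.168.1.50(rw,sync,no_subtree_check,root_squash)' \
      | sudo tee -a /etc/exports

    # Re-export the updated table
    sudo exportfs -ra
    ```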

  • @Felixls
    @Felixls 5 months ago

    Yeah, well, any application using an SQLite database will get corrupted sooner or later when using a volume from a network share.
    I've lost so many hours and tried so many different solutions, like CIFS, NFS, GlusterFS, you name it; it doesn't work, and it never will.
    The only solution is to use a local directory or a Docker volume (which is local).

  • @Ogorodovd
    @Ogorodovd 1 year ago

    @christianlempa Thanks Christian! Could I install Portainer on a Debian VM within TrueNAS SCALE, and then communicate with that? Or are you using a separate machine entirely for your Portainer server?

  • @tl1897
    @tl1897 2 years ago

    I tried this some time ago. Sadly my Pi 4 with 3 HDDs in RAID 5, using mdadm, was not fast enough.
    So I decided to keep my deployment files on the NFS, but the volumes locally.
    And I wrote backup scripts for the rest.

  • @kevinhilton8683
    @kevinhilton8683 2 years ago

    Hmm, based on the comments it seems iSCSI might be the way to go, since it is block storage vs. NFS, which is file storage. I don't know, however. I do know that when I've had two Linux systems sharing via NFS, the NFS connection has crapped out in the past, causing problems. I'm not sure this is a better option than keeping bind-mounted volumes and just having a backup solution that runs periodically to back the volumes up to a remote source. Lastly, I'm wondering if you run an LDAP server, since this would synchronize users on the VMs and the NAS. I'm curious whether you would get NFS errors in this scenario.

    • @christianlempa
      @christianlempa  2 years ago

      Currently I don't have LDAP, but I'm planning setting up an AD at home

  • @florianlauer7591
    @florianlauer7591 2 years ago

    Hi!
    What tool are you using for drawing and marking directly on the screen with the mouse?

    • @christianlempa
      @christianlempa  2 years ago +1

      Hi, I'm using EpicPen and my Galaxy Tab as a drawing screen.

  • @gjermundification
    @gjermundification 2 years ago

    I run my local storage as lofs across several zpools. Not sure why anyone would do anything as complicated as Docker when there are OpenSolaris zones on ZFS. In essence I run the server application part on a zpool that is in RAM and NVMe, and storage in RAM and spinning drives. ZIL, L2ARC, and all...
    I use NFS between the Mac and the media servers.

    • @christianlempa
      @christianlempa  2 years ago

      There are a couple of reasons why Docker is useful ;)

  • @Spydaw
    @Spydaw 2 years ago

    Awesome video, thank you for explaining this. I am doing the exact same thing with all my pods in k3s ;)

    • @christianlempa
      @christianlempa  2 years ago +1

      Oh, that is cool! I'm planning that as well in my k3s cluster I'm currently building ;)

    • @Spydaw
      @Spydaw 2 years ago

      @@christianlempa Feel free to ping me if you have any questions ;)

  • @crckdns
    @crckdns 2 years ago

    Not running Docker because I couldn't manage to run one single "package" in Container Station on my QNAP (had some network problems)..
    That's why I'm using natively running installations all the way, without any voodoo in a package.

  • @Invaderjason123
    @Invaderjason123 1 year ago

    No matter what I do, I can't get the volume to mount.

  • @adamskalik3458
    @adamskalik3458 2 years ago

    It does not work for Synology; it creates the volume in, for example, the @docker folder, which is not visible to a standard user

    • @procheeseburger_2
      @procheeseburger_2 2 years ago

      With Synology I had to build the volumes in the CLI of the host... doing it in Portainer doesn't seem to work for me.
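
      A hedged sketch of that CLI workaround on a Docker host talking to a Synology NAS (the NAS address and the /volume1 export path are examples). Pointing the volume at the NFS export keeps the data in a visible shared folder instead of the hidden @docker directory:

      ```shell
      # Create an NFS-backed volume against the Synology export
      docker volume create \
        --driver local \
        --opt type=nfs \
        --opt o=addr=192.168.1.20,rw,nfsvers=4 \
        --opt device=:/volume1/docker/myapp \
        myapp_data

      # Confirm the driver options were stored
      docker volume inspect myapp_data
      ```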