A few updates since Debian 12 (Bookworm) released:
- You don't need to add the backports repo; Bookworm includes the newer packages.
- You don't need to specify the backports repo in apt install cockpit.
- Make sure your group is the owner of your data directories and you have read/write permissions by group (the default is by user).
- PLEASE use Debian 12, as there is a bug in the version of Samba packaged with Debian 11 which will cause Samba to fall over due to out-of-memory (and if you are having OOM issues, upgrade to Debian 12, or upgrade Samba to the version in backports).
How do you make the group the owner of the data directories? I tried to do this but it won't give me the option to choose the group that I made. The error message I get is: "changing group of '/mnt/data/': Operation not permitted"
@@chrisrgutierrez I think what he means here is set the group ownership of the directory to the group you created. You can leave the user ownership as 'root' or whatever user it already is. But then make sure the group has full read, write, execute permissions. So if you run ls -la you should see "drwxrwxr-x" instead of "drwxr-xr-x" which is the default. In other words: 1. use chmod to change the directory permissions to 775, then 2. use chgrp to change the group ownership to whatever group you created. That's what fixed it for me. Hope it helps!
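To make those two steps concrete, here's a minimal sketch using a hypothetical directory /tmp/shared-data and a hypothetical group name sharegroup (substitute your actual share path and group):

```shell
# Hypothetical directory and group name; substitute your own share path and group.
mkdir -p /tmp/shared-data

# Create the group if it doesn't exist yet (needs root; skipped harmlessly otherwise)
groupadd -f sharegroup 2>/dev/null || true

# Give the group ownership (user ownership stays as-is; needs group membership or root)
chgrp sharegroup /tmp/shared-data 2>/dev/null || true

# 775 = rwxrwxr-x: owner and group get full read/write/execute
chmod 775 /tmp/shared-data

# Verify: should now show drwxrwxr-x
ls -ld /tmp/shared-data
```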
Update for Proxmox 8.1.x and Debian 12 - when you create your Linux Container (LXC), make sure you enable "Nesting" in the options screen before starting it. Removing the "Unprivileged" flag no longer enables "nesting" by default, and you'll run into all sorts of issues. Hope this helps!
This is bad-ass. And your delivery is not annoying at all. Straight to the point, no bs. I just followed this video to setup samba on a Dell T320 I'm giving to a friend who wants to learn linux, proxmox, and zfs at the tender age of 75. Subscribed and thank you!
Also, I appreciate how you don't have a long music/special effects intro. No one cares about that stuff. Also... 91MB of RAM for a NAS container. How cool is that??
Thanks for the clear, short tutorial. No ads, no fancy talk, just everything the home/hobby user needs. I used Webmin and manual smb.conf edits before, but your video helped a lot too. My side note: watch out for the backup flag on the PVE mount point. If you create a shared backup HDD that includes PVE shares (like I did), the backup flag is a mistake... :)
Awesome video! I've been agonizing over the question of running truenas on bare metal or virtualizing through proxmox. This seems like a great solution!
I've literally been to this channel twice from a search (different searches). Have not been disappointed on either occasion. Got my sub, man. Great work, direct, with great extra comments between steps.
I've been tearing down my homelab vSAN in favor of Proxmox. I had been using a VM as a file server and was thinking about simplifying and deploying a Samba container. I hadn't even considered Cockpit. Thanks for the great content!
Another fantastic video. I love the way you take us effortlessly through everything in each of these videos. Thank you for creating them as it has made Proxmox a straightforward setup for me.
Awesome work! Would love to see the following configuration in your future videos:
- 2-disk NAS
- 1st: the main, fast SSD
- 2nd: one for backing up the first (e.g. once a day), or RAID (but honestly RAID isn't worth it with limited drives, at least imo)
- Additionally, a way to back up everything related to Proxmox (incl. the Proxmox host & VMs/CTs) on the NAS, so in case the main disk fails, you'll have a way to restore your setup.
Thank you, this was great. Took me 2 hours total to set everything up including debugging and learning some beginner steps that you (reasonably) skipped over. I now have a working NAS! So happy.
Thanks for the tutorial, helped me out a lot! As a side note, at first I was limited to read-only on the Windows side. It turns out I had to go back to File Sharing in Cockpit and edit the permissions of my share's path so that the newly created user and group either own it, or the newly created group has write permissions.
Thank you so much for this. This secret lies under File Sharing / [YourShare] / Path / Edit Permissions (faded colour), so it was not obvious to find.
Wow, this was an incredible tutorial. Really neat to see what you are going to do with that small fileserver. Never thought about wiping the NAS and putting Proxmox on it. Brilliant!!
My man. I have been struggling with getting TrueNas or OMV running on proxmox, but setting up the mount points always killed me. Your video showed me I don't even need either of those to be functional. Thank you!
Fantastic tutorial, straight to the point, perfectly explained what each option actually means. After watching tons of tutorials on setting up a classic Samba share, this one is by far the best explanation. The setup in other tutorials with Debian and then another Debian inside Debian feels like Inception. You’ve definitely earned a new sub!
This has transformed proxmox for me (which in itself is a great hypervisor, but not a very practical NAS OS by design). Thank you for a very coherent and informative tutorial, that I was able to tweak to my preferences on the go. I'll certainly check out your other tutorials. Keep up the great work.
Great video! It's the little explanations that I really enjoy. Like the LXC and kernel relationship, and that it's a quota rather than a dedicated amount of resources.
I have been binge watching your videos and thanks to you I have reinstalled my Proxmox servers a few times now to get it just right and, of course, to learn. My head still hurts after watching your Nebula video. In my case I had a 16GB drive I wanted to connect to a Proxmox server and then use your Cockpit method to share the contents to the internal network. To do that I used lsblk to identify the NTFS partition, then created a mount point on the Proxmox server in /mnt/pve and mounted the disk. After a bit of digging I found the command to share a mount point with an LXC and it worked! Here is what you need: pct set 103 -mp0 /host/dir,mp=/container/mount/point (the host directory comes first, and mp= is the path inside the container). Just remember to edit fstab afterwards. Thanks for your great tutorials!
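As a sketch, the mount-point command and the line it adds to the container config look like this (container ID 103 and the /mnt/pve/usbdisk path are hypothetical example values):

```shell
# Host directory first, then mp= for the path inside the container
pct set 103 -mp0 /mnt/pve/usbdisk,mp=/mnt/data

# This adds a line like the following to /etc/pve/lxc/103.conf:
# mp0: /mnt/pve/usbdisk,mp=/mnt/data
```

This is a command fragment for a PVE host, shown for reference rather than as something to run verbatim.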
Exactly the tinkering distraction I was looking for so I don't need to deal with the pile of real work I need to be doing. Great instructions and thank you for giving me something to do today other than work! lol
Thanks for the great video! Hopefully this will satisfy my needs for a file share without having to run a nested virtual server or separate hardware server!
Thank you for the tutorial, of all the options I tried, this one works best for me, much simpler to use compared to Turnkey and File Server, and the fact that I can import old configuration is amazing.
Excellent video! I always appreciate the great Proxmox content you put out. Also, I hadn't heard of Cockpit before so I appreciate learning about that as well. I'll definitely be using it from now on.
"That just seems dumb." Is exactly the thought I had when thinking about virtualizing TrueNAS and passing through my 4 drives. Such an unnecessary layer of overhead and complexity. This video arrived at the right moment for me. I have been looking at different options for storage and think I'll give this a try. What's the process for creating NFS shares through this?
NFS is ... a lot more tricky than Samba, since it's normally managed via the kernel server. Since the container is in its own namespace, even a privileged container can't control the kernel's NFS exports. You can disable all isolation of the container and then it will work, but this is strongly not recommended. The solution is to use nfs-ganesha (a userspace NFS server), but Cockpit doesn't have a GUI module for that. TrueNAS also uses nfs-ganesha, incidentally.
In general I use SMB over NFS in my own setups since Windows access is important. But, fundamentally, NFS and SMB are quite different protocols in how they deal with user permissions and access, and SMB is easier to administer due to server-side account permissions. As to performance, SMB can achieve roughly the same performance as NFS on large file IO and is dramatically slower on small file IO. For videos, SMB is perfectly adequate.
Except it's not dumb lol. Backing up and managing file shares, permissions etc from Proxmox is a headache. A separate VM that you can snapshot and pass through a HBA or controller is a much better idea.
Also, not everyone may wanna buy an enterprise grade ssd to deal with write amplification of zfs. So you use proxmox with lvm in a consumer ssd (speed benefit without huge write amplification of zfs) and then have a virtual truenas combining multiple hdds in a zfs pool (say in a raid 10 array).
Very nice, I am getting inclined to switch my home server over to Proxmox. I was thinking about this before, just because of LXC. So much less overhead than a full VM. And Docker is a mess after some time (TM). Really appreciate your focus on small home labs/servers. Most content in this area is "look at this (insert expensive hardware)", which is not what most home server owners have access to.
Thanks for releasing this video! While I'm not Jellyfin'ing, the LXC tip to enable hardware Quick Sync Video worked wonders with Channels DVR on an Intel Xeon E3-1265L powered Proxmox setup. Also the Terramaster Intel NAS looks pretty sweet, keeping an eye on that one! Thanks again!
Super helpful tutorial!! You may already have a follow on video or article about this, but I think it's important to create an admin account as part of this setup. One that is not root so that you can remove root from being able to login via GUI. In order to do this you'd want to create a new admin user and then add them to the sudo group. Then when that user signs into cockpit they can click this "Limited access/Admin access" toggle at the top of the page.
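A sketch of those steps from a root shell in the container (the username admin is a placeholder, and the disallowed-users file assumes a recent Cockpit version):

```shell
# Create a non-root admin user (name is a placeholder) with a home dir and shell
useradd -m -s /bin/bash admin
passwd admin

# Add the user to the sudo group so Cockpit's "Admin access" toggle works
usermod -aG sudo admin

# Optionally block root from logging in via Cockpit's web GUI
echo root >> /etc/cockpit/disallowed-users
```

These commands require root and an interactive password prompt, so treat them as a reference fragment.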
This is perfect for my needs, thanks for making and sharing this information mate! Now I'm motivated to build a small backup server, to make weekly backups of the proxmox 'nas' files for long-term protection. Cheers!
I think it would be interesting to do similar walkthrough for NFS (next to your SMB setup) and go into details on how it should be used as shared storage for other Proxmox nodes and VMs running on external nodes (should we use VLANS to segregate node and VM traffic? Or is NFS IP based security enough? NFS version? root squash? etc)...and I was also curious about the performance of this mounted NAS/NFS running under Proxmox LXC/VM vs. Disk Passthrough to VM vs. the Native NAS/NFS performance.
NFS is a bit of a different beast to manage, since normally you'd use the kernel server. nfs-ganesha is a userspace NFS server which would work in a container. For performance, Samba runs in userspace and the host ZFS pool is bind mounted into the container, so performance in an LXC will be the same as native until it hits a resource limit (either CPU or RAM). For NFS, performance using nfs-ganesha should be worse than the kernel server; however, TrueNAS uses nfs-ganesha anyway.
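For anyone curious what an nfs-ganesha setup might look like, here's a minimal export sketch; the path, pseudo path, and export ID are made-up example values, so check the ganesha.conf documentation before using it:

```
# /etc/ganesha/ganesha.conf -- minimal VFS export (illustrative values only)
EXPORT {
    Export_Id = 1;              # any unique ID
    Path = /mnt/data;           # directory to export
    Pseudo = /data;             # NFSv4 pseudo-filesystem path clients mount
    Access_Type = RW;
    FSAL {
        Name = VFS;             # plain local-filesystem backend
    }
}
```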
@@apalrdsadventures Haha, yeah, I ran into that today. To do NFS, you have to make it a privileged container and enable NFS in the features ... that said, I am also running into multiple failures when trying to get it running. When doing it in an unprivileged container I get dependency errors starting the nfs-server. When I do it in a privileged container I get errors starting the cockpit service. Any thoughts?
- create a new VM in Proxmox - allows easy pass-through of USB drives
- install Debian 12, maybe with LUKS encryption
- sudo apt install nfs-kernel-server ufw ufw-extras
- sudo ufw enable
- sudo ufw allow nfs
- sudo nano /etc/exports - edit to your needs, using the info provided in the opened file as a template
- mount the share from anywhere you want and use it as a network drive. Only Windows needs some additional software to mount the share, but there are good open source solutions.
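As a sketch of the /etc/exports step and the client-side mount (the subnet, server IP, and paths are example values):

```shell
# Example /etc/exports line: export /srv/share read-write to the local subnet
# /srv/share 192.168.1.0/24(rw,sync,no_subtree_check)

# Re-read /etc/exports on the server after editing
exportfs -ra

# On a Linux client, mount the share
mount -t nfs 192.168.1.10:/srv/share /mnt/share
```

These commands need root on a real NFS server/client pair, so take them as a reference fragment.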
I did something similar directly on one of my Proxmox nodes to share an attached (not ZFS) disk for backups, but using WebMin instead of Cockpit for the SMB setup. My next project is to convert an old desktop into a NAS, but I still wanted to stick with ProxMox, and give it a couple proper ZFS pools (SSD and rust, similar to yours), but I didn't want to do the SMB/NFS install on the bare metal hypervisor again, so I'll definitely be "borrowing" your LXC + Mount Point idea 👍👍
Thanks for the guide! Like a lot of people in the comments, I was also considering TrueNAS/unRAID on bare metal or in a Proxmox VM. This makes a lot of sense, will definitely try that out! Thanks
I have to agree. Simple, fast and convenient. I'm rocking two 2nd-hand HP MicroServers, 40 and 50 models. The first time I saw this tutorial I was really skeptical about its practical use, but after consecutive unexplainable failures with TrueNAS Core VMs and containers (in my Proxmox environment), I decided that my hardware and my resources would function better in a scenario like this. Sharing is caring, and thank you for sharing your thoughts with us. Cheers from Portugal ;)
I'm slowly getting there, I started with just the features that Proxmox exposes through the GUI, now there's a little bit of manual dataset creation, but zpool management is another thing
Ah, great method! I've already built something up using an OpenMediaVault VM on Proxmox because I wanted that GUI with solid user management/permissions, but I like the container/Cockpit method a lot.
I've made some additional discoveries. So: I am not running IPv6 on my home network at the moment, and as such the DHCPv6 request takes ~5 minutes or so to time out before the container will finish starting up. I was able to disable this in the container itself. In the container, edit /etc/sysctl.conf and add a line that says "net.ipv6.conf.all.disable_ipv6=1" without quotes, then reboot the container. You can do the same for the entire PVE system by doing the same in the Proxmox base system. The way I did it just modified the individual container.
Did you set the IPv6 address to DHCP in Proxmox? Setting it to static and leaving it empty will cause it to not assign an IPv6 address at all (other than the link local address).
@@apalrdsadventures I can't recall if I tried that or not. Might be worth trying to spin up another one and see if it lets me select static and leave the boxes empty. Some software requires you to populate it. Not sure about PVE
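The sysctl approach from the comment above, as commands (run inside the container; requires root):

```shell
# Persist the setting across reboots
echo 'net.ipv6.conf.all.disable_ipv6=1' >> /etc/sysctl.conf

# Apply it immediately without a reboot
sysctl -p

# Verify -- should print 1
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
```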
I have followed along with this guide and your Jellyfin guide. My fileserver is an "unprivileged" container while the Jellyfin one is a privileged container. I used the manual steps to add mount points to both. The problem is that on the unprivileged container, the ownership of the mount is nobody:nogroup, because root has a different UID, I think. And I can't change the permissions on the fileserver.
Something I had to do which differs - this may just be because of how I set my drive up in Proxmox, etc. After I created the share, I had to go back and Edit Permissions, and check Write for Group to be able to actually create a directory, file, etc. from a remote system accessing the share. In this case both Windows 10 and Ubuntu... I think 22.04, can't remember, getting old, lol. And if you have trouble with an fstab mount in Linux using cifs, try using vers=2.0. I had been using 1.0 for some older shares on legacy systems and had just copy/pasted to the new line for this share mount.
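For reference, a hypothetical /etc/fstab line with the vers= option (the server address, share name, and credentials file are placeholders):

```
# Mount an SMB share with an explicit protocol version (try vers=2.0, or vers=3.0 for modern servers)
//192.168.1.10/share  /mnt/share  cifs  credentials=/root/.smbcredentials,vers=2.0  0  0
```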
I have a weird issue. Cockpit keeps using up about 20% of my available CPU power after following this guide. Cockpit is obviously trying to do something I just have no idea what. I've taken to manually turning cockpit on/off using systemctl whenever I need to log in but I'm hoping to find a solution to this problem... Apalrd, this is a great guide and series. Thanks for putting it together it has helped me quite a bit!
WHAT HAVE YOU DONE? Now I must tear up my system and replace my TrueNAS in Proxmox with this solution. Do you know how much work you have caused me!?!?!?! But jokes aside, great vid
Great job... super nice content!!! I created the LXC and everything is OK, but the ZFS mount point always has a permissions problem (on my PC, using Nautilus, it's impossible to create folders or files... permissions error). Thanks!!!
If you create a new mount point for the container, you might need to chmod / chown the directory in Cockpit to a user or group with samba permissions. If it's a mount point you added from the host, it might need to be chmod/chown'd there.
@@apalrdsadventures Thanks! In your video it was not necessary to change permissions, so I didn't imagine it would be a possibility. Thank you very much for the answer. I am a follower of your channel and your videos are top notch.
Excellent video!! There is also an option with Turnkey that has Samba pre-installed and pre-configured. Have you tried that? and what are the disadvantages compared with the Cockpit approach?
I tried Turnkey fileserver first, found that the GUI wasn't as good as Cockpit, the Webmin-based manager has a ton of options to manage services on the system that shouldn't be managed in an appliance (like Apache settings, or hostname / network which are managed by Proxmox), and it doesn't natively support IPv6. Cockpit is also lighter weight than running Apache and runs itself as the logged in user.
I more or less did the same thing. Except I used a privileged container so I could export NFS. The tricky thing with storage is user accounts. Your NAS has to have user accounts for everyone and in the case of NFS those UIDs have to match the client machine UIDs. You can share things straight off of PVE, but that would require you add all your actual users and credentials to PVE which feels wrong. The idea I had was to put all the storage through that one NAS container and use it to share disks to the other VMs/CTs. That gets a bit broken because PVE is a bit fussy about what storage your can write what to, so you end up mapping a filesystem into the NAS container and then mounting the NFS share back on the PVE host. Which also feels wrong.
A mount point set up this way will create a single big file on your ZFS pool. That slows your ZFS pool down. I think pointing a folder from the pool into the container would run faster?
You can setup dedup on the zfs dataset by using `zfs set dedup=on pool/dataset`. If you are using Proxmox-managed mount points for the NAS, they will be named something like `rpool/data/subvol-508-disk-1` where 508 is the ID, disk 0 is the root fs, and the rest are sequentially from when they were created. It won't go back and deduplicate things after they are written, it does this when data is written. So existing data will remain in place until it's modified.
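The dedup commands as a sketch, using the example dataset name from above (run on the PVE host):

```shell
# Enable dedup on the container's data mount point
zfs set dedup=on rpool/data/subvol-508-disk-1

# Confirm the property took effect
zfs get dedup rpool/data/subvol-508-disk-1

# Watch the pool-wide dedup ratio as new data lands
zpool list -o name,dedupratio rpool
```

These require a live ZFS pool, so treat them as a reference fragment.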
So pleased to have found your channel. Is there an LXC container manager (like exists within Proxmox) for Cockpit or some other tool? I see it can do VMs and I'm already familiar with Virt-manager. I'd like to just run Debian with a GUI desktop, KVM/QEMU, LXC containers, and Docker within one of those too, but also be able to access and work at the machine itself. I see Incus is coming with Trixie. Not sure about LXD now that it's getting dropped after Bookworm?
Just as a heads up for anyone having issues running this in a privileged container you must enable nesting by adding features: nesting=1 to the container .conf before the cockpit webgui will come up.
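Two equivalent ways to do that (the container ID 101 is a placeholder):

```shell
# Via the pct CLI:
pct set 101 -features nesting=1

# ...or by adding this line to /etc/pve/lxc/101.conf by hand:
# features: nesting=1
```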
I am running Proxmox (Pimox) on an Orange Pi 5 with a 2TB NVMe. 2 VMs, 5 LXCs; it is enjoying itself at 6 watts with some load, runs great. Cockpit looks useful, will give it a shot. I use Alpine for the VMs and containers, it is like 30MB lol, although that is quite barebones.
Thanks for the video, this convinced me to migrate from virtualized TrueNAS to this approach, BUT can you explain the drop in performance? TrueNAS used to be much faster
What performance difference are you seeing? TrueNAS (SCALE) should have very similar zfs tuning to Proxmox VE, but if you are coming from CORE you will have less ARC space.
Did you figure this out? I'm pretty new to this stuff @@chrisrgutierrez
@@chrisrgutierrez Seems the user you're currently using is not root (as in the tutorial), or not part of the sudoers group?
@@chrisrgutierrez In the version I am using (logged in as root), I see Edit Permissions under the path (mount point).
I don't see this. Please elaborate.
@@osaether Features > Edit > Check nesting box
Cool, mate. Your best feature is your ability to describe what you are doing and why you are doing it. Appreciated.
Proxmox VE Helper-Scripts by tteck has added a script for installing Cockpit LXC with optional installs for file sharing plugins
You have a gift for simple but deep explanation. Great work
You are a god send. This is now my favorite Proxmox guru channel. Not sure exactly why, but this was a pleasure following along.
Did you get an error when clicking on the file sharing plugin, "ProcessError (exited non-zero)"?
@@arunforlife7009 No, I did not.
Thanks for bringing up the Cockpit project. It will be a good addition to my Proxmox-NAS hybrid which I have been admining via CLI👍
It's a pretty useful project, and the GUI is low-resource and looks good
Thank you so much for this! My share only had permissions for root until I made this change.
Thank you, great instructional video. Also I like the kill-a-watt in the frame, I can really identify with you and your methods.
That is exactly what I was looking for, the perfect alternative to TrueNAS and OMV for sharing. Thanks for your videos.
Your videos are amazing! Probably the best tutorials on UA-cam in the self-hosting space. Thank you for doing what you do!
Glad you like them!
Fun fact, linux cgroups (the method for enforcing quotas) can also be used to limit the CPU/RAM of individual user accounts, it's super handy
Great video. Thank you. I've been using Proxmox and TrueNAS for a while and just installed Cockpit today. Perfect.
Nice. Do you use TrueNAS as a VM in Proxmox?
@@orafaelgf TrueNAS as Proxmox storage
I’ve subscribed your channel because of this video alone. It is exactly what i was looking for! Thanks bro!
That's actually a really pretty and creative way to do that stuff! Thank you for that inspiration!
Oh man, apalrd. I'm doing this very project as we speak. Impeccable timing!
Hopefully it works well for you then
Fantastic, easy to follow even for a linux & proxmox newbie like me, thanks.
Even managed to connect to it with music assistant in Home Assistant
Hey, this was pretty good. Fast and fluid w/o bunches of cuts. Nice + thanks!
Thank you for the tutorial. I was able to set up my Network Share successfully by following this guide.
Glad it helped
Best solution for my scenario. Good one! Still works under Bookworm.
I did TrueNAS with ESXi. TrueNAS doesn't play well when it is a VM; in fact it corrupts data quite often.
Thank you for this simple walkthrough. been looking for a simple solution to integrate my NAS into proxmox.
Glad you like it!
I do enjoy making lower end hardware work for me, spending more time getting the software right rather than throwing hardware at it.
@@apalrdsadventures and truth be told - most home server users have old/repurposed hardware. I wish you much success!
Thanks!
Nice work :) Had to add a bit of custom samba conf to get time machine working on the SMB share, but it works :D
Thanks dude, got me where I needed to be. Simple and easy.
This is an excellent tutorial! I have been looking into building a simple file server for my home network!
Thanks for releasing this video! While I'm not Jellyfin'ing, the LXC tip to enable hardware Quick Sync Video worked wonders with Channels DVR on an Intel Xeon E3-1265L powered Proxmox setup. Also the Terramaster Intel NAS looks pretty sweet, keeping an eye on that one! Thanks again!
one of the best i have found on the interwebs
Thank you!
Thank you so much! Just got done with the video and everything just works. Very well made and easy-to-understand tutorial.
Great to hear!
Great vid! A lot of very useful information.
Super helpful tutorial!!
You may already have a follow on video or article about this, but I think it's important to create an admin account as part of this setup. One that is not root so that you can remove root from being able to login via GUI. In order to do this you'd want to create a new admin user and then add them to the sudo group. Then when that user signs into cockpit they can click this "Limited access/Admin access" toggle at the top of the page.
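The steps described above boil down to something like this on Debian (the username "nasadmin" is just an example, not from the video):

```shell
# create a non-root admin user (pick your own name)
adduser nasadmin
# add the user to the sudo group so Cockpit can offer the admin toggle
usermod -aG sudo nasadmin
```

After that user logs into Cockpit, the "Limited access/Admin access" toggle appears at the top of the page.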
That's super cool, thx for the video! 👍
This is what I was looking for great tutorial thx 👍
Thank you for the tutorial, very nice!
Thanks for this! Always nice to watch, very understandable and comprehensible.
This is perfect for my needs, thanks for making and sharing this information mate! Now I'm motivated to build a small backup server, to make weekly backups of the proxmox 'nas' files for long-term protection. Cheers!
Glad you liked it!
I think it would be interesting to do similar walkthrough for NFS (next to your SMB setup) and go into details on how it should be used as shared storage for other Proxmox nodes and VMs running on external nodes (should we use VLANS to segregate node and VM traffic? Or is NFS IP based security enough? NFS version? root squash? etc)...and I was also curious about the performance of this mounted NAS/NFS running under Proxmox LXC/VM vs. Disk Passthrough to VM vs. the Native NAS/NFS performance.
NFS is a bit of a different beast to manage, since you'd normally use the kernel server. nfs-ganesha is a userspace NFS server which would work in a container.
For performance, Samba runs in userspace and the host ZFS pool is bind mounted into the container, so performance in an LXC will be the same as native until it hits a resource limit (either CPU or RAM). For NFS, performance using nfs-ganesha should be worse than the kernel server, however, TrueNAS uses nfs-ganesha anyway.
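For the curious, a minimal nfs-ganesha export looks something like this (the path and export ID are placeholders, not anything from the video):

```
# /etc/ganesha/ganesha.conf - minimal sketch
EXPORT {
    Export_Id = 1;           # unique ID for this export
    Path = /data;            # directory on the server to export
    Pseudo = /data;          # NFSv4 pseudo-filesystem path clients see
    Access_Type = RW;
    FSAL { Name = VFS; }     # plain local-filesystem backend
}
```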
@@apalrdsadventures interesting stuff. I still think it would be a great video idea to complete the functionality of your awesome custom NAS.
@@apalrdsadventures ha, didn't know TrueNas always used Ganesha. I always wondered why performance was so poor...
@@apalrdsadventures Haha, yeah, I ran into that today. To do NFS, you have to make it a privileged container and enable NFS in the features ... that said, I am also running into multiple failures when trying to get it running. When doing it in an unprivileged container I get dependency errors starting the nfs-server. When I do it in a privileged container I get errors starting the cockpit service. Any thoughts?
@@drumguy1384 Did you ever get it working with NFS?
- create a new VM in Proxmox - allows easy pass-through of USB drives
- install Debian 12, maybe with LUKS encryption
- sudo apt install nfs-kernel-server ufw ufw-extras
- sudo ufw enable
- sudo ufw allow nfs
- sudo nano /etc/exports
- edit to your needs, using the info provided in the opened file as a template.
- mount the share from anywhere you want and use it as a network drive. Only Windows needs some additional software to mount the share, but there are good open source solutions.
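To make the /etc/exports step concrete, an entry might look like this (the subnet and paths are assumptions for your own network):

```
# /etc/exports - export /srv/nfs read/write to one subnet
/srv/nfs  192.168.1.0/24(rw,sync,no_subtree_check)
```

After editing, apply it with `sudo exportfs -ra`, and mount from a client with `sudo mount -t nfs <server-ip>:/srv/nfs /mnt/nfs`.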
This is great, thanks a lot for your time.
I did something similar directly on one of my Proxmox nodes to share an attached (not ZFS) disk for backups, but using WebMin instead of Cockpit for the SMB setup. My next project is to convert an old desktop into a NAS, but I still wanted to stick with ProxMox, and give it a couple proper ZFS pools (SSD and rust, similar to yours), but I didn't want to do the SMB/NFS install on the bare metal hypervisor again, so I'll definitely be "borrowing" your LXC + Mount Point idea 👍👍
Glad you like it!
this is exactly what I was looking for. I am going to try it out.
perfect amount of explanation, not too little, not too much
Very nice tutorial well explained
Glad you like it!
Thanks for the guide! Like a lot of people in the comments, I was also considering TrueNAS/unRAID on bare metal or in a Proxmox VM. This makes a lot of sense, will definitely try that out! Thanks
Glad I could help!
Thank you so much for this!
Amazing brother, thank you.
Thank you 👍
Nice tutorial.
I've to agree. Simple fast and convenient.
I'm rocking two 2nd-hand HP MicroServers, the 40 and 50 models.
First time seeing this tut I was really skeptical about its practical use, but after consecutive unexplainable TrueNAS Core VM and container failures (in my Proxmox environment), I decided that my hardware and resources would function better with a similar scenario.
Sharing is caring, and thank you for sharing your thoughts with us.
Cheers from Portugal ;)
Greetings from Michigan :) Glad you enjoyed it
Excellent video, thanks!
Thanks for this! Since I started working with PVE 2 years ago, I've found LXC is a good, light OS container. It's like a Swiss Army knife :).
Bad ass info. I'm really digging all the new stuff I'm learning. Thanks!
great content!
Great content man!
This is great stuff. I'd like to know how you maintain ZFS, as I am a complete noob with that. Maybe a future video?
I'm slowly getting there, I started with just the features that Proxmox exposes through the GUI, now there's a little bit of manual dataset creation, but zpool management is another thing
Thanks for a really comprehensive tutorial.
Can't wait to implement this with Jellyfin
You're very welcome!
Ah, great method! I've already built something up using an OpenMediaVault VM over Proxmox because I wanted that GUI with solid user management/permissions, but I like the container/Cockpit method a lot.
You sir, are a super star!
I've made some additional discoveries. I am not running IPv6 on my home network at the moment, and as such the DHCP request takes ~5 minutes or so to time out before the container will finish starting up. I was able to disable this in the container itself: edit /etc/sysctl.conf in the container and add a line that says "net.ipv6.conf.all.disable_ipv6=1" without quotes, then reboot the container. You can do the same for the entire PVE system by making the same change in the Proxmox base system. The way I did it just modified the individual container.
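In other words, the fix described above is this one line (shown container-wide; apply it on the PVE host instead if you want it system-wide):

```
# /etc/sysctl.conf (inside the container) - disable IPv6 so the container
# doesn't stall at boot waiting for a DHCPv6/router-advertisement timeout
net.ipv6.conf.all.disable_ipv6=1
```

It can also be applied without a reboot via `sysctl -p`.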
Did you set the IPv6 address to DHCP in Proxmox? Setting it to static and leaving it empty will cause it to not assign an IPv6 address at all (other than the link local address).
@@apalrdsadventures I can't recall if I tried that or not. Might be worth trying to spin up another one and see if it lets me select static and leave the boxes empty. Some software requires you to populate it. Not sure about PVE
@@Trains-With-Shane Just select SLAAC as your IPv6 method and you are good to go. SLAAC is in effect an automatically assigned static IP.
I have followed along with this guide and your Jellyfin guide. My fileserver is unprivileged while the Jellyfin container is privileged. I used the manual steps to add mount points to both. The problem is that on the unprivileged container the ownership of the mount is nobody:nogroup, because root has a different UID I think. And I can't change the permissions on the fileserver.
When I make both containers privileged, the ownership of the mount point works. The other option is to just put both services in one container.
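A third option is to keep the containers unprivileged and map one UID/GID straight through to the host, so both containers see the same owner on the bind mount. This is only a sketch, assuming your share user is UID/GID 1005; substitute your real container ID:

```
# /etc/pve/lxc/<ctid>.conf - map container IDs 0-1004 with the usual
# 100000 offset, pass 1005 straight through, then continue the offset
lxc.idmap: u 0 100000 1005
lxc.idmap: g 0 100000 1005
lxc.idmap: u 1005 1005 1
lxc.idmap: g 1005 1005 1
lxc.idmap: u 1006 101006 64530
lxc.idmap: g 1006 101006 64530
```

The host's /etc/subuid and /etc/subgid each also need a `root:1005:1` line so root is allowed to map that ID.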
Excellent video as always, could you please create a video with iVentoy using a NAS for ISO storage on Proxmox
Thanks, this is just what I was looking for.
Enjoy!
Something I had to do which differs (this may just be because of how I set my drive up in Proxmox, etc.): after I created the share, I had to go back and edit permissions, and check Write for Group to be able to actually create a directory or file from a remote system accessing the share. In this case both Windows 10 and Ubuntu, I think 22.04, can't remember, getting old, lol
And if you have trouble with an fstab mount in Linux using cifs, try using vers=2.0. I had been using 1.0 for some older shares on legacy systems and had just copy/pasted that to the new line for this share mount.
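For example, a cifs line in /etc/fstab with the protocol version pinned might look like this (the server name, share, and credentials file are placeholders):

```
# /etc/fstab - mount an SMB share, forcing protocol version 2.0
//nas.example.lan/share  /mnt/share  cifs  vers=2.0,credentials=/root/.smbcred  0  0
```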
great video dude! thanks
I have a weird issue. Cockpit keeps using up about 20% of my available CPU power after following this guide. Cockpit is obviously trying to do something I just have no idea what. I've taken to manually turning cockpit on/off using systemctl whenever I need to log in but I'm hoping to find a solution to this problem...
Apalrd, this is a great guide and series. Thanks for putting it together it has helped me quite a bit!
I keep searching for things and coming to this channel. I should honestly just subscribe at this point lol, keep up the good work man.
Thanks!
Cool way. There is also a file server template in proxmox lxc templates. But with the ACL you go a step beyond
I did try the turnkey template first, but liked this solution better as something I'd run myself, with enough features in a good UI.
@@apalrdsadventures you are right about that
WHAT HAVE YOU DONE? Now i must tear up my system and replace my truenas in proxmox with this solution. Do you know how much work you have caused me !?!?!?! But jokes aside, great vid
Thanks! Good luck with your new system :)
Cool nice video. I use Proxmox running quite a few debian vm's. Nice addition. Sub'd and liked!
Great job... super nice content!!!
I created the LXC and everything is OK, but the ZFS mount point always has a permissions problem... (on my PC using Nautilus it's impossible to create folders or files... permissions error). Tks!!!
If you create a new mount point for the container, you might need to chmod / chown the directory in Cockpit to a user or group with samba permissions. If it's a mount point you added from the host, it might need to be chmod/chown'd there.
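Concretely, the chown/chmod step looks like this. It's demonstrated on a scratch directory here, so substitute your actual mount point, user, and group:

```shell
# demo on a scratch directory; use your real mount point instead
mkdir -p /tmp/data
# group ownership: substitute the group your Samba users belong to
chgrp "$(id -gn)" /tmp/data
# rwx for owner and group, r-x for everyone else
chmod 775 /tmp/data
# should now show drwxrwxr-x
ls -ld /tmp/data
```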
@@apalrdsadventures Thanks in your video it was not necessary to change permissions... so I didn't imagine it would be a possibility... Thank you very much for the answer. I am a follower of your channel and your videos are top
Glad you enjoyed it!
Excellent video!! There is also an option with Turnkey that has Samba pre-installed and pre-configured. Have you tried that? and what are the disadvantages compared with the Cockpit approach?
I tried Turnkey fileserver first, found that the GUI wasn't as good as Cockpit, the Webmin-based manager has a ton of options to manage services on the system that shouldn't be managed in an appliance (like Apache settings, or hostname / network which are managed by Proxmox), and it doesn't natively support IPv6. Cockpit is also lighter weight than running Apache and runs itself as the logged in user.
@@apalrdsadventures and your solution can even work on pimox and a small raspberry.
Perfect as usual
I more or less did the same thing. Except I used a privileged container so I could export NFS. The tricky thing with storage is user accounts. Your NAS has to have user accounts for everyone and in the case of NFS those UIDs have to match the client machine UIDs. You can share things straight off of PVE, but that would require you add all your actual users and credentials to PVE which feels wrong.
The idea I had was to put all the storage through that one NAS container and use it to share disks to the other VMs/CTs. That gets a bit broken because PVE is a bit fussy about what you can write to which storage, so you end up mapping a filesystem into the NAS container and then mounting the NFS share back on the PVE host. Which also feels wrong.
THANK YOU
A mount point like this will create a single big file on your ZFS pool, which slows the pool down. I think pointing a folder from the pool into the container would run faster?
great explanation
Great one 👌, please make a video for odk Central installation on ubuntu local machine, thanks 🙏.
Looks great....had to adjust permissions to get it to work :)
Thank you Apalrd's adventure for this video, is there any chance to add deduplication for this kind of NAS setup?
You can setup dedup on the zfs dataset by using `zfs set dedup=on pool/dataset`. If you are using Proxmox-managed mount points for the NAS, they will be named something like `rpool/data/subvol-508-disk-1` where 508 is the ID, disk 0 is the root fs, and the rest are sequentially from when they were created.
Dedup is applied as data is written; it won't go back and deduplicate things after the fact. So existing data will remain in place until it's modified.
So pleased to have found your channel. Is there an LXC container manager (like the one within Proxmox) for Cockpit or some other tool? I see it can do VMs and I'm already familiar with virt-manager. I'd like to just run Debian with a GUI desktop, KVM/QEMU, LXC containers, and Docker within one of those too, but also be able to access and work at the machine itself. I see Incus is coming with Trixie. Not sure about LXD now that it's getting dropped after Bookworm?
Just as a heads up for anyone having issues running this in a privileged container you must enable nesting by adding features: nesting=1 to the container .conf before the cockpit webgui will come up.
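That is, one line in the container's config on the PVE host (container ID 100 is an example), or tick Nesting under Options > Features in the GUI:

```
# /etc/pve/lxc/100.conf - let systemd inside the container create the
# nested namespaces/cgroups that services like cockpit need
features: nesting=1
```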
I am running Proxmox (Pimox) on an Orange Pi 5 with a 2TB NVMe. 2 VMs, 5 LXCs, it is enjoying itself at 6 watts with some load, runs great. Cockpit looks useful, will give it a shot. I use Alpine for the VMs and containers, it is like 30MB lol, although that is quite barebones.
OpenWrt x86 is even lighter again... and has a web gui.
@@MarkConstable you’re comparing a forklift to a tugboat
Thanks for the video, this convinced me to migrate from virtualized TrueNAS to this approach, BUT can you explain the drop in performance? TrueNAS used to be much faster.
What performance difference are you seeing? TrueNAS (SCALE) should have very similar zfs tuning to Proxmox VE, but if you are coming from CORE you will have less ARC space.