I sure do like the detail of your videos and the information you give on pitfalls you've run into. I built a redundant OPNsense Proxmox VM using CARP, and the testing went well until I removed the pfsync interface connection. I've found three links that usually do a nice job of explaining things, and in this case all three have different suggestions. If you ever decide to do more OPNsense videos, one on a redundant Proxmox VM CARP setup would be really educational. Thanks again for all your hard work.
Thanks for the feedback
Have you seen this video?
ua-cam.com/video/IWt3_K-12Ys/v-deo.html
I'm running OPNsense myself using CARP in a PVE cluster, so those VMs don't get replicated
There have been some code updates since I did that video, mind, and I think I noticed a slight change in the setup after I upgraded
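For anyone curious, OPNsense configures CARP through the GUI (Interfaces > Virtual IPs), but underneath it's standard FreeBSD CARP and pfsync, which look roughly like this (interface names, addresses and password are just examples):

```sh
# Primary node: vhid identifies the virtual IP group, advskew 0 = preferred master
ifconfig vtnet1 vhid 1 advskew 0 pass mysecret alias 192.168.1.1/24

# Backup node: a higher advskew means it only takes over if the master disappears
ifconfig vtnet1 vhid 1 advskew 100 pass mysecret alias 192.168.1.1/24

# pfsync keeps the firewall state tables in sync so a failover doesn't drop connections
ifconfig pfsync0 syncdev vtnet2 syncpeer 10.0.0.2 up
```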
Thanks for the link. I went to your website, clicked on the playlist and looked at OPNsense, but that video wasn't listed. One link I read from Thomas-Krenn said not to use OPNsense's DHCP when setting up HA with CARP, so your new video on using Kea is timely. Looking forward to watching the video.
@@jimscomments Thanks for pointing that out. I've updated the playlist
I don't use firewalls for services like DNS, DHCP, etc. Instead, I have dedicated servers
But the DHCP relay function works fine when using HA
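As far as I know it's essentially the classic ISC dhcrelay underneath, so conceptually it's doing something like this (interface and server address are made up):

```sh
# Listen for DHCP broadcasts on the LAN interface and forward them
# as unicast to the dedicated DHCP server on another subnet
dhcrelay -i vtnet1 192.168.10.5
```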
Being in my mid 70s, I've been trying to simplify my network at home, so I've been lazy and using my WiFi router for DHCP. Then I phased out my two bare metal TrueNAS servers and built some workstations from scratch to run Proxmox and convert TrueNAS into VMs. Now all of a sudden I have two more VMs running OPNsense. Using them for DHCP was just during my test, so literally one day before I saw your video I was looking into getting DHCP off OPNsense for when I build the final VMs. I literally stumbled onto Kea while looking for a Linux solution that would also let me stop using the WiFi router's DHCP as well. Then the next morning I see your new video on Kea. I guess really great minds think alike. ;~)
@@jimscomments Yeah I started off with everything on the router as well
But it's quite a rabbit hole once you get started:
"What if I add this.."
"Ooh that looks a better option..."
Great video and well explained. Thanks for sharing
Thanks for the feedback
Good to know the video was useful
Got the answer: the replication target is taken care of by HA automatically.
Yeah, it's very clever what they've done
It makes the whole process so much simpler
Useful, thank you, and even better at 1.25 playback speed 🙂
Good to know the video was useful
Looking at the 'Join' link on Tech Tutorials vs your 'Patreon' link, the pricing is different. What's the difference between the Early Access and the Patreon Early Access?
It's tricky but bear in mind I don't post new content as often these days
Both offer access to new videos ahead of official publication, although YT is first because Patreon now enforces a delay, of maybe 2 hours I think
There is a storage limit on Patreon, and so far I'm below 4%, but for now that option also offers ad-free access to my popular videos as well as the newer ones I've been uploading since
Anyone using YT Premium wouldn't see an extra benefit from the Patreon option
Thanks for this. I have 3 DL380 servers and an MSA SAN. I want to use Ceph for HA on the servers and was wondering what to do with the SAN. I guess I can make it ZFS storage for the cluster and add it as a shared drive for the VMs in the cluster
ZFS is very useful
I've noticed folks syncing to TrueNAS as well
The challenge is that it only replicates the hard drive file, so HA was very useful
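If you did want to sync copies over to something like TrueNAS, plain ZFS replication does the job; a minimal sketch, assuming made-up pool and host names:

```sh
# Take a snapshot of the dataset holding the VM disks
zfs snapshot tank/vm-disks@sync1

# The first copy is a full send
zfs send tank/vm-disks@sync1 | ssh truenas zfs recv backup/vm-disks

# Later runs only send the delta between the last two snapshots (-i = incremental)
zfs snapshot tank/vm-disks@sync2
zfs send -i sync1 tank/vm-disks@sync2 | ssh truenas zfs recv backup/vm-disks
```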
Insightful and well explained, keep up the great videos.
Thanks for the feedback
As much as I've found a NAS to be useful, I now prefer this strategy for small clusters
It's much less inconvenient when the NAS needs updating
After HA failover, what will be the effect on replication (set from server01 to server02)?
Do you have any recommendations on the maximum latency between nodes in HA? I've seen a lot of people on the proxmox forums mention it needs to be less than 5-7ms.
I think it's open to interpretation
But we aren't doing real-time synchronisation here and so this isn't suitable for critical data
At a set interval, PVE takes a snapshot and then syncs that over to another server
At that point, the amount of data to send, the latency and the available bandwidth between them are going to decide how long it will take to replicate, and thus how often you can run the task
The first copy takes the longest because all of the data on the drive has to be sent
After that, deltas are sent and that amount of data depends on how much change there has been since the last replication
You can send replication traffic over a WAN, which has far higher latency than your example; it just means it's going to take longer to send, so the replication interval will have to be longer
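To put that into context, a replication job in PVE is managed by pvesr, and setting one up from the CLI looks something like this (VM ID, node name and schedule are just examples):

```sh
# Replicate VM 100's disks to node server02 every 15 minutes
# (add --rate <MB/s> to cap bandwidth over a slow link)
pvesr create-local-job 100-0 server02 --schedule "*/15"

# Check when each job last ran and how long it took
pvesr status
```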
One thing I found out when I set up my 2-node cluster was that on the secondary I couldn't create a VM without the primary being up (no quorum). Additionally, on any VM that was running on the secondary, I couldn't get a console. The VM was still running, but no console. I'm wondering if this has been your observation as well?
The clusters I use have two servers, which run VMs, but they also have a qdevice
If you only have two servers you get the problems you've described because by default you need at least two servers for quorum
You can reduce that down to 1 in the configs but it risks problems in the event of a split brain
A qdevice is basically just a voting machine and you can use something as simple as a Raspberry Pi
I've got a video on that if you're interested
ua-cam.com/video/jAlzBm40onc/v-deo.html
They're very useful for small networks when a 3rd server feels like a waste of money
It's easy to set up, and after that you can do maintenance on a server, for instance, and still have access to VMs on the other
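For reference, the setup only takes a few commands; a rough sketch (the IP is just an example):

```sh
# On the qdevice host (e.g. a Raspberry Pi running Debian)
apt install corosync-qnetd

# On every cluster node
apt install corosync-qdevice

# From one cluster node, point the cluster at the qdevice
pvecm qdevice setup 192.168.1.50

# Confirm the extra vote is now counted
pvecm status
```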
So I installed Proxmox on a ZFS volume and discovered that most VMs that run, say, Ubuntu or Debian Server can't install/boot on a ZFS volume! 😂 Any ideas on how to make this work? Do I need to somehow modify the default Ubuntu/Debian Server ISO file to include ZFS? Maybe you can do a tutorial video on this. Thanks!
Not quite sure what you mean
PVE is a Debian OS and what I've done is to attach another drive which is formatted using ZFS
This is used for VM storage so I can replicate hard drive files using ZFS replication
The VMs have no knowledge of this though as they just see a virtual SCSI drive
Or are you talking about installing, say, Debian itself on a ZFS boot drive?
At the moment the OS doesn't offer that as an option, but I found this blog which may be useful
www.thecrosseroads.net/2016/02/booting-a-zfs-root-via-uefi-on-debian/
I'm not sure of any gain for VMs though as drive redundancy for instance is handled by the hypervisor anyway
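For reference, attaching that ZFS drive and registering it with PVE goes along these lines (pool name and device are just examples):

```sh
# Create a ZFS pool on the spare drive (ashift=12 suits 4K sector disks)
zpool create -o ashift=12 vmdata /dev/sdb

# Register it with PVE as VM disk storage
# (use the same storage/pool name on each node you replicate to)
pvesm add zfspool vmdata --pool vmdata --content images,rootdir
```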
@@TechTutorialsDavidMcKone Yes, I installed Proxmox on a ZFS boot drive/volume, and it works fine. TrueNAS installed on Proxmox as a VM on a ZFS boot volume also works fine. Mint Desktop Linux installed on Proxmox on the ZFS boot volume also works fine. Debian Server doesn't install as a VM on Proxmox on a ZFS volume, and after much fiddling I finally got Ubuntu 24.10 to install on Proxmox on the ZFS boot volume, but it only works when the VM is configured with a SCSI boot drive. So luckily I finally got Ubuntu Server installed and working. The key is selecting the SCSI boot disk option in the VM setup even if you are not running any SCSI disks (mine are all M.2 NVMe/PCIe disks). Cheers!
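For anyone hitting the same thing, I believe this is roughly the CLI equivalent of what I selected in the GUI (the VM ID is just an example):

```sh
# Attach the disk through a VirtIO SCSI controller and boot from it
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --boot order=scsi0
```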
@@TechTutorialsDavidMcKone And by the way, the reason I changed over to a ZFS boot volume is that my initial installation of Proxmox, which was on Ext4, got corrupted. So that's why I changed from Ext4 to ZFS.
Thank you.
Glad you liked the video