Proxmox High Availability With Ceph
- Published 22 Jun 2024
- In this video I deploy Ceph onto my MS-01 Proxmox cluster.
Thanks to Scyto: gist.github.com/scyto/8c652f3...
Minis Forum MS-01: amzn.to/3V9DkAa
Cable Matters Thunderbolt: amzn.to/4bOOZtU
Samsung 980 Pro: amzn.to/4dSaxaS
Corsair Vengeance: amzn.to/44OehWF
GitHub:
github.com/JamesTurland/JimsG...
Recommended Hardware: github.com/JamesTurland/JimsG...
Discord: / discord
Twitter: / jimsgarage_
Reddit: / jims-garage
GitHub: github.com/JamesTurland/JimsG...
00:00 - Introduction to Ceph and Configuration
04:15 - Ceph Instructions Walkthrough
06:13 - Future Jim Edit - You Need Separate Drives
11:35 - Starting Deployment of Ceph
25:29 - CephFS - Shared Storage
29:26 - Ceph Benchmarks
Category: Science & Technology
The live migration should have happened without a single dropped ping. The disconnect you saw was only the serial console cutting over to the other hypervisor. If you had done it over SSH, you would have seen no dropped pings, or at most one, depending on the speed of your switch.
Thanks, yes I did check the output again and saw no dropouts. The next test is to HA the firewall, wish me luck.
Great video!
Just as a heads-up: instead of initiating the migration via the command line, you can just click the Migrate button in the GUI.
Thanks for another straight to the point video!
Thanks!
Just a thought: a nicer test of the HA might be to run your ping command from another node rather than the one you migrated from. That way you can see whether the service really is fully available to external clients.
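The suggestion above can be sketched as a one-liner run from a node other than the one hosting the VM (the IP address below is a placeholder for the VM's address, not anything from the video):

```shell
# Run this from a different node (or a client machine) while the
# live migration is in progress. The -O flag (Linux iputils ping)
# prints a "no answer yet" line for every reply that fails to
# arrive, so any packet dropped during the cutover is immediately
# visible instead of being silently skipped.
ping -O 192.168.1.50
```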
Great content as usual. Just some notes on the Ceph cluster itself: you want to set the global flags noout, noscrub and nodeep-scrub when rebooting Ceph nodes, because the cluster will start to rebalance as soon as the first node goes down if you don't.
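For reference, a minimal sketch of the maintenance-flag sequence the comment above describes, using the standard `ceph osd set`/`unset` commands:

```shell
# Before rebooting a Ceph node, set these cluster-wide flags so the
# remaining OSDs don't mark the node's OSDs out and start rebalancing,
# and don't kick off scrubs while the cluster is degraded:
ceph osd set noout
ceph osd set noscrub
ceph osd set nodeep-scrub

# ...reboot the node and wait for its OSDs to rejoin...

# Then clear the flags so normal recovery and scrubbing resume:
ceph osd unset noout
ceph osd unset noscrub
ceph osd unset nodeep-scrub
```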
A fun project to cover would be how to shut down a Proxmox cluster with Ceph, as it does not seem to have an out-of-the-box solution.
I would always perform a full backup first, just in case.
Every time I go to build out a project, you put out a similar video covering it. If you somehow put out a video on how to use the ZFS-over-iSCSI storage option in Proxmox, I'll be floored.
🎉
❤
A few issues to think about when you do migration (live or offline):
1. Try to use the same CPU generation and brand on the nodes. Live migration from Ryzen to older AMD CPUs does not work flawlessly: the destination VM will spike at 100% CPU and be unresponsive. You will have to restart the VM, so no live migration in this case. Maybe it has been fixed in Proxmox 8; I used Proxmox 7.
2. Live migration between different processor brands is not possible, so no live migration between AMD and Intel CPUs.
3. Migration (live or offline) of VMs with USB-attached devices is not possible. That ruined my idea of having a Home Assistant VM with failover, sigh.
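One common mitigation for point 1 (not for the AMD/Intel case in point 2) is to give the VM a baseline vCPU model instead of `host`, so the guest only sees CPU features every node in the cluster can provide. A sketch, where VMID 100 is a placeholder:

```shell
# Pin the VM to a generic baseline CPU model that all nodes support.
# x86-64-v2-AES is one of the generic models Proxmox VE 8 offers for
# exactly this mixed-generation-cluster scenario; it trades some
# per-host CPU features for migration compatibility.
qm set 100 --cpu x86-64-v2-AES
```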
can you show Proxmox High Availability with Home Assistant Containers (LXCs or VMs) and Zigbee Stick?
It's possible but complex without multiple ZigBee sticks
@@Jims-Garage These sticks are cheap. Having to wait a few days for parts without working home automation is much worse.
+1
Why do you go to the effort of cloning and then moving the disks? You can choose the storage at the time you do the clone. Does that not work with Ceph?
Agreed, and I mentioned it on screen. It's for people with existing VMs that want to move to the new Ceph storage.
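For completeness, choosing the target storage at clone time works with Ceph too; the clone command accepts a target storage directly. A sketch with placeholder VMIDs and a placeholder pool name:

```shell
# Full-clone VM 100 to a new VM 101, placing the new disks directly on
# the Ceph RBD storage (here called "ceph-pool") in one step, instead
# of cloning first and moving the disks afterwards:
qm clone 100 101 --full --storage ceph-pool
```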
Now just make a video on migrating VirtualBox / VMware Workstation / bare-metal / ESXi machines to Proxmox.
I did hyper-v, does that count? 😂
@@Jims-Garage Lol, but it really would help us. A friend of mine and I have around 20-24 VMs on VMware Workstation, and I want to migrate them all to Proxmox.
a small hiccup and voilà