Proxmox High Availability With Ceph

  • Published 22 Jun 2024
  • In this video I deploy Ceph onto my MS-01 Proxmox cluster.
    Thanks to Scyto: gist.github.com/scyto/8c652f3...
    Minis Forum MS-01: amzn.to/3V9DkAa
    Cable Matters Thunderbolt: amzn.to/4bOOZtU
    Samsung 980 Pro: amzn.to/4dSaxaS
    Corsair Vengeance: amzn.to/44OehWF
    GitHub: github.com/JamesTurland/JimsG...
    Recommended Hardware: github.com/JamesTurland/JimsG...
    Discord: / discord
    Twitter: / jimsgarage_
    Reddit: / jims-garage
    00:00 - Introduction to Ceph and Configuration
    04:15 - Ceph Instructions Walkthrough
    06:13 - Future Jim Edit - You Need Separate Drives
    11:35 - Starting Deployment of Ceph
    25:29 - CephFS - Shared Storage
    29:26 - Ceph Benchmarks
  • Science & Technology

COMMENTS • 23

  • @LampJustin
    @LampJustin 10 days ago +7

    The live migration should have happened without a ping being dropped. The disconnect you saw was only the serial console cutting over to the other hypervisor. If you had done it over SSH, you should have seen no dropped pings, or at most one, depending on the speed of your switch.

    • @Jims-Garage
      @Jims-Garage 10 days ago +4

      Thanks, yes, I did check the output again and saw no dropouts. The next test is HA for the firewall, wish me luck.

  • @ewenchan1239
    @ewenchan1239 10 days ago +3

    Great video!
    Just as a heads-up -- instead of initiating the migration via the command line, you can just click the Migrate button in the GUI.
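
    For reference, the CLI route maps to Proxmox's qm migrate command; a minimal sketch (VM ID 100 and target node pve2 are placeholders):

        # Live-migrate VM 100 to node pve2 while it keeps running
        qm migrate 100 pve2 --online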

  • @ketiljo
    @ketiljo 10 days ago +1

    Thanks for another straight-to-the-point video!

  • @rjarmitag
    @rjarmitag 10 days ago +2

    Just a thought: a nicer test of the HA might be to run your ping command from another node rather than the one involved in the migration. That way you can see whether the service really is fully available to external clients (see the sketch below).
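
    A minimal sketch of that external check, run from a separate machine while the migration is in progress (192.168.1.50 is a placeholder for the VM's address):

        # -D prints timestamps, -O reports each unanswered ping as it happens (Linux iputils)
        ping -D -O 192.168.1.50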

  • @tactoad
    @tactoad 10 days ago

    Great content as usual. Just some notes on the Ceph cluster itself: you want to set global flags like noout, noscrub, and nodeep-scrub when rebooting Ceph nodes, because the cluster will start to rebalance as soon as the first node goes down if you don't (see the sketch below).
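
    A minimal sketch of that flag dance around a single node reboot, using the standard Ceph CLI (run from any node with admin access):

        # Stop Ceph from marking OSDs out and rebalancing while the node is down
        ceph osd set noout
        ceph osd set noscrub
        ceph osd set nodeep-scrub

        # ... reboot the node and wait for its OSDs to rejoin ...

        # Restore normal behaviour afterwards
        ceph osd unset noout
        ceph osd unset noscrub
        ceph osd unset nodeep-scrub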

  • @rogerthomas7040
    @rogerthomas7040 9 days ago +1

    A fun project to cover would be how to shut down a Proxmox cluster with Ceph, as it does not seem to have an out-of-the-box solution (a rough sequence is sketched after this thread).

    • @Jims-Garage
      @Jims-Garage 9 days ago

      I would always perform a full backup first, just in case.
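
    One commonly suggested sequence for a clean full shutdown, sketched here as an assumption rather than an official procedure:

        # 1. Shut down or migrate all guests first
        # 2. Tell Ceph not to react to the nodes disappearing
        ceph osd set noout
        ceph osd set norebalance
        ceph osd set norecover

        # 3. Power off the nodes one at a time
        # 4. After powering back up, once the cluster reports healthy again:
        ceph osd unset noout
        ceph osd unset norebalance
        ceph osd unset norecover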

  • @jeefonyoutube
    @jeefonyoutube 10 days ago +1

    Every time I go to build out a project, you put out a similar video going over it. If you somehow put out a video on how to use the ZFS-over-iSCSI storage option in Proxmox, I'll be floored.

  • @BenjaminBenStein
    @BenjaminBenStein 10 days ago +1

    🎉

  • @jacobburgin826
    @jacobburgin826 10 days ago +1

  • @hanscarlsson7276
    @hanscarlsson7276 10 days ago

    A few issues to think about when you do migration (live or offline); see the sketch after this list:
    1. Try to use the same CPU generation and brand on the nodes. Live migration from Ryzen to older AMD CPUs does not work flawlessly: the destination VM will spike at 100% CPU and be unresponsive, and you will have to restart it, so no live migration in this use case. Maybe it has been fixed in Proxmox 8; I used Proxmox 7.
    2. Live migration between different processor brands is not possible, so no live migration between AMD and Intel CPUs.
    3. Migration (live or offline) of VMs with USB-attached devices is not possible. That ruined my idea of having a Home Assistant VM with failover, sigh.
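
    One mitigation worth knowing about (general Proxmox practice, not something covered in the video): pin the VM to a generic CPU model instead of "host", so source and target expose the same feature set; the trade-off is hiding newer CPU features from the guest. A minimal sketch (100 is a placeholder VM ID):

        # x86-64-v2-AES is one of Proxmox's generic baseline CPU models
        qm set 100 --cpu x86-64-v2-AES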

  • @TheMaksimSh
    @TheMaksimSh 10 days ago +2

    Can you show Proxmox High Availability with Home Assistant containers (LXCs or VMs) and a Zigbee stick?

    • @Jims-Garage
      @Jims-Garage 10 days ago

      It's possible, but complex without multiple Zigbee sticks.

    • @TheMaksimSh
      @TheMaksimSh 9 days ago

      @Jims-Garage These sticks are cheap. Having to wait a few days for parts with no working home automation is much worse.

  • @AdrianuX1985
    @AdrianuX1985 10 days ago +1

    +1

  • @rjarmitag
    @rjarmitag 10 days ago +1

    Why do you go to the effort of cloning and then moving the disks? You can choose the storage at the time you do the clone. Does that not work with Ceph?

    • @Jims-Garage
      @Jims-Garage 10 days ago +1

      Agreed, and I mentioned it on screen. It's for people with existing VMs who want to move to the new Ceph storage.
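
    For reference, both routes exist in the qm CLI on recent Proxmox versions; a minimal sketch (the IDs and the pool name ceph-pool are placeholders):

        # Clone straight onto Ceph-backed storage
        qm clone 100 101 --full --storage ceph-pool

        # Or move an existing VM's disk onto it afterwards
        qm disk move 100 scsi0 ceph-pool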

  • @iron-man1
    @iron-man1 9 days ago +1

    Now just make a video on migrating from VirtualBox / VMware Workstation / bare metal / ESXi to Proxmox.

    • @Jims-Garage
      @Jims-Garage 9 days ago

      I did Hyper-V, does that count? 😂

    • @iron-man1
      @iron-man1 7 days ago

      @Jims-Garage Lol, but really, it would help us: a friend and I have around 20-24 VMs on VMware Workstation, and I want to migrate them all to Proxmox.
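
    A minimal sketch of one common route for that (an assumption, not something covered in the video): copy each .vmdk to a Proxmox node, create an empty VM, and import the disk with qm (100, the path, and local-lvm are placeholders):

        qm create 100 --name imported-vm --memory 4096 --net0 virtio,bridge=vmbr0
        qm importdisk 100 /path/to/disk.vmdk local-lvm
        # then attach the imported disk (e.g. as scsi0) and set the boot order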

  • @magnificoas388
    @magnificoas388 10 days ago

    A small hiccup and voilà!