Lol someone had fun editing this video.
Video editing quality is like a... LTT! XD.
Great editing
Love this style haha
Thanks, 45Drives, for this setup. Do you have an updated video of it?
Very cool video! Well made! I'm really impressed with the Ansible playbooks and this nifty Cockpit module you guys have made! I've never explored Cockpit modules; turns out I've been missing out big time!
Don't care what this video is about, but I see a Mayhem shirt 🤘
I knew I made the right call wearing that shirt to work the day we shot this. 🤘
Hey, same question as others: can this work on hardware other than 45Drives? I'm stuck at the Device Alias playbook (core) step. Would love to hear from you. Huge work on your side, guys, thanks a lot.
Q - is there an updated version of this for Rocky 9, or one that doesn't fail out on updated versions of Rocky 8? I want to try this, but I don't want to have to build out an unpatched early version of Rocky 8 to make it work... :)
Nice Mayhem shirt!
Hey guys! What do you think about running Ceph on Kubernetes?
Containerized Ceph in k8s using Rook is a solid option if that fits your use case. We prefer to deploy our Ceph clusters on bare metal. But that's the beauty of open source: the flexibility to do things your way! - Brett K
Is the package broken now? Fedora 40 doesn't find the dependencies. python3-dataclasses is part of python3 now, and the packages don't seem to have been updated to reflect the current package state.
The package currently works as expected on Rocky 8.x and Ubuntu 20.04.
The python3-dataclasses package is still provided via the EPEL repo on Rocky. Perhaps you're using a newer distro?
Rocky 9 and Ubuntu support is coming in the next few months.
Reach out to info@45drives.com with any tech issues you're having and I can help you. - Brett K.
@45Drives I may delete my comment later. The script has conditionals for Fedora, but it does not work. I'll chalk it up to distro differences. I'd like a chance to figure this out, maybe over email? I'll fully admit Ceph is super new to me, and I want to get a lab setup working.
Hi, what happens if the admin node crashes and needs to be reinstalled from scratch?
Do you have the same deployment for Ubuntu or Debian?
Will this work on Debian?
Should we deploy Ceph inside the Kubernetes cluster?
I mean, what if something happens to the cluster and we lose access to our PVs and PVCs?
Is it possible to integrate Ceph with Proxmox and then use it inside Kubernetes?
What is the best practice here?
Thanks for the question! We believe Ceph should be standalone from the Kubernetes cluster. Ceph and Kubernetes are both complex systems, so sticking with our theme of keeping it simple and having them work independently of each other is ideal.
You can consume Ceph storage (external to Kubernetes) through the ceph-csi drivers (github.com/ceph/ceph-csi). Hope this helps!
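For anyone who wants the concrete shape of that: once the ceph-csi driver is installed and a StorageClass points at the external cluster, pods request Ceph-backed volumes through ordinary PVCs. Here's a minimal sketch using the official kubernetes Python client; the StorageClass name `csi-rbd-sc`, the PVC name, and the `default` namespace are illustrative assumptions, not anything from the video:

```python
"""Sketch: request a volume from an external Ceph cluster via ceph-csi.

Assumes the ceph-csi RBD driver is already deployed and that a
StorageClass named "csi-rbd-sc" (hypothetical name) points at the
external cluster. Requires: pip install kubernetes
"""
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="ceph-rbd-test"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="csi-rbd-sc",  # hypothetical ceph-csi StorageClass
        resources=client.V1ResourceRequirements(
            requests={"storage": "10Gi"}
        ),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
print("PVC requested; ceph-csi provisions an RBD image behind it.")
```

From the pod's point of view this is just a normal PVC; the fact that the storage lives on a bare-metal Ceph cluster outside Kubernetes is invisible, which is exactly the separation the reply above recommends.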
Where the hell is the second part? I want it!
It took me many nights to install 4 servers one by one by hand with Ceph. Now you do this in a video that takes no longer than 13 minutes... Am I too old?
What's the default password for the web interface of ceph-deploy?
I figured it out, you can ignore that post.
Wait, though... building is one thing, adding on is another. Maintaining and repairing, just in case, should be the priority, perhaps?
Fuck the marketing team, this is YOUR studio now. Non-negotiable.
Nice shirt
If Ansible inventory management is such a tedious task for you, why don't you invest a little time into dynamic inventories or the add_host module? You can get rid of all this inventory management if you do the classification on the machines themselves, using the daughter board or some characteristic of the hardware. Also, I would ship this boring manual pinging as part of your playbook 😉
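For context, an Ansible dynamic inventory is just an executable that prints inventory JSON when called with `--list`, so groups can be derived from the machines themselves instead of a hand-edited hosts file. A minimal sketch; the host names, addresses, and the `chassis` attribute used for classification are hypothetical placeholders:

```python
#!/usr/bin/env python3
"""Minimal Ansible dynamic inventory sketch.

Prints inventory JSON when invoked with --list, grouping hosts by a
hardware attribute instead of a hand-maintained hosts file.
"""
import json
import sys

# In a real setup this would come from an asset database, BMC queries,
# or cached facts; these entries are purely illustrative.
HOSTS = {
    "node1": {"chassis": "storinator", "ansible_host": "192.168.1.11"},
    "node2": {"chassis": "storinator", "ansible_host": "192.168.1.12"},
    "admin1": {"chassis": "generic", "ansible_host": "192.168.1.10"},
}

def build_inventory():
    inventory = {"_meta": {"hostvars": {}}}
    for name, attrs in HOSTS.items():
        group = attrs["chassis"]  # classify by hardware attribute
        inventory.setdefault(group, {"hosts": []})["hosts"].append(name)
        inventory["_meta"]["hostvars"][name] = attrs
    return inventory

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        json.dump(build_inventory(), sys.stdout, indent=2)
    else:
        # --host <name> lookups are unnecessary since _meta is provided
        json.dump({}, sys.stdout)
```

Make the script executable (`chmod +x`), point Ansible at it with `-i ./inventory.py`, and the groups exist without any manual inventory editing.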
Rocky Linux, haha. IBM Hat will make that stop.
You get into the quantum matrix? I'm going to stay retired.
I don't get why they make Ceph so complicated. I mean, come on: an rpm/deb package for each service (osd, mon…), one simple config file in /etc/ceph (osd.conf, mon.conf…), and that should be enough. But all this cephadm/ceph-deploy madness is just unnecessary.
Can I suggest something? After each run is done and the Done option is made available, would it not be best to grey out the Run option? Otherwise someone might click Run again by mistake.
Good call, I'll add that in.
Does not seem to work. Error: nothing provides python3-dataclasses needed by cockpit-ceph-deploy-1.0.2-2.el8.noarch
Hey Arthur, this looks like it's just a dependency/repo issue. Which OS are you trying to run it on? We only officially support Ubuntu and Rocky at this time. If you are using Rocky or Ubuntu, just make sure that your packages are all up to date. If you are still running into issues, feel free to reach out to us at info@45drives.com and we can get you in touch with support. Thanks!