What is Ceph?
- Published 1 Jul 2024
- Disclaimer: This video is sponsored by SoftIron.
My previous video was all about software-defined storage, or SDS, an alternative to traditional proprietary storage arrays. There are obvious pros and cons to both approaches, but if you choose to go the SDS way, the burning question is: which SDS software to use? Ceph seems to be the benchmark SDS software for many.
Join me for a closer look into Ceph, its capabilities, and what SoftIron HyperDrive adds to the Ceph experience.
Ceph website:
ceph.io/
SoftIron website:
softiron.com/
Music: mixkit.co
Who am I? Visit my website:
markus-leinonen.com
The video starts at 2:15
lol
And ends at the ad
Yikes, 2 relevant minutes of a nearly 6min video. 😅
Thank you
Bless you my friend
Came to the video for Ceph, left as subscriber! Great work, Markus! Keep it up!
Glad to hear you liked it, minkyone! Thanks for the subs, great to have you around. 💪
Came for ceph, stayed for the plant
😀😀
Nice vid 👍
3:00 --- "You can provision object, block and file stores from the same Ceph cluster."
3:18 --- "Crush is used to algorithmically store and locate the data."
3:25 --- "Using erasure coding instead of RAID."
4:42 --- Enterprises of every size doubtless want wider availability of "low-power ARM-based" compute.
Meanwhile, others are seeing Longhorn (Rancher/SUSE/EQT) coming for Ceph.
Kindest regards, neighbours and friends.
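The CRUSH quote above ("Crush is used to algorithmically store and locate the data") can be illustrated with a toy sketch. This is a loose approximation, not Ceph's real implementation (Ceph uses the rjenkins hash and the full CRUSH algorithm over a hierarchical cluster map); the point it shows is that placement is a pure function of the object name, so any client can compute where data lives without asking a central lookup table.

```python
import hashlib

# Toy illustration (NOT Ceph's actual rjenkins hash or CRUSH algorithm):
# placement is deterministic, so every client computes the same location
# independently, with no central metadata server in the data path.

def object_to_pg(obj_name: str, pg_num: int) -> int:
    """Map an object name to a placement group (PG) by hashing."""
    h = int.from_bytes(hashlib.sha256(obj_name.encode()).digest()[:4], "big")
    return h % pg_num

def pg_to_osds(pg: int, osds: list, replicas: int) -> list:
    """Pseudo-CRUSH: deterministically pick distinct OSDs for a PG.
    (Real CRUSH also respects failure domains: host, rack, row...)"""
    return [osds[(pg + i * 7) % len(osds)] for i in range(replicas)]

osds = [f"osd.{i}" for i in range(10)]
pg = object_to_pg("my-image-chunk-0001", pg_num=128)
print("PG:", pg, "-> OSDs:", pg_to_osds(pg, osds, replicas=3))
```

Running it twice with the same object name always yields the same PG and OSD set, which is the property that lets Ceph scale without a placement database.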
"Using erasure coding instead of RAID." so nice
This dude illustrates backwards, left-handed, in real time... And only 10k views in 6 months??? I'm amazed.
Thanks man. Great video
Thanks! My pleasure. 👍
Thank you a lot for your work
My pleasure, Anh Dung Phan! 👍
You should make more explainer videos; you are very easy to understand. I wonder why your channel has not grown to its full potential.
Thanks, Kedar! It is my intention to make more in the future. Just been too busy with other things lately but there are many new videos coming still this year. Stay tuned! 😎
Thanks for showing
My pleasure! 👍
Hey, what's the best high-performance clustered file system to use right now? I'm building 4 GPU compute nodes over 50 GB/s InfiniBand, and each has 4x 7000R/4000W U.2 drives, so it's FAST. There should be a filesystem that could get each node peaking at 50 GB/s, and that would be the one to use. Which is it?
Open source only obvs
@@isbestlizard Lustre
Hey Markus, many thanks for the video. I've always struggled with the concept of Ceph vs SANs & NASes (ZFS), and would like some clarity around what I perceive as "the elephant in the room": power consumption. Based on my numbers, Ceph nodes would use more power than traditional storage equipment. My other concern is latency: if I understand correctly, one's application/VM needs to wait for the successful writing of data to all nodes. If so, how is this concept better, especially in a cloud environment? Maybe I can't see the forest for the trees and am missing the obvious; I would value your feedback. Cheers
Hey Paul. Andrew from SoftIron here. Power's an issue in data centres regardless, and most storage appliances are built on generic inefficient and non-optimized x86 designs. Run Ceph on them and you have the same problem as any other storage architecture. We went the single processor, super optimized, ARM route and have slashed the power and cooling reqm'ts. More info at: softiron.com/storage/tco/ Ref. performance, on the right platform, configured properly Ceph can outstrip traditional designs. See: softiron.com/storage/wire-speed-storage/ Ping us at info@softiron.com if you want to set up a chat to understand it some more. cheers Andrew
@@AndrewMoloney Many thanks Andrew for your response, and noted regarding optimisation. What about latency with VM guests: which is better, replication or erasure coding? We back up all VM guests every hour and replicate backups from the SAN to a second DC. To date all our SANs are tiered flash, 15k, 10k & 7.2k, and RAID10. I have posted questions about Ceph latency on a number of occasions and no one has responded, which to be honest is setting off alarm bells for me. Cheers
@@paulfx5019 Hi Paul - You're absolutely right regarding Ceph writes - a write isn't acknowledged until all of the replicated shards are written, so since the primary OSD forwards the write to the replicas in parallel, the latency is gated by the slowest shard to complete. How this differs for erasure coding depends on the specific EC profile you use; with typical profiles you will have more shards, but they'll be smaller, so the difference in latency can be negligible - definitely within bounds for VM writes, as that workload is the primary design objective of Ceph's RBD.
As for your second site - there are a number of ways to back up your VM guests in Ceph - mirroring is one option. The latency (physical distance) and link you have between your sites is a primary factor we'd consider when putting a solution together. You sound like you've done your research so I'm sure you're aware there are a number of advantages to a software-defined system like Ceph over traditional SAN - happy to talk to you more about it.
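The trade-off discussed in this thread can be sketched numerically. The model below is illustrative only (the per-shard latencies are assumed numbers, not benchmarks): it assumes shards are written in parallel after the primary receives the write, so the acknowledgment time is dominated by the slowest shard.

```python
# Illustrative write-latency model (assumed numbers, NOT measurements).
# A client write is acknowledged only once every shard is durable; with
# parallel shard writes, ack latency tracks the slowest shard.

def write_latency_ms(shard_latencies_ms: list) -> float:
    """Ack time when all shards are written in parallel."""
    return max(shard_latencies_ms)

# 3x replication: three full-size copies.
replica_3 = [2.1, 2.4, 3.0]
# EC 4+2: six shards (4 data + 2 parity), each smaller than a full copy,
# but with CPU cost for encoding and a wider write fan-out.
ec_4_2 = [1.8, 1.9, 2.0, 2.2, 2.3, 2.8]

print("3x replication ack:", write_latency_ms(replica_3), "ms")
print("EC 4+2 ack:       ", write_latency_ms(ec_4_2), "ms")
```

With these assumed numbers the two schemes land close together, which matches the point above: the EC profile and hardware, not erasure coding per se, decide whether the latency difference matters for VM workloads.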
Is there any possibility of you creating a video of a CephFS setup?
Good suggestion! Who knows. Maybe one day when I have some slack time. ;)
Thanks for the video. Can I use Ceph as part of an HCI solution?
Do you mean using Ceph as the SDS part of an HCI solution? Most certainly possible, and that's something I would like to see, actually. But as far as I know, there are no commercial HCI solutions based on Ceph yet.
@@TechEnthusiastInc Yes, that is what I meant. Thanks for your answer and for your videos which I very much enjoy watching. Kiitos.
Thanks a lot, donn, appreciated. Ole hyvä! 😂💪
Proxmox bundles Ceph for HCI, and their support offering includes Ceph support. The caveat I would add is that it is still up to you to choose, select, and validate the hardware.
pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
Support
www.proxmox.com/en/proxmox-ve/pricing
I have been using it in production with Ceph for just over two years now.
Your closest alternatives (without Ceph) are:
- oVirt (Red Hat Virtualization with GlusterFS)
- XCP-ng (Ceph is on their roadmap, but I'm not sure of the support scope)
Whoa! Thanks a lot, rand0msamurai. This is really interesting stuff. Glad to see HCI ideology spreading to Ceph. 💪
Proxmox has Ceph built in, ready for an HCI deployment
Yep, that's what I've heard. Haven't had a chance to try it out but sounds awesome. Do you have any experience with it?
2:48
You said: "can scale ..vertically.. infinitely large". That is not correct, as scaling vertically means you add more resources to the system, e.g. CPU or RAM.
Scaling horizontally means you add more systems to the cluster, which is what you do when you expand your Ceph cluster: you add nodes.
So it doesn't scale _vertically_, but _horizontally_!
3:34
First: Ceph _can_ use erasure coding. It can use replicated pools, too, which would be something like a RAID 1.
Second: erasure coding is not necessarily faster than RAID. EC and RAID are two different approaches to data redundancy and safety. It depends on configuration, implementation and hardware which one is faster than the other.
1) I actually said "virtually", not "vertically". Apologies for the unclear enunciation.
2) You are correct, the two methods have their own use cases, pros and cons. They are not even mutually exclusive; you can use both at the same time if you want. In general, erasure coding works best with larger drives, scales better in large environments and offers capabilities beyond RAID. But yes, it requires a lot more computing power in normal operation. However, what I was referring to is recovering from drive failures, i.e. rebuild times. That part is faster with erasure coding than with RAID, especially (again) with large drives.
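The replication-vs-erasure-coding trade-off above comes down to simple arithmetic: N-way replication stores N full copies, while a k+m EC profile stores k data shards plus m parity shards. A short sketch (the 8+3 profile is just an example choice, not a recommendation):

```python
# Storage efficiency and failure tolerance for the two redundancy schemes.
# Replication x N: usable fraction 1/N, survives N-1 lost copies.
# Erasure coding k+m: usable fraction k/(k+m), survives m lost shards.

def replication(copies: int) -> dict:
    return {"efficiency": 1 / copies, "failures_tolerated": copies - 1}

def erasure_coding(k: int, m: int) -> dict:
    return {"efficiency": k / (k + m), "failures_tolerated": m}

print("3x replication:", replication(3))        # ~33% usable, survives 2
print("EC 8+3:        ", erasure_coding(8, 3))  # ~73% usable, survives 3
```

This is why EC pays off most on large clusters with big drives: more than double the usable capacity at comparable fault tolerance, at the cost of the CPU overhead and rebuild behavior discussed above.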
Did anyone try canonical ceph yet? Seems quite easy :)
Nope. Haven't tried. Interested to hear experiences too!
does it use snaps? if so, pass ..
Great video but the sponsor messages are a bit much
Thanks, nnari. Appreciated. I tried to make the separation clear: the first half is generic Ceph and the latter half is about SoftIron, which I honestly think is maybe the best Ceph implementation out there; that's why I wanted to partner up with them. Hope this clears it up a bit?
There was very little actual detail in this video. I felt like I was watching a sales presentation.
I’m the Alpha Ceph
I believe you. 😂
sounds like a commercial :-/ Dont like when it is not clear where advertisement ends and content starts...
Oh? That was not my intention at all. First of all, I thought I mentioned clearly enough at the beginning that this is a sponsored video. I was hoping that would set the expectations for the viewers. Secondly, the first half of the video is a general intro to Ceph, and the latter part is about SoftIron's implementation of Ceph. I would not call any part of the video an advertisement… I didn't even mention prices or where to buy, just explained how Ceph and one commercial implementation work.
CEPH has no corporate level tech support! No, thank you!