What is Ceph?

  • Published 1 Jul 2024
  • Disclaimer: This video is sponsored by SoftIron.
    My previous video was all about software-defined storage, or SDS, an alternative to traditional proprietary storage arrays. There are obvious pros and cons with both approaches, but if you choose to go the SDS way, the burning question is: which SDS software to use? Ceph seems to be the benchmark SDS software for many.
    Join me for a closer look at Ceph, what its capabilities are, and what SoftIron HyperDrive adds to the Ceph experience.
    Ceph website:
    ceph.io/
    SoftIron website:
    softiron.com/
    Music: mixkit.co
    Who am I? Visit my website:
    markus-leinonen.com
  • Science & Technology

COMMENTS • 52

  • @TheKitMurkit
    @TheKitMurkit 2 years ago +113

    The video starts at 2:15

    • @nnawaff
      @nnawaff 1 year ago +1

      lol

    • @lpcamargo
      @lpcamargo 1 year ago +2

      And ends at the ad

    • @patricknelson
      @patricknelson 11 months ago +2

      Yikes, 2 relevant minutes of a nearly 6-minute video. 😅

    • @---GOD---
      @---GOD--- 3 months ago

      Thank you

    • @rath6599
      @rath6599 3 months ago

      Bless you my friend

  • @minkyone
    @minkyone 1 year ago +2

    Came to the video for Ceph, left as a subscriber! Great work, Markus! Keep it up!

    • @TechEnthusiastInc
      @TechEnthusiastInc  1 year ago +1

      Glad to hear you liked it, minkyone! Thanks for the sub, great to have you around. 💪

  • @esra_erimez
    @esra_erimez 3 years ago +45

    Came for Ceph, stayed for the plant

  • @chromerims
    @chromerims 1 year ago +5

    Nice vid 👍
    3:00 --- "You can provision object, block and file stores from the same Ceph cluster."
    3:18 --- "CRUSH is used to algorithmically store and locate the data."
    3:25 --- "Using erasure coding instead of RAID."
    4:42 --- Enterprises of every size doubtless want wider availability of "low-power ARM-based" compute.
    Meanwhile, others are seeing Longhorn (Rancher/SUSE/EQT) coming for Ceph.
    Kindest regards, neighbours and friends.

    • @pissnotime1894
      @pissnotime1894 8 months ago

      "Using erasure coding instead of RAID." so nice
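The CRUSH point quoted above ("algorithmically store and locate the data") can be illustrated with a toy sketch. This is not the real CRUSH algorithm (which uses weighted device hierarchies and failure domains); it only shows the core idea that placement is *computed* from a hash rather than looked up in a central table, so every client independently derives the same locations.

```python
import hashlib

def toy_place(obj_name, osds, replicas=2):
    """Toy stand-in for CRUSH: rank OSDs for an object by hashing
    (object, osd) pairs, then keep the top `replicas` entries.
    Placement is pure computation, so no central lookup is needed."""
    ranked = sorted(
        osds,
        key=lambda osd: hashlib.sha256(f"{obj_name}:{osd}".encode()).hexdigest(),
    )
    return ranked[:replicas]

osds = ["osd.0", "osd.1", "osd.2", "osd.3"]
# Every client computes the identical replica set for the same object.
print(toy_place("my-object", osds))
```

Because the ranking depends only on the object name and the OSD list, adding or removing an OSD reshuffles only the placements that involve it, which is roughly the stability property the real CRUSH map is designed for.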

  • @iBogart87
    @iBogart87 2 years ago +3

    This dude illustrates backwards, left-handed, in real time... and only 10k views in 6 months??? I'm amazed.

  • @unstoppable-ar3292
    @unstoppable-ar3292 2 years ago

    Thanks man. Great video

  • @anhdungphan2388
    @anhdungphan2388 2 years ago +1

    Thank you a lot for your work

  • @kedarhargude643
    @kedarhargude643 2 years ago +6

    You should make more explainer videos; you are very easy to understand. I wonder why your channel has not grown to its full potential.

    • @TechEnthusiastInc
      @TechEnthusiastInc  2 years ago +2

      Thanks, Kedar! It is my intention to make more in the future. I've just been too busy with other things lately, but there are still many new videos coming this year. Stay tuned! 😎

  • @IT-Entrepreneur
    @IT-Entrepreneur 1 year ago +1

    Thanks for showing

  • @isbestlizard
    @isbestlizard 2 years ago

    Hey, what's the best high-performance clustered file system to use right now? I'm building 4 GPU compute nodes over 50 GB/sec InfiniBand, and each has 4x 7000R/4000W U.2 drives, so it's FAST. There should be a file system that can get each node peaking at 50 GB/sec; which one is it?

  • @paulfx5019
    @paulfx5019 3 years ago +2

    Hey Markus, many thanks for the video. I've always struggled with the concept of Ceph vs SANs and NASs (ZFS) and would like some clarity around what I perceive as "the elephant in the room": power consumption. Based on my numbers, Ceph nodes would use more power than traditional storage equipment. My other concern is latency: if I understand correctly, one's application/VM needs to wait for data to be successfully written to all nodes. If so, how is this concept better, especially in a cloud environment? Maybe I can't see the forest for the trees and am missing the obvious; I would value your feedback. Cheers

    • @AndrewMoloney
      @AndrewMoloney 3 years ago +1

      Hey Paul. Andrew from SoftIron here. Power is an issue in data centres regardless, and most storage appliances are built on generic, inefficient, non-optimized x86 designs. Run Ceph on them and you have the same problem as any other storage architecture. We went the single-processor, highly optimized ARM route and have slashed the power and cooling requirements. More info at: softiron.com/storage/tco/ Regarding performance: on the right platform, configured properly, Ceph can outstrip traditional designs. See: softiron.com/storage/wire-speed-storage/ Ping us at info@softiron.com if you want to set up a chat to understand it some more. Cheers, Andrew

    • @paulfx5019
      @paulfx5019 3 years ago +1

      @@AndrewMoloney Many thanks, Andrew, for your response, and noted regarding optimisation. What about latency with VM guests: which is better, replication or erasure coding? We back up all VM guests every hour and replicate backups from the SAN to a second DC. To date all our SANs are tiered flash, 15k, 10k and 7.2k drives in RAID 10. I have posted questions related to latency with Ceph on a number of occasions and no one has responded, which, to be honest, is setting off alarm bells for me. Cheers

    • @dannyabukalam8570
      @dannyabukalam8570 3 years ago +2

      @@paulfx5019 Hi Paul - You're absolutely right regarding Ceph writes: a write isn't acknowledged until all of the replicated shards are written, so the client-visible latency is governed by the slowest of those writes. How this differs for erasure coding will depend on the specific EC profile that you use; with typical profiles you will have more shards, but they'll be smaller, so the difference in latency could be negligible - definitely within bounds for VM writes, as that workload is the primary design objective of Ceph's RBD.
      As for your second site - there are a number of ways to back up your VM guests in Ceph; mirroring is one option. The latency (physical distance) and the link you have between your sites are the primary factors we'd consider when putting a solution together. You sound like you've done your research, so I'm sure you're aware there are a number of advantages to a software-defined system like Ceph over a traditional SAN - happy to talk to you more about it.
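The acknowledgement semantics discussed in this thread can be sketched as a toy latency model. The numbers below are purely illustrative, and the model assumes shard/replica writes are issued in parallel, so the client sees the write acknowledged once the slowest shard has landed.

```python
def ack_latency_ms(shard_times_ms):
    """Client-visible latency when all parallel shard writes must complete
    before the write is acknowledged."""
    return max(shard_times_ms)

# 3x replication: three full-size copies written in parallel.
replicated = ack_latency_ms([5.0, 6.2, 5.8])                    # -> 6.2

# EC 4+2: six shards, each ~1/4 of the object, so individually quicker;
# despite more shards, total latency can end up comparable or lower.
erasure_coded = ack_latency_ms([1.9, 2.1, 2.4, 2.0, 2.2, 2.3])  # -> 2.4

print(replicated, erasure_coded)
```

The point of the sketch is only that the tail (slowest shard) dominates, which is why per-shard size, device speed and network all matter more than the raw shard count.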

  • @brandonpatzold7367
    @brandonpatzold7367 1 year ago +1

    Is there any possibility of you creating a video of a CephFS setup?

    • @TechEnthusiastInc
      @TechEnthusiastInc  1 year ago

      Good suggestion! Who knows. Maybe one day when I have some slack time. ;)

  • @abdelfiala
    @abdelfiala 3 years ago +1

    Thanks for the video. Can I use Ceph as part of an HCI solution?

    • @TechEnthusiastInc
      @TechEnthusiastInc  3 years ago +1

      Do you mean using Ceph as the SDS part of an HCI solution? Most certainly possible, and that's something I would actually like to see. But as far as I know, there are no commercial HCI solutions based on Ceph yet.

    • @abdelfiala
      @abdelfiala 3 years ago +1

      @@TechEnthusiastInc Yes, that is what I meant. Thanks for your answer and for your videos which I very much enjoy watching. Kiitos.

    • @TechEnthusiastInc
      @TechEnthusiastInc  3 years ago +1

      Thanks a lot, donn, appreciated. Ole hyvä! 😂💪

    • @rand0msamurai
      @rand0msamurai 3 years ago +1

      Proxmox bundles Ceph as HCI, and their support offering includes Ceph support. The caveat I would add is that it is still up to you to select and validate the hardware solution.
      pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
      Support
      www.proxmox.com/en/proxmox-ve/pricing
      I have been running it in production with Ceph for just over two years now.
      Your closest alternatives (without Ceph) are:
      - oVirt (Red Hat Virtualization with GlusterFS)
      - XCP-ng (Ceph is on their roadmap, but I'm not sure of the support scope)

    • @TechEnthusiastInc
      @TechEnthusiastInc  3 years ago +1

      Whoa! Thanks a lot, rand0msamurai. This is really interesting stuff. Glad to see HCI ideology spreading to Ceph. 💪

  • @dillonhansen71
    @dillonhansen71 3 years ago +3

    Proxmox has Ceph built in, ready for an HCI deployment

    • @TechEnthusiastInc
      @TechEnthusiastInc  3 years ago

      Yep, that's what I've heard. Haven't had a chance to try it out but sounds awesome. Do you have any experience with it?

  • @user-dr3mz2pl2t
    @user-dr3mz2pl2t 1 year ago +2

    2:48
    You said it "can scale ...vertically... infinitely large". That is not correct: scaling vertically means you add more resources to the system, e.g. CPU or RAM.
    Scaling horizontally means you add more systems to the cluster, which is what you do when you expand your Ceph cluster - you add nodes.
    So it doesn't scale _vertically_, but _horizontally_!
    3:34
    First: Ceph _can_ use erasure coding. It can use replicated pools too, which would be something like a RAID 1.
    Second: erasure coding is not necessarily faster than RAID. EC and RAID are two different approaches to achieving data redundancy and safety. Which one is faster depends on configuration, implementation and hardware.

    • @TechEnthusiastInc
      @TechEnthusiastInc  1 year ago

      1) I actually said "virtually", not "vertically". Apologies for the unclear enunciation.
      2) You are correct: the two methods have their own use cases, pros and cons. They are not even mutually exclusive; you can use both at the same time if you want. In general, erasure coding works best with larger drives, scales better in large environments and offers capabilities beyond RAID. But yes, it requires a lot more computing power during normal operation. However, what I was referring to was recovering from drive failures, i.e. rebuild times. That bit is faster with erasure coding than with RAID, especially (again) with large drives.

  • @Copernicus22
    @Copernicus22 3 years ago +1

    Did anyone try Canonical's Ceph yet? Seems quite easy :)

    • @TechEnthusiastInc
      @TechEnthusiastInc  3 years ago

      Nope. Haven't tried. Interested to hear experiences too!

    • @silverismoney
      @silverismoney 2 years ago

      Does it use snaps? If so, pass...

  • @narigoncs
    @narigoncs 1 year ago +1

    Great video, but the sponsor messages are a bit much

    • @TechEnthusiastInc
      @TechEnthusiastInc  1 year ago

      Thanks, nnari. Appreciated. I tried to make the separation clear: the first half is generic Ceph and the latter half is about SoftIron, which I honestly think is maybe the best Ceph implementation out there; that's why I wanted to partner up with them. Hope this clears it up a bit?

  • @KenOttaviano
    @KenOttaviano 1 year ago

    There was very little actual detail in this video. I felt like I was watching a sales presentation.

  • @jeboymikey4116
    @jeboymikey4116 1 year ago +2

    I’m the Alpha Ceph

  • @maxjesch
    @maxjesch 11 months ago

    Sounds like a commercial :-/ I don't like it when it is not clear where the advertisement ends and the content starts...

    • @TechEnthusiastInc
      @TechEnthusiastInc  11 months ago

      Oh? That was not my intention at all. First of all, I thought I mentioned clearly enough in the beginning that this is a sponsored video; I was hoping that would set the expectations for viewers. Secondly, the first half of the video is a general intro to Ceph, and the latter part is about SoftIron's implementation of Ceph. I would not call any part of the video advertisement... I didn't even mention prices or where to buy, just explained how Ceph and one commercial implementation work.

  • @sergeykopylov652
    @sergeykopylov652 3 months ago

    Ceph has no corporate-level tech support! No, thank you!