I built another server cluster... - Proxmox HA Cluster w/ Ceph

  • Published Jun 27, 2024
  • MB/CPU - SuperMicro X10SDV-4C-TLN2F / Xeon D-1521
    Case - amzn.to/3CVaTvL
    Boot SSD - amzn.to/3pohRqe
    Ceph Storage - amzn.to/441AP55
    -------------------------------------------------------------------------------------------
    🛒 Amazon Shop - www.amazon.com/shop/raidowl
    👕 Merch - www.raidowlstore.com
    🔥 Check out today's best deals from Newegg: howl.me/cjNc3sze2O3
    -------------------------------------------------------------------------------------------
    Join the Discord: / discord
    Become a Channel Member!
    / @raidowl
    Support the channel on:
    Patreon - / raidowl
    Discord - bit.ly/3J53xYs
    Paypal - bit.ly/3Fcrs5V
    My Hardware:
    Intel 13900k - amzn.to/3Z6CGSY
    Samsung 980 2TB - amzn.to/3myEa85
    Logitech G513 - amzn.to/3sPS6yv
    Logitech G703 - shop-links.co/cgVV8GQizYq
    WD Ultrastar 12TB - amzn.to/3EvOPXc
    My Studio Equipment:
    Sony FX3 - shop-links.co/cgVV8HHF3mX / amzn.to/3qq4Jxl
    Sony 24mm 1.4 GM -
    Tascam DR-40x Audio Recorder - shop-links.co/cgVV8G3Xt0e
    Rode NTG4+ Mic - amzn.to/3JuElLs
    Atomos Ninja V - amzn.to/3Hi0ue1
    Godox SL150 Light - amzn.to/3Es0Qg3
    links.hostowl.net/
    0:00 Intro
    1:16 Proxmox Cluster Hardware
    6:46 Setting up the cluster
    10:17 Overall thoughts on my new cluster
  • Entertainment

COMMENTS • 191

  • @vaidkun
    @vaidkun 1 year ago +62

    I am glad you did not break your motherboard. Please read the MBD-X10SDV-4C-TLN2F manual (it can be downloaded from the Supermicro home page); that 4-pin is not for ATX 4-pin power, it's for a dedicated DC supply. You need to use either the 24-pin ATX or the 4-pin. DO NOT use them both! Quote from the manual: "Do not use the 4-pin DC power at PJ1 when the 24-pin ATX Power at JPW1 is connected to the power supply. Do not plug in both PJ1 and JPW1 at the same time."

    • @RaidOwl
      @RaidOwl  1 year ago +26

      Oh neat! 🙃

    • @longnamedude3947
      @longnamedude3947 1 year ago +7

      RTFM - The truest words to abide by.

    • @RaidOwl
      @RaidOwl  1 year ago +12

      @@longnamedude3947 it's too long

    • @justfasial01
      @justfasial01 11 months ago +1

      @@longnamedude3947 * insert Michael Scott NOOOOO gif here *

    • @boneappletee6416
      @boneappletee6416 6 months ago

      @@RaidOwl Another is PEBKAC

  • @kunalbansal1927
    @kunalbansal1927 1 year ago +65

    I'm gonna be honest as well. I DO NOT NEED a lot of my homelab stuff. However, I do love cosplay. Specifically cosplaying as a sysadmin/automation engineer.

  • @AtGnat8
    @AtGnat8 1 year ago +20

    The PCIe passthrough definitely works on those boards. If you've set the vfio stuff in /etc/modules and "intel_iommu=on" in PVE and it's still not booting, make sure the BIOS has the VT-d extension and IOMMU enabled. Thanks for the tour of the new cluster!
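
    For anyone following along, the usual sequence looks roughly like this (a sketch assuming GRUB; systemd-boot installs edit /etc/kernel/cmdline instead, and older kernels also list vfio_virqfd as a separate module):

      # /etc/default/grub -- enable the IOMMU on Intel
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

      # /etc/modules -- load the VFIO modules at boot
      vfio
      vfio_iommu_type1
      vfio_pci

      # apply, reboot, then verify the IOMMU came up
      update-grub
      update-initramfs -u -k all
      dmesg | grep -e DMAR -e IOMMU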

  • @KillBucket
    @KillBucket 1 year ago +3

    Great tour! I love that you include the mistakes and it's not a "do this and the HA gods will bless you" tutorial. I also had a devil of a time getting iGPU passthrough to work on Proxmox, although I'm running it on a Dell 3930 (with USB-C-only iGPU display output). I had to use cpu=host, q35, Virtio-GPU, PCI passthrough with PCI-e & x-vga, USB port passthrough (for the dummy dongle to work). I would still get error 43, but a quick disable/enable cycle in Windows gets things back in order.
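
    For reference, the settings described above translate to VM config lines roughly like these (a sketch; the VM ID, PCI address, and USB port are hypothetical and will differ per system):

      # /etc/pve/qemu-server/100.conf
      machine: q35
      cpu: host
      hostpci0: 0000:00:02.0,pcie=1,x-vga=1
      usb0: host=1-2    # the port holding the dummy display dongle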

  • @DodgeHooker641
    @DodgeHooker641 1 year ago +13

    I really hope to see you do MORE videos about Proxmox!

  • @slightlyevolved
    @slightlyevolved 7 months ago

    8:59 THANK YOU!!!! I was looking all around to find out which order this went in!

  • @jeremyjedynak
    @jeremyjedynak 2 months ago

    Thanks for putting together this video and the previous one showing ProxMox VE HCI with less expensive hardware.
    The two 10GbE switches shown are each a single point of failure. To upgrade the networking to HA, these could be replaced with two switches configured with MLAG. VLANs can be used to create the two logical networks shown: Host and Ceph.
    For maintenance like internal drive or part replacement, having four nodes instead of the minimum three would allow one node to be safely removed at any time to perform orderly maintenance and upgrades.
    When one of only three nodes is intentionally made unavailable to perform maintenance, the two remaining nodes are in a degraded state for some services (including Ceph), and if anything unlucky happens to one of the two remaining nodes during the maintenance window, there is no longer a cluster.
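
    A minimal /etc/network/interfaces sketch of that MLAG + VLAN layout on one node (interface names, VLAN IDs, and addresses are made up; the bond assumes the two switches present a single LACP partner):

      auto bond0
      iface bond0 inet manual
          bond-slaves enp1s0f0 enp1s0f1
          bond-mode 802.3ad
          bond-xmit-hash-policy layer3+4
          bond-miimon 100

      auto vmbr0
      iface vmbr0 inet static
          address 192.168.1.11/24
          gateway 192.168.1.1
          bridge-ports bond0
          bridge-stp off
          bridge-fd 0
          bridge-vlan-aware yes

      # dedicated Ceph VLAN on the same bond
      auto bond0.40
      iface bond0.40 inet static
          address 10.40.40.11/24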

  • @chris_schenkel
    @chris_schenkel 11 months ago +1

    "cyberbullied by some neckbeard" - priceless. That kept me smiling right to the end and then some. Thanks owl.

  • @LampJustin
    @LampJustin 1 year ago +1

    Good thing you can directly use your Ceph cluster as a CSI backend! And if you create a CephFS and MDS you can even use RWX PVCs
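
    A rough sketch of what RWX storage looks like through ceph-csi's CephFS driver (clusterID, secret names, and namespaces are placeholders; volume resizing needs additional controller-expand secrets):

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: cephfs-rwx
      provisioner: cephfs.csi.ceph.com
      parameters:
        clusterID: <ceph-fsid>
        fsName: cephfs
        csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
        csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi
        csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
        csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: shared-data
      spec:
        accessModes: [ReadWriteMany]    # RWX, backed by CephFS
        storageClassName: cephfs-rwx
        resources:
          requests:
            storage: 10Gi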

  • @Jonteponte71
    @Jonteponte71 6 months ago +1

    I'm still in the "just got my first HP EliteDesk mini PC as my new Docker host instead of my NAS" phase of my homelab journey, but I still enjoy watching these videos even though I don't ever expect to rackmount my servers (unless perhaps it's a very small one that can mount mini PCs).

  • @T3hBeowulf
    @T3hBeowulf 11 months ago +4

    This is essentially the same route I took, except I used 5 Dell 7050s 1L SFF PCs, NVMe for ProxMox, 1 TB SSDs on each node for Ceph and a dedicated backhaul network for Ceph. All in, it cost about $750 and my only regret was not waiting until Prime Day to get 2TB SSDs for about what I paid for the 1TB drives. 🤦‍♂️
    I haven't made it the primary cluster yet though and am still trying to figure out what I really want to do with it. 😅
    Great video though!

  • @whatwhat-777
    @whatwhat-777 1 year ago +3

    I just love your content...hmmm feels like home ❤

  • @accesser
    @accesser 1 year ago

    Great video, love this style and subject

  • @jakobholzner
    @jakobholzner 1 year ago

    Just a wonderful inspiring video thx 😊

  • @ryanmalone2681
    @ryanmalone2681 1 month ago

    Damn dude, that's a badass cluster setup. I just bought three 2U PowerEdge R730's, which are clearly less efficient, and yours is really high performance. Me likey.

  • @Jared01
    @Jared01 1 year ago +1

    I'm seriously contemplating ordering a few of the Topton Intel 8505 router boxes and running them in a cluster like this... More powerful processor with lower power draw than the SuperMicro you're using, 6x 2.5Gbps NICs for direct host-to-host connectivity without a switch (and enough extra ports for network connectivity), and they're completely passive on cooling. Only real downside is there's no PCIe expansion to speak of, and it's not Xeon / ECC, but for the price (around the same price as you spent for each of these, if not a touch cheaper) they would make for an awesome cluster!
    I'm currently running one for my router and it's been rock solid, and I've got a couple of Chenbro 1U servers that are due for replacement.

  • @themorpheusmm
    @themorpheusmm 11 months ago

    Thanks for the video. Missing one thing though - a simulated failure of one of the nodes

  • @richardjohnston5194
    @richardjohnston5194 1 year ago +2

    I'm just here taking up space!

  • @ManuelRodriguez27
    @ManuelRodriguez27 1 year ago +1

    I ran a mirrored set of 870 EVOs in my Proxmox cluster and the performance was okay, as long as I didn't update more than one VM at a time or download and install large packages/binaries. I/O delay would cause random VMs to become unresponsive, plus general instability.
    Proxmox and ZFS really need enterprise drives with larger caches and high endurance.

  • @The_Mup
    @The_Mup 6 months ago

    The cool thing about those inwin cases is that you can swap the position of the PSU and front I/O ports, swap the rack ears to the other side and then you have all your motherboard I/O and PCI slots at the front of the rack while leaving power at the back.

  • @SeanDion
    @SeanDion 1 year ago +13

    Would love a non-neckbeard approach/mindset to Ceph/CephFS/Rook setup on that cluster as a follow-up. Fighting thru that on my own setup.

    • @RaidOwl
      @RaidOwl  1 year ago +2

      Yeah I got some stuff to try

    • @SeanDion
      @SeanDion 1 year ago

      @@RaidOwl Awesome. More responses for the YouTube algorithm overlords.

  • @ken23humphrey
    @ken23humphrey 1 year ago +4

    "I'm just here so I won't get fined."
    - Someone else

  • @shephusted2714
    @shephusted2714 1 year ago

    you deserved this and you probably want to follow this thread down the line - you will find you do need this once you get all the kinks out - do an HA OPNsense next - non-virt, total bare metal... you will want to max out the RAM on these - more RAM equals more better - great that you have an upgrade path - you will want to go all NVMe - that seems to be your weakest link - please update, and make the cluster fabric 20G bonded and add a USB 2.5GbE for a management interface - please explore other network FS options - NFS, ZFS, OCFS2, SSHFS, Gluster

  • @Bill_the_Red_Lichtie
    @Bill_the_Red_Lichtie 1 year ago +2

    I'm just here taking up space - For PV(C) in your k3s Cluster, I recommend using "Rook with an external Ceph cluster", i.e. the Ceph storage provided by ProxMox.

    • @RaidOwl
      @RaidOwl  1 year ago +2

      Someone else mentioned Rook. Ima look into it for sure

  • @jeredferrin6406
    @jeredferrin6406 1 year ago

    I'm sorry to say...
    I love your videos. Very well edited.

    • @RaidOwl
      @RaidOwl  1 year ago

      I’m sorry but….thank you

  • @Cowayger
    @Cowayger 10 months ago

    I love these cases. They are very hard to get your hands on.

  • @mdiaztoledo
    @mdiaztoledo 11 months ago

    Hey, good setup and very interesting video, thanks ^^

  • @canoozie
    @canoozie 1 year ago

    I built a Proxmox cluster using the Supermicro M11SDV-8C-LN4F AMD Epyc 3251 board: 8-core, 16-thread, 65W total usage under load with 4 sticks of ECC RAM and a SATA SSD boot disk. Another Mini-ITX board, and though it's Zen 1, its power usage is the reason I chose it. I need fast networking too, but storage is handled differently for high availability in my network. So the 1 PCIe slot is used for a 10 Gbit NIC, because though Epyc 3xx1 supports 10 Gigabit networking on chip, this board doesn't have 10gig ports.

  • @chrisumali9841
    @chrisumali9841 1 year ago

    thanks for another great video, awesome. have a great day

  • @sploders1019
    @sploders1019 3 months ago

    lol I love how the official Ceph documentation questions the need for multiple networks, but users all over the internet went nuts and demand that you use them

  • @whatwhat-777
    @whatwhat-777 7 months ago

    Please make a follow-up video on this setup when it completes 1 year with your new learnings along the way. 🙂

  • @NV-Noah
    @NV-Noah 1 year ago +6

    Regarding PCI passthrough:
    Some vendors literally block it from properly working, HP ProLiant servers for example. I've been cracking my head for literal weeks with them.
    After trying it with some Lenovo servers it worked instantly for me.
    Just a heads up that sometimes it's literally impossible to get it to work

  • @darrenoleary5952
    @darrenoleary5952 11 months ago

    I've just finished creating a pair of Proxmox servers for myself, hosting my original 6x rPi's rebuilt as Debian VMs.
    Each machine has the following specs:
    - Inter-Tech K-125L 1U rackmount case
    - Akyga 200W PSU
    - ASRock J5040-ITX M/board
    - 32GB DDR4 2400 RAM (2x16GB)
    - 1x 500GB Samsung 870 EVO SSD for the boot drive (overkill, as I originally ordered 256GB but they weren't in stock and the retailer supplied this for the same price)
    - 1x 4TB Samsung 870 EVO SSD for VM storage (again overkill, but I have plenty of available space and they're cheap)
    - 3x Noctua 40mm NF-A4x20 FLX 5000 fans
    Both machines run super cool and quiet and have plenty of power for my current needs, with each only using

  • @Max78224
    @Max78224 1 year ago +1

    I would recommend you replace those Silicon Power SSDs. I had a few of these in the datacenter running only as Proxmox boot disks and all died after a few months.

  • @DJBounceBack
    @DJBounceBack 1 year ago

    I’m just here taking up space!!

  • @sidneyking11
    @sidneyking11 1 year ago

    I would love to have a Proxmox cluster for my home lab. I could not get GPU passthrough to work with my setup either. Does Proxmox do load balancing, where it would move a VM to another host when the current host is busier? Thanks for sharing.

  • @benndavison9171
    @benndavison9171 1 year ago

    I'm Just Here Taking Up Space...but love the content.

  • @jonathanzj620
    @jonathanzj620 1 year ago

    I watched the Livestream already, so I'm definitely just here taking up space

  • @richardcarpenter4378
    @richardcarpenter4378 11 months ago

    I am here for the chat!! Always easy to learn from

  • @computersales
    @computersales 1 year ago

    I don't know about proxmox, but with ESXi you can add variables to the VM to fix the code 43 error with GPUs. Although from my experience that error only came up with older NVIDIA cards. Basically you gotta tell the VM it isn't a VM.
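
    For what it's worth, the commonly cited .vmx tweak hides the hypervisor from the guest so the NVIDIA driver stops throwing code 43 (a sketch; set it via Edit Settings > VM Options > Advanced > Configuration Parameters):

      hypervisor.cpuid.v0 = "FALSE"

      # rough Proxmox/KVM analog: hide the hypervisor in the VM config
      # cpu: host,hidden=1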

  • @TazzSmk
    @TazzSmk 1 year ago +2

    If you have NVMe drives and need bandwidth, I'd probably pick 25GbE or so NICs for the available PCIe slot.
    Another option would be to populate that PCIe slot with a multi-NVMe PCIe board; I think Sonnet makes a new one with 8 NVMe bays on PCIe 4.0 x16, which is wild :D

    • @TheExard3k
      @TheExard3k 1 year ago +2

      7400 Micron drives are NVMe. Faster in Ceph than any consumer drive can dream of. And the quad core can't handle more than a single NVMe at full load anyway.

  • @JasonsLabVideos
    @JasonsLabVideos 1 year ago +1

    NICE! I used this same chassis in a firewall build. Good case, but the back I/O shields were a PITA!

    • @RaidOwl
      @RaidOwl  1 year ago

      Lol yeah I just avoided those

    • @JasonsLabVideos
      @JasonsLabVideos 1 year ago

      @@RaidOwl I saw :) Nice setup sir !

  • @SouthbayCreations
    @SouthbayCreations 1 year ago

    I’m here just taking up space 🙌
    -some dude

  • @JohnWeland
    @JohnWeland 1 year ago +1

    I am running 2 Dell R620's with 10c/20t and 64GB RAM each. I need 1 more to make a matching trio. The 2 Dells pull about 130W. I am using Harvester right now but really thinking about switching to Proxmox because that's where the cool kids hang (that's where the projects and tutorial videos are). It's so hard to find Harvester content.

  • @urzalukaskubicek9690
    @urzalukaskubicek9690 1 year ago +3

    I would like to see some benchmarks. Ideally with database usage :)

    • @MorkOrk
      @MorkOrk 1 year ago +1

      Some filesystem benchmarks would be nice

  • @samegoi
    @samegoi 11 months ago

    What is the read and write performance of your Ceph cluster?

  • @vollhorst140
    @vollhorst140 1 year ago

    I'm here just taking up space 😂

  • @joshuamaserow
    @joshuamaserow 1 year ago

    Love the realness, linux server nerd bro

  • @lindsaykid9947
    @lindsaykid9947 1 year ago

    Just here taking up space

  • @Redd00
    @Redd00 7 months ago

    "I am here just taking up space"

  • @johncarter2383
    @johncarter2383 9 months ago

    Anything to be gained by running GlusterFS across the 3x 1TB SSDs?

  • @springsenior2006
    @springsenior2006 8 months ago

    I’m just here taking up space 😂

  • @markdownsouth1500
    @markdownsouth1500 1 month ago

    I'm looking at replacing an existing vSphere Enterprise with a shared storage enterprise "grade" virtualization platform. Maybe I missed it, but I seem to be having a problem finding anyone who can demonstrate High Availability (HA) of the hypervisor nodes in these three scenarios below. Everyone has videos on setting up the cluster, live migration but I'm not seeing anyone doing actual tests of a complete or partial failure of one of the cluster nodes.
    1) Complete node fail -- just pull the power plug(s) out to simulate. How does Proxmox handle dozens of VMs powering on? Does it have a DRS type function where it will distribute the VMs across the remaining nodes? Is there an ability to have specified VMs prioritized over other VMs? Also, the ability to restart VMs in a specific order?
    2) Partial fail where the hypervisor is in some sort of hung state and the VMs are down but the storage is still accessible and any file locks (if applicable) are still held?
    3) Host isolation. What happens when the Proxmox host is unreachable from the management side but the VMs running are still accessible? Will it allow VMs to still run? Will it provide an option to restart VMs on other nodes?
    Thanks.
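
    On the Proxmox side, per-VM HA policy and node preference live in ha-manager; a minimal sketch of where those knobs are (group name, node priorities, and VM ID are made up), though it doesn't by itself answer the start-order question:

      # define a preference group: node1 (priority 2) preferred over node2
      ha-manager groupadd prefer-node1 --nodes "node1:2,node2:1"
      # register a VM as an HA resource in that group
      ha-manager add vm:100 --group prefer-node1 --state started
      # cap restart/relocate attempts on failure
      ha-manager set vm:100 --max_restart 2 --max_relocate 1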

  • @FlaxTheSeedOne
    @FlaxTheSeedOne 7 months ago

    Probably the better way to set up Ceph would have been to get 2 switches in an MLAG and do LACP with the 2 ports to get 20Gbit for Ceph and VMs.
    Since you're in a non-production environment where your servers don't get hit with 10G of incoming traffic from the internet, Ceph gets more resources and failover capability

  • @balex96
    @balex96 3 months ago

    I'm just here taking up space.

  • @wngimageanddesign9546
    @wngimageanddesign9546 1 year ago +1

    I concur, those Silicon Power (SPCC) SATA SSDs do suck. I discovered the company hardcoded the SMART data! They are all fixed to display 40 C no matter what the actual temperature is. This seems to be a response to an Amazon review noting their SSDs were running as high as 60 C and failing prematurely; that reviewer noted the replacements all read 40 C. I bought a 1 and a 2 TB SATA SSD and both of mine never waver from 40 C, even when cold booting at a much lower ambient temperature or testing under CrystalDiskMark. F-cking Amazon pulled my review down with my findings! Buyer beware.

  • @JosephHarry
    @JosephHarry 1 year ago

    I am here just taking up space :P

  • @twder6577
    @twder6577 1 year ago

    My x10 gets quite warm. Have you done anything to the cooling?

    • @RaidOwl
      @RaidOwl  1 year ago

      Nah they have active cooling though. Get about mid 70s under load

  • @davidgates1887
    @davidgates1887 11 months ago

    Did you use thermal paste?

  • @michaelrichardson8467
    @michaelrichardson8467 1 year ago

    Did you try to pass through the GPU without the riser cable?

  • @timmoth6477
    @timmoth6477 1 year ago +1

    I have a similar setup. For those who want a 3-node cluster but don't want to splash out on a 10G switch, you can use dual 10GbE NICs in a full mesh network so each node has a direct connection to each other node. Works well and removes a single point of failure (the switch)! A config sketch follows after this thread.

    • @viilaaja
      @viilaaja 1 year ago

      or go for a quad 25G NIC in the PCIe slot and use them in a mesh network with DAC/fiber, leaving the 10G copper for outbound networking.
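
    The simple broadcast-bond variant of that mesh looks roughly like this on each node (a sketch; interface names and addresses are examples, and nodes 2 and 3 take .2 and .3):

      # /etc/network/interfaces -- node 1
      auto bond0
      iface bond0 inet static
          address 10.15.15.1/24
          bond-slaves enp2s0f0 enp2s0f1    # one port cabled to each of the other two nodes
          bond-mode broadcast
          bond-miimon 100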

  • @james-cucumber
    @james-cucumber 1 month ago

    Kinda weird request, and I know I most definitely should not be buying hardware based on aesthetics, but could you let me know what these chassis look like with rack mounted ubiquiti gear? Do the two silvers look good together, or do they clash?

  • @lumikkode
    @lumikkode 6 months ago

    I'm just here taking up space :)

  • @barfnelson5967
    @barfnelson5967 1 year ago

    re: GPU. It's either going to be that you need vfio_iommu_type1.allow_unsafe_interrupts=1 in your file, or you need hugepagesz=1G default_hugepagesz=2M in your GRUB config plus hugepages: 2 and balloon: 0 in your /etc/pve/qemu-server/VMID.conf, or your hardware just can't handle outputting to the physical ports on the GPU - in which case, if you turn off the default GPU in its hardware settings, it will still work but only over VNC/for computation, which is probably not that useful in your case (it's way more useful if you are importing GPUs to hardware encode/decode for Plex/Jellyfin).
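
    Collected into config form, those options look roughly like this (a sketch; the VMID is hypothetical, and which option applies depends on the failure mode described above):

      # kernel command line (module parameter form)
      vfio_iommu_type1.allow_unsafe_interrupts=1

      # GRUB_CMDLINE_LINUX_DEFAULT additions for hugepages
      default_hugepagesz=2M hugepagesz=1G

      # /etc/pve/qemu-server/<VMID>.conf
      hugepages: 2
      balloon: 0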

  • @javierchaparrooficial5376
    @javierchaparrooficial5376 9 months ago

    im just here taking up space c:

  • @LiebJohnson
    @LiebJohnson 1 year ago +1

    What is your energy use under load and at rest? This might be just what I need.

    • @RaidOwl
      @RaidOwl  1 year ago +3

      Under load they pull just over 100W. At idle it’s like 85

  • @turbo5546
    @turbo5546 1 year ago +1

    I wonder if the 00 is the code the BIOS is reporting to the IPMI. If so, I'd hazard a guess it's possibly a cracked solder ball under the CPU. A reflow might fix it. I have no real experience with that; it's just a random guess based on other things I've seen.

  • @StephenBattey-by5cq
    @StephenBattey-by5cq 2 months ago

    How loud is it?

  • @alty3130
    @alty3130 15 days ago

    I’m just here taking space

  • @thegoldenmoss7756
    @thegoldenmoss7756 2 months ago

    I’m just here taking up space

  • @corbynt
    @corbynt 4 months ago

    I wanted to mimic this setup, but for the switch that has the dedicated Ceph network... if that switch needs to reboot, say for an update, would that wreck a lot of stuff, since all 3 hosts lose communication with each other over Ceph? Have you tested that?

    • @GapYouIn2
      @GapYouIn2 4 months ago

      You can put the Ceph cluster in maintenance mode or just let it pause on its own. Source: lost two switches powering a cluster.

  • @CVLova
    @CVLova 1 year ago +1

    10:09 holy banana. no standoffs? :S

  • @NeptuneSega
    @NeptuneSega 1 year ago

    I'm just space taking up here

  • @markowens5446
    @markowens5446 11 months ago

    I am just here taking up space.

  • @AndreyMir
    @AndreyMir 11 months ago

    Does it have enough CPU power to encode/transcode 4K videos for Plex?

  • @nikiforos6
    @nikiforos6 1 year ago

    How does the secondary network for the CEPH storage work? Is it not at all connected to the main network? If so, do I have to manually assign IP addresses to the systems?

    • @RaidOwl
      @RaidOwl  1 year ago +1

      It’s connected to the main network but it has its own VLAN with proper DHCP addressing

    • @Mcs1v
      @Mcs1v 1 year ago

      @@RaidOwl It's not recommended to use DHCP for the Ceph network (Ceph is tightly bound to IP addresses). Yeah, it doesn't really matter in a 3-node Ceph cluster, but it's really bad practice

    • @GapYouIn2
      @GapYouIn2 4 months ago

      @@Mcs1v DHCP reservations make everything possible and work just fine.
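
    For context, Ceph pins its networks by subnet in ceph.conf, which is why stable addressing (static, or at least reserved) matters; a sketch with made-up subnets:

      # /etc/pve/ceph.conf
      [global]
          public_network  = 10.40.40.0/24    # mons + client traffic
          cluster_network = 10.40.41.0/24    # OSD replication traffic
          mon_host = 10.40.40.11 10.40.40.12 10.40.40.13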

  • @adog1314
    @adog1314 1 year ago +2

    Look up "Proxmox Kernel 5.15.60-1-pve Breaks PCI Passthrough". I spent days trying to get PCI passthrough working until I found out about the kernel issue.

  • @RGCFlick
    @RGCFlick 1 year ago

    Would you still recommend the Zima board?

    • @RaidOwl
      @RaidOwl  1 year ago +1

      For sure, just manage your expectations

  • @jonathandavis4711
    @jonathandavis4711 2 months ago

    I'm just here taking up space, but I'm trying to understand the high availability part -- what sort of failures are we trying to protect against? The shared Ceph pool seems to be the single point of failure that would take down the entire cluster? A single drive failure isn't an issue with RAID, but what about a hardware failure that isn't a drive?

    • @RaidOwl
      @RaidOwl  2 months ago

      An entire server could blow up and everything would keep running

    • @JonathanDavisJJ
      @JonathanDavisJJ 2 months ago

      @@RaidOwl I think I've mixed something up then. Is there a 4th server that holds all the drives doing the Ceph storage (the larger 4U under the 3 nodes), or is the Ceph storage replicated across, and exists on, the drives in the node servers?

    • @RaidOwl
      @RaidOwl  2 months ago +1

      @@JonathanDavisJJ Ceph runs on each of the 3 nodes using each of the Micron SSDs in each of the nodes. So yeah, the ceph storage is replicated across all 3 nodes.

  • @amazingmation97
    @amazingmation97 1 year ago +1

    If you have a time machine you could go back in time to stop yourself, but if not, then: I am just taking up space.

  • @bobbiac
    @bobbiac 7 months ago

    Funny story: those chassis are used by one of our clients' vendors as NVRs

  • @hotrodhunk7389
    @hotrodhunk7389 9 months ago +2

    I'm just here...

  • @JoaquinVacas
    @JoaquinVacas 11 months ago

    Tried the spell too.
    Now how do I revert, there's no snapshot for that.

  • @champ666ZA
    @champ666ZA 1 year ago

    1 amp for all three? What's the voltage in the US? 220?

  • @marvinbrando722
    @marvinbrando722 1 year ago

    Good for not depending on the cloud

  • @lamar9525
    @lamar9525 1 year ago +1

    I'm just here taking up space, I won't spend that kind of money.

  • @victor2410
    @victor2410 1 year ago

    I'm just here taking up bandwidth

  • @TechnoTim
    @TechnoTim 1 year ago +1

    I’d link my guide but it’s not the first result on google

    • @RaidOwl
      @RaidOwl  1 year ago

      Doesn’t count then

  • @iamweave
    @iamweave 10 months ago +3

    I think you're supposed to have TWO separate Ceph networks, one for their "private" and one for their "public" -- plus 10 gig for your proxmox vm network, making three, then a separate gig network for corosync and yet another gig network for proxmox system (separate from vm network). I'm just looking into this now though and I have read that putting the two ceph networks on one NIC is usually fine for most people. But I'm just figuring this out myself too.

  • @michaelamos75
    @michaelamos75 1 year ago

    I'm just here taking up space. 😅

  • @sammy-qd1oi
    @sammy-qd1oi 1 year ago +1

    Gotta love those musical rodents

  • @jinal007
    @jinal007 9 months ago

    I must admit, when I first came across your channel, I found you and/or your method of presentation to be somewhat annoying. But the overlords at YouTube and their algo kept on pushing your content to my feed, and after watching more of your videos I have actually started taking a liking to your awkward sense of humor. I also enjoy that you share all of your mistakes and blunders with us, which any homelabber can relate to. So I guess I'll hit that subscribe button!

    • @RaidOwl
      @RaidOwl  9 months ago

      Praise to the almighty YouTube overlords 🙏🏼

  • @SeanDion
    @SeanDion 1 year ago

    I'm just here taking up space... again.

  • @pWAVE86
    @pWAVE86 11 months ago

    Not sure if I missed it... but power consumption (idle/load) per node would have been nice too - otherwise cool video!

    • @RaidOwl
      @RaidOwl  11 months ago

      About 30-35W per node

    • @pWAVE86
      @pWAVE86 11 months ago

      @@RaidOwl That would be quite a lot for me... is that idle or load? In the case you showed, 1 node can run the VMs and stuff while the other 2 nodes are basically fully idle (until node 1 crashes). Are the 30-35W idle (for nodes 2 and 3) or load (node 1)? Thanks for the reply

  • @dlfzstuff4343
    @dlfzstuff4343 11 months ago

    I bought the same motherboard, but for the life of me it won't connect to the internet. I've tried many settings; no IPv4 or 6. It sends but won't receive. Any ideas would really help. Thanks!

  • @sethual5982
    @sethual5982 1 year ago

    "I'm just here taking up space"
    -Me

  • @aciamage
    @aciamage 1 year ago

    I'm just here taking up space 🤷‍♂️

  • @user-fu7jr5ps5u
    @user-fu7jr5ps5u 6 months ago

    Does anyone have a link to the U.3 to NVMe adapter cable? I'm having trouble finding how the U.3 drive connects to an M.2/NVMe slot on a motherboard.

    • @RaidOwl
      @RaidOwl  6 months ago

      These are what I used: amzn.to/3uTXR1b

    • @user-fu7jr5ps5u
      @user-fu7jr5ps5u 6 months ago

      Wow! Thanks for the quick reply. Can't wait to try it out. I have 2 of the Xeon D-1540 boards and it might be time to get a 3rd! @@RaidOwl

    • @RaidOwl
      @RaidOwl  6 months ago

      Wish I woulda gone with the 1540s haha

  • @TMoneyJones
    @TMoneyJones 1 year ago

    I’m just here, late, taking up space.

  • @JavierChaparroM
    @JavierChaparroM 1 year ago

    Lol Here just taking space