It will also put more data on the bigger drives, making them read & write more, yes. But for lazy storage purposes or for learning, it's not that much of an issue.
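If you want to see (or limit) how skewed that gets, the per-OSD fill and CRUSH weights are easy to check; a rough sketch, with osd.3 and the weight value as placeholder examples:

    # show size, CRUSH weight and %USE per OSD
    ceph osd df tree
    # optionally cap how much data a larger OSD receives by lowering its CRUSH weight
    ceph osd crush reweight osd.3 0.5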
If there's an internal USB port, you could technically get a Clover (IIRC) USB boot stick and point it at the PCIe M.2 SSD, so you technically boot off USB and then run the OS from the PCIe SSD.
Like I said in the video, I didn't like the idea of boot being split into two equally critical components. You could get a small USB with GRUB or a Unified Kernel Image, but then you have an additional critical component in your system. M.2 SATA SSDs are dirt cheap, and the passthroughs still exist (albeit they're getting rarer, especially those with a SATA controller on them).
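For completeness, the small-USB-with-GRUB variant would look roughly like this; just a sketch, with the USB stick assumed mounted at /mnt/usb and the UUID as a placeholder:

    # install GRUB onto the USB stick's EFI partition (--removable avoids touching NVRAM)
    grub-install --target=x86_64-efi --efi-directory=/mnt/usb --boot-directory=/mnt/usb/boot --removable

    # /mnt/usb/boot/grub/grub.cfg: locate the root fs on the PCIe SSD by UUID and boot it
    menuentry "Linux on NVMe" {
        search --no-floppy --fs-uuid --set=root 1234abcd-0000-0000-0000-000000000000
        linux /boot/vmlinuz root=UUID=1234abcd-0000-0000-0000-000000000000 ro
        initrd /boot/initrd.img
    }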
Right around the time you published your video I was setting up a new Ceph server on a single node (as in your example). I could have switched from spreading data between hosts to between OSDs ahead of time by supplying an initial config. The next stumbling blocks were running more than one MDS per fs on that single node (not knowing that you have to ceph orch apply mds … and ceph orch daemon add mds … manually), authorizing a client (simple), mounting the fs on the client (a disaster), and finally learning that all the documentation describing Ceph with two public_network-s is a lie. Only extracting/injecting a monmap can change the mon addresses, and there can be only one (with two protocol versions at most). Several times along the way I thought about ZFS (not easy, but usable), yet I still managed to restrain myself from deleting Ceph and continued through the stations of my Ceph passion.
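For anyone hitting the same walls, roughly the commands involved; just a sketch, assuming a cephadm deployment, with the fs name "myfs", host "node1" and the IPs as placeholders:

    # single-node bootstrap: make CRUSH spread data across OSDs instead of hosts,
    # supplied as the initial config so the default rule is created that way
    cat > initial-ceph.conf <<EOF
    [global]
    osd_crush_chooseleaf_type = 0
    EOF
    cephadm bootstrap --mon-ip 192.0.2.10 --config initial-ceph.conf

    # more than one MDS on the single node has to be requested explicitly
    ceph orch apply mds myfs --placement="2 node1"

    # authorize a client and mount the fs with the kernel client
    ceph fs authorize myfs client.foo / rw
    mount -t ceph node1:6789:/ /mnt -o name=foo,secretfile=/etc/ceph/foo.secret

    # the mon address can only be changed by extracting/editing/injecting the monmap
    # (with the mon daemon stopped)
    ceph-mon -i node1 --extract-monmap /tmp/monmap
    monmaptool --rm node1 --add node1 192.0.2.11:6789 /tmp/monmap
    ceph-mon -i node1 --inject-monmap /tmp/monmap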
Thumbs up for geezers who remember Bad Apple
Remember too, using Ceph with dissimilar drives will limit the cluster's throughput and IOPS to the speed of the slowest drive.
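If you want to spot which drive is the drag, per-OSD latency and a quick write bench are built in; a sketch:

    # commit/apply latency per OSD; a slow dissimilar drive stands out here
    ceph osd perf
    # rough write benchmark of a single OSD (writes about 1 GiB by default)
    ceph tell osd.0 bench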