What's the fastest VM storage on Proxmox?

  • Published 25 Jan 2025

COMMENTS • 132

  • @cryptkeyper
    @cryptkeyper 2 years ago +8

    Finally a video that is straight to the point on what I wanted to know. Thank you

  • @zebraspud
    @zebraspud 11 months ago +2

    Thanks!

  • @theundertaker5963
    @theundertaker5963 2 years ago +6

    Thank you for an amazing, straight to the point, and concise video. I have actually been spending a lot of time trying to put together all the bits and pieces of what you managed to put into this fantastic video for a project of mine I shall be undertaking soon. Thank you for the time you put into collecting, and presenting all the benchmarks.
    You have a new subscriber.

  • @RomanShein1978
    @RomanShein1978 1 year ago +7

    Great video.
    It is worth mentioning that it is possible to use the same ZFS pool to store all kinds of data (vdisks, backups, ISOs, etc.). The user may create 2 datasets, and assign the first dataset as ZFS storage and the second one as a directory.
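
    A minimal sketch of that layout, assuming an existing pool named rpool and placeholder dataset/storage names (adjust names and content types to your setup):

      # create two datasets on the existing pool
      zfs create rpool/vmdata
      zfs create rpool/bulk
      # register the first as ZFS storage for VM disks and container volumes
      pvesm add zfspool vm-zfs --pool rpool/vmdata --content images,rootdir
      # register the second as directory storage for ISOs, backups and templates
      pvesm add dir bulk-dir --path /rpool/bulk --content iso,backup,vztmpl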

    • @magnerugnes
      @magnerugnes 1 year ago

      ua-cam.com/video/oSD-VoloQag/v-deo.html

  • @LiterallyImmortal
    @LiterallyImmortal 1 year ago +19

    I've been trying to learn Proxmox the past couple of days and this was SUPER helpful. Thanks a bunch, man. Straight to the point, and you explain your opinions on the facts presented.

  • @Rai_Te
    @Rai_Te 8 days ago

    Very nice overview. I would have wished for two things ... first, that you had mentioned the Proxmox version this comparison is based on ... second, that you had included Ceph in the comparison.

    • @ElectronicsWizardry
      @ElectronicsWizardry  8 days ago

      Oops, forgot the Proxmox version, but I think it was v7 based on the upload time.
      Ceph is typically used for multiple nodes. You can make it work on a single system (maybe I'll look at that one day), but it didn't seem to make sense to compare a cluster storage system to a single-host one.

  • @MHM4V3R1CK
    @MHM4V3R1CK 2 years ago +2

    Thank you for these videos. Very clear and answers the questions that come up as I'm listening. Satisfying!

  • @Shpongle64
    @Shpongle64 8 months ago

    Bro I've been researching this topic for a couple hours on and off each day. Thank you for just combining this information into one video.

    • @Shpongle64
      @Shpongle64 8 months ago

      Generally what I gathered was: set the physical storage to a ZFS pool (rather than a directory) and then have the VM disks set to raw.

  • @gregoryfricker9971
    @gregoryfricker9971 2 years ago +1

    This was an excellent video. May the algorithm bless you.

  • @alanjrobertson
    @alanjrobertson 29 days ago

    Really helpful explainer, thank you! I was always wondering about LVM vs LVM-Thin, and it was also useful to know that not all content types are allowed on ZFS and that a ZFS directory will be required for some.

  • @nalle475
    @nalle475 2 years ago +3

    Thanks for a great video. I found ZFS to be the best way to go.

  • @jeevespreston4740
    @jeevespreston4740 3 months ago +1

    Very much appreciate the time and research it certainly took to obtain and share this level of knowledge…. 👍

  • @SteveHartmanVideos
    @SteveHartmanVideos 1 year ago +1

    This is a fantastic primer on file storage for Proxmox.

  • @paulwratt
    @paulwratt 2 years ago +15

    For those interested, Wendell just did a "what we learned" review of Linus' (LTT) petabyte ZFS drive failure - "A Chat about Linus' DATA Recovery w/ Allan Jude" - ZFS got another development boost (with more coming) as a result ..

  • @crossfirebass
    @crossfirebass 1 year ago +4

    Not gonna lie...I need a whole new vocabulary lol. Thanks for the explanations. I kind of dove face first into the world of Virtualization and wow do I need an adult. I bought some pc guts off a coworker for $500. 64 x AMD Ryzen Threadripper 2990WX 32-Core Processor, 64 Gigs RAM (forgot the speed/version), and an ASROCK MB. I threw in 24TB of spinning rust and now learning how to VM/setup an enterprise. End goal...stay employed lol. Thanks again for the help.

  • @PeterBatah
    @PeterBatah 1 year ago

    Thank you for sharing your time and expertise with us. Insightful and informative. Clear and precise.

  • @BenRook
    @BenRook 1 year ago

    Nice presentation of what's available and the pros/cons... good vid!
    Will stay tuned for future content... thx.

  • @pb8582
    @pb8582 1 month ago

    Thank you so much for going through all of them!

  • @dgaborus
    @dgaborus 1 year ago

    At 7:07 slight performance advantages? Performance is 3x faster with PCI-e passthrough than with ZFS or LVM. Although, I prefer ZFS as well for the flexibility.

  • @2Blucas
    @2Blucas 9 months ago

    Thank you once again for the excellent video and for sharing your knowledge with the community.

  • @dronealbania
    @dronealbania 6 days ago

    Which would be the right solution in Proxmox for setting up an HA infrastructure with FC block storage? I was looking at the options, but most didn't support snapshots. I tried ZFS over iSCSI and it worked, but making it HA looks more complex.

  • @moonified4561
    @moonified4561 2 months ago

    I have VMs running on an LVM-thin drive, with the guests formatted to ext4 and NTFS. The drive is an SSD that's separate from the Proxmox host OS, which is on another SSD. I'm finding that I'm getting slow speeds (60-70 MB/s) in the guests, but the same drive performs fine when using something like hdparm directly from the host (370-390 MB/s). I can't figure out why! Caching is off, and all drives are set to VirtIO SCSI single.

  • @tulpenboom6738
    @tulpenboom6738 8 months ago +2

    One advantage of LVM over ZFS, though, is that you can share it across hosts. If you have a cluster using shared iSCSI, FC or SAS storage (where every host sees the same disk) you can put LVM on that disk (on the first host; use vgscan on the rest), add it as shared LVM in the GUI, and all other hosts see the same volume group. Allocate VMs out of that group, and it's easy and quick to do live migrations. ZFS cannot do this.
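
    A rough sketch of that workflow, assuming a shared LUN that every node sees as the same block device (the device path, VG name and storage ID below are placeholders):

      # on the first node: create the volume group on the shared LUN
      pvcreate /dev/sdX
      vgcreate sharedvg /dev/sdX
      # on the remaining nodes: rescan so they pick up the new volume group
      vgscan
      # register it as shared LVM storage for the whole cluster
      pvesm add lvm shared-lvm --vgname sharedvg --shared 1 --content images,rootdir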

  • @AdrianuX1985
    @AdrianuX1985 2 years ago +4

    5:00..
    After many years, the BTRFS project is still considered unstable.
    Despite this, Synology uses BTRFS in its commercial products.

    • @paulwratt
      @paulwratt 2 years ago +2

      yay for network storage devices that use proprietary hardware configurations. ("right to repair" be damned)

    • @carloayars2175
      @carloayars2175 1 year ago +1

      Synology Hybrid RAID (SHR) uses a combination of BTRFS and LVM. It avoids the problem parts of BTRFS this way while still delivering a reliable file system with many of the main benefits of BTRFS/ZFS.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +2

      I think SHR also uses mdadm. mdadm is used for RAID, and BTRFS is used as the filesystem and for checksumming. If a checksum error is found, md delivers a different copy and the corrupt data is replaced. LVM is used to support mixed drive sizes.

    • @mmgregoire1
      @mmgregoire1 1 year ago

      BTRFS is also used by Android, Google, Facebook, SUSE and many more...

    • @gg-gn3re
      @gg-gn3re 11 months ago

      Lots of things have used BTRFS commercially for years, as others have mentioned. BTRFS will be considered unstable for another 10 or more years, so don't let that stop you if you want to use it for some reason.
      Us home users don't have the issue of what license certain stuff has, since we don't resell, so we can use many things that these vendors can't / won't.

  • @iShootFast
    @iShootFast 1 year ago

    awesome overview and cleanly laid out.

  • @uSlackr
    @uSlackr 2 months ago

    Did you talk that all through from memory?!? Amazing.

  • @advanced3dprinting
    @advanced3dprinting 1 year ago +1

    Really love your content. I hate that channels with way less info but flashy edits get the attention, while the guys that know their shxt don't get the same views.

  • @kimsonvu
    @kimsonvu 1 month ago

    LVM, ZFS, BTRFS - which one allows HDD sleep?

  • @adimmx8928
    @adimmx8928 2 years ago +1

    I have a SQL query taking 15 seconds on a VM in Proxmox stored on an NVMe SSD. I created 3 other VMs, all running the same OS but on different filesystems (ext4, BTRFS and ZFS), installed only the MariaDB server serving the same database over TCP, and I could not match the performance of the initial VM. Any ideas why? I only get close to its performance with an LXC container.

    • @ElectronicsWizardry
      @ElectronicsWizardry  2 years ago

      I'm not sure what the issue is here; a few things I think might be worth checking. Check that VirtIO drivers are used for the VM to allow the best virtual disk performance. I'd guess the massive performance difference could be due to caching. I'm not sure how caching is set up for containers, but if RAM is being used as a cache, that large a performance delta would be expected. Also, if your system supports it, I'd try doing a PCIe passthrough of the SSD to the VM, as it should allow the best performance by removing the overhead of virtual disks.
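
      For reference, a minimal sketch of NVMe passthrough (the PCI address and VM ID are made up, and IOMMU must already be enabled on the host):

        # find the PCI address of the NVMe controller
        lspci -nn | grep -i nvme
        # attach it to VM 101 (pcie=1 assumes a q35 machine type)
        qm set 101 -hostpci0 0000:03:00.0,pcie=1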

  • @vusisemwayo
    @vusisemwayo 2 days ago

    Can I convert LVM thin to LVM?

  • @angelgil577
    @angelgil577 1 year ago

    You are a smart cookie. Thank you, this info is very helpful.

  • @DoubleJ8899
    @DoubleJ8899 3 months ago

    Great video. Thank you, sir.

  • @ponkoot
    @ponkoot 4 months ago +1

    Great video! Thanks

  • @daniellauck9565
    @daniellauck9565 10 months ago

    Nice content. Thanks for sharing. Is there any comparison or deep study of centralized storage with iSCSI or Fibre Channel?

  • @haywagonbmwe46touring54
    @haywagonbmwe46touring54 1 year ago

    Ahh thanks! I was looking for just this kinda video.

  • @VladyslavKudlai
    @VladyslavKudlai 1 year ago +1

    Hello dear EW, can you please review Proxmox 8 with ZFS vs BTRFS performance again?

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +1

      I think BTRFS is still in technology-preview status currently. I'm waiting for it to get into the full release, and then I'll take a closer look.

  • @gsedej_MB
    @gsedej_MB 1 year ago

    Hi. Is it possible to "pass" a ZFS directory or child dataset to a guest? The main idea is that ZFS is a filesystem, and the guest needs to have its own filesystem (e.g. ext4), which is overhead. So only the host should be doing filesystem operations, while the guest would see it as a folder. I guess ZFS would have to support some kind of server/client infrastructure, but without networking overhead...

  • @DLLDevStudio
    @DLLDevStudio 1 year ago +1

    BTRFS has changed since this video was made; it should be way faster today. I wish for an updated video...

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +1

      I have been interested in BTRFS for a while now, and plan on taking a look at it in the future. It seems to still be in tech preview status, so I'm waiting for it to be stable before I look at it much more.

    • @DLLDevStudio
      @DLLDevStudio 1 year ago

      ​@@ElectronicsWizardry Hello, brother. It appears that the system is stable when using the stable kernel. I wish it had some effective self-healing capabilities, which would allow it to replace ZFS in some of my applications. Although ZFS is excellent, Btrfs seems to be faster already. Meanwhile, XFS is still the fastest but lacks any kind of protection.

  • @andymok7945
    @andymok7945 1 year ago

    Thanks, very useful info in this video.

  • @ChetanAcharya
    @ChetanAcharya 2 years ago +1

    Great video, thank you!

  • @perfectdarkmode
    @perfectdarkmode 5 months ago

    If you use ZFS, does that mean you would not want hardware RAID on the physical server?

    • @ElectronicsWizardry
      @ElectronicsWizardry  5 months ago +1

      Yup, ZFS typically likes to do RAID itself and have direct access to the drives. You can run ZFS on top of hardware RAID and still get access to ZFS snapshots and send/receive.
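
      As a quick illustration of those two features (the dataset and host names below are hypothetical):

        # snapshot a VM disk dataset
        zfs snapshot rpool/data/vm-100-disk-0@nightly
        # replicate that snapshot to another machine over SSH
        zfs send rpool/data/vm-100-disk-0@nightly | ssh backuphost zfs receive backup/vm-100-disk-0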

  • @mmgregoire1
    @mmgregoire1 1 year ago +1

    Ceph RADOS is definitely the way to go. I hope the performance of BTRFS is improved in the future; I do not really care for RAID 5 or 6 and generally prefer 10, 1 or none anyway. BTRFS send and receive is a killer feature. I prefer that BTRFS is licensed such that it can be in-kernel; this makes booting and recovery scenarios based on BTRFS potentially better with some work on the Proxmox side.
    Fingers crossed for BTRFS.

    • @Larz99
      @Larz99 6 months ago

      I'm with you! Ceph works a treat in conjunction with Proxmox HA. Ceph lets any node see the disk image, so there's no downtime when migrating a VM. We get replication across disks or hosts as well as the RAID-like erasure coding. I have great fun shutting down nodes running VMs and watching the VMs hop across the network to another node, never missing a beat.
      The options offered by Proxmox are awesome!

  • @attilavidacs24
    @attilavidacs24 8 months ago

    I can't get decent speeds on Proxmox with my NVMe or HDDs. I'm getting a max of 250 Mb/s on a 4-HDD RAID 5 array virtualized, even with PCI passthrough, but unvirtualized it's 750 Mb/s. Even my NVMe drive virtualized starts off at 800 Mb/s, then drops down to 75-200 Mb/s and fluctuates. I'm running the VirtIO SCSI controller. Why are my speeds slow?

    • @ElectronicsWizardry
      @ElectronicsWizardry  8 months ago

      That's a strange issue I've never seen. What hardware are you using? Do you get full speeds on the Proxmox host using tools like fio? Is the CPU usage high when doing disk IO?
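
      If it helps, a basic fio run on the host looks something like this (the file path and sizes are only examples; testing against a scratch file avoids wiping a device):

        # sequential read/write throughput test, bypassing the page cache
        fio --name=seqtest --filename=/tmp/fiotest --size=4G --bs=1M --rw=rw \
            --ioengine=libaio --direct=1 --iodepth=16 --runtime=60 --time_based --group_reporting
        # remove the scratch file afterwards
        rm /tmp/fiotest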

    • @attilavidacs24
      @attilavidacs24 8 months ago

      @@ElectronicsWizardry I'm running a Ryzen 7900 CPU, an LSI 9300 HBA connected to 7 HDDs in 2 vdevs, and 1 cache SSD. One NVMe PVE boot drive, and I also have a Samsung EVO NVMe for VMs and a Mellanox 10G NIC. I will try some fio benchmarks and report back. I have 64 GB total RAM, and the CPU usage stays quite low throughout all the VMs. My HBA is passed through via PCI passthrough to a TrueNAS VM.

  • @smalltimer4370
    @smalltimer4370 1 year ago

    I'm in the process of building an NVMe Proxmox server using a combination of an onboard NVMe drive with 4 x 2 TB NVMe in ZFS RAID 10.
    That said, and based on your experience, would this be the optimal way to go for VMs?
    P.S. Having read multiple posts and comments on SSD wear, I remain a bit worried about my setup choice, as I'd like to get the most out of my storage system without sacrificing the life of the devices - i.e., 3 years would seem reasonable for a refresh IMO.

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +2

      Yeah, a RAID 10 makes a lot of sense for VMs due to the high random performance.
      I wouldn't worry about SSD wear much for home server use, as most SSDs have more endurance than you would ever need, and they will go well over the rated limit. I'd guess the drives will be fine in 3 years. There are high-endurance drives you can get if you're worried about endurance.
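
      For what it's worth, a striped-mirror (RAID 10-style) pool of four NVMe drives could be created and registered roughly like this (pool name, storage ID and device paths are placeholders):

        # two mirrored vdevs striped together = ZFS "RAID 10" (this wipes the disks)
        zpool create nvmepool \
            mirror /dev/disk/by-id/nvme-DISK1 /dev/disk/by-id/nvme-DISK2 \
            mirror /dev/disk/by-id/nvme-DISK3 /dev/disk/by-id/nvme-DISK4
        # expose it to Proxmox for VM disk images and container volumes
        pvesm add zfspool nvme-vm --pool nvmepool --content images,rootdir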

  • @SEOng-gs7lj
    @SEOng-gs7lj 2 years ago

    I don't quite understand the remark that ZFS can connect "to the physical disks and goes all the way up to the virtual disks" at 4:13. I mean, doesn't LVM/ext4 in Proxmox provide the same?
    I'm trying to create an Ubuntu VM with a virtual disk formatted as ext4; is this correct? If not, is there a demo showing the "better" way? Thank you

    • @ElectronicsWizardry
      @ElectronicsWizardry  2 years ago

      I think I said that wrong in the video. Other filesystems can be used as one layer between the disks and the VM. The point I was trying to get across was that ZFS has additional features for which additional software would be needed if you wanted similar functionality with filesystems like ext4. ZFS, for example, supports RAID and snapshots, and in order to have similar features on ext4, mdadm would have to be used for RAID and LVM/QCOW2 for snapshots. I like using ZFS as there is one piece of software to handle the filesystem, RAID, snapshots, volume management and other drive-related operations.
      The filesystem your VM is using isn't affected by the storage configuration on the host, and using ext4 on an Ubuntu VM will work well.

    • @SEOng-gs7lj
      @SEOng-gs7lj 2 years ago

      @@ElectronicsWizardry cool thank you!

    • @SEOng-gs7lj
      @SEOng-gs7lj 1 year ago

      I have Proxmox (ZFS) and an Ubuntu (ext4) guest. After installing MySQL in my Ubuntu VM, it takes 3 minutes to ingest an uncompressed .sql file; something is definitely wrong. Any idea what I can check/fix? Thanks!

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago

      I'd take a look at system usage during the import in the VM first. What is the CPU or disk usage? Then, if it's disk limited, check whether other guests are using too much disk on the host.

    • @SEOng-gs7lj
      @SEOng-gs7lj 1 year ago

      @@ElectronicsWizardry I'm hitting 100% disk utilization.. but there is hardly any activity apart from MySQL.. seems to be a configuration issue, but I don't know where

  • @robthomas7523
    @robthomas7523 1 year ago

    What filesystem do you recommend for a server whose storage needs keep growing at a high rate? LVM?

    • @Larz99
      @Larz99 6 months ago

      Ceph. You can throw another OSD (drive) into a pool at any time. You have similar options for replication (mirroring) and erasure coding (like RAID-Z) as with ZFS or RAID, plus the ability to spread the storage across multiple nodes in a cluster. No need for periodic replication of your LVM-based images; Ceph does this in real time, continuously. All nodes see the same data at the same time.

  • @mikemorris5944
    @mikemorris5944 2 years ago

    Can you still use ZFS as a storage option if you didn't install Proxmox using ZFS?

    • @ElectronicsWizardry
      @ElectronicsWizardry  2 years ago +1

      Yeah, ZFS can be added to a Proxmox system no matter what the boot volume is set to. The boot volume only affects data that is stored on the boot drive, and storage of any type can be added later on.

    • @mikemorris5944
      @mikemorris5944 2 years ago

      @@ElectronicsWizardry thanks again EWizard

    • @Goldcrowdnetwork
      @Goldcrowdnetwork 2 years ago

      @@ElectronicsWizardry So if adding a USB storage device like a 2-terabyte WD Passport drive (I know this is not ideal, but it's what I have lying around), would ZFS be a better choice than LVM or LVM-thin in your opinion for storing LXC templates and snapshots with Docker apps inside them?

    • @ElectronicsWizardry
      @ElectronicsWizardry  2 years ago +1

      @@Goldcrowdnetwork For practical purposes, there will be almost no difference. The containers will run the same on both. I'd personally use ZFS as I like the additional features like checksumming and I like using the ZFS tools. LVM would be a tiny bit faster, but either will likely be very limited by the HDD.

  • @perfectdarkmode
    @perfectdarkmode 5 months ago

    How does ZFS compare to Ceph?

    • @ElectronicsWizardry
      @ElectronicsWizardry  5 months ago

      They're kinda different. ZFS in Proxmox is typically single-system only, and Ceph is generally for multiple systems.

  • @VascTheStampede
    @VascTheStampede 2 years ago +1

    And what about Ceph?

    • @ElectronicsWizardry
      @ElectronicsWizardry  2 years ago

      I was only looking at local storage in this video, so I didn't include iSCSI, Ceph, NFS and similar. I don't think there would be an easy way to compare Ceph to on-host storage, as it's made for a different use case and I don't have the correct equipment for testing currently.

    • @scottstorck4676
      @scottstorck4676 1 year ago +1

      Ceph lets you store data over many nodes, to ensure availability. If you need the availability Ceph provides, the kind of benchmarking done for this video is not something you would normally look at.
      I run a small six-node Proxmox cluster with Ceph, and the performance it provides is not really comparable with filesystems on single nodes, as the resources are used by the cluster as a whole. There are so many factors when dealing with performance on Ceph, including the GHz of a CPU core, the network speed, the number of HDDs / SSDs / NVMe drives used as well as their configuration. It is not something where you can compare benchmark results between systems, unless the hardware, software and configuration are 100% identical.

    • @Larz99
      @Larz99 6 months ago

      @@scottstorck4676 ... and the network speed, and the network speed. :) I'm still floored by how fast Ceph runs in real-world use.

  • @ScottZupek
    @ScottZupek 3 months ago

    For those of you who have shared storage (iSCSI, NFS, etc.), NFS was FASTER than LVM. Obviously this is dependent on your upstream connections (10 Gbps minimum; 1 Gbps works for backup, if you can fill it in the time frame available).

  • @jasonmako343
    @jasonmako343 2 years ago +1

    nice job

  • @Josef-K
    @Josef-K 1 year ago

    What about dRAID?

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago

      I haven't looked at dRAID, and will take a look at it soon and make a video.

    • @Josef-K
      @Josef-K 1 year ago

      @@ElectronicsWizardry Well, I was tinkering around today with a 4 TB and a 3 TB drive that I wanted to mirror. I ended up splitting them into 1 TB partitions, so it let me create dRAID2 with only two drives (7 x 1 TB partitions), one of which acts as a spare. This got me thinking - can dRAID be used as my root Proxmox (bare metal) in order to make Proxmox even more HA? And now I'm also wondering - is there any kind of performance and/or reliability gain (maybe even across multiple nodes) if I have even more partitions per disk for dRAID? The idea being you can stripe each partition's data across every partition in my cluster.

  • @JJSloan
    @JJSloan 1 year ago

    Ceph has entered the chat

  • @jossushardware1158
    @jossushardware1158 7 months ago

    What about Ceph?

    • @ElectronicsWizardry
      @ElectronicsWizardry  7 months ago

      I didn't cover Ceph as it's not a traditional filesystem/single-drive solution like the other options covered. I plan on doing more videos on Ceph in the future. The quick summary is that Ceph is great if you want redundant storage across multiple nodes that's easy to grow. It's typically slower than a single drive in a small environment due to the additional overhead of multiple nodes and having to confirm writes across multiple nodes.

    • @jossushardware1158
      @jossushardware1158 7 months ago

      @@ElectronicsWizardry Thank you for your answer. I have understood that enterprise SSDs with PLP are the only way to make Ceph faster. Of course, node links have to be at least 10 Gb or more. Do you know whether a MySQL Galera cluster also confirms writes across multiple nodes? So would it also benefit from PLP SSDs?

  • @danwilhelm7214
    @danwilhelm7214 2 years ago +1

    Well done! My data always resides on ZFS (FreeBSD, SmartOS, Linux).

  • @DomingosVarela
    @DomingosVarela 1 year ago

    Hello,
    I'm installing the new version for the first time on an HP server with 4 x 300 GB disks. I want to know the recommended option for using the disks: keep Proxmox installed on a single disk and use the rest in a ZFS pool for the VMs?
    What option do you recommend?
    Thanks
    Best Regards

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +1

      Does the server have a RAID card? If so, I'd set up hardware RAID using the included RAID card. Then I'd probably go ZFS for its features, or ext4 if you want a tiny bit more speed. I will warn you that running VMs on HDDs will be a bit slow for most uses.
      If it doesn't have a RAID card, I'd probably use ZFS for RAID 10.

    • @DomingosVarela
      @DomingosVarela 1 year ago

      @@ElectronicsWizardry Thanks for your response!
      My server has a RAID card, and I disabled it because ZFS doesn't work very well on top of hardware-configured RAID, so I disabled the hardware RAID. If I use RAID 10 with the 4 disks I will only have the value of one of them; on this same disk, will I install Proxmox and the VMs?

    • @ElectronicsWizardry
      @ElectronicsWizardry  1 year ago +1

      Yeah, if you can disable the RAID card and use ZFS, that's what I'd do, as I'm a fan of ZFS. Using hardware RAID and ext4 would be a bit faster, especially if the hardware RAID card has a battery-backed cache it can use.

    • @DomingosVarela
      @DomingosVarela 1 year ago

      @@ElectronicsWizardry I'm using an HP Gen10; it has a very good RAID card, but I would really like to use ZFS for its advantages with Proxmox. So I need some help understanding the recommended way to use the disks: separate the Proxmox installation from the VMs, or use RAID 10 across all disks and keep Proxmox and the VMs in the same pool?

  • @lecompterc83
    @lecompterc83 6 months ago

    No idea what was just said, but I’ll piece it together eventually 😂

  • @dominick253
    @dominick253 1 year ago +1

    I feel like there's a code in your blinking. Maybe Morse code?

  • @davidkamaunu7887
    @davidkamaunu7887 1 year ago

    ext4 isn't faster than ext2, because it is a journaling filesystem like NTFS. Journaling filesystems have overhead from the journaling. Likewise, it wasn't good to have LUKS on ext3 or ext4.

    • @davidkamaunu7887
      @davidkamaunu7887 1 year ago

      Another thing most people won't catch on to: never RAID flash storage (SSDs or NVMe), as you create a race condition that will stress the CPU and the quartz clock. Why? Because they have identical access times that are as fast as a disk cache or buffer.

    • @gg-gn3re
      @gg-gn3re 11 months ago

      NTFS on Windows doesn't journal. It was designed to, but it was never implemented. Just like it is also a case-sensitive filesystem, but Windows disables that entirely.
      Their new filesystem has these features, mostly because NTFS breaks so much with their Linux subsystem.
      All in all, NTFS is more comparable to ext2 than it is to ext4.
      ext4 is also faster than ext2 when reading from HDDs (because of journaling)... on SSDs it depends on the type of data, but no journaling can sometimes be faster.

    • @daviddunkelheit9952
      @daviddunkelheit9952 11 months ago +1

      @@gg-gn3re That's an observation from your experience. You should always qualify your statements. Otw 😬

    • @daviddunkelheit9952
      @daviddunkelheit9952 11 months ago

      @@davidkamaunu7887 Intel has a couple of functions in 8th- and 9th-generation processors that allow for PCIe port bifurcation. This allows the use of the H10 Optane, which has NAND and Optane on the same M.2 socket. There is also Virtual RAID on CPU (VROC), which is found on Xeon Scalable and used for specific storage models. It requires an optional upgrade key in the D50TNP modules. RAID 0/1/5/10.
      These are VMD NVMe.

    • @gg-gn3re
      @gg-gn3re 11 months ago

      @@daviddunkelheit9952 No, that's a fact, posted on Microsoft's website. The only automated journaling is metadata, and that is recent.

  • @paulwratt
    @paulwratt 2 years ago

    That statement you made about the layers you need to adjust individually is not reflected in any graphs anywhere, and that's a shame, because it clearly demonstrates _another_ main benefit of using ZFS over LVM ( yay, look, BTRFS is way out in front, oh wait .. )
    Not sure how to take the "ZFS fakes the Proxmox cache setting" point; for testing non-cached it _is_ relevant, but that is _not_ a real-world scenario, to the extent that you could attach a drive/device which has _no physical cache_ and ZFS will still happily cache that device, a more authentic real-world scenario ( _if_ you could indeed find such a device).
    The _best_ part about ZFS, as Wendell showed and admitted, is that when your (especially RAID) drive pool goes belly up, to the point software tools can not even help, you can still reconstruct the original data by hand if need be, as _everything_ needed to achieve that is there ..
    BTRFS _might_ "get there in the end", as ZFS has had an extra 10 years of use, testing and development up its sleeve, but those BTRFS "features" that have not been "re-aligned" for years mean it's never going to be a practical solution, except in isolated cases. It's better off being used for SD-card filesystems, where it can extend the limited life span of the device (if set up correctly), and speed is already a physical issue (as long as you don't want to use said SD card on a Windows system .. ).
    Thanks for taking the time to do the review ..

    • @AdrianuX1985
      @AdrianuX1985 2 years ago +1

      For several years, the dedicated FS for SD cards has been F2FS (Flash-Friendly File System).

  • @WallaceReen
    @WallaceReen 1 year ago +2

    You need to increase the wait time in your blink-function.

  • @RetiredRhetoricalWarhorse
    @RetiredRhetoricalWarhorse 11 months ago

    I am getting to the point of realizing how far Proxmox is from being ready to compete with VMware.
    The way administration works, the absolutely bad documentation and all the resources online are just so janky...
    Too bad. I'm even considering aborting switching my homelab over. I see no benefit compared to just running the current ESXi without patches indefinitely.

  • @shephusted2714
    @shephusted2714 2 years ago

    The big takeaway here is that you want a NAS with lots of ECC memory and ZFS - a Z440 with 256 GB of RAM is about $1k, making it a great deal.

  • @teagancollyer
    @teagancollyer 2 years ago +3

    I normally watch your videos in the background, but I actually focused on this vid today and noticed how much you blink, which, no offense intended, I found a bit distracting.

    • @paulwratt
      @paulwratt 2 years ago

      you probably could have _not_ said that, _no offense_ intended .. I think he is fully aware of it ..

    • @teagancollyer
      @teagancollyer 2 years ago +1

      @@paulwratt Yeah, I thought about not including it; I just felt it rude without it, and I meant it sincerely.

    • @AdrianuX1985
      @AdrianuX1985 2 years ago +1

      I didn't pay attention, only your comment suggested it.
      I don't understand people who pay attention to such nonsense.

    • @MarkConstable
      @MarkConstable 2 years ago

      @@AdrianuX1985 Because it is quite distracting. The quality of the content is excellent, but I had to look away most of the time.

    • @paulwratt
      @paulwratt 2 years ago

      @@AdrianuX1985 its fine, you didn't need to reply (unless no one else did)

  • @typingcat
    @typingcat 2 years ago +2

    Why blink so much?

    • @abb0tt
      @abb0tt 9 months ago +1

      Why not educate yourself?

    • @Larz99
      @Larz99 6 months ago +2

      Don't be an ass.

  • @philsogood2455
    @philsogood2455 1 year ago +1

    Informative. Thank you!

  • @Alex-sm6dx
    @Alex-sm6dx 8 months ago +1

    Great video, thank you!