(Quick VLOG) Replacing VM machine motherboard and drives...

  • Published 10 Jan 2023
  • Patreon: / peterbrockie
    Twitter: / peterbrockie
    IG (rarely tech related): / peterbrockie
    Disclosure: Some of the links below are affiliate links. This means that, at zero cost to you, I will earn an affiliate commission if you click through the link and finalize a purchase.
    Products used in this video:
    1TB Inland SATA SSD (cheap, DRAM-less): amzn.to/3XkvS3A
    ASRock X570 Steel Legend motherboard (supports ECC memory, just like the previous board): amzn.to/3W7ur7s
    Original server parts list:
    Rosewill RSV-4000U (8 drive bays, a bit smaller): www.newegg.com/rosewill-rsv-r...
    Rosewill RSV-4500U (15 drive bays): www.newegg.com/rosewill-rsv-l...
    CPU AMD Ryzen 5 3600 (cheap, no GPU): amzn.to/3xpcYNz
    CPU AMD Ryzen 5 5650 Pro (recommended, modern, has a GPU and ECC support, but no PCIe 4.0): ebay.us/ZF0azo
    CPU AMD Ryzen 5 5600 (more modern, no GPU): amzn.to/3xLn2Sq
    CPU cooler (not needed with 3600/5600) Noctua NH-L9a-AM4: amzn.to/3b2Gt0g
    Sabrent SSD 2TB: amzn.to/3MIeBLX
    Fans - use free case fans or: amzn.to/3mUV7ZO (industrial, if you need to keep drives extra cool)
    Fans (high end): amzn.to/3xqd6w8
    Fans (cheaper): amzn.to/3mCEQIS
    Gigabyte B550M DS3H Motherboard: amzn.to/3QlNyZU
    Timetec 16GB ECC RAM: amzn.to/3HqnOra
    Fractal Ion+ 2 860W PSU (not the one used, but anything from Fractal, Seasonic, etc. is a good choice): amzn.to/3OavXlA
    Fan splitter: amzn.to/3HdxUeO
    M.2 cooler (passive): amzn.to/3Hdc7UF (different from video, but I've grown to like these ones)
    M.2 cooler (active): amzn.to/3HcvUUg (if you have bad airflow and need extra cooling)
    10GbE PCIe card (optional, intel): ebay.us/eP11vy
    10GbE PCIe card (optional, cheaper): amzn.to/3O41G82
    Dell SAS Expander (search eBay under its various names; affiliate link): ebay.us/veggdR
    Pinout for the SAS Expander: www.truenas.com/community/thr...
    Current Filming Equipment:
    Sony FX30: amzn.to/3uTmAzb
    Sony A7IV: amzn.to/3uqGsdT
    Sony FE PZ 16-35mm F4 G lens: amzn.to/3JjOdI5
    Sony FE 35mm f/1.4 GM lens: amzn.to/3ootA4e
    Sony FE 90mm f/2.8 G Macro lens: amzn.to/3bFBVYD
    RGB Video Light: amzn.to/3eqVGGs (not my exact light, but similar)
    Desview R6 monitor: amzn.to/34CfKUI
    AmScope ME300TZA-2L Trinocular Metallurgical Microscope: amzn.to/355bIkd
    DaVinci Resolve: amzn.to/3gm2fLR
    Buy me a cup of earl grey:
    Bitcoin: bc1qf5yzmguzxmx7wmal36ancfvqf34ge08juvtnfk
    Ethereum: 0x1B2df497F1c3bDa4b952Bd3cE266Ccf9D3d6cc30
    Chia: xch1qtmtxej0tz2c82wj45mu5ncsu4s6z7nageec44qmc0p4dqexdv7s4a4rlg
    Ravencoin: RKtLSABkqdFHBSL2ntb1pdC3PVEuc8Dt2Q
    Raptoreum: RSECAdkYjcrGh7BogBYjXdB3oaThYdyFxA
  • Science & Technology

COMMENTS • 23

  • @PeterBrockie  1 year ago +1

    Sorry for the slight crackle in the audio. That slipped by and now that I'm aware of the problem, it shouldn't happen again.
    The first part of this series, where I put together a simple Ryzen VM machine: ua-cam.com/video/5ynOc8XvWTU/v-deo.html

  • @popcorny007  1 year ago +3

    Nice video! Welcome to Proxmox haha.
    The mezzanine card uses OCP 2.0, which was replaced by the super awesome tool-less OCP 3.0 standard

  • @EpicLPer  1 year ago

    I would have switched to Proxmox long ago, and I actually tried it a few months back, including their Backup Server and all. But the Backup Server is exactly why I stopped using it again...
    ESXi and Veeam Backup & Replication just work so well and have never failed me; it's a set-up-and-forget kind of thing. In the months I gave Proxmox Backup Server a try, it failed me... twice. E-mails simply stopped going through, so I wasn't even notified of failed backups anymore. After a while I switched back to ESXi on an old gaming rig I had lying around and it works so damn well again; just today I set up Veeam B&R again on a separate machine.

  • @DLTX1007  1 year ago +1

    The second x16 slot is only x4 electrically and comes from the chipset; it isn't shared with the first x16 slot.

  • @pmatous  1 year ago

    If I rule out the hardware-related causes (a failing RAM stick or improper XMP/DOCP settings, partially damaged cables or connections, excessive temperature), another reason I have sometimes seen increased drive failure rates with ESXi is when its syslog and/or scratch location is pointed at less reliable drives (SD cards, cheap SSDs).

  • @AlexMarhelis  1 year ago

    Might be motherboard BIOS/firmware issues with the drives. I had similar issues and they all went away after flashing the latest BIOS update.

  • @GER-Thorgs  1 year ago

    I had the same kind of random drive errors once on a TrueNAS server: disks that randomly disappeared and corrupted data. I changed the scrub schedule for my drives to daily and had far less data corruption, but still got random other errors and system reboots. After intensively testing every one of the 8 RAM sticks (each one had to be tested separately for 8 hours), it turned out that one stick was bad. I got in contact with Corsair and received a new kit with two sticks.
    Now everything runs rock solid.

    • @PeterBrockie  1 year ago +1

      In my case I replaced the whole computer and it was still happening. :P

  • @jumanjii1  1 year ago

    It's obvious that your drives are overheating and failing. The case acts like an oven, cooking those poor little SSD chips till they fail, and that plastic case is not a good heat dissipator. The solution is to remove the internal drive from its case and glue some cooling fins to all the chips; that should stop them from overheating and failing.

  • @floogulinc  1 year ago

    If any of your SSDs are 870 EVOs, be aware that they happen to have major issues with data loss. Avoid those if possible.

  • @flinkiklug6666  1 year ago +2

    HDDs are more stable, and if you use RAID you do not have any speed problems.

    • @PeterBrockie  1 year ago

      In my personal experience, SSDs have only died on me with weird VM-related stuff. I've never once had a failure (excluding garbage first-gen OWC SSDs from 10+ years ago).
      I have warrantied countless hard drive failures over the years.
      That being said, if there is some weird issue where something is constantly writing to the drive, HDDs will be more reliable, but still slow as hell for VMs. I don't think I can ever go back to an HDD for boot. :D

    • @PeterBrockie  1 year ago

      Do you happen to have the model number of the drives?

    • @flinkiklug6666  1 year ago

      @PeterBrockie No, I don't think so; it was some years ago.

    • @flinkiklug6666  1 year ago

      @PeterBrockie With some crypto coins they die very fast.

  • @RomanShein1978  1 year ago

    SSD RAID-Z1 seems to be a waste of IOPS; better to use the SSDs as mirrors (or spanned mirrors). You'll get a better IOPS profile and less wear.

    • @PeterBrockie  1 year ago

      Wouldn't a mirror write more data to an array? Or is there just a lot of weird overhead with Z1?
      Z1: 50%(file)/50%(file)/50%(parity)
      Mirror: 100%(file)/100%(mirror)
      150% of the data vs. 200% of the data. Plus it's spread over fewer drives.
      I won't argue the IOPS. I'm not really doing much disk activity though. :D
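
A quick back-of-the-envelope sketch of the write-amplification arithmetic in this exchange, assuming nominal RAID-Z1 behaviour (one parity column, ignoring ZFS padding, metadata and volblocksize effects) and a plain 2-way mirror; the 100 MB figure is purely illustrative:

```python
# Nominal bytes written to the pool per byte of user data,
# ignoring ZFS padding/metadata and volblocksize effects.

def raidz_write_factor(disks: int, parity: int = 1) -> float:
    """RAID-Z spreads each block over (disks - parity) data columns and adds parity."""
    return disks / (disks - parity)

def mirror_write_factor(ways: int = 2) -> float:
    """Every byte is written to each side of the mirror."""
    return float(ways)

file_mb = 100  # hypothetical write size, purely illustrative

print(f"3-wide RAID-Z1: {file_mb * raidz_write_factor(3):.0f} MB hit the disks (150%)")
print(f"2-way mirror:   {file_mb * mirror_write_factor(2):.0f} MB hit the disks (200%)")
print(f"4-wide RAID-Z1: {file_mb * raidz_write_factor(4):.0f} MB hit the disks (~133%)")
```

On these nominal numbers a 3-wide Z1 does write less total data than a mirror, which is the point being made above; the IOPS argument in the surrounding replies is about per-operation behaviour rather than total bytes.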

    • @RomanShein1978  1 year ago

      @PeterBrockie For 3 SSDs there is probably no difference, but with 4 SSDs you can go for striped mirrors and it will be -50% writes vs RAID-Z1.
      In all cases a mirror certainly provides much better reads than RAID-Z1.
      With Proxmox you can split each SSD into many smaller partitions and use them for all sorts of tasks and purposes.
      In my case, I have 3 SSDs, and on them I have:
      1) A mirrored root pool (Proxmox itself).
      2) Storage for VMs (I don't have SSDs dying en masse, so I don't mirror it). Another SSD has a SLOG for the SSD VM pool; I believe it helps with fragmentation and reads from the main SSD.
      3) Triple-mirror special vdevs for a couple of HDD pools.
      4) L2ARC and SLOG for those HDD pools too.

    • @RomanShein1978  1 year ago

      @PeterBrockie A 4-SSD striped-mirror pool may, under certain conditions, provide up to 4 times the read IOPS of a 4-SSD RAID-Z1.

    • @PeterBrockie  1 year ago

      @RomanShein1978 How does it work out to -50%? You split the file into halves, but you double those halves by mirroring them. That's still double the original file on each write vs. 150% on a 3-drive Z1.
      I have a mirrored pair of 118GB Optane drives running as a metadata/small-file cache, which helps with the slower SATA SSDs. More than enough for simple VMs.

    • @RomanShein1978  1 year ago

      @PeterBrockie
      1) VM images are mostly about random and, most importantly, synchronous writes. For instance, I believe ESXi forces all writes to sync regardless of guest intentions, although Proxmox is much more flexible about forcing write-cache flushes. 2 IOPS inside a VM turn into 4 IOPS for a mirrored pool, into 6 IOPS for a 3-disk RAID-Z1, or into 8 IOPS for a 4-disk pool.
      2) It looks like you are talking about L2ARC here. A few remarks:
      - It is believed that SSD pools do not benefit from L2ARC.
      - Optane (or an Optane mirror) would make an excellent SLOG device for the SSD pool.
      - Optane should also be good as a mirrored special vdev device for the SSD pool.
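
A minimal sketch of the IOPS model used in that reply, under its assumption that every small synchronous guest write touches both sides of a mirror but every disk in a RAID-Z1 vdev; the pool widths come from the thread, the rest is illustrative:

```python
# IOPS model from the reply above: each guest write IOP hits both sides of a
# mirror, or every disk in a RAID-Z1 vdev (no partial-stripe savings assumed
# for small synchronous blocks).

def backend_iops(guest_iops: int, layout: str, disks: int) -> int:
    if layout == "mirror":
        return guest_iops * 2       # one write per mirror side, regardless of stripe width
    if layout == "raidz1":
        return guest_iops * disks   # data columns plus parity are all written
    raise ValueError(f"unknown layout: {layout}")

for layout, disks in [("mirror", 4), ("raidz1", 3), ("raidz1", 4)]:
    print(f"{layout} ({disks} disks): 2 guest IOPS -> {backend_iops(2, layout, disks)} backend IOPS")
```

Under this model a striped mirror keeps each write at 2 disk IOPS no matter how many mirror pairs are striped together, which seems to be where the claimed IOPS advantage over RAID-Z1 comes from.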

  • @bloeckmoep  1 year ago

    Oh god, those Inland SSDs are total crap: no SLC cache, no DRAM cache... they will die or corrupt even faster. By the way, did you fully partition your previous SSDs, or did you leave some percentage unallocated? If you allocated them completely, it makes sense that they corrupted more quickly: an SSD can't really differentiate between allocated-but-free and allocated-and-written, so to the controller that storage counts as used whether it holds all zeros or all ones. That's why overprovisioning is usually recommended, to give the SSD some real free headroom to breathe. This usually matters less with enterprise SSDs, since most of them have dedicated spare capacity that the user can't allocate, so they can perform their round-robin cell wear levelling and garbage collection independently even when 99.9 percent of the user-visible capacity is filled with data.
    Also, check your PSU with one of those inexpensive ATX PSU testers. If the 5V rail is on the low side, a lot of SSDs will make that rail even more unstable, since they draw relatively high current on that anemic rail, and it certainly doesn't help that in most PSUs the 3.3V rail is derived from the 5V rail. Add constant polling and multiple aborted or retried garbage-collection and wear-levelling passes (which are essentially internal read-then-write operations), and a hiccup on the PSU side will brown out your drives... this could be one reason for your "unmount" events.
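
If you want to follow the overprovisioning suggestion above, the arithmetic is simple; a hedged sketch where the 1 TB drive size and 10% headroom are illustrative assumptions, not figures from the video or the comment:

```python
# Illustrative overprovisioning math: leave part of the SSD unpartitioned so
# the controller always has known-free cells for wear levelling and garbage
# collection. Drive size and percentage are assumptions for the example.

drive_gb = 1000          # hypothetical 1 TB SATA SSD
overprovision_pct = 10   # example headroom, pick to taste

reserved_gb = drive_gb * overprovision_pct / 100
usable_gb = drive_gb - reserved_gb
print(f"Partition only {usable_gb:.0f} GB and leave {reserved_gb:.0f} GB unallocated")
```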