How Much Memory Does ZFS Need and Does It Have To Be ECC?

  • Published 25 Nov 2024

COMMENTS • 126

  • @Jimmy_Jones
    @Jimmy_Jones 1 year ago +38

    This will be a common video for all newbies to look up.

  • @marshalleq
    @marshalleq 1 year ago +13

    Finally, good advice without fearmongering. There is so much fearmongering around ZFS for some reason.

  • @edwardallenthree
    @edwardallenthree 1 year ago +6

    Thanks for the comment about the Linux 50% rule with ZFS. zfs_arc_max is a critical setting to adjust.
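
    In case it helps anyone searching: a minimal sketch of that tuning on Linux OpenZFS (the value is in bytes; 17179869184 is 16GiB here, pick what fits your box):

        # Runtime change, reverts on reboot:
        echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

        # Persistent change via module options:
        echo "options zfs zfs_arc_max=17179869184" >> /etc/modprobe.d/zfs.conf

    On TrueNAS itself, the same parameter is usually set through the UI tunables or an init script rather than by editing files directly.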

  • @bdhaliwal24
    @bdhaliwal24 1 year ago +6

    Easily the most informative video/content I've seen yet on TrueNAS. Thanks for sharing this!

  • @healthy5659
    @healthy5659 1 year ago +36

    Nicely explained; however, I am still not clear: if ECC is not strictly required and data integrity is still there without it, what precisely is the benefit of ECC? Or should I ask: in what situations would a non-ECC system fail where an ECC system would not?
    Thanks for the video, please keep uploading more great content!

    • @Prophes0r
      @Prophes0r 1 year ago +42

      Everything is just layers.
      ZFS can provide some reliability. ECC provides reliability at a different step in the chain.
      Example: ZFS loads data into memory to perform a checksum. A bit is flipped in memory. The checksum is calculated. The checksum no longer matches.
      So it tries again. Now the checksum matches. In the end it decides the data was fine and moves on.
      ECC would have fixed the single bit flip, and ZFS wouldn't have had to do the extra work to make sure.
      Or, put differently: ECC would have at least flagged the problem sooner, so the read could be redone before continuing.
      ZFS assumes the disks are not trustworthy, but in reality nothing is. There are extra checks to hopefully recover from problems, but eliminating errors before they can mess with a process is better.

    • @charleshughes7007
      @charleshughes7007 1 year ago +36

      ZFS helps detect and correct errors that are written to the media, but ECC prevents a potential source of errors before they can ever reach the media.
      It's nice for data integrity, but I think ECC's main virtue is that it lets you know very promptly when your memory is failing or otherwise having issues. If those issues are not too severe, it can mitigate them enough to keep your system functional while you resolve the root cause.
      A system without ECC which has memory corruption will crash randomly, corrupt files, and/or just generally act unpredictably. All of these are awful in a NAS.

    • @baumstamp5989
      @baumstamp5989 1 year ago +4

      If data is in RAM waiting to be written to disk and a bit flip occurs PRIOR to the write, then it is a problem. So I cannot agree with the statement that you do not need ECC if you want a proper ZFS NAS.

    • @mikerollin4073
      @mikerollin4073 9 months ago +7

      @@baumstamp5989 "ZFS without ECC RAM is safer than other filesystems with ECC RAM"
      - It took WAY too much reading to finally learn that all of the fearmongering about ECC is just a myth.

    • @privettoli
      @privettoli 7 days ago

      @@mikerollin4073 It really is not a myth. If you're okay with having corrupted files on your systems, or worse, not knowing when your files got corrupted, then using non-ECC is fine.

  • @bertnijhof5413
    @bertnijhof5413 1 year ago +4

    My ZFS memory usage is occasionally measured in MB, not GB. My use case is running VMs on an Ubuntu desktop, and I have only 1 pair of hands to keep the VMs occupied. My hardware is cheap: Ryzen 3 2200G; 16GB; 512GB NVMe SSD; 2TB HDD supported by a 128GB SATA SSD as cache. My 3 datapools are: the NVMe SSD (3400/2300MB/s) for the most used VMs; a 1TB partition at the beginning of the HDD with 100GB L2ARC and 5GB LOG for VMs; and a 1TB partition at the end of the HDD with 20GB L2ARC and 3GB LOG for my data. The L2ARC and LOG partitions together are again 128GB :) I capped the in-memory cache (ARC) at 3GB.
    My NVMe SSD datapool runs with primarycache=metadata, so I don't use the ARC for caching records. My NVMe SSD access does not gain very much from the ARC anyway; the boot time of e.g. Xubuntu improves from ~8 seconds to ~6.5 seconds. My metadata ARC size is 200MB, saving space to load another VM :)
    I have a backup server with FreeBSD 13.1 and OpenZFS; it runs on a 2003 Pentium 4 HT (3.0GHz) with 1.5GB of DDR, of which ~1GB is used :) So OpenZFS can run in 1GB :)
    The VMs on the HDD run from ARC and L2ARC, so basically they boot assisted by the L2ARC and afterwards run from ARC. After a couple of seconds it is like running the VMs from a RAM disk or a very fast NVMe SSD :) :) Here the VMs fully use the 3GB (lz4 compressed), say 5.8GB uncompressed, and my disk IO hit rates for the ARC are ~93%. With a 4GB ARC I can get that to ~98%.
    For all the measurements I use conky in the VMs and in the host. Conky also displays data from /proc/spl/kstat/zfs/arcstats and from the zfs commands.
    PERFORMANCE:
    The relatively small difference between using the NVMe SSD alone and NVMe SSD + ARC is probably caused by the 2nd slowest Ryzen CPU available. I expect most boot time is spent on CPU overhead and decompression, so reading from NVMe instead of memory does not add very much more delay. That would change in favor of the ARC with a faster CPU, e.g. a Ryzen 5 5600G.
    More memory would make tuning the ARC easy: just make it, say, 6GB. It would not make the system much faster, since the ARC hit rates for disk IO are already very high in my use case, but I could load more VMs at the same time.
    The 2TB HDD is new. In the past I used 2 smaller HDDs in RAID-0. They were older, slower HDDs, but the responsiveness felt better; I expect that while one HDD moved its head, the other could read. Those HDDs had 9 and 10 power-on years, so one of them died of old age, and I don't trust the remaining one anymore for serious work. Another advantage was that my private dataset was stored with copies=2, creating a kind of mirror for that data. Once it corrected an error in my data automatically :) I am considering buying a second HDD again.
    My Pentium backup server has one advantage: I reuse two 3.5" IDE HDDs (320+250GB) and two 2.5" SATA HDDs (320+320GB). It has one disadvantage: the throughput is limited to ~22MB/s due to a 95% load on one CPU thread. That good old PC gets overworked for about 1 hour/week.
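
    For anyone who wants to reproduce parts of this layout, the commands look roughly like the following (pool and device names are made up for illustration):

        # Keep only metadata in the in-memory ARC for the SSD pool:
        zfs set primarycache=metadata nvmepool

        # Attach an L2ARC (cache) partition and a LOG partition to an HDD pool:
        zpool add hddpool cache /dev/sda5
        zpool add hddpool log /dev/sda6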

  • @Tntdruid
    @Tntdruid 1 year ago +10

    Thanks for the easy-to-understand ZFS guide 👍

  • @paulhenderson1462
    @paulhenderson1462 9 months ago

    A nice calm discussion; thanks for a well-reasoned argument about memory use in ZFS. In my shop, we have a general rule of thumb of 128GB of memory per 100TB of zpool served. In other words, if I have a 200TB zpool, the server managing it will have 256GB of memory. We get very good performance this way, with most of the memory mapped to ZFS, which is what you want.

  • @Ecker00
    @Ecker00 1 year ago +1

    Took me days of research to come to these same conclusions a few months ago; thanks for setting the record straight!

  • @Okeur75
    @Okeur75 1 year ago +10

    Well, to be honest, I'm a bit disappointed by the video. I would have expected some benchmarks to show when TrueNAS becomes unstable/unusable under a certain amount of memory. Or you could have an ECC system and a non-ECC system, overclock the RAM on both of them until it's unstable, and see what it does to your data.
    This video does not show a lot, and I'm sure it did not require a lot of work.
    What happens if you run TrueNAS with 2GB of RAM? Or even 1GB?
    What happens if you run TrueNAS with 8GB (the bare recommended minimum) but with 100TB+ of storage and some load? How does it affect write and read performance?
    How is resilvering affected by the lack of memory?
    All these tests would be useful and interesting to watch, and would also offer a definitive answer to the question we see so many times on the forum: "how much memory do I need for my system?"

    • @lucky64444
      @lucky64444 1 year ago +2

      There are too many variables to make benchmarks like those worth anything. It completely depends on your workload and your equipment; everyone's performance will be fairly unique. Not having enough RAM is the difference between saturating your 10GbE network connection and barely reading at 200MB/s.

  • @milhousevh
    @milhousevh 1 year ago +3

    Timely video, as I've just upgraded an old FreeNAS 8 server to TrueNAS. The performance I'm seeing definitely aligns with this video.
    HP Gen 7 MicroServer N54L (2x 2.2GHz AMD Turion 64-bit cores), 16GB ECC RAM, LSI 9211-8i SAS controller (PCI-E 16x slot), Intel NIC (PCI-E 1x slot).
    TrueNAS Core 13.0-U5, booting off a 250GB Crucial MX500 SSD (internal SATA port).
    * RAID-Z1 pool: 4x 8TB IronWolf Pro 7200 RPM HDD (connected to 1st port on LSI controller)
    * RAID-Z1 pool: 4x 1TB Crucial MX500 SSD (connected to 2nd port on LSI controller, via a 2.5" 4-bay dock in the optical drive bay)
    * Mirrored pool (encrypted dataset): 2x 4TB IronWolf Pro 7200 RPM HDD (connected over eSATA to an external 2-bay enclosure).
    This is an ancient system, massively underpowered these days, but for home use (i.e. SMB/NFS file sharing - mostly media/movies/TV shows on HDD, plus the occasional git repo or document on SSD) it's still perfect, as it saturates the 1Gbps NIC for pretty much everything (reads AND writes, even from the 4x HDD pool, which has a sequential read rate of 640MB/s).
    At idle the system pulls about 55W, and it maxes out at about 105W during a 4x HDD scrub. It's nearly silent, but stick it in a closet (as mine is) and you absolutely won't hear it.
    Even the older and slower N36L can saturate the 1Gbps network with a similar controller/disk setup (I recently swapped out the N36L motherboard for the N54L as a final upgrade!).
    The only possible improvement now would be to upgrade the network side of things, as that's definitely become the limiting factor, but to be honest, for home use there's really no need...

  • @thisiswaytoocomplicated
    @thisiswaytoocomplicated 1 year ago +5

    I'm running ZFS on my desktop. It has 8 NVMe drives, all mirrored in pairs, which results in about 5TB of storage in total (not evenly sized - but I run it for reliability, not optimal speed, and 14/10 GB/s R/W is just plainly good enough for me).
    It doesn't really matter, since that desktop is a bit beyond most normal stuff (5975WX, 512GB ECC RAM, etc.) and so is of only anecdotal value. And yes, that is too much RAM even for ZFS - it only uses about 50-150GB out of the box for those 5TB of storage. So I will need to look into how to tune it to do better caching. ;-)
    My file server, on the other hand, is only an old trusty workhorse (an i7 from 2015), until recently running Linux md-raid with 16GB of non-ECC RAM. It is just a very normal home file server: normal (recycled) PC hardware, running about 8 years 24/7/365 without issue. Only the PSU has needed replacement once so far.
    It was always running RAID 6 with 8 drives; the last incarnation was 8x 9TB. Of course, after a few years that again became too small.
    So a few days ago I replaced the 9TB drives with 18TB drives, and this time I also switched from md-raid to ZFS (RAID-Z2).
    What can I say? It just works, at least as well as before. Just a bit faster, since the drives are a bit faster than before. The hardware is old but not super slow, and the memory is not much, but with a 10GbE connection it is still good enough for me.
    md-raid certainly stood the test of time in my home, so I can still fully recommend it. With ext4 it simply is very robust.
    But now running ZFS of course has its added value. And when the hardware finally dies, I will switch this to ECC RAM, too. Of course.

  • @ashuggtube
    @ashuggtube 9 months ago +1

    Great work, Tom. Good onya. Just watching this now because it popped up again in my YT timeline. 😊

  • @henderstech
    @henderstech 1 year ago +5

    I appreciate your videos so much. Thank you for your hard work. You are my hero.

  • @drescherjm
    @drescherjm 1 year ago +2

    0:15 I have had ZFS at work and at home for around 8 years. I usually don't come even close to 1GB per TB on any system; it's usually closer to 1/3 GB of memory per TB. The main reasons are budget and the number of DIMM slots; some of my servers are 10+ years old and only have 4 DIMM slots, but at the same time have 20 or more hard disks.

    • @Prophes0r
      @Prophes0r 1 year ago +1

      The only time that much RAM is ACTUALLY needed is for deduplication.
      You can get away with turning off ARC if you want. But deduplication needs [X] bytes of memory per [Y] bytes of storage to function.
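
      If you are sizing for dedup, you can estimate the table before enabling it; a sketch, assuming a pool named tank:

          # Simulate dedup on the existing data and print a DDT histogram (read-only):
          zdb -S tank

      Multiply the reported block count by a few hundred bytes per entry (roughly 320 is the commonly cited figure) to ballpark the RAM the dedup table would want.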

  • @be-kind00
    @be-kind00 1 year ago +1

    Another issue for us home lab folks is that if we want to build a low-power small NAS, there are very few mATX or ITX motherboards that have ECC support, and the ones that do are expensive. That's why we want to use a NAS that uses ZFS RAID.

  • @chromerims
    @chromerims 1 year ago +6

    Great vid 👍
    My brain read the title as: *"How much money does ZFS need?"*
    Kindest regards, friends.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +2

      "How much money does ZFS need" seems somewhat accurate as well.

  • @jms019
    @jms019 1 year ago

    I've got a slightly nasty 32GB stick that writes incredibly slowly - it would take hours to fill - but it works well as a cache device, though it has taken weeks to fill. Now that it's full (per ZFS-stats -L), it has improved things beyond what some smaller, faster SSD cache partitions did on their own. So if you have "spare" USB memory sticks and ports, there is no risk in adding them as cache devices. As I only run the machine for a few hours per week, a persistent cache is good for me.
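
    If anyone wants to try the same, the sketch below assumes a pool named tank and a stick showing up as da1 (yours will differ):

        # Add the USB stick as an L2ARC device:
        zpool add tank cache /dev/da1

        # L2ARC holds no unique data, so it is safe to pull back out:
        zpool remove tank da1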

  • @artlessknave
    @artlessknave 1 year ago

    Note that there are, or at least used to be, a few usually-very-rare conditions where ZFS can need loads and loads of RAM to recover a pool, and if it can't get it, it fails to import the pool.
    This is similar to how a dedup pool can reach a point where it cannot be loaded due to insufficient RAM.
    One of the reasons TrueNAS puts swap on every disk is so that if RAM becomes urgently insufficient, it can at least swap. It will be slow as hell, but it might have a chance of finishing.
    Of course, if you have backups, that mitigates much of the risk.

  • @nixxblikka
    @nixxblikka 1 year ago

    Thank you so much for shedding light on this, and I also love the new frequency of high-quality content!

  • @udirt
    @udirt 1 year ago

    Two things to keep in mind regarding commercial appliances & memory:
    Oracle SFS boxes: 512GB/node almost a decade ago. Tegile ZFS-based systems: 48GB/node at the start, then 220-ish GB (so 480GB per system *plus NVRAM*); Tegile in 2020: 980GB/system.
    There have been highly important patches to optimize L2ARC and dedup overhead that those guys missed, but if you want to see low latency on ZFS you can either just pretend and diss people who ask about shitty performance, or admit how high the requirements actually are...

  • @zparihar
    @zparihar 1 year ago +1

    Once again, great video! Question for you: you mentioned an S3 target. Are you using MinIO? And if so, how is the performance when it's running on top of ZFS?

  • @ofacesig
    @ofacesig 1 year ago +1

    Could you speak more to how you set up your S3 buckets?

  • @STS
    @STS 1 year ago +1

    Great video topic, and timely for me! I am in the process of deciding how much to expand my TrueNAS Core usage; I currently only use it for iSCSI (ESXi). I would like to move to editing videos off TrueNAS instead of copying all assets to my local machine, so I was curious about the RAM usage - currently running 4x 8GB DDR3 ECC registered. I could probably stand to search for some 16GB or 32GB DIMMs.

  • @lukevenn8921
    @lukevenn8921 3 months ago

    Incredible video, Tom, but how important is RAM speed for the typical home user, e.g. 40TB of Samba shares and some Docker containers/Plex/Tdarr?
    Many of us have to balance the costs of newer vs. ageing hardware.

  • @jonathanchevallier7046
    @jonathanchevallier7046 1 year ago

    Thank you for these explanations about ZFS.

  • @JoePosillico
    @JoePosillico 1 year ago +1

    Good timing for me on this video. I currently have a TrueNAS server I built using an old Intel i7 system with 32GB of RAM and 5 spinning-rust drives. I've been running it for a year, and it runs well for backups. I've been thinking about building one specific to VM storage that is more performant, using 4x 2.5" SSDs instead of HDDs. Is 128GB of RAM just overkill for 15 VMs? Based on this video, maybe 64GB would be good enough? If there are some go-to guides on this, please let me know; otherwise I may just ask this question on your forums.

  • @charleshughes7007
    @charleshughes7007 1 year ago +2

    I'm running TrueNAS SCALE on a Ryzen 2600 + X570 Taichi + 32GB ECC system with a 6x 16TB RAID-Z2 and a 2x 4TB mirror, and it's been doing great. I'm sure it would work with less memory, but this gives me some room to play around with local VM hosting too.

  • @DiStickStoffMono0xid
    @DiStickStoffMono0xid 1 year ago

    Thank you for mentioning video productions using TrueNAS/ZFS, as this helps me make a decision on a future server upgrade for video/VFX production. The machine is probably going to be NVMe-based with 100G on the server side and 10x 10G connections to the clients, but it really helps to know that there already are productions running on TrueNAS or ZFS, because there is not a lot of information to be found on this special use case.
    By the way, with the above-mentioned setup, would you recommend setting the RAM to cache only metadata and having all file transfers go directly to disk?

  • @dinkidink5912
    @dinkidink5912 1 year ago

    Just checked my home NAS. It's just a basic media/file server with no need for cache, a touch over 6TB of capacity; current used RAM according to htop is 500MB.

  • @jsclayton
    @jsclayton 1 year ago +1

    Have you had any stability issues on SCALE after tweaking that memory usage switch to allow more than 50%? It seems someone from iXsystems very persuasively advised against going higher on Linux.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +2

      Only if you are using other things that need the memory, such as virtualization.

  • @eugenevdm
    @eugenevdm 1 year ago

    Hi there,
    Thanks so much for the video! It's an eye-opener, as I thought there would be a "maximum" when running VMs, but clearly not. Unrelated question: which of your videos can I watch to determine if ZFS over iSCSI would be a good way to connect a Proxmox server to a NAS? I'm stuck trying to figure out this architecture. I understand building the Proxmox server and building the NAS, but I don't know what file system to use, or what kind of switches for maximum performance.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      I prefer NFS over iSCSI as storage for VMs, and I don't use Proxmox; I use XCP-ng. I have a video here ua-cam.com/video/xTo1F3LUhbE/v-deo.html on using storage for VMs.

  • @MattiaMigliorati
    @MattiaMigliorati 1 year ago

    Thank you for this useful video!

  • @KarlMeyer
    @KarlMeyer 1 year ago +1

    I wonder how this will apply to Unraid when it gets its ZFS support update soon.

  • @Speccy48k
    @Speccy48k 1 year ago

    Thanks for this video. I have plenty of ECC memory: would it be beneficial to use an L2ARC, or is it not required if enough RAM is available for ZFS?
    My understanding is that L2ARC is the equivalent of swap, so it may impact performance.
    Also, what is the interest in using a ZIL/SLOG device like an Optane drive?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      RAM is better than L2ARC, and Optane would be good for a ZIL/SLOG. I have more details on ZIL/SLOG here: ua-cam.com/video/M4DLChRXJog/v-deo.html
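
      For reference, attaching a SLOG looks something like this (hypothetical device names; note that only synchronous writes benefit):

          # Single SLOG device on a pool named tank:
          zpool add tank log /dev/nvme0n1

          # Or mirrored, so a device failure can't lose in-flight sync writes:
          # zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1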

  • @sharedknowledge6640
    @sharedknowledge6640 1 year ago +1

    Nice video, and thanks for helping debunk the myths. The level of performance you can get from even a low-end TrueNAS server completely shames even a high-end Unraid server, because of ZFS's intelligent use of RAM. It's apples and oranges, with TrueNAS being a Ferrari and Unraid being an ox cart, while Synology and QNAP are somewhere in between. Further, even without ECC memory, TrueNAS is way less likely to have data integrity issues. Unraid loves to kick perfectly good drives out of the array, kicking off a series of unwelcome, time-consuming tasks that just put your data further at needless risk.

    • @dfgdfg_
      @dfgdfg_ 1 year ago

      you alright hun?

  • @UntouchedWagons
    @UntouchedWagons 1 year ago +2

    I've read that the 1GB of RAM for every 1TB of storage is for deduplication, but I have no idea. I have 32GB of RAM in my SCALE box; how do I tell ZFS to use more than half of it?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +1

      Set the ZFS ARC size on TrueNAS SCALE:
      www.truenas.com/community/threads/zfs-tune-zfs_arc_min-zfs_arc_max.99361/

    • @Prophes0r
      @Prophes0r 1 year ago

      @@LAWRENCESYSTEMS Don't forget to tune zfs_arc_sys_free as well. It is often left out, but it is a good safety setting that can let you push WAY closer to the limit with your ARC max without having to worry about emergency evictions from ARC if something else on the system suddenly wants more memory. zfs_arc_sys_free will start calmly evicting ARC as you approach the limit, instead of waiting until the system is about to OOM.
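
      A sketch of how the two parameters pair up (values in bytes; the numbers are just examples for a 32GB box):

          # Let ARC grow to 28GiB...
          echo 30064771072 > /sys/module/zfs/parameters/zfs_arc_max
          # ...but start calmly evicting whenever free system memory drops below 2GiB:
          echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_sys_free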

  • @RocketLR
    @RocketLR 1 year ago

    I've been running the jankiest setup for 3 years now:
    one old gaming computer converted to an ESXi host. I'm talking DDR3 and an i7-4770K...
    Then I'm running a TrueNAS VM where I've hooked up 3 separate disks as datastores, each holding 1 single VM disk.
    Then that TrueNAS VM basically RAIDs those 3 disks together.

  • @deathcometh61
    @deathcometh61 1 year ago +1

    Short answer: all of it. If it can only hold 32GB, get 2TB ECC RAM sticks and force it to your will.

  • @seeingblind2
    @seeingblind2 1 year ago +3

    How much memory do you need?
    *YES*

  • @loucipher7782
    @loucipher7782 1 year ago +1

    Can't you just use a 2TB NVMe drive for the ZFS cache?
    They're so much cheaper compared to that bulk of RAM, and I don't mind if it's slightly slower, as long as it's faster than HDDs.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      That has a more complicated answer: ua-cam.com/video/M4DLChRXJog/v-deo.html

  • @frankunderwood8357
    @frankunderwood8357 1 month ago

    Building a Xeon Ivy Bridge NAS, since the RAM is cheap and the CPU is more than good enough. Hopefully 1TB will be sufficient.

  • @blablabla8297
    @blablabla8297 1 year ago +1

    Does ZFS benefit from DDR5, or is it better to just buy a larger capacity of DDR4 for the same price?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +2

      Faster memory is better, but it will come down to what your next bottleneck is, such as NIC interfaces or workload type.

    • @blablabla8297
      @blablabla8297 1 year ago

      @@LAWRENCESYSTEMS Thanks. Yeah, I have a gigabit interface with spinning disks on my home NAS, so I thought I may as well go with more RAM, as the bottlenecks would probably come from other places anyway.

  • @DrEverythingBAlright
    @DrEverythingBAlright 1 year ago +1

    Very interesting.

  • @MisterPhysics511
    @MisterPhysics511 1 year ago

    Just making sure I got this right: your purple NAS is only used as a secondary backup server and barely uses 3GB of RAM for 4x 8TB drives? Is it able to saturate a regular gigabit connection on read/write? Thanks

  • @5654Martin
    @5654Martin 1 year ago

    Is there an easy way to back up my TrueNAS storage to a third-party location with SFTP etc., in an encrypted and compressed manner?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +2

      Yes, SFTP can be set up under Cloud Credentials as a backup option.

  • @Mr.Leeroy
    @Mr.Leeroy 1 year ago +1

    A good way to get an idea of how much RAM your pool actually wants is to check during a scrub.
    It will allocate a lot more in the process and free a lot upon completion.
    P.S. Looking a lot better with that monitoring dashboard in the background. At least it makes sense.
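
    Something like this works for watching it on Linux (pool name hypothetical; arc_summary ships with OpenZFS):

        zpool scrub tank
        # Watch ARC size and hit rates while the scrub runs:
        watch -n 5 arc_summary
        # Or read the raw counters directly:
        grep -E '^(size|hits|misses)' /proc/spl/kstat/zfs/arcstats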

  • @gjkrisa
    @gjkrisa 1 year ago

    With ZFS, is there a way to switch the OS, or, if you broke your OS and have to do a clean install, a way to not lose the data?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +1

      ZFS pools can be imported into another system that is running the same or a newer version of ZFS.
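
      The moving parts, roughly (assuming a pool named tank):

          # On the old install, if it still boots:
          zpool export tank

          # On the new install:
          zpool import        # lists pools found on attached disks
          zpool import tank   # add -f if the pool wasn't exported cleanly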

  • @CoryAlbrecht
    @CoryAlbrecht 1 year ago

    Does TrueNAS Scale mean TrueNAS Core on FreeBSD is going to be abandoned?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      Not at this time; they recently released an update to Core.

  • @Prophes0r
    @Prophes0r 1 year ago +1

    This is something that needs to be spread, because I STILL hear it.
    The only thing ZFS NEEDS RAM for is deduplication.
    Everything else is just nice-to-have for the ARC. That's it.
    If you need to, you can even disable the ARC and have ZFS use ZERO extra memory.
    I'm not sure what your use case would be, but it is doable.
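
    For the curious, "disabling ARC" is a per-dataset property; a sketch with a hypothetical pool/dataset:

        # Stop caching both data and metadata in ARC for one dataset:
        zfs set primarycache=none tank/dataset

        # Or cache only metadata, a common middle ground:
        zfs set primarycache=metadata tank/dataset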

  • @cristianr9168
    @cristianr9168 1 year ago +1

    Is 128GB overkill? I want to turn my 5950X and 128GB into a NAS.

    • @mistercohaagen
      @mistercohaagen 1 year ago

      What is the purpose of the NAS? I ran that processor with 64GB of ECC 3200MHz dual-rank DIMMs as a NAS for a while. I found it to be overkill, even with a bunch of VMs and IOMMU passthrough of a GPU and capture card for an OBS system. 10G Ethernet is easier to saturate than you think. The chipset matters too; X570 is probably best for server use with a desktop AM4 chip. I now run a Ryzen 3 3100 and 32GB, and it still saturates the 10G all day, even with a quad NVMe card and 8x SATA SSDs.

    • @Prophes0r
      @Prophes0r 1 year ago

      Users? Type of data being stored? How much storage?
      It all matters.
      Memory for the ARC is just a bonus for ZFS unless you are doing deduplication.
      Give it however much you want, but there will be a point where it doesn't actually do anything for you.

    • @LackofFaithify
      @LackofFaithify 1 year ago

      Not overkill, depending on what you want to do. If you want to use ECC, go check out the ASRock Rack motherboards for AM4; they are all server-grade and such, with ECC support and 10G connections. Just be mindful of the limits and weirdness they can have regarding PCIe lane usage.

  • @RocketLR
    @RocketLR 1 year ago +1

    Lawrence what? Lawrence of Arabia? You sound like royalty to me! Are you royalty?!
    - FMJ Drill Sergeant "Earl something something"
    I just had to get that out of MY system..

  • @lordgarth1
    @lordgarth1 1 year ago

    I have a TB of ECC memory on my TrueNAS server; is that enough?

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      Depends on your workload; you might want to consider more. 😜

  • @cmoullasnet
    @cmoullasnet 1 year ago

    You look good with glasses 😎

  • @shephusted2714
    @shephusted2714 1 year ago

    Why stop at 128GB? 512GB does better, and large memory is getting cheaper. It is the best upgrade, but large arrays of SSDs can easily saturate all but the fastest network links. More RAM, NVMe, and fast network links are the best priorities to focus on for infrastructure upgrades and realizing optimal performance; they are all important.

    • @Prophes0r
      @Prophes0r 1 year ago +3

      The type of data being stored matters too. If blocks aren't accessed frequently, no amount of RAM for more ARC is going to matter.
      The point is that there is a persistent myth that ZFS uses a ton of memory, and it is clearly false.
      Only deduplication NEEDS memory. Everything else is just a luxury to speed up bursty workloads or blocks that are constantly accessed.

  • @luckyz0r
    @luckyz0r 1 year ago

    Love your videos; they are amazing.
    But..... where the f*** do you buy your t-shirts? :D I really love them.
    Keep up the good work ;)

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago

      I have links in the video descriptions that take you to the shirt store: lawrence.video/swag/

  • @MikelManitius
    @MikelManitius 8 months ago

    LOL, love the t-shirt.

  • @Mr_Meowingtons
    @Mr_Meowingtons 1 year ago

    All of it..

  • @tazerpie
    @tazerpie 1 year ago

    Why wouldn't you use a caching NVMe SSD?

  • @johnroz
    @johnroz 1 year ago

    1GB per TB, right?

  • @thecman26
    @thecman26 2 months ago

    This newb went with 16GB of RAM in my NAS... yeah, I know NOW! It's using 15GB of that 16! Soon to upgrade to 64GB! If only Ryzen didn't have problems with all 4 slots populated... I'd go with 128GB!

  • @TechySpeaking
    @TechySpeaking 1 year ago +1

    First

  • @thegorn
    @thegorn 1 year ago

    I have 512GB of ECC RAM; is that enough?

  • @WillFuI
    @WillFuI 7 months ago

    Me, who got a great deal on 192GB of RAM.

  • @Itay1787
    @Itay1787 1 year ago +8

    ZFS needs ECC RAM to avoid pool and file corruption. I know this from experience…

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +17

      Nope, it does not NEED it, but it's nice to have.

    • @NickyNiclas
      @NickyNiclas 1 year ago +6

      ECC is more important for system stability; if you have mission-critical services running, it can help avoid crashes. Still, memory corruption is pretty rare anyway.

    • @drescherjm
      @drescherjm 1 year ago

      Although I do now have ECC on every ZFS system (8 to 10 of them) I have between home and work, I ran ZFS systems for several years in production at work without any corruption. The key is to make sure your system is stable before using it. For me, that meant 0 errors on Memtest86 over 72+ hours of testing, and no overclocking of CPU or RAM - only JEDEC standard speeds and timings.

    • @sopota6469
      @sopota6469 1 year ago +2

      @@LAWRENCESYSTEMS I don't think he said that meaning it's a mandatory requirement, but rather that it's something that can avoid corruption, so you'd better be sure to have it.
      That said, I don't have any confidence in a system doing very complex tasks like deduplication, full-volume snapshots, caching, iSCSI, etc. on volumes of 40TB+ without ECC memory. There are very good reasons servers use ECC RAM. Saving a few bucks on a multi-thousand-dollar project isn't worth it.

    • @LackofFaithify
      @LackofFaithify 1 year ago +2

      @@sopota6469 You really think the type of person who isn't interested in ECC RAM is also going to be the type who sets up dedupe and all the other bells and whistles on a 40TB system? Or is it just an average home user, and you just have to show off how smart you are?

  • @davebing11
    @davebing11 1 year ago +3

    If you DON'T use ECC memory on a storage server, you are a fool.

    • @LackofFaithify
      @LackofFaithify 1 year ago +6

      If you don't use ECC memory on a storage server, you were probably just an average person who got called a fool on a TrueNAS forum and went and bought a Synology.

    • @f.d.castel2821
      @f.d.castel2821 1 year ago +3

      Yeah. My rubber duck died last year because I didn't use ECC RAM. You have been warned.

    • @LAWRENCESYSTEMS
      @LAWRENCESYSTEMS 1 year ago +4

      Or you are someone without the budget for it.

    • @be-kind00
      @be-kind00 1 year ago

      Disagree. There are thousands of people using Synology, QNAP, and many other appliances without ECC. How many incidents have we heard of where the root cause of a ZFS system failure was the lack of ECC RAM? None in my 40 years in IT, and none in the last year of reading hundreds of posts on forums or NAS-vendor-specific user groups.

    • @be-kind00
      @be-kind00 1 year ago

      Fool? A bit strong. Lots of people make educated decisions and are less risk-averse than others.

  • @raghavmahajan3341
    @raghavmahajan3341 1 year ago

    Is it just me, or do the color scheme and the thumbnail look like LTT?

  • @msofronidis
    @msofronidis 1 year ago

    Is the ZFS cache the memory swap file?
