RAID: Obsolete? New Tech BTRFS/ZFS and "traditional" RAID
- Published 2 Oct 2024
- This video is the first in the storage series for managing storage in the enterprise.
In this first video we talk about RAID and the current state of the art for the "next generation" of RAID-type devices. What RAID really means in terms of data integrity is shifting, and this video starts the conversation.
Join us in the forums for non-swill discussion:
forum.teksyndi...
Wendell's explanation of RAID is far better than my professors in college.
Mine too
Ha, Wendell and the internet are my teachers :P Currently using ZFS + rsync to mhddfs backup drives. Absolute beast of a setup, so little to worry about.
While in college I studied these topics in way more depth than a YouTube audience could tolerate. However, Wendell's way of defining and explaining concepts is MUCH better than any teacher could give me.
Difference being, Wendell does this for a living; your professors are paid to read a textbook and spew the information out to the students.
***** Most of the university professors I had worked in the industry and in national labs during the off season so they knew what they were talking about.
Too complicated, I'm just gonna store all my 1's on one drive, and 0's on another.
Also, subscribed.
I laughed much. Thank you, sir.
nihilusJ90, hard drives can't hold that many 1's, spindles get too heavy and break!
You have to throw some zeroes in.
As funny as this sounds, it is actually true for layer 1 of Ethernet media encoding, since 000000000000000 is DC voltage. Cat5/Cat5e/Cat6 need AC at about 1 MHz or higher to combat capacitance over long cables.
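For the curious: 100BASE-TX addresses exactly this with 4B/5B block coding before the signal hits the wire, so the line never carries long runs of zeros even when the data is all zeros. A rough sketch of the idea (data symbols only; control symbols omitted):

```python
# 4B/5B code table used by FDDI and 100BASE-TX: each 4-bit nibble maps
# to a 5-bit symbol chosen so the line never sees long runs of zeros.
FIVE_B = {
    0x0: "11110", 0x1: "01001", 0x2: "10100", 0x3: "10101",
    0x4: "01010", 0x5: "01011", 0x6: "01110", 0x7: "01111",
    0x8: "10010", 0x9: "10011", 0xA: "10110", 0xB: "10111",
    0xC: "11010", 0xD: "11011", 0xE: "11100", 0xF: "11101",
}

def encode(data: bytes) -> str:
    """Encode bytes as a 4B/5B bit string (high nibble first)."""
    return "".join(FIVE_B[b >> 4] + FIVE_B[b & 0xF] for b in data)

def longest_zero_run(bits: str) -> int:
    """Length of the longest run of consecutive '0' bits."""
    return max(len(run) for run in bits.split("1"))

# A kilobyte of zero bytes would be 8192 consecutive zero bits raw,
# but never more than 3 consecutive zeros after 4B/5B encoding.
raw = bytes(1024)
assert longest_zero_run(encode(raw)) <= 3
```

(The table is the standard one; the run-length property is what keeps the clock recoverable and the signal away from DC.)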
You would need a third drive to tell you when to switch between drives
Better to partition a single drive, then mirror the 2 partitions. Yes, I actually saw someone write this in a review on Amazon....
I wish this video had been around a few years ago when I first started in IT. It basically explains in around 30 mins what took me hours of reading and re-reading to understand about RAID and how it works. Thanks Wendell!
Like I'm going to trust a redshirt. If he really knew what he was talking about he'd be a recurring character.
More of these please D:
Give me all of these, I feel like I'm learning things!
Please make more of these types of podcasty videos I can listen to while driving. I learned a lot more from this than I did any of your other videos.
Man, Wendell is getting more brave with showing his face lately.
And he looks so much better with some face hair.
In ECC we trust.
Until the ECC flips a bit.
I did learn a lot. Now it's very clear to me why everyone and their mother is using ZFS and/or BTRFS. Please keep it up; it's a very good series.
You sir are spreading FUD.
It's so nice to watch a video and hear someone say RAID isn't a backup from the get-go and then go on to recommend a minimum of 3 copies of data. SO many people call RAID a backup, it's insane. Nice video. Keep up the good work.
Can't trust these hard drives.
Best to have a printed hard copy of all the bits.
Hats off to the "black swan" reference.
Wendell, not sure you read these comments but: when setting up storage, like a freenas server, I always worry about the storage server itself being a single point of failure. Would be AWESOME if you could do a simple two-node storage cluster video, could be using anything, like freenas, Openfiler, OpenMediaVault, Rockstore, zfsguru or whatever else you want ;D
Two node clusters tend to have weird split-brain failure modes. Normally you'd jump from 1 node to 5 nodes so you can tolerate the failure of 2 machines.
@@NavinF thanks yes, that explains why a lot of solutions (like Ceph) prefer odd-numbers of nodes
Wow, I wish I'd found this channel sooner. You're an awesome tutor, and this is exactly what I was looking for; keep up the good work.
Well, you learn something every day... no, not the RAID/ZFS stuff that I already knew, but I did learn this: en.wikipedia.org/wiki/Black_swan_theory
Love the detail and the explanations. Most folks don't seem to get that RAID is not a substitute for backups. Thanks for pointing that out. In the future would love to hear about setting up FreeNAS and other solutions. Would really love to hear about setting up second network just for backups or NAS to run on and securing it so it is not available to the internet. Thanks.
SyberPrepper I worked in a data center that had a second network for backups. It was trivially easy. You just need a second network port on each computer and a router. You can use file based name resolution (/etc/hosts or LMHOSTS) or you can set up a little DNS server on the LAN or make your backup server (the one that receives the files) also a DNS server. File based name resolution is the easiest to set up but a DNS server is the easiest to maintain. Keep the backup LAN on a separate subnet from the business network.
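As a concrete illustration of the file-based option mentioned above, a hosts-file fragment on each machine might look like this (all addresses and hostnames here are made up for the example; use your own subnet):

```
# /etc/hosts entries for the backup LAN only.
# The business network stays on its own, separate subnet.
10.10.10.1   backupsrv-bk    # backup server's second NIC
10.10.10.11  web01-bk        # each client's second NIC
10.10.10.12  db01-bk
```

Backup jobs then target the `-bk` names, so traffic is guaranteed to use the second NIC and never touch the business network.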
EdWittenable Excellent tips. I really appreciate it and look forward to working on it. Thanks!
The problem with "RAID is not a backup" is that it's already expensive as hell to have enough storage for an array of disks. So having to duplicate that a couple of times over just isn't reasonable for say someone building a home server in the 50+ TB range.
+TekEnterprise - I'm not trying to troll you, your knowledge is pretty advanced, but I'd like to point a few things out.
First - the PERC6 is an LSI controller, so no need to go find one. Your comment about RAID5 not being X number of drives plus a parity drive is true, but that is because that is called RAID3. Personal opinion here: software or hardware RAID5 is crap and will ALWAYS bite you in the ass eventually. RAID6 was invented specifically because RAID5 is so bad. I don't trust either of them and I look suspiciously at anyone who does. Stick with hardware-based RAID1 in a professional environment, though software RAID1 at home is fine and probably a better idea, because RAID on consumer motherboards is crap.
Oh - a SAN is not just a computer with storage; a SAN is a NETWORK of storage devices. I think it's funny when people buy a single storage array and think they have a SAN. Personally, everything I do is either RAID1 or RAID10, and when I say RAID10 I mean RAID1+0: all disks are mirrored first and then a RAID0 is striped across them. And because read and write speeds degrade as you get further out on the platter, you can stripe file systems across the RAID10 to lock in performance bands - typically in thirds. Also make sure you don't get stuck with RAID 0/1 - some people call this RAID10 but it is not. It's two RAID0s mirrored and a bitch to rebuild.
I know you stressed backups and they have their uses, but they aren't helpful in an enterprise environment - they just take too long and are often corrupt. You are better off with snapshots, remote mirroring or transactional replication. Good luck in the future.
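For anyone wondering how single-parity RAID actually survives a lost drive, the arithmetic is just XOR across the stripe. A minimal sketch of the idea (not any particular controller's implementation):

```python
import functools
import os

def parity(blocks):
    """XOR all blocks together byte-by-byte to form the parity block."""
    return bytes(functools.reduce(lambda a, b: a ^ b, chunk)
                 for chunk in zip(*blocks))

# Three data "drives" plus one parity block (distributed across the
# drives in real RAID5, but the math is the same).
data = [os.urandom(16) for _ in range(3)]
p = parity(data)

# Simulate losing drive 1: XOR the survivors with parity to rebuild it.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

This is also why RAID5 can only lose one drive: with two members gone, there is no longer enough information in the remaining XOR sum to solve for both.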
Holy shit I love those kind of answers. Both of you, do not stop talking, like ever.
What a brave man wearing a "red" Star Trek shirt! :)
I feel like I'm in school, only I like this class.
Wendell, I'm thankful that someone like you exists online and gives this free information in such a simple format. I would replace some of my profs with you buddy. Keep it up!
Volume too low
Great information on RAID arrays and the like. Information anyone handling critical (or mass amounts) of data should know. #ZFS #BTRFS
Happy I watched this; all of the things you mentioned as 'things you need', such as battery backups etc., are already implemented on my servers. Might be worth noting that if you're running ZFS from a USB stick, as many are, copies of that USB are important in case it fails.
The SAS interface is also full duplex; SATA is half duplex. SAS operates at a higher voltage (playing into the better ECC you mentioned).
Beautifully done. Thanks!
Thanks. I've used ZFS on NAS4Free for 3 years on an old workstation (HP 8300) and it works great. I'm very satisfied with ZFS; the project was born without a specific strategy from our company. I only needed network storage to put file backups and disk images on.
Started with 5 disks of 500 GB each in RAID-Z, and at the start of 2019 grew it with 5 disks of 1 TB each.
I'm very proud of this, and these days that NAS is fundamental to our daily job.
My 50c.. Nothing more
Fabbri
Great video. I've been having arguments with other tech guys for years about why traditional RAID is bad!
I use btrfs on top of a write-hole-affected array to handle these situations. I've had the write hole happen on my MD array many, many times. BTRFS just takes its sweet time and sorts itself out, and I've never lost data in this situation. With a weekly scrub on all my arrays I eliminate bitrot too.
What about raid 0 with two usb ssds? Just for having faster and bigger external drive? (backups made elsewhere)?
Wendell, your knowledge is impressive and the way you simplify complex subjects is great. Really appreciate the knowledge transfer!
Really well explained thank you Wendell!
Linux MD now has a journal/log feature that addresses the write hole problem. The journal has a (possibly noticeable?) performance cost for writes, but you can put it on an SSD.
I'm sitting here staring at my server with a raid card (5 drive raid 5) and feel like screaming in fear lol
And rightly so: RAID5 is dead (if used with drives larger than 500 GB).
Ha oh yeah. I've got 5 3TB drives
I had a two-drive failure at work on a RAID 5. I had a boss who believed that RAIDs were somehow magical unicorns that could never fail. BAM... then they attempted to rebuild the RAID not knowing that two drives had failed (they assumed one). Complete lobotomy and a LOT of I-told-you-sos. Someone high up on the food chain thought that RAID arrays were magical unicorns... cuz they are RAIDs. We now have better policies in place: backup to LTO6 tape (in triplicate) as well as a second, much larger array in RAID 6 with four hot spares. I consider that "reasonable", but it was like pulling teeth to get all of the purchases authorized even AFTER a fatal crash with data loss.
If you have less than 12TB total you should be fine.... Mostly.
That's what I have lol 5x3TB = 15TB raw, 12TB usable. I'm rebuilding the server though using FreeNAS with 8 drives
Thanks for clearing up this RAID info mess. Learned a lot! :)
Great content, thank you!
Before watching this video: "Man I feel freakin' good with my CentOS RAID-6 setup"
After watching this video: hello darkness, old friend...
2019: BTRFS is *still* 4-5 years behind ZFS, and will continue to be for the foreseeable future :b
In 2020 don't use BTRFS for anything other than snapshots...
@@sbdnsngdsnsns31312 because only idiots use BTRFS for RAID
This was amazing. Informative.
Really loved this long video. Very well done and informative, thank you Wendell.
I just fucking understand nothing!
L1E - Thank you! I started as a fixer of machines, an intense hobby which led to 24 yrs of IT issues. RAID in '99 was better than a backup; backups at that time I found crashed, and journaling NTFS was better than FAT32. Drive stability has gotten better in the last 15 yrs, and I have moved to Linux to explore their file systems. In the end: 1 is 0, 2 is 1, 3 is a backup. And it is nothing if you never test it - rebuild a clone from the backup. And 4 is learning new tech to stay current. Cheers, sir!
29:40 - 29:45 :) :)
It's been 5 years, any updated video coming?
if your hardware raid controller dies, your data goes with it.
+muffemod Yeah, RAID is not backup. You need data elsewhere to protect you from device/RAID-controller failure.
If my hardware RAID controller dies, my data is just fine.
I just replace it with the spare RAID controller I keep on hand. Just as I do with spare switches, spare power supplies, spare boot drives, spare towers…
gskibum Not everyone (every company/institution) has either the “vision” or the dollars to go full redundancy. But, yes, that would be it in a perfect world.
gskibum How does that save the array if the hardware goes bad? You have the partition table for the array filesystem on the hardware controller, don't you?
I just realized that Wendell is Al and Logan is Tim #HomeImprovement
172 people have no clue what this video is about
Thanks, very informative video. It's got me thinking I might have to buy a UPS. :)
I learned a lot from Officer Riker with a sweet tooth :thumbsup:
Great... and I just set up a RAID 5 on my home server >_
I was about to say that you should have gone more in depth into how each topology works, but after reading the comments I'd say that you've struck a good balance between simplicity of understanding and technical detail. Any more and quite a chunk of the audience would simply have been too confused to follow along.
Good work, Wendell. Explaining these things in that manner is no easy task.
the face I..... cant.... too....... no..... words......
frecza1999 DO NOT LOOK DIRECTLY AT THE FACE! MERE MORTALS CANNOT WITHSTAND THE BEAUTY OF IT.
Instead of ZFS, you should mention LVM, which is widely available and baked natively into an Ubuntu server installation. I recently played with it and I have to admit that, so far so good, it's working great. I had 2 drives, one 2 TB and one 5 TB, and it let me do a RAID 1 using the 2 drives while I am still able to use the other 3 TB of storage space! With traditional RAID alone, the other 3 TB would be lost!
ZFS is in fact not just a filesystem. It is a filesystem plus LVM plus other storage management. Doing it the ZFS way is easier because you are dealing with just one tool set.
I learned so much today! Thanks!
Dude. You realize there is technically no such thing as "hardware RAID." There is raid controlled by a dedicated processor or one controlled through the OS. And as OS level file systems continue to become more sophisticated? You see the problem with your argument right?
Journalling file systems... it's as if you don't know what they do.
Drives that "lie"?! Are you out of your mind?! Can you give an example of a drive that lies? A machine, yet another microprocessor driven, self-contained computer device? Written to deceive the user in some way?
erroneus He was talking about how you can't trust SMART when you have a RAID controller
I have a hardware RAID controller for one test box, with enterprise drives to match but I also have eight or so different consumer drives that are sitting unused. ZFS sounds like a good candidate for those.
I've been working with computers for 13 years. I've yet to experience a hard drive failure or have a client report a hard drive failure to me. Luck?
When I imagine a failure happening I can only think of losing "that" one special folder, and I can't help but scream in terror.
How do you recommend mounting a hard drive on the wall?
hahahahahahahahahahahahahahahahahahahahahahaha
Daniel Reimer thermal paste ?
Make a file or directory and name it "the wall", then use the mount command and mount the drive to "the wall". :P
Howlingmad-wolf XD I will mount all my flash drives on "The Splonge!"
The Donut Deflector YOUR NOT A YES MAN ARE YOU!!!!!! Well now we're gettin' somewhere!
But I think I would prefer to mount my drives to "42".
I work as a DBA now and I've learned a lot from your video. It helps a lot to understand the SAN and NAS products.
Wow, this is a good overview. In my day job I'm a shared storage architect, so I know storage, and I was really impressed by the depth of this video. Oh, BTW, hardware RAID will not die any time soon because of big iron.
8:12. I do work in enterprise. And no, RAID is not dead, nowhere near it. While you can boot a host from most SANs (boot from SAN), no one uses it. So the hosts are typically booted from 15k SAS RAID5 attached to a controller (PERC6, H700 or similar); 90% of infrastructures I see on a day-to-day basis use that configuration. SANs are typically used for databases, Active Directory, mail servers and, more commonly now, VMware. The VM hosts are clustered for redundancy, but they boot from their own drives.
Secondly, SANs (EMC/Dell Compellent) don't do anything fancy with RAID. (Though I'm not sure what you meant by "more advanced than RAID"? Do you mean things like cloning/snapshots/replays?)
Both EMC's and Dell's SANs rely on the underlying RAID structure; they really don't do anything fancy, it's just traditional RAID. Though with the growing size of disks, RAID10 dual mirror and RAID6 dual parity are now more common. EMC has a sniffer which runs across the drives checking for errors (media errors/soft SCSI errors/sector errors); Compellent has a RAID scrub which does the same thing. Both storage arrays are actually software RAID controllers: the FLARE (EMC) and SCOS (Compellent) software does all the work. The difference, as you said, is the drives themselves. SAS and enterprise-class SATA drives are infinitely better at self-checking (keep your firmware up to date, kids). Fibre Channel drives are the best solution (but stupidly expensive).
So in conclusion:
1) RAID is NOT DEAD.
2) Buy enterprise-class disks if you really value your data (even for a home solution).
3) Back up your data to tape. (Currently the only long-term solution.)
4) SANs are not an all-in-one solution. (SANs can and do lose data as well, so have a backup.)
For some reason, I think I could be marooned on a desert island with this guy and not want to eventually kill him.
So-called "server grade" disks are no longer as reliable as they're supposed to be. The answer is simple: more backups. RAID is useless nowadays - that's true.
Wendell probably knows by now that you don't _need_ 8 GB of RAM to run ZFS on your system. >= 8 GB is what iXsystems support and require for FreeNAS, which is where all the misinformation came from. Add to that the fact that under 16 GB, ZFS will reserve all memory minus 1 GB for the OS, and under resource monitors the RAM will appear as used, not cached; however, when your OS runs out of RAM, ZFS will step back its caching a little to let the OS and its services/applications "breathe" a little. With 16+ GB of RAM, ZFS usually uses 50% of the memory. You can run ZFS with 2 GB of RAM and ZFS will use 1 GB and leave 1 GB for the OS. You can run ZFS on 1 GB of RAM and ZFS will basically not cache any data (obviously it will be slow). This is a philosophy very often seen in Linux and BSD: unused RAM is wasted RAM. And yes, you can run FreeNAS with less than 8 GB of RAM and it will work fine, just don't ask for support.
Additionally, you don't _need_ ECC to run ZFS, which is also misinformation. ECC is good for preventing corrupted data coming from the RAM; however, ZFS has been proven to keep data integrity intact even without ECC, and even with AC current passing through the RAM and motherboard while writing data. ECC is a must for any file system if data integrity is a requirement, not just ZFS.
Deduplication is an enterprise feature of ZFS that is RAM hungry. iXsystems recommends (but doesn't require, very important) that you have 1 GB of RAM for every TB of storage under normal conditions. For deduplication, as a rule of thumb (again, not needed, but really recommended), you need to calculate how much RAM you need and multiply it by 5 (again, rule of thumb), because it's such a RAM intensive feature.
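The rule of thumb above is easy to turn into a quick calculator. A sketch of it (the function name and the 8 GB floor are my own framing of the guideline, not an iXsystems formula):

```python
def freenas_ram_estimate(pool_tb: float, dedup: bool = False) -> float:
    """Rule-of-thumb RAM estimate in GB, per the guideline above:
    ~1 GB of RAM per TB of pool, multiplied by ~5 if deduplication
    is enabled. A recommendation, not a hard requirement."""
    gb = max(8.0, pool_tb * 1.0)   # 8 GB floor that iXsystems supports
    return gb * 5 if dedup else gb

# A 24 TB pool: ~24 GB normally, ~120 GB with dedup enabled.
assert freenas_ram_estimate(24) == 24.0
assert freenas_ram_estimate(24, dedup=True) == 120.0
```

The 5x multiplier is what makes dedup impractical for most home setups; compression is usually the better default.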
someone disliked the video in under 5 seconds :(
haters
***** well said dear leader :)
I started doing enterprise system support in 2005, and our storage RAID controllers had memory and battery backup on every single device even then :)
Wendell, Thank you for such an expert and well presented explanation. Are you familiar with Synology Hybrid RAID (SHR)? After watching your explanation I am curious as to whether or not SHR provides reliable data integrity. Any advice would be greatly appreciated.
Worked at a major storage vendor for about 8 years before retiring. The MINIMUM we suggested was double parity. And RAID does not equal backups. (They would do snapshots from LA to Florida to NYC for one of their customers. Amazing how a bad router will kill a 60 to 100 gig transfer... Luckily, the systems detected the problem on the fly, so the network engineers at the big company finally found the problem: a blade in a big router was resetting because it couldn't handle the multi-gigabit stream. Storage at each location? 500 gig to 1 petabyte. Scrubs were happening periodically.) The biggest problem: one brand of large SATA drives where 20-30% would fail.
This is the kind of stuff that needs to be on PBS or NPR or something like that. Everyone should know more about the technology that they use everyday! Ignorance is easy, learning and sticking with something is hard. Good one guys, I appreciate it !
I passed out at 15 minutes waiting for something to happen.
UnrealVideoDuke Lol.
How well would ZFS handle that situation you described in the beginning, when you are rebuilding an array after you replaced a failed disk with a new one, but there is another disk failure during the rebuild process? Obviously if the second drive completely fails then the data is gone, but that's not very likely compared to partial sector corruptions, is it?
I'm wondering if it's enough to set up a Z1 for a relatively small pool of larger drives (let's say 4x 8TB), or if it's better to set up Z2 for a larger pool of smaller drives (let's say 8x 4TB). The redundancy ratio would be the same, but I want to cram this into a relatively small case, and I'd like to minimize the energy footprint as well.
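For the back-of-the-envelope comparison, a tiny sketch (raw capacity only; real RAID-Z usable space is further reduced by padding, metadata, and the TB/TiB discrepancy):

```python
def raidz_usable_tb(n_drives: int, drive_tb: float, parity: int) -> float:
    """Approximate usable capacity of a RAID-Z vdev: data drives
    times drive size. Ignores padding and metadata overhead."""
    return (n_drives - parity) * drive_tb

# The two layouts from the comment above:
z1 = raidz_usable_tb(4, 8.0, parity=1)   # 4x 8 TB in RAID-Z1
z2 = raidz_usable_tb(8, 4.0, parity=2)   # 8x 4 TB in RAID-Z2
assert z1 == z2 == 24.0
```

Same usable space either way, but the Z2 layout survives any two simultaneous drive failures, which matters more as drive sizes (and therefore rebuild times) grow; the 4-drive layout wins on case space and power draw.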
Hello. Thanks for the education. I'm planning on building data storage which will be used for video rendering on the network. I'm thinking FreeNAS or unRAID; I ordered a Supermicro X10SDV-TP8F and a MegaRAID LSI 9260-8i... I don't have any experience building a data server, so it would help if you could advise me on software and RAID configuration. Thanks, you're the best.
I looked at ZFS instead of md RAID and it was a pill I just couldn't swallow. If I can't understand something, that, to me, is worse for my data (i.e. user error erasing everything). I never understood expanding it when I got more drives. It sounded like I would need to buy all my drives at once, which I could not afford.
Extremely informative as usual, but I have to know your opinion regarding RAID 0 SSDs outside of the enterprise and business space. I personally run three SATA SSDs in a Windows Disk Management software RAID; is it really that bad?
Uhh am I crazy or do I hear some chiptune? 😂
Nice radiator. :)
This series should be quite good, although I wonder why you don't include Storage Spaces in your analysis. It is Microsoft's answer to ZFS/BTRFS, essentially. I've used it a few times and it's decent (though it has its own set of problems and overhead). I've only ever used it in mirrored setups, though. I've yet to put it to the test with real hardware and a decent number of drives.
Anyway, in my experience thus far ZFS is the way to go for anything larger than a few TBs, but the overhead may be hard to get around for some (the need for a second or third box aside from the compute head). I've tried BTRFS and it's not quite ready for prime time yet. Especially not for SANs.
Matt Lovelace Fair enough assessment. As Storage Spaces isn't a file system, it wouldn't have any direct copies of features from ZFS (its stack is a complete reversal of ZFS's anyway: pool below vdisks, as opposed to ZFS's vdevs below pool). However, as this was a video regarding RAID replacement, I still think SS should be looked at. How good is it? How bad is it? Nobody seems to talk about it much, sadly.
It also does clustering, which ZFS doesn't, so it's easier to eliminate points of failure in SS than it is in ZFS for HA purposes.
After listening to this, I am totally depressed. The data minefield has snapping jaws. How about everything fails at the same time? Then what?
I wish this channel was more active
SHR-1, or alternatively SHR-2, with Btrfs on a Synology NAS will do the trick, every. single. time. (For home and medium business.) Coming back here now in the future: it's 2024! My last post here was in 2020. EDIT -- So SHR on Synology has self-repairing 0's and 1's every 24 hours, uses software RAID, and is functional with SHR-1 or SHR-2. Check it out! Still using it! No data loss.
I am building a home NAS in 2020 using old computer hardware and am trying to choose between OMV5 (RAID) and TrueNAS (ZFS). My hardware meets the basic requirements of TrueNAS (16 GB RAM, i3 CPU) and I really like the new UI; however, when you go to the TrueNAS blog page, it's full of comments from people losing their data for one reason or another, which is really concerning. You can also see YouTube full of negative comments on the FreeNAS solution. I'd really like to use TrueNAS for my home server, but I feel like it will hit me back sometime in the future when my HDDs start failing.
I'm working on a new "raid" format. All I need is a floppy cable with 2000 connections....
Thank you for sharing. I was always wondering how RAIDs prevent bit corruption. Finally someone who really knows what RAID, backup and hashes are all about.
ZFS and BTRFS aren't "new" RAID in any way whatsoever. They are just old software RAID with new branding. Not even new branding, just confused end users. They are good tech, but they themselves are very old (ZFS is 15 years old!!) and weren't even new RAID at the time in 2005. It was "ho hum", just more of the same software RAID that we always had. RAID is RAID. ZFS is really good, but not because it is something special.
Don't pity my software RAID. I find it reasonably well-featured and it's more flexible than a dedicated hardware RAID controller, at least for a home environment. Tying yourself to a specific bit o' hardware isn't the greatest idea. Obviously a RAID is not a substitute for backups. But as a foundation, RAID 10 is pretty good protection from hardware failures, gives good performance, and rebuilds are not as traumatic as RAID5 or 6. I realize much of this video is aimed at enterprise methods, but if you have a Mac and are interested in RAID solutions, check out SoftRAID.
Thanks. Could you please update this, especially on ZFS vs BTRFS?
Here's what I got out of your video:
Data was traditionally protected independent of its OS (like HW RAID with RAM buffer battery backup, corruption/error correction, versioned & deduplicated backups, etc). In theory if the OS or its data were corrupted/destroyed ; sysadmins could manually restore any desired version of backed-up data to another/same computer.
Modern OS filesys-s are starting to include/automate these data protection features within them (ex: ZFS, BTRFS, etc). However corruption of the OS (or its filesys) usually means corruption of its data or its data protections that they depend upon.
In a nutshell, data protections outside/independent of an OS tend to be better, but more costly/complex and less convenient, than data protections within/dependent on an OS (aka the data protection trade space).
I am building my new home server and I was going to use an ESXi box with TrueNAS vs Windows Server built into it as a VM. My server has a built-in hardware RAID chip. So you are telling me that these things are obsolete, or at least not as well supported?
Great overview of why nothing is perfect, as I have long suspected (and why I've avoided RAID in favor of other options).
wow this is fantastic. there might not be as many views as the tek, but each view is going to be so valuable to us people watching... so thank you.
Interesting: this was almost 5 years ago, and even then RAID was looking more unnecessary than I had managed to work out for myself. I'm amazed how much the NAS market still manages to convince entry-level customers to use RAID 5 or 6.
It's wholly redundant for 90% or more of users, frankly. It will never replace two solid backups plus a working source. But if I understand correctly, RAID may even corrupt the integrity of your data if it's not implemented correctly with the right hardware. If I'm understanding you correctly, can this corrupted integrity corrupt the backups at all, if the RAID is acting as your working source?
The claim about mdadm is not true. Currently mdadm supports:
1. Bitmaps (directly on disks or on a separate device), which are fast enough and protect you from full resyncing (though the write hole is still not completely defeated).
2. Full logging of all operations to some fast non-volatile buffer, like an NVMe SSD or battery-backed memory, which protects from any kind of write hole, just like "hardware" RAID controllers do.
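The write hole that such a journal closes is easy to demonstrate: if power fails between the data write and the parity update, a later rebuild silently produces garbage. A rough sketch of the failure (pure XOR arithmetic, not mdadm itself):

```python
import functools

def xor(*blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(functools.reduce(lambda a, b: a ^ b, chunk)
                 for chunk in zip(*blocks))

# One stripe: two data blocks and their parity, consistent on "disk".
d0, d1 = b"\xaa" * 4, b"\x55" * 4
p = xor(d0, d1)

# Power fails after the data write but before the parity update:
d0_new = b"\x0f" * 4     # the new data reached the disk
# p still holds xor(d0, d1): stale parity survived the crash

# Later, d1's disk dies. Reconstruction uses d0_new and stale parity:
d1_rebuilt = xor(d0_new, p)
assert d1_rebuilt != d1   # silently wrong data: the write hole
```

Journaling (or a bitmap plus resync) fixes this by recording the stripe update before it lands, so the data/parity pair can always be brought back into agreement after a crash.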
I used RAID controllers (with battery backup) in JBOD mode with ZFS (FreeNAS) to get the best of both worlds. RAID mode with ZFS is not recommended; you are just asking for a world of trouble. You will be in an unrecoverable situation in the blink of an eye, with the RAID controller firmware and ZFS butting heads fighting to recover data. Not a good situation.
The main advantage of a battery-backed hardware controller is that you can cache writes on the controller to improve performance. However, with copy-on-write file systems, losing a cache would not be catastrophic. ZFS also lets you use an SSD (preferably 2 in RAID 1) as a write cache, and since a good enterprise SSD has enough onboard capacitors to flush its cache to disk, you gain very little advantage from hardware RAID.
Windows really should open up its block device subsystem so we can get a decent file system on it.
Very cool video, you are very good at presenting.
+Alexandre Létourneau I just wish he was as good at researching and getting his facts straight.
+Bu Jin ;P
Do I trust RAID? Hell no. I've had multiple drive failure screw me over before.
Though, in hindsight, I should have also checked the drive serial numbers to make sure they weren't ALL from the same batch (since drives manufactured at the same time tend to fail in groups).
Also random heisenbugs.
RAID explained by a man whose face we had hardly seen. Glad you're showing yourself more on the Tek channel :)
23:05 ZFS is only going to lose whatever data hadn't been committed to disk in the last sync transaction. That means only a handful of files could be affected at most, and even then, you still have whatever the last consistent state was. It's NOT at all like losing power on a RAID5 setup. In fact, it's better to lose power on a ZFS RAID-Z setup than it would be to lose power on a single NTFS drive.
ZFS is not more susceptible to non-ECC RAM bit flips than any other file system. In fact, it's probably the most resilient.
Source: jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
TekSyndicate Just a filthy casual here, not for a deep kind of understanding, more like someone watching fireworks, or looking at fancy cars. Still really interesting though Wendell and judging by the below comments...useful too :)
OK... answer me this: why do my DS8000 disk systems have RAID6 arrays and then volumes created from those? Nowhere in these million-plus-priced disk systems do you have ZFS or anything else. I have been using RAID for 30 years and have yet to see bit rot. Also, to correct you on one thing: I am not sure what RAID controllers you are talking about, but every one I have used does not rely on the drive to tell it there is a problem. The RAID controller (even the one I use at home) will kick a drive if the card detects a problem, not the drive. The drive does not kick itself out of a RAID. If this were such a problem, then I would have seen it over the exabytes of storage I have managed over my career... across everything that isn't called a mainframe.
Way over my head but I did finish the video and learned something in the process. Thanks!