I had lost a CF card back when we moved in 2003. I found it last year, we had left it in a small run down building as we moved our stuff onto our newly purchased property then. Most of our stuff was put in these little buildings as we rebuilt the house. A couple of winters ago that building was crushed by snow. There was little in there but cleaning up the area we found a camera case with the cf card in it all wet and nasty half buried in dirt. It went through decades of humidity heat and cold as low as -60f. All gold plated so no corrosion. I found a reader and all pictures on the card were in tact Pictures we had thought we lost of our children. I'm even using the cf in a retro PC today. So at least that tech was tough as nails.
CF was just a variation of the IDE interface with a different connector and of course different memory storage. It was primitive and robust compared to today's solid state memory.
My dad recently found an old micro SD card in our gravel driveway. Put it into the computer and it had all the video files from his old Sony Action Cam dating back to 2013-2014 completely intact. It managed to survive being crushed by cars daily, multiple seasons of snow, rain, heat and so forth like it was nothing. I didnt get a chance to do any tests on it but I think it proved itself enough as it was.
Nice! It would be nice to see if the data was actually intact. Video (and image) files can survive a lot of data loss too as long as the file header information are intact. The one thing to keep in mind too is that older flash used either SLC or MLC NAND which are much more robust and less susceptible to data corruption because they are either just 1 bit or 2 bits of data. Not 3 or 4 bits like TLC and QLC.
@@htwingnut When I have potential SD card corruption I run this on every file on it, which decodes all the frames and reports any errors: ffmpeg -v error -f null - -i video.mp4/mov/ts
True. But SSD's don't magically "recharge" data in NAND cells. They store a voltage representing data values. It requires a wipe and re-write of each page of data to refresh it, which takes time. Left idle long enough, the controller would certainly perform a wear leveling routine and rewrite all the data across the NAND, but there was no idle time for these SSD's to accomplish the task.
@htwingnut Baeed on that assessment it seems the test next year will result in the other worn drive will probably have lost charge to a lot more cells so your result should the way more unrecoverable and possibly more ecc corrections. A lot longer to scan as well. They will be toast..
@@htwingnut The part you're missing is that during the read, anything that needed too much error correction will have been automatically rewritten. So the worst of the situation is corrected immediately during that.
@@big0bad0brad I thought you were referring to the one year test read after another two years. Yes you are correct, the SSD with all the errors will have the bulk re-written, but it's still a tell-tale sign of its ability to maintain integrity. In an ideal world, I would have a dozen different disks at different time frames. But limited budget and all. Was really just a curiosity experiment.
Data retention heavily depends on whether it's SLC or MLC. MLC the reason we got cheap and large SSDs, but a slight variation in the cell voltage will cause havoc. SLC is much more robust.
But now SLC and MLC is only limited to server grade data center SSDs. Consumer SSD is limited to TLC or QLC, which offer higher data storage density per chip but in expense of worse retention
@@sihamhamda47 There isn't anything stopping consumers from buying server-grade SSDs as far as I know. But most consumers are more concerned with total capacity and price-per-bit than with endurance.
I had an SSD that had lain unused for nearly seven years and to my surprise it load and mounted perfectly and all the data was intact. I was able to pull off the data (mostly several hundred photos and a few hundred old documents) and transfer to current storage. Then I reformatted the SSD, and it did so cleanly, and I'm using it as, one of several, Time Machine backup drives. Around 6 months later it's still going fine.
I, unintentionally, performed a similar test. 7+ years ago, I left 2 SATA SSDs (Intel-70GB, Samsung-64GB) un-powered. Recently, I aggregated various media sources to modern storage solution and to my surprise, there were no definite signs of data corruption on any media type. Media included: All types of HardDrives, SSDs, DVD+/-R/RW, CD-R/RW, SD Cards, Sony MS cards, Various devices with NAND/UFS storage. There were a few (~1%) of DVDs/CDs and a couple (~30%) of old embedded computers with NAND that had issues but I can't be sure it wasn't that way initially. Some of the devices left unused for 15+ years!
I remember the days when backing up to CD's and DVD's was a good solution, but man did you have to be on top of the details of who's media, and what kind you were using, even what software you used to write to them, because sometimes just writing to them was unreliable, and error prone, and material integrity and resistance to degradation were important. I had DVD's that after writing, and checking via comparison, were good, and a month later when trying to fetch something from them were unreadable, so I stopped using them and just saved money to get larger drives, and the fact that the price per MB went down as their size increased exponentially, I could always keep up with the growth of my data. I was already transferring my huge record collection to hard drives in 1993 on Windows 3.1! It protected the music from my cats and clumsy friends, and made my small apartment a whole lot bigger for fitting in a few cubic inches rather than a whole room!!😁
I have a collection of USB thumb drives that I use to store information and data on all my woodworking, machine tools, Welding equipment, etc. I made the mistake of storing one of these drives in a tool box that is used to store some welding rods and other supplies. This tool box is located in the corner of the work area, but well away from any thing electrical. Unfortunately, I stored thoriated tungsten rods in a drawer just under the thumb drive. The thorium is a radioactive material, mostly gamma. The thumb drive was almost unreadable because of errors. All the other drives from this batch ( Bought at the same time, just for this purpose.) had no problem. This was after about 2 months.
High energy photons hitting those charge wells are going to leave extra electrons behind as they interact with the device. I see this all the time with a CCD imaging device doing long duration astrophotography. Thumb drives are built to a price point and likely have less margin in them; TLC flash the same way - how many different levels of charge can be distinguished per flash cell? Dumping extra electrons in there is also going to eat into the margin. Heat is also a concern if you're worried about long term storage and maintaining the charge over long periods of time.
I used to work in the largest SSD manufacturing plant in North America as a design engineer. I can tell you from the inside that MLC SSDs only work at all by virtue of ECC, even when brand new and in burn-in testing. You can't get any meaningful information from letting a given drive sit and seeing if it still works. The actual lowest level error rate data is locked in the controller and you can't get it without proprietary tools, often in conjunction with a JTAG probe too.
@@zelo6237 Depending on the controller, absolutely. It's as accurate as it needs to be for consumers. Consumers tend to freak at low soft error rates so it's better to not get them riled up if possible. ECC does much magic.
@@argvminusonebecause a DRAM cell is like SLC flash cell. It's robust enough. But DRAM cells get refreshed continuously on runtime. But beside all that there are working hardware attacks like Row-Hammer, to force bit swapping.
This is pretty great content to see. I moved house last year and found my first ever 250gb SSD, a samsung 8-something-0 evo drive from 2015 that I hadn't touched since 2019. It had been thrown in a parts box ever since I upgraded to my current outgoing desktop. Just over five years of unpowered room temperature storage and, to my delight, it functioned perfectly fine. What a neat little time capsule it turned out to be. From some of my last college works to some of cringe senior year high school stuff. Building my own NAS currently and I think I'll use that drive as a small cache. Content like this is a great contribution to us data hoarders, and while I doubt I'll be here for years three and four as I like to keep a thin sub list, I can't wait to see the outcome of your testing.
Unless you do a low level read where you receive the status of the read containing how many bit corrections were done to the datablock you read the flash controller in a SSD drive hides almost all it's dealings with weak data. It also seems like (unfortunately) many never update their S.M.A.R.T data either for correctable or even uncorrectable read errors.Your drives lost data while not updating this as well. It's good to see it did record the ECC recovery events though, and there is a good chance it did a read-refresh on those blocks. Most likely they were "weaker" (leakier) than the average blocks which is why they lost so much charge. As you can see though, there seems to be things happening in the background of your reads which may end up "reinforcing" data. I've coded flash drivers myself and you get status for 1-3, 4-5, 6-7 and more bits corrected before uncorrectable events happen allowing controller a lot of leeway in the way it wants to handle it. Ideally leaving one drive connected and powered up as well would have given us an indication if things happen without actual access. Some drives may not do anything unless prompted so act purely retroactively on data degradation and it would be interesting to see if your drives did this.
It's like when people check their old CD/DVD-Rs, but don't use a program that does low-level reads so it can see all the corrected errors. At a high level the only sign of errors is slowness due to re-reads.
@@gblargg Somewhat true yes. re-reads slow things down but on solid state memory you can usually see bit errors being corrected with no slowdown as it is part of the asic architecture on many chips. The corrections happen on the fly and as long as the level of correction is low you are not even informed about it (as the controller of a flash chip) and have to ask about it. You literally have to ask for the correction state of the last read if you are interested (which slows down reads due to you asking).
@@1kreature Oh right good point, if the errors are mild enough the error-correcting bits allow correction *without* re-reading, so no speed difference. I'd imagine that it's correcting errors of this sort all the time (same goes for hard drives).
Actually pretty impressive for a cheap, abused drive written far over its rated amount. Only a handful of errors after 2 years of sitting, someone would be able to save the vast majority of the data if they needed to.
I don't trust SSD for long term storage , I mostly rely on DVD . I have a 21 years old CD where I burnt data back in 2003 , and I can still access data . I'm planning on buying Blu-ray writer to store more data
@@tunkunrunkyou're lucky with that CD, most of them are eaten by bitrot by now, although way more risky with DVD-R(average amazon basic dvd only lasts 5 years before first signs of bitrot)
@@docvrilliant I have DvD archives from 2008-09 time and can read them without any problems, but none of CD's survived, all became unreadable coz of Diskrot.
@@dim0n1 If he booted up the OS and it didn't crash/glitch/etc on him then we can be sure that the OS files are fine. That's like 10GB of files, and if those files had been corrupt, then I doubt the OS would be functioning properly. Could other files on the drive be corrupt? I suppose, but you'd expect to see corruption across the whole disk, bits of it here and there, including the OS files. If 10GB of OS files are fine, or fine enough to boot the OS without any noticeable damage, then chances are, the rest of the drive is probably fine too. I'm more surprised that the *HARDWARE* booted. I have a laptop that's about 12-15 years old that I tried to boot and it wouldn't even power on while connected to AC. Probably a bad cap or some similar issue.
@BAT_SHooT_CRAZY not "high density SSD" but high density nand chips using TLC, QLC and more bits per cells. Relatively recent 2.5" SSDs with small capacity are barely a couple of QLC nand chips (some went down to a single chip, making them just crappy as usb thumb drives) and their enclosure is mostly empty with a tiny PCB in there.
@@PainterVierax Only high density ssd are tlc, qlc etc. SOOO, all the mess and nonsense you're adding is just confusing and not helping anyone. I stand with my answer cause it is correct, it's just you're too limited to understand its beautiful simplicity.
Just came to say thank you after watching this video I now know how to check the total lifetime usage of my drives to know if they need to be replaced or not
I plan to do a similar but more extensive test with flash drives instead. Flash memory is flash memory, the firmware and the controller doesn't matter in this case because it isn't being used when the disk is powered off. The only thing which will matter is if it's MLC, TLC, or QLC (I'm not sure if this type is used in flash drives or not). I plan to test certain things. First of all - record at +65C but store at -15 (people are saying that big temperature allows electrons to enter the gates easier, while lower temps slowing their escape). Then a few would be periodically connected but without rewriting the data, a few connected and data rewritten. I also want to test btrfs or zfs with scrubbing. And some other things, maybe sending +5v permanently?
I do imaging with a CCD detector which accumulates charge in wells as photons interact with the device. High-energy particles deposit extra charge in the wells as they interact with the CCD. I also get "thermal electrons" that accumulate in the well over time - it's very obvious and happens at a pretty well understood rate. The rate at which those electrons accumulate in the charge wells ("pixels") doubles for every 10 degrees C of temperature increase, which is why these scientific imaging cameras are cooled with peltier devices. I image at -20C to both minimize this effect as well as to have predictable calibration frames to remove these in the image processing pipeline. Now, I'm not saying that the charges are affected exactly the same in flash memory cells as in a CCD image sensor.. But both classes of devices are still using a silicon-based substrate and it's likely that some of the basic physics still is at play in both devices. So I'd say that from my directly observed experiences with imaging devices that lower temperatures are going to be your friend to avoid either or both of unwanted signal (extra electrons), or the charge leaking off at an accelerated rate.
I've had a QLC ssd crap itself after only 1DW. The read speeds dropped to 2kb/s. Data was still coming out but it wasn't worthy. I still try to buy MLC when possible. TLC only if there is dram. QLC is a total no-no
Today I learned there's an issue with un powered SSDs. Uh, some of my important ones haven't gotten power in 6 or 7 and years. I just assumed it was fine. Uh oh.
First of all thanks for the Super Thanks! Unpowered SSD's can lose data, but usually only if they are well worn. If they are well within their TBW rating they should be ok for at least a couple years. But yeah, 6 or 7 years may be questionable. Plus if you power them up and the directory contents are there, that just means the directory info is intact, but the files might still be corrupt. Only way to know is to open each file or if you saved checksums, then verify them with those.
My First Thumb drive cost 160 bucks..16 megabytes..used it on win 2000 computers and win 98 and win 9se too.. for win 98 there was a driver required. Used them thumb drives when building up computers. Had the drivers on the thumb drive.. A mid 1990s IBM Pentium box a 167mhz had a usb slot on the mobo. The IBM 365 Pentium Pro boxes I used had a USB on its motor too. The actual 1st "USB thumb Like" device I used was this kit that had a memory card and battery. Was the size of a pack of cigarettes and had a USB cable. Got it at a computer show..it had 4 megabytes and 2 chips for memory.
my first one was a 128mb one and it wasn't anywhere near that expensive, but was expensive enough that it was only worth having one and keep burning cds for the rest.
I would suspect ssds fail for 2 reasons. 1) Over 10,000 write to any memory block 2) A long time like 10 years lowering the voltage of the trapped data cells. So any SSD will probably have data corruptions of some sort after 20 years, but not before 10.
Interesting………but not surprising. I am a builder of high end gaming PC’s and I have no less than 4 Nvme and about 10 SSD drives that just failed for no reason and with no warning. These drives are fast………but unreliable long term. People are going to be very disappointed when they lose all their data in the near future. Consistently backing up these new solid state drives is even more important now than it ever had been with mechanical spinning platters. There isn’t a warning with a solid state drive. They just outright fail. At least mechanical drives gave you a warning……… and time to backup your data before complete failure. Your findings are exactly what I have been experiencing with solid state drives.
The data retention time doubles with every 5 C of temperature drop. There's a simple physics-related reason for that. I assume these tests were performed at room temperature, so if you stick brand-new SSDs in the freezer, the data retention, according to the theory, should be 2^6 times longer. We've had 2 years with no errors, so (2 years) x 2^6 = 2^7 years = 128 years. Add some parity files on top of that, and I think you're pretty safe for a few years.
But one problem: The memory cells might be fine, but the rest of the electronics might not like being frozen ... and of what use is a drive that holds all the data for 128years, but does not work any longer?
@@memyshelfandeye318 My SSD's electronics have been OK for a year in my -15 C freezer, but you may want to stick your SSD in the fridge if you're not planning on refreshing your data for more than 32 years.
Really appreciate the work you put in on this .... there is no substitute for time.....and the need/quest for semi-permanent storage is one we should all be aware of .....hard to trust something if its never been tested.
depending on where you live the biggest damaging factor for any electronics isn’t really data retention on the chips but more slight deterioration of the lanes between each component (copper wire that is just 1 tenths of microns get damaged just from being exposed to air, usually these lanes are too damage to send data after a good 20-30 years , but if you manage to fix the damages using a microscope you can restore pretty much all data still in the storage chips and even restore the full performance.
Great test and video! There isn't a lot of data out there on how long SSDs can maintain data integrity. What brand were those drives? Thanks for sharing.
Those are Leven with allegedly Samsung memory. In reality it's like some factory leftovers that didn't pass QC, because my 1 TB drive slows down under 10 MiB/s after 20GiB written. Lack of cache not helping here too.
I just fired up a machine with a 128GB install of Win 7. It was 80% full. I deleted the junk and it has just less than half but is working fine. The thing sat in a drawer for several years.
@@tolpacourt SSD. I have a 64GB on a EEEpc from ASUS running Mageia Linux also. Came with my 1st gen i5 laptop. I also have a museum of machines from Apple II's and first MAC's. 286's running compact flash. I still have some MB hard drives that work.
Thanks for doing this, for spending your own time and money on this. Long term storage is a huge issue. We could preserve so much of culture and society, but instead we'll probably have encrypted drives data stored on flash storage instead... you know why a period of the past is called 'the dark ages' ? ( we need to preserve old SG and BM videos somehow. 🙂 ) I remember a report from a decade ago, that a datacenter had turned off a bunch of machines and put them in storage for a month or so (the storage was a cold area) and all data was lost, the industry took notice and changed how SSDs work/were produced. I've searched back and found other reports about to warm being a problem as well. I've not seen such problems anymore in more recent times. It does show that flash-based storage is not like other storage.
Greatest threat to data preservation is censorship. There was a lot of Teenfuns porn around year 2004 but now it completely disappeared from clearnet like it never existed. Because 17yo nude girl is CP. Also a lot of really important knowledge get lost permanently whe TrueCrypt forums got shut down. I even have problems downloading Skyrim torrent that is 10 years old. Preserving everything You use on local storage and then sharing it is the way to go.
My Samsung 2TB EVO drive started massively degrading after 6 months of use, I even lost some of my data. In total I wrote 50 GB in total during those 6 months. At the same time I have 10 years old Kingston SSD in my PC, which is online at least 8h each day. Worn is 5%. It's a lottery with these SSDs. In my home lab PC I use mostly HDDs and one SSD that is more like a cache. I load the most critical stuff (with lot of writing) to a memory drive in order to avoid disk drives wear.
@@SuperConker you must have gotten one of the early 870s since later produced drives fixed the NAND failure. samsung has a history of selling SSDs that have problems on release. hope you didn't lose important Data
I only saw terrible no-names and a Silicon Power 240GB MLC ssd loosing data this way. It is hard for me to believe Samsung does that too. Can you show its smart and surface scan?
It is true that memory cells work with electrical charge, but I don't understand why and, above all, where this charge should disappear to. It doesn't matter whether an SSD is powered or not, the total charge always remains the same. Maybe people confuse an SSD with a battery, but the two things work in fundamentally different ways. A battery “loses” charge through chemical processes that are constantly taking place. This is not the case in a memory cell. The doped silicon does not “rust” or transform into another compound. As long as the silicon exist the charge will exist too.
When data is "written", electrons are trapped in a NAND cell using a floating gate which results in a set voltage. That voltage represents a data value (i.e. 0000, 0001,1010, 1110, etc). If the gate is worn, electrons can more easily "leak" past the gate, changing the voltage, resulting in data corruption. Hopefully I did a decent job explaining this in my first video.
before running those tests i would run check disk and defrag in cmd with different options set such as defrag c: /b /o /h /x and chkdsk c: /b /f /x and after that will run the tests.. i am sure the results would be different....
The old worn drive had errors but not the new drive that was not worn. I think this will always be the case due to physical degradation but it's Great news for backup strategies. People should make complete backups on SSD's then safely store those SSD's only to be used when you need to retrieve data. Simply clone that SSD again to keep 2 back up copies - one you draw from and the other you don't draw from as to not degrade that SSD. When your draw from SSD starts to degrade, replace it by cloning the SSD you don't draw from. In this manner, you will never lose data but yes, it does require at least 3 SSD's of the same size, one in your PC and two in storage.
The retired Microsoft developer, Dave's Garage, says the old HDD bit reader/rewriter SpinRite can be used to refresh the data on solid state storage too.
I figured using cheap SSD's would be a good baseline for expectation. However, NAND is NAND for the most part. There's just different degrees of performance and endurance.
@@htwingnut Great video! Yes, the NAND chips and how worn they are seems to be the key. Can you tell what NAND chips are in your SSDs? Brand would be interesting to know
I left a few since 2016 different brands, and powered them all up this year and all 4 of them booted into their operating syatems just fine and appeared fully intact
"Smart guys" online like to make jokes about folks who still use HDDs, never mind optical media and tape.. It's your funeral. Flash is NOT SUITABLE for long-term storage.. And this is with higher grade stuff you see in SSDs, never mind the basic cheap media used in USB sticks and SD cards. Newer isn't always better for every purpose, children. Spinning rust may be old and slow, but I have a HDD from 1987 that still reads - and the Cloud isn't your computer. If you want to keep data, you need different media and multiple backups. It'll be a sad day once everything goes to 100% flash.
I pretty sure it is not about time, but about worn out of ssd (which is even worse) Not sure about current state of things, but early.SSDs did literally melt memory cells during write. But but between writes it stays in solid state represanting value of the cell (it literally in the name: Solid State Drive)
For working I use SSD 2,5", in few months I guess with a new laptop I will get m2 NVME SSD, but for storage, I use always HDDs, whats stored there, will hold long.
I wonder how the drives were stored? Rule of thumb is higher temperatures should negatively corr3with data time retension. SLC vs MLS, TLC AND QLC would also be interesting. I have an 80GB intel SATA SSD which I didn't use much due to its limited capacity. It spent most of it's life since ca. 2010 on the shelf. It's holding up well just like my second oldest SSD, a Samsung 860 Pro SATA drive which hs been used in my daily driver laptop for like 8 years. To a degree the question of data retension is moot as SSDs are still way too expensive and small in capacity compared to the eust tumbling cousins.
Simples: LOST DATA is NOT a PERFORMANCE issue. It's just worthless unless 100% of the SSD is recoverable 100% of the time! No excuses about how it got on performance-wise. No excuses about how it recovered some border-line sectors. All that's been proved here is the ABSOLUTE NECESSITY for full and regular backups of everything to more long term reliable media.
@@pmcasella hdds degrade too, they also should be refreshed occasionally (I'd say once a year should be safe). Unlike SSDs though, they don't self-refresh. And 10 minutes a year for SSDs feels... not sufficient. SDDs won't be even able to query all the sectors in that time.
@@mikehigham23 you can put data on a HDD pack it away for sometime and still read from it, that's what I have done 3years later still very much readerble. SSD,S Not designed for long term storage, needs to be refreshed. Cf sd cards..good 👍
The SMART firmware on the drive most likely fixed a lot of issues while you were doing the checksum tests, so that by the time you did the read test, it was almost completel fixed, and had moved most of the hardest to read data to more reliable memory, and was faster.
I've a Nokia phone that my parents bought me back in 2010. It came preinstalled with a Nokia branded 2GB MicroSD card which still has all the data and is working fine.
7:15 Did the SSD not report read errors even though it wasn't able to correct some reads? Or does the hash utility not report these errors? The latter seems unlikely, the formers seems really bad behavior.
Did the slow SSD recover the old performance after the readout? Some SSDs rewrite the data after a performance degradation. A Crucial M500 for example do so.
No media is safe from degradation. The best way is to use the 3-2-1 backup method. 3 copies of data, with two different media, with at least one offsite. These days "two different media" is hard to come by, mainly hard drives. If you have a copy on your PC, one in the cloud (like with OneDrive, Google Drive, Dropbox, etc), and an external backup, you will be in pretty good shape. Plus any data you have you should validate periodically, once a year is usually sufficient, to make sure it's not corrupt. If it is, then restore from one of your other copies, and replace the faulty drive if necessary.
Could make combinatorial path ringers, you get infinite more combinations than zero carry, its just more process intessive, but path ringing means no wasted buses, because combinations parity has the 5's and sevens' zero's compliment by weight of padded radix set sizing.
Could a 'SSD data retention device' be made? Just a battery, maybe a solar cell as well, and circuitry that would power the SSD on a few times a year? Also, is just the powering on enough, or do all the cells have to be actively addressed? Of course, for *really* long data storage, the bigger question is, will tech readily available 20, 30 yrs from now even have a place to plug a current day SSD in?
Probably need a sample size... just four drives, eh. Any number of things could have happened to that drive that did the 40 minutes. That seems like a pretty insane jump, it could have been faulty hardware, or other issues. Wish there could have been at least two drives in each category, so that if both of them showed 40 minute times then you'd have more confidence in the test.
Nas is network access storage i presume? But what file format? NTFS do not have self healing like ZFS corporate system from Sun-Oracle or indestructible journal like NILFS2.
@@esecallum NAS is Network Access Storage. A box with a bunch of drives (2-12, 4-8 most common), setup into a redundant drive array (usually RAID-5) and accessed via a LAN.
@@fontenbleau yes NAS, the system/file format is not the point here, it is simply separate HARD drives for archive storage. Way safer that internal drive, or SSD off course. The long term safety of these and backups are a complex matter. Just saying, have something outside you PC that uses HARD DRIVES.
Sadly even with this testing you would also need to know & consider what type of cells the SSD uses and what 'long term life countermeasures' the drive firmware also uses to help retain (or prematurely damage) the cells. All flash NAND reads and writes are not created equal, you could write entire books about it so I'm not gonna bother doing it here. People have been fearmongering about SSD stability since they exist, but my real world user experience is I've kept the same SSDs in the same build powered on and off every day for 11+ years with zero speed or reliability degradation. Meanwhile all the spinning rust I still had sitting around died on their own while powered off by the time I could be bothered to try and salvage them into a centralised storage solution.
From the graph it looks like your torture tests never touched the capacity beyond 100 megabytes. The read speed jumps up and down until 100 meg than flattens out to the same as the fresh drives.
That be when the drive has internally refreshed the weak nand pages (a background scan runs to check if power level of the chip is to low if it is it rewrites the 256/128mb page elsewhere to avoid using ecc
Very InTeReStiNg (and revealing!)... Well done. Thank you for "taking one for the team". I can cross that off my bucket list. Definitely a sub-worthy video. I should have realized that anyone with "WingNut" in their name would have their act together. Cheers from So.Ca.USA 3rd house on the left (please call before stopping by)
Before I watch - I don't believe anyone has Ever seen data disappear from any ssd for being unpowered for any length of time. To me that's silly because they do not power themselves or use any power to store and hold that data. There are also no moving parts. If you search you find, "While SSDs can generally hold data without power, extreme conditions or very long periods without power could potentially lead to data loss" Neither of those two conditions have been met by this video.
SSD's do store a voltage in their NAND cells. Electron leakage can happen over time, which in turn can change the voltage state of that cell, which results in corrupt data because the data in that cell is not accurate. In my first video I cited sources and described this in detail.
Of course you would end up with less space, but there should be an option (if not default) to just mark the slow parts for 'never use again'. Then, your OS should show the new (smaller) size, or 'remaining' size.
one of my external ssd runs super slow, after moving all the files out (painfully slow), format it, now it runs flawlessly. my ps5 2tb ssd is currently doing the same, so i may need to format it too
Don't format an SSD. TRIM. Windows is intelligent enough to initiate the TRIM command if you use the Windows GUI. Actual defragmenting an SSD will just cause unnecessary wear.
@ not format the entire ssd, just a quick format. my ssds mostly are for data storage, long term. just some read and write from time to time. thanks for info though
True. But a worn NAND cell can "leak" electrons resulting in a change in voltage which in turn results in corrupt data because it's no longer storing the charge related to the appropriate data.
@@htwingnut Your SSD or USB is a lot safer than your CD or DVD with CD Rot setting in... but before that even happens... it will become unreadable due to... scratches. So much data lost to... scratches. Harddrives... the head will glue itself to the ramp and the rubber bumper will rot inside...
I have a number of 1st gen 36GB single level charge SSD's that are over 15 years old at this point. I tried them not long ago, when I was looking for something to put into an old laptop. Original data was all still there and intact, though I did not do a full CRC check, before reformatting. I've been on the SSD bandwagon since the beginning and only lost data to OS errors. Never to charge degradation, even for drives sitting on a shelf. I have a crate of them. I'll get 1-5 mechanical drive failures a year. For SSD's - nothing.
Nice. Yeah I still have an original Intel X25 SSD. SLC. SLC is very robust as is MLC. SLC only needs to determine between two charge states, MLC four. So they are less susceptible to corruption. But modern TLC and QLC are bottom of the barrel when it comes to NAND quality. They use all the tricks using pSLC cache and even DRAM cache to artificially improve performance temporarily. But in the end, most aren't any faster than a traditional hard drive.
Too bad you can't make them out of what flash drives are made out of. Mine has been washed and dried and has sat for a decade and no noticeable loss that I can detect
the worn one can probably be made very usable with some trimming. just stop using the bad and iffy sectors, lower the total capacity a bit for better usability of the rest.
i only have one ssd which i've had for a year, it's been installed in my other desktop and i just had my steam library on it, used it for a few months and i had to move in with someone for 3 months during a family emergency, when i came back my games were all corrupted.. i think it's just random chance but ya i don't think i'll be putting any important data like family photos and such on an ssd.
Good video! And yes, it depends very much on the NAND cells. I have an old Intel X25-V, small, slow SSD which after 10 years unused was still ok and with data (Windows) on it, all running fine. But a 2021 OEM Samsung drive from a laptop which was put in an USB RACK in 2022, stored in a relatively cold environment had now in 2024 errors when copying files after 2 years of no IDLE. And it was slower. I had to rewrite it... It was connected every 2-3 months for backup but not left hours on idle. But also, the SSD controller will to the optimization in background if connected through an USB-C rack and not directly to the motherboard? Because the racks could enter in IDLE or maybe there are some other factors. SLC vs TLC or MLC or QLC or whatever is very different.
Yes, I fully understand this. And this was brought up in the first video. It requires actively refreshing or moving pages of data to "recharge" the cells.
I have a portable SSD that I'm using just to back up files and I rarely ever load it. How often and for how long do I have to plug it in to keep it from losing the data?
I can't find if these jajs600m238c are SLC, TLC or QLC. I wonder if a QLC would last a year unpowered. I woud not be surprised by an SLC cooing with 7 yermars unpowered. Mind HDD also lose storage with time unpowered due to magnetism...though very slowly (seems my 20 years old HDD is still fine
Hello sir, when you say Worn SSD, that was abused with 60TB Data in/out. And the fresh out is just testing with data without abuse? Because I have so much SSD, and I am really afraid of losing them, though, just powering up helps, but with your test video, will also give me information about SSD and how long they can last without being abused, just purely storage. So far, all my data is intact, I am using crucial MX500 btw. I wanna share info with you, to atleast gave you an idea. I have most of my SSD's from 2020 though. The only abused one that I have is where I installed my O.S. but it is still doing great! I subscribe and bell for you, I will check your uploads with SSD test! I needed somebody who is serious with this data rotting test. The abused one should be something to be wary though because it will always go that way. But with 60TB and still recover some of it, it's good!
Ive been lucky with all my SSDs . i have never lost one. I've lost a few HDDs . my oldest SSD is over 14 years old. the drive I have are micron or Kingston drives. 128gb was my first up to 1tb. when I update the main drive, if I need more space. i buy a caddy and use the older drive as an external drive.
It's not only 4 errors, it's over 200 thousand errors, although corrected by ECC. 4 files were corrupted because of this, and also resulted in a significant speed reduction.
@ thank you for the correction. This is still much better than what is expected though. Consumer SSDs are just not great, and SSDs are not long term data storage.
@@igelbofh You can keep data on flash for very long time, but it mustn't be cycled much for it to be effective. Wear reduces data retention capability a lot. After all, pretty much all microcontrollers use flash for program storage, and they keep working for dozens of years. For example, BIOS on PC motherboard is stored on flash and one can obtain some 30+ years old motherboards that still work fine without any maintenance done on them.
@ yes, but it's not what you think. It's volumetric density. And voltages. A microcontroller will have in the same volume that a 1TB ssd nand chip no more than 64kb of data. Also microcontrollers are not using flash memory - the ones with really long life's are using eprom.
@@igelbofh EEPROM is memory operation mode, it might not be flash memory, it may be magnetic memory, for example. Flash memory is the type of device that used double-gate transistors to store data. Cells can be organized as NOR gates or NAND gates, which makes them different in dynamic operation (NOR faster to read/slower to write and has finer grained random access), but pretty much same in static performance. In cell structure storage gate separated by few nm of isolation from other metal gate that usually grounded, so any other electric field inside the chip be it cell-to-cell or layer-to-layer is going to be dwarfed by it. About density though, some small 128Mbit NOR flash (which also rated for program storage, so 25-50 years retention) has ~8mm2 die area. Storage density per layer per mm2 is worse, but only about 2-3 times worse than some arbitrary NAND. We'd get stacked NOR ages ago if anyone had seen a point in it.
Should note that hardware ecc corrections is normal, what is intresting and the more important smart attribute is 197 198 (c4 c5 c6) pending relocation (this is when the drive has detected uncorrectable data and requires a write to remap the sector) A read test can't trigger a relocation (so the logged remaps was due to a write) cheap, China china ssd's can be unreliable
you shouldve kept the seed used to generate the random data. then you could generate the exact same data now, and compare and see exactly what changed.
@@htwingnut ig it'd be interesting to see where the bytes had changed. if only a few bytes had changed or a whole lot, in patches or large swaths, etc.
I had some SSDs from 2009 that sat for 10 years and were fine. Alas, then I put them into an enclosure with a fault that essentially electrocuted the driver boards. 🤦🏻♂️
2 years isnt long enough most ssd uses micro supercapacitors coupled with each transistor for each bit with very minimal consumption just to retain the state of each bit it literally holds power for a long time just like how cmos battery can last for years without changing but not as long as hdd does
I would love to see a SpinRite v6.1 Level 3 recovery/refresh done on the drives after the year 4 tests are complete. Hopefully SpinRite v7 will be out by then.
I had lost a CF card back when we moved in 2003. I found it last year, we had left it in a small run down building as we moved our stuff onto our newly purchased property then. Most of our stuff was put in these little buildings as we rebuilt the house. A couple of winters ago that building was crushed by snow. There was little in there but cleaning up the area we found a camera case with the cf card in it all wet and nasty half buried in dirt. It went through decades of humidity heat and cold as low as -60f. All gold plated so no corrosion. I found a reader and all pictures on the card were in tact Pictures we had thought we lost of our children. I'm even using the cf in a retro PC today. So at least that tech was tough as nails.
SLC type storage is too OP in terms of data retention. It's as durable as conventional spinning hard drive
Flash memory just hadn't "advanced" enough to offer poor retention times. It was all SLC back then.
lucky find and its good you got your pics back. one day we'll have storage the speed of RAM and reliability of a mechanical HDD.
@@sihamhamda47 “too OP” 🤦♂️
CF was just a variation of the IDE interface with a different connector and of course different memory storage. It was primitive and robust compared to today's solid state memory.
My dad recently found an old micro SD card in our gravel driveway. Put it into the computer and it had all the video files from his old Sony Action Cam dating back to 2013-2014 completely intact. It managed to survive being crushed by cars daily, multiple seasons of snow, rain, heat and so forth like it was nothing. I didnt get a chance to do any tests on it but I think it proved itself enough as it was.
Nice to get some empirical evidence in here.
I lost a dashcam memory card years ago. I always hope I'll find it at some point in the driveway.
Nice! It would be nice to see if the data was actually intact. Video (and image) files can survive a lot of data loss too as long as the file header information are intact.
The one thing to keep in mind too is that older flash used either SLC or MLC NAND which are much more robust and less susceptible to data corruption because they are either just 1 bit or 2 bits of data. Not 3 or 4 bits like TLC and QLC.
@@htwingnut When I have potential SD card corruption I run this on every file on it, which decodes all the frames and reports any errors: ffmpeg -v error -f null - -i video.mp4/mov/ts
@@muteloch2798 🤦♂️
Umm.. if you check them at 2 years, then you can't check them at 4 years 'unpowered' -- or am I missing something?
True. But SSD's don't magically "recharge" data in NAND cells. They store a voltage representing data values. It requires a wipe and re-write of each page of data to refresh it, which takes time.
Left idle long enough, the controller would certainly perform a wear leveling routine and rewrite all the data across the NAND, but there was no idle time for these SSD's to accomplish the task.
@@htwingnutsome of them refresh cells in the background when not actively reading or writing. Not sure if your ones do it but e.g. Samsung does that.
@htwingnut Baeed on that assessment it seems the test next year will result in the other worn drive will probably have lost charge to a lot more cells so your result should the way more unrecoverable and possibly more ecc corrections. A lot longer to scan as well. They will be toast..
@@htwingnut The part you're missing is that during the read, anything that needed too much error correction will have been automatically rewritten. So the worst of the situation is corrected immediately during that.
@@big0bad0brad I thought you were referring to the one year test read after another two years. Yes you are correct, the SSD with all the errors will have the bulk re-written, but it's still a tell-tale sign of its ability to maintain integrity. In an ideal world, I would have a dozen different disks at different time frames. But limited budget and all. Was really just a curiosity experiment.
Data retention heavily depends on whether it's SLC or MLC. MLC the reason we got cheap and large SSDs, but a slight variation in the cell voltage will cause havoc. SLC is much more robust.
But now SLC and MLC is only limited to server grade data center SSDs. Consumer SSD is limited to TLC or QLC, which offer higher data storage density per chip but in expense of worse retention
@@sihamhamda47 There isn't anything stopping consumers from buying server-grade SSDs as far as I know. But most consumers are more concerned with total capacity and price-per-bit than with endurance.
aand QLC being even worse now xD
I had an SSD that had lain unused for nearly seven years and to my surprise it load and mounted perfectly and all the data was intact. I was able to pull off the data (mostly several hundred photos and a few hundred old documents) and transfer to current storage. Then I reformatted the SSD, and it did so cleanly, and I'm using it as, one of several, Time Machine backup drives. Around 6 months later it's still going fine.
I, unintentionally, performed a similar test.
7+ years ago, I left 2 SATA SSDs (Intel-70GB, Samsung-64GB) un-powered.
Recently, I aggregated various media sources to modern storage solution and to my surprise, there were no definite signs of data corruption on any media type.
Media included: All types of HardDrives, SSDs, DVD+/-R/RW, CD-R/RW, SD Cards, Sony MS cards, Various devices with NAND/UFS storage.
There were a few (~1%) of DVDs/CDs and a couple (~30%) of old embedded computers with NAND that had issues but I can't be sure it wasn't that way initially.
Some of the devices left unused for 15+ years!
They were still being powered by psionic energy though!
I remember the days when backing up to CD's and DVD's was a good solution, but man did you have to be on top of the details of who's media, and what kind you were using, even what software you used to write to them, because sometimes just writing to them was unreliable, and error prone, and material integrity and resistance to degradation were important.
I had DVD's that after writing, and checking via comparison, were good, and a month later when trying to fetch something from them were unreadable, so I stopped using them and just saved money to get larger drives, and the fact that the price per MB went down as their size increased exponentially, I could always keep up with the growth of my data.
I was already transferring my huge record collection to hard drives in 1993 on Windows 3.1! It protected the music from my cats and clumsy friends, and made my small apartment a whole lot bigger for fitting in a few cubic inches rather than a whole room!!😁
@Bob-of-Zoid Well, CDs replaced floppy discs, which contained even less data.
@@Tugela60 Well yeah, but I was very familiar with those too (computing since 79), and they were not well suited for that amount of data.🤓
@@Bob-of-Zoid How could you do it in 1993?
Hard disk storage was prohibitevely expensive and small back then.
I love watching these kind of informational videos about hardware. Thank you for doing this.
I have a collection of USB thumb drives that I use to store information and data on all my woodworking, machine tools, Welding equipment, etc. I made the mistake of storing one of these drives in a tool box that is used to store some welding rods and other supplies. This tool box is located in the corner of the work area, but well away from any thing electrical. Unfortunately, I stored thoriated tungsten rods in a drawer just under the thumb drive. The thorium is a radioactive material, mostly gamma. The thumb drive was almost unreadable because of errors. All the other drives from this batch ( Bought at the same time, just for this purpose.) had no problem. This was after about 2 months.
High energy photons hitting those charge wells are going to leave extra electrons behind as they interact with the device. I see this all the time with a CCD imaging device doing long duration astrophotography. Thumb drives are built to a price point and likely have less margin in them; TLC flash the same way - how many different levels of charge can be distinguished per flash cell? Dumping extra electrons in there is also going to eat into the margin. Heat is also a concern if you're worried about long term storage and maintaining the charge over long periods of time.
Fellow Tig enthusiast here, I better check my rods to make sure they’re not near anything.
I used to work in the largest SSD manufacturing plant in North America as a design engineer. I can tell you from the inside that MLC SSDs only work at all by virtue of ECC, even when brand new and in burn-in testing. You can't get any meaningful information from letting a given drive sit and seeing if it still works. The actual lowest level error rate data is locked in the controller and you can't get it without proprietary tools, often in conjunction with a JTAG probe too.
Meaningful to who?
would the ssd be reporting inaccurate SMART info?
@@zelo6237 Depending on the controller, absolutely. It's as accurate as it needs to be for consumers. Consumers tend to freak at low soft error rates so it's better to not get them riled up if possible. ECC does much magic.
It's kind of a miracle that DRAM without ECC works as well as it does.
@@argvminusonebecause a DRAM cell is like SLC flash cell. It's robust enough. But DRAM cells get refreshed continuously on runtime. But beside all that there are working hardware attacks like Row-Hammer, to force bit swapping.
This is pretty great content to see. I moved house last year and found my first ever 250gb SSD, a samsung 8-something-0 evo drive from 2015 that I hadn't touched since 2019. It had been thrown in a parts box ever since I upgraded to my current outgoing desktop. Just over five years of unpowered room temperature storage and, to my delight, it functioned perfectly fine.
What a neat little time capsule it turned out to be. From some of my last college works to some of cringe senior year high school stuff.
Building my own NAS currently and I think I'll use that drive as a small cache. Content like this is a great contribution to us data hoarders, and while I doubt I'll be here for years three and four as I like to keep a thin sub list, I can't wait to see the outcome of your testing.
Unless you do a low level read where you receive the status of the read containing how many bit corrections were done to the datablock you read the flash controller in a SSD drive hides almost all it's dealings with weak data. It also seems like (unfortunately) many never update their S.M.A.R.T data either for correctable or even uncorrectable read errors.Your drives lost data while not updating this as well. It's good to see it did record the ECC recovery events though, and there is a good chance it did a read-refresh on those blocks. Most likely they were "weaker" (leakier) than the average blocks which is why they lost so much charge. As you can see though, there seems to be things happening in the background of your reads which may end up "reinforcing" data.
I've coded flash drivers myself and you get status for 1-3, 4-5, 6-7 and more bits corrected before uncorrectable events happen allowing controller a lot of leeway in the way it wants to handle it.
Ideally leaving one drive connected and powered up as well would have given us an indication if things happen without actual access. Some drives may not do anything unless prompted so act purely retroactively on data degradation and it would be interesting to see if your drives did this.
It's like when people check their old CD/DVD-Rs, but don't use a program that does low-level reads so it can see all the corrected errors. At a high level the only sign of errors is slowness due to re-reads.
@@gblargg Somewhat true yes. re-reads slow things down but on solid state memory you can usually see bit errors being corrected with no slowdown as it is part of the asic architecture on many chips. The corrections happen on the fly and as long as the level of correction is low you are not even informed about it (as the controller of a flash chip) and have to ask about it. You literally have to ask for the correction state of the last read if you are interested (which slows down reads due to you asking).
@@1kreature Oh right good point, if the errors are mild enough the error-correcting bits allow correction *without* re-reading, so no speed difference. I'd imagine that it's correcting errors of this sort all the time (same goes for hard drives).
Actually pretty impressive for a cheap, abused drive written far over its rated amount. Only a handful of errors after 2 years of sitting, someone would be able to save the vast majority of the data if they needed to.
Of course hardware ECC correction on multiple re-reads is what I expect is slowing it down.
I don't trust SSD for long term storage , I mostly rely on DVD . I have a 21 years old CD where I burnt data back in 2003 , and I can still access data . I'm planning on buying Blu-ray writer to store more data
@@tunkunrunkyou're lucky with that CD, most of them are eaten by bitrot by now, although way more risky with DVD-R(average amazon basic dvd only lasts 5 years before first signs of bitrot)
@@docvrilliant I have DvD archives from 2008-09 time and can read them without any problems, but none of CD's survived, all became unreadable coz of Diskrot.
@@docvrilliant even tapes experienced rot. Humidity is a serious threat for long term storage.
Thanks for the detailed video on ssd data retention.
I have an old netbook, it’s got an 16gb SSD, it’s unpowered for 10 years, and I checked it, the data was still absolutely fine.
did u compare hash of every file? or did you tried to open every file to check, if its correct or just look at the size of files?
@@dim0n1 If he booted up the OS and it didn't crash/glitch/etc on him then we can be sure that the OS files are fine. That's like 10GB of files, and if those files had been corrupt, then I doubt the OS would be functioning properly. Could other files on the drive be corrupt? I suppose, but you'd expect to see corruption across the whole disk, bits of it here and there, including the OS files. If 10GB of OS files are fine, or fine enough to boot the OS without any noticeable damage, then chances are, the rest of the drive is probably fine too. I'm more surprised that the *HARDWARE* booted. I have a laptop that's about 12-15 years old that I tried to boot and it wouldn't even power on while connected to AC. Probably a bad cap or some similar issue.
The issues start with high density SSD, so high storage SSD.
@BAT_SHooT_CRAZY not "high density SSD" but high density nand chips using TLC, QLC and more bits per cells. Relatively recent 2.5" SSDs with small capacity are barely a couple of QLC nand chips (some went down to a single chip, making them just crappy as usb thumb drives) and their enclosure is mostly empty with a tiny PCB in there.
@@PainterVierax Only high density ssd are tlc, qlc etc. SOOO, all the mess and nonsense you're adding is just confusing and not helping anyone. I stand with my answer cause it is correct, it's just you're too limited to understand its beautiful simplicity.
Just came to say thank you after watching this video I now know how to check the total lifetime usage of my drives to know if they need to be replaced or not
I plan to do a similar but more extensive test with flash drives instead. Flash memory is flash memory, the firmware and the controller doesn't matter in this case because it isn't being used when the disk is powered off. The only thing which will matter is if it's MLC, TLC, or QLC (I'm not sure if this type is used in flash drives or not).
I plan to test certain things. First of all - record at +65C but store at -15 (people are saying that big temperature allows electrons to enter the gates easier, while lower temps slowing their escape).
Then a few would be periodically connected but without rewriting the data, a few connected and data rewritten. I also want to test btrfs or zfs with scrubbing. And some other things, maybe sending +5v permanently?
I do imaging with a CCD detector which accumulates charge in wells as photons interact with the device. High-energy particles deposit extra charge in the wells as they interact with the CCD. I also get "thermal electrons" that accumulate in the well over time - it's very obvious and happens at a pretty well understood rate. The rate at which those electrons accumulate in the charge wells ("pixels") doubles for every 10 degrees C of temperature increase, which is why these scientific imaging cameras are cooled with peltier devices. I image at -20C to both minimize this effect as well as to have predictable calibration frames to remove these in the image processing pipeline.
Now, I'm not saying that the charges are affected exactly the same in flash memory cells as in a CCD image sensor.. But both classes of devices are still using a silicon-based substrate and it's likely that some of the basic physics still is at play in both devices. So I'd say that from my directly observed experiences with imaging devices that lower temperatures are going to be your friend to avoid either or both of unwanted signal (extra electrons), or the charge leaking off at an accelerated rate.
I've had a QLC ssd crap itself after only 1DW. The read speeds dropped to 2kb/s. Data was still coming out but it wasn't worthy. I still try to buy MLC when possible. TLC only if there is dram. QLC is a total no-no
Today I learned there's an issue with un powered SSDs. Uh, some of my important ones haven't gotten power in 6 or 7 and years. I just assumed it was fine. Uh oh.
First of all thanks for the Super Thanks! Unpowered SSD's can lose data, but usually only if they are well worn. If they are well within their TBW rating they should be ok for at least a couple years. But yeah, 6 or 7 years may be questionable. Plus if you power them up and the directory contents are there, that just means the directory info is intact, but the files might still be corrupt. Only way to know is to open each file or if you saved checksums, then verify them with those.
My First Thumb drive cost 160 bucks..16 megabytes..used it on win 2000 computers and win 98 and win 9se too.. for win 98 there was a driver required. Used them thumb drives when building up computers. Had the drivers on the thumb drive..
A mid 1990s IBM Pentium box a 167mhz had a usb slot on the mobo. The IBM 365 Pentium Pro boxes I used had a USB on its motor too.
The actual 1st "USB thumb Like" device I used was this kit that had a memory card and battery. Was the size of a pack of cigarettes and had a USB cable. Got it at a computer show..it had 4 megabytes and 2 chips for memory.
my first one was a 128mb one and it wasn't anywhere near that expensive, but was expensive enough that it was only worth having one and keep burning cds for the rest.
I'm more interested in data retention within specified TBW, like 25%, 50%, 75% and 100%, because I don't use any SSD beyond its TBW limit.
I would suspect ssds fail for 2 reasons. 1) Over 10,000 write to any memory block 2) A long time like 10 years lowering the voltage of the trapped data cells. So any SSD will probably have data corruptions of some sort after 20 years, but not before 10.
Interesting………but not surprising. I am a builder of high end gaming PC’s and I have no less than 4 Nvme and about 10 SSD drives that just failed for no reason and with no warning. These drives are fast………but unreliable long term. People are going to be very disappointed when they lose all their data in the near future. Consistently backing up these new solid state drives is even more important now than it ever had been with mechanical spinning platters. There isn’t a warning with a solid state drive. They just outright fail. At least mechanical drives gave you a warning……… and time to backup your data before complete failure. Your findings are exactly what I have been experiencing with solid state drives.
The data retention time doubles with every 5 C of temperature drop. There's a simple physics-related reason for that. I assume these tests were performed at room temperature, so if you stick brand-new SSDs in the freezer, the data retention, according to the theory, should be 2^6 times longer. We've had 2 years with no errors, so (2 years) x 2^6 = 2^7 years = 128 years. Add some parity files on top of that, and I think you're pretty safe for a few years.
But one problem: The memory cells might be fine, but the rest of the electronics might not like being frozen ... and of what use is a drive that holds all the data for 128years, but does not work any longer?
@@memyshelfandeye318 My SSD's electronics have been OK for a year in my -15 C freezer, but you may want to stick your SSD in the fridge if you're not planning on refreshing your data for more than 32 years.
Really appreciate the work you put in on this .... there is no substitute for time.....and the need/quest for semi-permanent storage is one we should all be aware of .....hard to trust something if its never been tested.
depending on where you live the biggest damaging factor for any electronics isn’t really data retention on the chips but more slight deterioration of the lanes between each component (copper wire that is just 1 tenths of microns get damaged just from being exposed to air, usually these lanes are too damage to send data after a good 20-30 years , but if you manage to fix the damages using a microscope you can restore pretty much all data still in the storage chips and even restore the full performance.
impressive and nice to know, ive been way more worried about this then i needed to be :)
Great test and video! There isn't a lot of data out there on how long SSDs can maintain data integrity. What brand were those drives? Thanks for sharing.
Those are Leven with allegedly Samsung memory. In reality it's like some factory leftovers that didn't pass QC, because my 1 TB drive slows down under 10 MiB/s after 20GiB written. Lack of cache not helping here too.
I just fired up a machine with a 128GB install of Win 7. It was 80% full. I deleted the junk and it has just less than half but is working fine. The thing sat in a drawer for several years.
SSD or spinning rust?
@@tolpacourtbro
@@tolpacourt I would say SSD based on size... 128GB wasn't really a size for hard drives. 120GB would be more likely for a HDD.
@@tolpacourt SSD. I have a 64GB on a EEEpc from ASUS running Mageia Linux also. Came with my 1st gen i5 laptop. I also have a museum of machines from Apple II's and first MAC's. 286's running compact flash. I still have some MB hard drives that work.
@@tolpacourt spinning rust? They will never rust internally... they are hermetically sealed with air holes for pressurization only.
Thanks for doing this, for spending your own time and money on this. Long term storage is a huge issue. We could preserve so much of culture and society, but instead we'll probably have encrypted drives data stored on flash storage instead... you know why a period of the past is called 'the dark ages' ? ( we need to preserve old SG and BM videos somehow. 🙂 )
I remember a report from a decade ago, that a datacenter had turned off a bunch of machines and put them in storage for a month or so (the storage was a cold area) and all data was lost, the industry took notice and changed how SSDs work/were produced. I've searched back and found other reports about to warm being a problem as well. I've not seen such problems anymore in more recent times. It does show that flash-based storage is not like other storage.
Greatest threat to data preservation is censorship. There was a lot of Teenfuns porn around year 2004 but now it completely disappeared from clearnet like it never existed. Because 17yo nude girl is CP. Also a lot of really important knowledge get lost permanently whe TrueCrypt forums got shut down. I even have problems downloading Skyrim torrent that is 10 years old. Preserving everything You use on local storage and then sharing it is the way to go.
that was really interesting thank you for taking so much time!
My Samsung 2TB EVO drive started massively degrading after 6 months of use, I even lost some of my data. In total I wrote 50 GB in total during those 6 months.
At the same time I have 10 years old Kingston SSD in my PC, which is online at least 8h each day. Worn is 5%.
It's a lottery with these SSDs.
In my home lab PC I use mostly HDDs and one SSD that is more like a cache. I load the most critical stuff (with lot of writing) to a memory drive in order to avoid disk drives wear.
Did you use a samsung 990 EVO? and more importantly did you do a firmware update?
@@dedpoolnotmarvel It was most likely the 870 EVO (sata) which is Samsung's most unreliable SSD to date.
@@SuperConker you must have gotten one of the early 870s since later produced drives fixed the NAND failure. samsung has a history of selling SSDs that have problems on release.
hope you didn't lose important Data
I had a really old 512gb 860evo I used to run torrents, it had 0% drive health left but still ran for over a year with no issues.
I only saw terrible no-names and a Silicon Power 240GB MLC ssd loosing data this way.
It is hard for me to believe Samsung does that too. Can you show its smart and surface scan?
It is true that memory cells work with electrical charge, but I don't understand why and, above all, where this charge should disappear to. It doesn't matter whether an SSD is powered or not, the total charge always remains the same.
Maybe people confuse an SSD with a battery, but the two things work in fundamentally different ways. A battery “loses” charge through chemical processes that are constantly taking place. This is not the case in a memory cell. The doped silicon does not “rust” or transform into another compound. As long as the silicon exist the charge will exist too.
When data is "written", electrons are trapped in a NAND cell using a floating gate which results in a set voltage. That voltage represents a data value (i.e. 0000, 0001,1010, 1110, etc). If the gate is worn, electrons can more easily "leak" past the gate, changing the voltage, resulting in data corruption.
Hopefully I did a decent job explaining this in my first video.
YOU DIDN'T NOTICE THE HEALTH STATUS, IT TELLING YOU THE DRIVE IS FAILING.
can't they make an ssd that can store without being powered?
Sure, but it'll cost a lot more since it will be lower density, and the market for SSDs that keep data when unpowered is small.
before running those tests i would run check disk and defrag in cmd with different options set such as defrag c: /b /o /h /x and chkdsk c: /b /f /x and after that will run the tests.. i am sure the results would be different....
Wow. Thank you very much.
Very interesting test.
The old worn drive had errors but not the new drive that was not worn. I think this will always be the case due to physical degradation but it's Great news for backup strategies. People should make complete backups on SSD's then safely store those SSD's only to be used when you need to retrieve data. Simply clone that SSD again to keep 2 back up copies - one you draw from and the other you don't draw from as to not degrade that SSD. When your draw from SSD starts to degrade, replace it by cloning the SSD you don't draw from. In this manner, you will never lose data but yes, it does require at least 3 SSD's of the same size, one in your PC and two in storage.
What is the drive mechanism shown at ~3m05s that enables you to front load those SSDs?
I'd also like to know.
Now let's do: How long can M-DISC store data unpowered? ☺
Year 200 update: HTWingNut body passed away, but his mind got preserved on M-DISC.
The retired Microsoft developer, Dave's Garage, says the old HDD bit reader/rewriter SpinRite can be used to refresh the data on solid state storage too.
Given these are cheap, generic drives, do you think better, higher quality, branded drives would fare better? This is good for what it is though...
I figured using cheap SSD's would be a good baseline for expectation. However, NAND is NAND for the most part. There's just different degrees of performance and endurance.
@@htwingnut Great video! Yes, the NAND chips and how worn they are seems to be the key. Can you tell what NAND chips are in your SSDs? Brand would be interesting to know
x25-e were SLC and 50nm massive cells, should be most robust, but capacity is only 64GB.
I left a few since 2016 different brands, and powered them all up this year and all 4 of them booted into their operating syatems just fine and appeared fully intact
"Smart guys" online like to make jokes about folks who still use HDDs, never mind optical media and tape.. It's your funeral. Flash is NOT SUITABLE for long-term storage.. And this is with higher grade stuff you see in SSDs, never mind the basic cheap media used in USB sticks and SD cards. Newer isn't always better for every purpose, children. Spinning rust may be old and slow, but I have a HDD from 1987 that still reads - and the Cloud isn't your computer. If you want to keep data, you need different media and multiple backups. It'll be a sad day once everything goes to 100% flash.
I pretty sure it is not about time, but about worn out of ssd (which is even worse) Not sure about current state of things, but early.SSDs did literally melt memory cells during write. But but between writes it stays in solid state represanting value of the cell (it literally in the name: Solid State Drive)
I just found a 2 Gb SD card that I have had since 2007. It still has photos and it hasn’t been touched since 2010.
@@Dragunov-1svd back then qlc did not exist
For working I use SSD 2,5", in few months I guess with a new laptop I will get m2 NVME SSD, but for storage, I use always HDDs, whats stored there, will hold long.
Tape has a horrible successful data restoration track record from multiple studies; that we're not done by tape manufacturers.
Decisive scientific methodology. Especially the checksums. Tre Bien!
I wonder how the drives were stored? Rule of thumb is higher temperatures should negatively corr3with data time retension. SLC vs MLS, TLC AND QLC would also be interesting.
I have an 80GB intel SATA SSD which I didn't use much due to its limited capacity. It spent most of it's life since ca. 2010 on the shelf. It's holding up well just like my second oldest SSD, a Samsung 860 Pro SATA drive which hs been used in my daily driver laptop for like 8 years.
To a degree the question of data retension is moot as SSDs are still way too expensive and small in capacity compared to the eust tumbling cousins.
Ohh dear 😮 it’s bad enough for our poor PC but on a Mac with SSD soldered on the motherboard? Could music production tasks do that?
Simples: LOST DATA is NOT a PERFORMANCE issue. It's just worthless unless 100% of the SSD is recoverable 100% of the time!
No excuses about how it got on performance-wise. No excuses about how it recovered some border-line sectors.
All that's been proved here is the ABSOLUTE NECESSITY for full and regular backups of everything to more long term reliable media.
The standard consumers SSD are tested against though only requires one year of unpowered data retention.
No backup guarantees 100% reliability, so by your own definition it's worthless
ssd's should be cycled once a year for 10 min for good health, long term storage use standard HDD drives
@@pmcasella hdds degrade too, they also should be refreshed occasionally (I'd say once a year should be safe). Unlike SSDs though, they don't self-refresh.
And 10 minutes a year for SSDs feels... not sufficient. SDDs won't be even able to query all the sectors in that time.
@@mikehigham23 you can put data on a HDD pack it away for sometime and still read from it, that's what I have done 3years later still very much readerble. SSD,S Not designed for long term storage, needs to be refreshed. Cf sd cards..good 👍
I have a 10years+ digital camera, those pictures stored in the 2GB SD card still work without issue
The SMART firmware on the drive most likely fixed a lot of issues while you were doing the checksum tests, so that by the time you did the read test, it was almost completel fixed, and had moved most of the hardest to read data to more reliable memory, and was faster.
It can be as low as 1 year. It depends on storage temperature I read somewhere.
I've a Nokia phone that my parents bought me back in 2010. It came preinstalled with a Nokia branded 2GB MicroSD card which still has all the data and is working fine.
7:15 Did the SSD not report read errors even though it wasn't able to correct some reads? Or does the hash utility not report these errors? The latter seems unlikely, the formers seems really bad behavior.
Did the slow SSD recover the old performance after the readout? Some SSDs rewrite the data after a performance degradation. A Crucial M500 for example do so.
Wait, so if my files are stored in my usb thumb drive that I don't use, then they will just become corrupted/lost? How/where do I store data then?
No media is safe from degradation. The best way is to use the 3-2-1 backup method. 3 copies of data, with two different media, with at least one offsite. These days "two different media" is hard to come by, mainly hard drives.
If you have a copy on your PC, one in the cloud (like with OneDrive, Google Drive, Dropbox, etc), and an external backup, you will be in pretty good shape.
Plus any data you have you should validate periodically, once a year is usually sufficient, to make sure it's not corrupt. If it is, then restore from one of your other copies, and replace the faulty drive if necessary.
@htwingnut ty. So I can keep using my old thumb drive, just replace the files?
Holy cow, I had no idea you could drag and drop into the command window
Could make combinatorial path ringers, you get infinite more combinations than zero carry, its just more process intessive, but path ringing means no wasted buses, because combinations parity has the 5's and sevens' zero's compliment by weight of padded radix set sizing.
I didnt even know there is a time limit..... thank you i will go boot up my old ssds now.
One of the reason I don't go backups on SSD, As Data retention my have a correlation to how Written Times they have
Great video, thanks
Great work! Thanks a lot!
in case I get this issue ,how I can use it again ?
Could a 'SSD data retention device' be made? Just a battery, maybe a solar cell as well, and circuitry that would power the SSD on a few times a year? Also, is just the powering on enough, or do all the cells have to be actively addressed?
Of course, for *really* long data storage, the bigger question is, will tech readily available 20, 30 yrs from now even have a place to plug a current day SSD in?
Probably need a sample size... just four drives, eh. Any number of things could have happened to that drive that did the 40 minutes. That seems like a pretty insane jump, it could have been faulty hardware, or other issues. Wish there could have been at least two drives in each category, so that if both of them showed 40 minute times then you'd have more confidence in the test.
This is why you should ALWAYS use spinning disc HDDs for any long term retention, GO NAS!
Nas is network access storage i presume? But what file format? NTFS do not have self healing like ZFS corporate system from Sun-Oracle or indestructible journal like NILFS2.
@@fontenbleau Nas stands for Network Attached Storage.
Who Is Nas
@@esecallum NAS is Network Access Storage.
A box with a bunch of drives (2-12, 4-8 most common), setup into a redundant drive array (usually RAID-5) and accessed via a LAN.
@@fontenbleau yes NAS, the system/file format is not the point here, it is simply separate HARD drives for archive storage.
Way safer that internal drive, or SSD off course. The long term safety of these and backups are a complex matter.
Just saying, have something outside you PC that uses HARD DRIVES.
Fascinating. 🤔
I always thought it would be totally wiped after such a long time.
Sadly even with this testing you would also need to know & consider what type of cells the SSD uses and what 'long term life countermeasures' the drive firmware also uses to help retain (or prematurely damage) the cells. All flash NAND reads and writes are not created equal, you could write entire books about it so I'm not gonna bother doing it here.
People have been fearmongering about SSD stability since they exist, but my real world user experience is I've kept the same SSDs in the same build powered on and off every day for 11+ years with zero speed or reliability degradation.
Meanwhile all the spinning rust I still had sitting around died on their own while powered off by the time I could be bothered to try and salvage them into a centralised storage solution.
Interesting test!
From the graph it looks like your torture tests never touched the capacity beyond 100 megabytes. The read speed jumps up and down until 100 meg than flattens out to the same as the fresh drives.
That be when the drive has internally refreshed the weak nand pages (a background scan runs to check if power level of the chip is to low if it is it rewrites the 256/128mb page elsewhere to avoid using ecc
Very InTeReStiNg (and revealing!)... Well done. Thank you for "taking one for the team". I can cross that off my bucket list. Definitely a sub-worthy video. I should have realized that anyone with "WingNut" in their name would have their act together. Cheers from So.Ca.USA 3rd house on the left (please call before stopping by)
Get help
@@njpme Hey! I resemble that statement. Just saying...
Good video Thank you for sharing :)
Before I watch - I don't believe anyone has Ever seen data disappear from any ssd for being unpowered for any length of time. To me that's silly because they do not power themselves or use any power to store and hold that data. There are also no moving parts. If you search you find, "While SSDs can generally hold data without power, extreme conditions or very long periods without power could potentially lead to data loss" Neither of those two conditions have been met by this video.
SSD's do store a voltage in their NAND cells. Electron leakage can happen over time, which in turn can change the voltage state of that cell, which results in corrupt data because the data in that cell is not accurate.
In my first video I cited sources and described this in detail.
Of course you would end up with less space, but there should be an option (if not default) to just mark the slow parts for 'never use again'. Then, your OS should show the new (smaller) size, or 'remaining' size.
are there any critical sectors like hdd, if it fails the drive is unrecoverable in ssd?
one of my external ssd runs super slow, after moving all the files out (painfully slow), format it, now it runs flawlessly. my ps5 2tb ssd is currently doing the same, so i may need to format it too
Don't format an SSD. TRIM. Windows is intelligent enough to initiate the TRIM command if you use the Windows GUI. Actual defragmenting an SSD will just cause unnecessary wear.
@ not format the entire ssd, just a quick format. my ssds mostly are for data storage, long term. just some read and write from time to time. thanks for info though
Thanks for doing this!
does the defrag with various options improve it's lifetime? chkdsk command in cmd with different options set ?
An SSD does not require power to retain it's contents, any more than a permanent magnet requires power to remain magnetized.
True. But a worn NAND cell can "leak" electrons resulting in a change in voltage which in turn results in corrupt data because it's no longer storing the charge related to the appropriate data.
@@htwingnut Your SSD or USB is a lot safer than your CD or DVD with CD Rot setting in... but before that even happens... it will become unreadable due to... scratches. So much data lost to... scratches.
Harddrives... the head will glue itself to the ramp and the rubber bumper will rot inside...
I've got one 12 years old and no problems
I have a number of 1st gen 36GB single level charge SSD's that are over 15 years old at this point. I tried them not long ago, when I was looking for something to put into an old laptop. Original data was all still there and intact, though I did not do a full CRC check, before reformatting.
I've been on the SSD bandwagon since the beginning and only lost data to OS errors. Never to charge degradation, even for drives sitting on a shelf. I have a crate of them. I'll get 1-5 mechanical drive failures a year. For SSD's - nothing.
Nice. Yeah I still have an original Intel X25 SSD. SLC. SLC is very robust as is MLC. SLC only needs to determine between two charge states, MLC four. So they are less susceptible to corruption.
But modern TLC and QLC are bottom of the barrel when it comes to NAND quality. They use all the tricks using pSLC cache and even DRAM cache to artificially improve performance temporarily. But in the end, most aren't any faster than a traditional hard drive.
They can fail. I had an SSD sit for 2 or 3 years, when I tried to put it back in a pc, windows didn't even see it. Worked just fine when removed.
Why not freeze it and try it?
@@esecallum Did not think of it.
Too bad you can't make them out of what flash drives are made out of. Mine has been washed and dried and has sat for a decade and no noticeable loss that I can detect
the worn one can probably be made very usable with some trimming. just stop using the bad and iffy sectors, lower the total capacity a bit for better usability of the rest.
I think it was data loss, not damage.
i only have one ssd which i've had for a year, it's been installed in my other desktop and i just had my steam library on it, used it for a few months and i had to move in with someone for 3 months during a family emergency, when i came back my games were all corrupted.. i think it's just random chance but ya i don't think i'll be putting any important data like family photos and such on an ssd.
Good video! And yes, it depends very much on the NAND cells. I have an old Intel X25-V, small, slow SSD which after 10 years unused was still ok and with data (Windows) on it, all running fine.
But a 2021 OEM Samsung drive from a laptop which was put in an USB RACK in 2022, stored in a relatively cold environment had now in 2024 errors when copying files after 2 years of no IDLE. And it was slower. I had to rewrite it... It was connected every 2-3 months for backup but not left hours on idle.
But also, the SSD controller will to the optimization in background if connected through an USB-C rack and not directly to the motherboard? Because the racks could enter in IDLE or maybe there are some other factors.
SLC vs TLC or MLC or QLC or whatever is very different.
You know everyone’s drive is the same as leaving it unplugged because that data floating gate will not regenerate its charge by just applying power.
Yes, I fully understand this. And this was brought up in the first video. It requires actively refreshing or moving pages of data to "recharge" the cells.
I have a portable SSD that I'm using just to back up files and I rarely ever load it. How often and for how long do I have to plug it in to keep it from losing the data?
Clear and simple, thanks
Good technology video, thank you!
I can't find if these jajs600m238c are SLC, TLC or QLC. I wonder if a QLC would last a year unpowered.
I woud not be surprised by an SLC cooing with 7 yermars unpowered.
Mind HDD also lose storage with time unpowered due to magnetism...though very slowly (seems my 20 years old HDD is still fine
Hello sir, when you say Worn SSD, that was abused with 60TB Data in/out. And the fresh out is just testing with data without abuse? Because I have so much SSD, and I am really afraid of losing them, though, just powering up helps, but with your test video, will also give me information about SSD and how long they can last without being abused, just purely storage. So far, all my data is intact, I am using crucial MX500 btw. I wanna share info with you, to atleast gave you an idea. I have most of my SSD's from 2020 though. The only abused one that I have is where I installed my O.S. but it is still doing great! I subscribe and bell for you, I will check your uploads with SSD test! I needed somebody who is serious with this data rotting test. The abused one should be something to be wary though because it will always go that way. But with 60TB and still recover some of it, it's good!
Ive been lucky with all my SSDs . i have never lost one. I've lost a few HDDs . my oldest SSD is over 14 years old. the drive I have are micron or Kingston drives. 128gb was my first up to 1tb.
when I update the main drive, if I need more space. i buy a caddy and use the older drive as an external drive.
Actually the standard for SSDs is 1 year unpowered retention. Storing it for 2 years with only 4 errors does not mean the SSD is on its way out
It's not only 4 errors, it's over 200 thousand errors, although corrected by ECC. 4 files were corrupted because of this, and also resulted in a significant speed reduction.
@ thank you for the correction. This is still much better than what is expected though. Consumer SSDs are just not great, and SSDs are not long term data storage.
@@igelbofh You can keep data on flash for very long time, but it mustn't be cycled much for it to be effective. Wear reduces data retention capability a lot. After all, pretty much all microcontrollers use flash for program storage, and they keep working for dozens of years. For example, BIOS on PC motherboard is stored on flash and one can obtain some 30+ years old motherboards that still work fine without any maintenance done on them.
@ yes, but it's not what you think. It's volumetric density. And voltages. A microcontroller will have in the same volume that a 1TB ssd nand chip no more than 64kb of data. Also microcontrollers are not using flash memory - the ones with really long life's are using eprom.
@@igelbofh EEPROM is memory operation mode, it might not be flash memory, it may be magnetic memory, for example. Flash memory is the type of device that used double-gate transistors to store data. Cells can be organized as NOR gates or NAND gates, which makes them different in dynamic operation (NOR faster to read/slower to write and has finer grained random access), but pretty much same in static performance. In cell structure storage gate separated by few nm of isolation from other metal gate that usually grounded, so any other electric field inside the chip be it cell-to-cell or layer-to-layer is going to be dwarfed by it.
About density though, some small 128Mbit NOR flash (which also rated for program storage, so 25-50 years retention) has ~8mm2 die area. Storage density per layer per mm2 is worse, but only about 2-3 times worse than some arbitrary NAND. We'd get stacked NOR ages ago if anyone had seen a point in it.
Should note that hardware ecc corrections is normal, what is intresting and the more important smart attribute is 197 198 (c4 c5 c6) pending relocation (this is when the drive has detected uncorrectable data and requires a write to remap the sector)
A read test can't trigger a relocation (so the logged remaps was due to a write)
cheap, China china ssd's can be unreliable
Interesting, SSDs fail now similarly to HDDs - by failing sectors first. It seems the old warnings work again.
Seems to me that you are not actually doing a two year check, it is a second 1 year check.
First set of disks were offline one year when I checked them last year. This second set of SSD's for this video were offline for two years.
@@htwingnut My mistake, I thought it was the same set of disks in both tests.
you shouldve kept the seed used to generate the random data. then you could generate the exact same data now, and compare and see exactly what changed.
The checksum shows what files are corrupt. I could do a binary comparison, but not sure if that granularity would be all the helpful tbh.
@@htwingnut ig it'd be interesting to see where the bytes had changed. if only a few bytes had changed or a whole lot, in patches or large swaths, etc.
I had some SSDs from 2009 that sat for 10 years and were fine. Alas, then I put them into an enclosure with a fault that essentially electrocuted the driver boards. 🤦🏻♂️
2 years isnt long enough most ssd uses micro supercapacitors coupled with each transistor for each bit with very minimal consumption just to retain the state of each bit it literally holds power for a long time just like how cmos battery can last for years without changing but not as long as hdd does
Now check it with spinrite 6.1 that can fix slow ssd's
I would love to see a SpinRite v6.1 Level 3 recovery/refresh done on the drives after the year 4 tests are complete. Hopefully SpinRite v7 will be out by then.
Level 3 is NOT RECOMMENDED for SSD and SMR spinning drives
You can read this info in menu where you select SpinRite Level