Great video! I recently switched from shucking drives to buying manufacturer-refurbished drives. In a couple of large Synology NASes, my failure rate on shucked drives (12TB and 14TB) has been roughly 50% over 3-4 years. I'd like to believe that the refurb drives have received extra attention from their manufacturer. It's also pretty clear that external drives -- thus shucked drives -- are binned devices that don't pass the requirements to be used as internal drives. Lastly, refurbs are warrantied and cost significantly less than the same models new.
Nice work Alex! Big fan of yours from the JB podcasts. Fun to see you doing your own thing. At some point we'll need an updated post or video tour of your workspace. Mega-desk armada 2.5!
Yeah I guess so!! That’d be some fun content to shoot. Might need to tidy up first LOL
Great vid Alex and great to see you on YT mate. Looking forward to your Ansible content 😄
Great to see you here, a very good start all the best
Thanks a ton!
Long time listener to the JB podcasts, super happy you're on YouTube now!
The 3M's are quite interesting, never thought about it. Thanks!
For a sec I forgot my own script and thought you meant 3M adhesives... LOL
The only potential snag I see with shucking drives is if you end up with an SMR drive. It's kind of a gamble what drive is inside, whether it's SMR, CMR, or PMR.
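For anyone wanting to check what they got after shucking, a rough sketch (assumes smartmontools is installed; `/dev/sdX` is a placeholder for your drive). The reliable route is looking the model number up against the manufacturer's published CMR/SMR lists, since drive-managed SMR usually doesn't advertise itself to the OS:

```shell
# Print the model number so you can check it against the
# manufacturer's CMR/SMR product lists.
sudo smartctl -i /dev/sdX | grep -iE 'model|rotation'

# The kernel's zoned attribute catches host-managed/host-aware SMR,
# but drive-managed SMR (common in external drives) still shows "none".
cat /sys/block/sdX/queue/zoned
```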
I had the same experience with a Seagate drive failing. It was 750GB, and the data loss was a horrible experience (I was also studying at the time). My disk was under warranty, so I got a replacement unit after a few weeks, and, drum roll..... it failed exactly the same way after a week of working in my PC.... I got the next replacement but promised myself never to buy Seagate again...
I know it might happen with the WD drives I'm currently using, but my sorrow over Seagate persists still.
PS. Also I love your podcast Self-Hosted😁
I have found that 1 or 2 TB drives last longer; I have some over 10 years old. If a drive is bad, it will fail in the first year. For most things I have a backup: a working server that is powered on 24/7 for my active projects and Plex, an archive server that I power on 4-5 times a year, just when I need some stuff or to dump all the data I created that year, and one more server that is the backup of the archive.
And I find that buying a hard drive directly from a shop is safer than having it delivered to your home: less risk of the drive banging around in the box of a motorcycle courier and getting damaged in the process. And in case the drive is bad, it's faster and costs less to just walk into the shop and get a warranty replacement than to send it by mail.
We’ve all seen how couriers treat boxes! No one cares about your stuff like you do.
Thanks, sound advice.
I'm still waiting for price parity between SSDs and spinning rust. I can afford to put 20 terabyte hard drives in my media server. A 20 terabyte SSD costs more than my truck.
Until then tiered storage will have to do!
Have you ever thought of having audio books of children's bedtime stories. SOOOTHING
Meanwhile here I am buying only 4TB WD Red Pro hard disks for literally all my systems. I buy a couple of new ones a year, and so far my total storage amount has only been growing. It also makes it super easy to replace a dead drive since they're all the same.
Simplicity of your drive fleet will really help you in the long run. Have you considered density at all? Data ports are limited.
@ktzsystems given my two main pools are 8-bay and only 40% filled at the moment, they should last me at least another decade. 4TB was the biggest size you could buy back in 2012; it was very expensive and seemed comically large. But in the end it was totally worth it. I should have worry-free storage for at least two decades before I have to think about an upgrade path.
That's better than my SSD failure rate. I had 5 SSDs, and 4 of them can no longer write; before that, their speed was down 55%. When that happens, I back up the data and wait for them to die before buying another. Now I just run everything off of a NAS.
Just letting you know, I know what kills them: I run full virus scans every 3 days because I download weird shit, and I take all the processes that you should upload to VirusTotal in chunks. If you can, run them in a VM before you put them on your main machine, but I'm just paranoid.
What if I want to use different manufacturers for drives but also create a ZFS vdev across them for redundancy? Shouldn't drives in a redundant setup be as similar as possible, or are speed (e.g. SATA 3) and capacity the only factors that should be the same?
Yes, that is true. ZFS is a whole other topic when it comes to drive selection and performance. For most people it's about making sure you can maximize throughput to at least gigabit, and everything else is a bonus.
I've not found any issues mixing and matching similarly sized drives from different manufacturers in my mirrored ZFS pools, but I'm not a mega perfectionist on the performance side either.
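For what it's worth, mixing manufacturers in a mirror is straightforward; ZFS just sizes the vdev to the smaller disk. A minimal sketch (pool name and device IDs are hypothetical; `/dev/disk/by-id` paths are safer than `/dev/sdX` since they don't shuffle between boots):

```shell
# Mirror two similarly sized disks from different manufacturers.
# The by-id paths below are placeholders for your actual drives.
zpool create tank mirror \
  /dev/disk/by-id/ata-WDC_WD40EFRX-EXAMPLE \
  /dev/disk/by-id/ata-ST4000VN008-EXAMPLE

# Confirm both halves of the mirror show ONLINE.
zpool status tank
```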
Did you ever make the hard drive burn in ritual video?
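In case it's useful before the video lands, a typical burn-in goes something like this. A hedged sketch only, and destructive: the write pass wipes the disk, so only run it on a brand-new or empty drive (`/dev/sdX` is a placeholder):

```shell
# 1. Baseline: kick off SMART's long self-test and note the starting counters.
sudo smartctl -t long /dev/sdX

# 2. Destructive write/read pass over every sector (WIPES ALL DATA).
sudo badblocks -wsv -b 4096 /dev/sdX

# 3. Re-check SMART afterwards; rising reallocated or pending
#    sector counts mean the drive goes back for RMA.
sudo smartctl -a /dev/sdX | grep -iE 'reallocated|pending|uncorrect'
```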
I've never had an HDD fail. However, I've had even NVMe drives fail, corrupt, or not work right; most of those are QLC though. SSDs are nice, and HDDs, as long as they're high end, can be great. You just have to know their limitations and use PrimoCache and good SSDs instead of crap ones to create a symbiosis that works perfectly for both.
You’ve never had a HDD fail YET. They’re always plotting against us.
You mentioned Unraid. I've been running Unraid for a few years now, similar to your experience with old/bad hard drives. Do you still use Unraid, or have you moved on to something more flexible? Love me some #Selfhosted
Would this be a bad time to point to perfectmediaserver.com? :)
I used unraid for many years and it taught me a lot but eventually I outgrew its safety rails and found a comfortable home on "real" Linux.
Yes, yes it would. J/K. Just ran across that the other day and bookmarked it. Glad I found it. I've been going through my old external 8TB HD with files put on there over the years. I've found many, many duplicates and tried many different programs. It's time for a NAS system; I've wasted so much time. Eventually, in 3-5 years, I'll let AI run locally on there to organize all the pictures and videos that are not named correctly, and some for sure are duplicates. Thanks for your channel. Just looked up your podcast and saved it.
🔥