@@wertigon Oh gawd not even close. My company would charge maybe a few hundred a month for server monitoring. It's actually quite inexpensive, you don't need a dedicated employee. It really depends on how many hours of work that's needed. Usually for management, it's a couple hours a week tops. If there's a failure, just get someone on site to do a swap, press a few buttons and you're good to go. For heavy errors, there are several alternate methods.
@@lilkittygirl But then you're not paying for an IT staff but for one guy to come over every once in a while, just like an electrician or a plumber. A full time decent IT guy can easily make $60k a year, and that's just counting what the IT guy gets in his pocket.
Can't wait until 15 years later when there's going to be "The Exabyte Project", which theoretically can fit 1000(or 1024) of these petabyte servers in the same space
Not sure about IT department but what you guys REALLY need is a montage of every hard drive locked in with nice clicking sound like they do on LEGO building channels. 100% satisfaction guaranteed.
I watch a guy tear down engines and he edits his videos so that the cracking loose of the head bolts are ripple-edited for that effect. It's one of my favorite things he does.
Guys, I'm absolutely loving the fact that you have a new Petabyte project but please get a new IT guy. As an entire team of people who are well-versed at networking and computers, it doesn't make sense to get an IT guy at first, but in the grand scheme of things, you need someone who's able to attend to that equipment 24/7
Proper IT is a different skill set than "tech stuff" and I don't think anyone we've seen qualified. Yet either way they definitely need someone even if only because of limited time for current employees to fix the servers.
About bot destroyer, we can't be sure bcz we don't know what comments are being deleted and how many genuine ones are being flagged as spam, but I believe ltx did some monitoring on it so ye just be careful with it I guess
@@ncb4_69 Mine got deleted, and it was just me talking if the drives were sent to LS for free. So yeah, we will have to deal with it. for the greater good
PSA: zfs also lets you add in a hot spare that will _automatically_ replace a failed drive. You can configure it to be the hot spare for multiple zpools. When a drive fails, it drops itself into the array to restore parity, then when you've replaced/fixed the bad drive it'll go back to being a spare until the next time it's needed.
Really hope Linus can take advantage of this. Even if it’s not automatic, just telling the server to rebuild with the spare already inside could be really convenient!
@@tim3172 I think they determined to NOT do that and just rely on alerting from the software raid controller. Their first video says it's to expensive to back it up and the data isn't that important, so still No.
@@johnathanera5863 Not quite, because your missing all the files needed to make them as well, from adobe templates, to the specific components of the video that get put into their video editor of choice. Their is a lot more to it than just 'its on UA-cam, why care' because your seeing the finale product, not every part used in the whole process.
I can remember installing hard drives in that form factor -- 1" tall, 3.5" wide -- that could hold 120 MEGABYTES. And I couldn't believe then how much data they contained, considering that there were 80MB drives that were 2.5" tall and 5.25" wide still knocking around. And now these drives will store more than 160,000 TIMES as much data in the same space. Boggles the mind!
@@nesyboi9421 indeed, it was. I’ve been doing PC upgrades/repairs since 1988 or so. The first PC I ever built from scratch was a 386DX running at 25 MHz, 4 MB of RAM, an 80 MB SCSI hard drive, and a VGA card with 512 KB of VRAM on it, plus a 15” VGA color monitor. Cost a stupid amount of money, too - a bit over $3K in April of 1989. That’s the equivalent of about $7K today. 😳
@@fynkozari9271 They basically save footage, everything, which isn't worth it in my opinion, I would only save the final edit and a few important clips. Especially if they don't take care of it like they did and let the data degrade.
@@farkasambrus5741 Yep, not really worth it just for the footage itself. They say it themselves, partly it's more for the fun of it, and as a pretext to getting these absurd servers so they can make videos out of it.
Didn't Linus say in Wanshow at some point that "No one would want 20Tb harddrives?" Because of their sata interface it will take stupidly long to rebuild parity drives in case of a failure. So it would be quite likely that another drive fails during the rebuild. But hey I guess another data loss will make for more content that they don't have to plan.
It made e shudder seeing the 20TB drives, Linus did say you dont want 20TB drives in a regular server, this is archive data and hopefully they will actually manage the the server and swap it out in time.
I love Jake and Linus' banter and chemistry. please do more videos with you two, and only you two. it is hilarious. one of your best videos comedy wise in my opinion.
Now thats a healthy work relationship between a boss and employee.. Where they both rip at each other without whole "Remember who you are talking to" Hopefully you can recover the data and that it will take even longer until another data loss happens.
@@xzaz2 I suppose that depends on if your work says so..but I don't think it's unprofessional at all.. obviously it's not unprofessional at LTT because you see various in employees in shorts
@@jubuttib They are 285 american bucks on amazon. But if you say in Euros it literally depends on where you live. In Germany they sell the 16TB version for 260€ at alternate but only let you buy one (the dumbest thing I've seen for an enterprise drive where nobody ever runs a single drive).
Old Linus: Oh noes, we lost data because we did not have sufficient redundancy to overcome our poor preventive maintenance habits. New Linus: Lets deploy a new cluster, with even more storage to maintain, but with less redundancy, and then have the same people manage it.
45 drives is an amazing company. I had a very specific request for a custom server chassis design and they put it together no problem. Great people over there!
Just get one of those lifters that are used for motors by car mechanics. Use suspensions, dumpers and springs to minimize vibration and you are set! You can even make your own to carry heavy stuff around the office.
Oh man, pickle Linus is such a flashback. The old "livestream from a garage" days, so much nostalgia. I bet Edzel doesn't miss editing on an X79 ShuttlePC. For some reason I also just remembered the stream where Linus had to explain how a thrown USB flashdrive shut down the forum for a few hours. I can hardly believe the fact that I've seen *every single* episode of what we now call the WAN show. It was also the reason I made a Twitter account in the first place.
Lol, I literally remember Linus saying like 4-5 years ago that he would never want a 20TB hard drive because if it fails in a raid it would take way too long to rebuild it.
@@Wicked_Carnifex he actually talked about that in the last video. It’s a mix of ‚nice to have‘ and ‚it makes for good content‘. They get the drives for free, so why wouldn’t they do it
My dream used to be a 1 TB storage server because when I started in IT in the late 70's the data center I worked at was one of only 13 TB data centers in the world. Now I can get that on a postage stamp.
Linus and Jake poking fun at each other adds quite a bit to the entertainment of this. It makes a workday a lot more pleasant aswell as opposed to some super serious strict work only approach
I remember the day when 10MB in an old XT computer was about the size of a shoebox and you would have to use programs to low-level format the drive occasionally to keep things "Lined up". A company I worked for spent $1,000,000 for 1MB of RAM spread over 4, 7 foot by 19" cabinets. Whoa Dude! We are living in the future!
@@tanmay4217 One of these days you're going to say, "I remember when they came out with the 25 TB drives!" Some young punk is going to reply, "WHY? That's so small that you can't even put an operating system a drive that small. What did you use it for, still photos? LOLOLOL" When that happens...I want you to remember this day and your comment.
@@nobody7817 normal people don't use that much data but in linus case they use that much space for their work which takes a lot of space like you in the video
it's a pity that 3-2-1 backup strategy is falling out of favor for some years now. As usual Linus is badly advised by its systems engineers/partners: nowadays the best/most effective backup strategies are 3-2-1-1-0 and 4-3-2 :-)
The essential data is backed up, such as whnnock server is backed up twice, once offsite at the lab and again in a much farther away backup server in a datacenter.
One of the best videos in a long time. Entertaining to see Jake and Linus mocking each other. Also, great editing transferring the chemistry between them. It never gets boring, even though such a topic has the potential to do precisely that. Thanks!
I actually had my 1tb Samsung 970 trip SMART and lock itself a couple of weeks ago with no warnings. After reading it in another PC it failed for extended high temps. It made the entire drive Read Only and I have no way to re-flash the SMART status to re-use the drive besides sending it to Samsung (which they may or may not do).
Well the first statement was in regards to normal consumers. Linus is an enterprise consumer. There isn't much of a need for normal consumers to have 20TB at the moment. Maybe in 10-15 years file sizes might reach a point where 10s of terabytes make sense.
Funny even though Linus' critique on R/W Speed vs. Data Density ratio of current HDDs (and thereby it results in compromises to reliability and other stuff) is actually valid (especially for normie consumers)... Most server users don't have much choice is the counter argument which he himself is ironically suffering now. I mean the two other alternative in the extreme is: 1) Pure SSD build (too expensive for 2PB, and an overkill performance for them), or 2) LTO Tapes (cheap and dense, but need a separate infrastructure for to be used effectively... unless they put Jake on 24/7 shifts to load tape back and forth).
@@shawno8253 annoying thing is thats self fulfilling prophecy, "oh it didn't do well in Australia 10 years ago, so we aren't going to ever try again" I just hate how that is the excuse, cause it does mean as an Aussie tech, its really fricking hard to source parts cause "we don't usually get much call for that" then again, we also don't get a lot of movies or shows cause apparently "we pirate everything"...its almost like if they released it over here, we wouldn't have the need to consider that?
I've been trying to import Dell constellation ES.3 (3tb) from the states to Australia too as its pretty cheap per tb (about $25). I haven't had a failure from them yet.
Linus: "Andy likes it" Jake: "You pay him to like it" Linus: "I pay YOU to like it, just one of you is better at their job" Funniest thing I've heard in ages
Make sure to double check your ashift is set correctly on the pool! Can't change it without trashing the pool. Usually the auto detect works but I'm always paranoid the SSDs will throw it off.
Not sure what "ashift" is in this context, but the thought of a high end software system having terrible defaults is both scary and familiar as someone planning major server rework today.
No they have the other 2 that are Linux tech proficient (mostly as result of last server setup fail) hopefully he is using z3 this time (nope still z2 with 15 disks per vdev) smart scans and zfs scub with all notifications enabled (he using truenas so not a roll your own setup he used last time) Selecting each disk individually is not needed it can select all the disks automatically when creating the pool
@@user-np7kr4wx7g not only does he not have enough experience and time to deal with servers constantly, but a great IT person generally has a different skill set and background/training than even the most knowledgeable consumer-tech enthusiasts. While we don't see the server-operations much I don't think anyone they have is fully qualified for such a position and those who are 90% there have a lot of other stuff happening.
Cool stuff. I used to work in a NOC that dealt with a lot of drive repair. Cold & hot spares both have their uses and should be kept available. More devices failing while you rebuild is what gets you so you do *not* want to waste time on having a replacement shipped if you can avoid it. Few hundred bucks on an idle spare is way cheaper than data recovery or a catastrophic loss.
It also depends on economy of scale. You need a huge number of drives to make keeping your own stockpile of spares profitable compared to waiting a few days for each new drive.
I bought a 30 drive Storinator cause of you guys. It’s been great. Went with 18TB drives. Put Unraid on it. Stuck with the Intel Xeon Gold 26-core though. 45 Drives that company is awesome to deal with. Thank you guys. Lovin the content as usual.
As an actual IT sys admin I am so, so sorry. You poor soul. But hey, if it's just for your home Plex server and you're not actually trying to run a business off it, you'll probably be OK.
@@Chakratos It depends on what you're doing with it. How much nuance do you want? If you're building a home storage server that doesn't host any critical data - like a Plex server - this is fine. If you're running a multi-million dollar media, merchandise, and software company and storing business critical data, this is NOT fine. This kind of setup has no redundancy and barely any fault tolerance. If more than two drives fail, which this LTT series demonstrates is entirely possible and even likely, you can lose literally everything. Nevermind if your backplane fails or your motherboard fails or you get a power spike that fries the whole thing. LTT's previous same-as-this archive server failed, which is why we get this video in the first place. That failure means a half dozen of their staff had to spend weeks investigating, troubleshooting, attempting repairs, speccing a replacement, shopping, ordering, sorting, prepping for the build, performing the data migration, checking the data, rechecking the data, attempting to repair faulty data, scrubbing old data, and repurposing the old hardware. That's just the technical staff who will loose months of productivity fixing this problem, many of them working nights and weekends, ignoring their families and personal lives. That does not include the writers, videographers, gaffers, or editors who had to drop what they were doing to make this video. That does not include the accounting staff who has to budget all of that capital and labor expense. That does not include the lost productivity when editors have to twiddle their thumbs for weeks waiting for this recovery to finish so they can access the data they need. The cost for this failure can easily hit half a million dollars before you even count the cost of the hardware they just bought. And hell, we haven't even considered the emotional toll this is taking on everyone involved, including the families of the people LMG employs. A proper storage server, or better, a SAN, is fully redundant, fully fault tolerant, highly available, and warrantied. It might cost $100k for the 2pb of storage, but it *just run * for at least 7 years. Every drive, every component, covered under a 4 hour onsite replacement warranty. If your life is the only one that gets ruined when your home media server goes down, do what ya like, this kinda thing is fine (and certainly makes good ego-fodder). But when you have a couple dozen people whose livelihoods depend upon your servers being reliable, you don't half-ass DIY your IT infrastructure and cross your fingers.
I find that hilarious - when someone buys “enterprise hardware” with much lower performance, capacity and reliability at a much higher price “for the support” which when it comes down to it , may or may not save your data
@@tomorrow6 Any drive going into a Raid array needs to be an Enterprise drive as a standard drive will degrade and fail much faster, Not worth the risk when storing large amounts of data across multiple drives. 20 years of Experience in PC/Server builds I can assure you no one is paying for the support whatsoever!
@@spendy26 I’d suggest that they fail as quickly as consumer drives of similar capacity (or did before SSD’s) - Backblaze published drive failure statistics were a good source for non enterprise disk failure rates which didn’t differ too much from enterprise. However - enterprise raid controllers with sufficient cpu and battery backed cache did allow for special drive setups for maximum fault tolerance at the cost of more wasted capacity. And of course SCSI drivers (including SAS) failed much more cleanly than consumer IDE drives Plus the vendors did publish their own firmware updates based on faults experienced at other customers, especially if you paid for the extended support past four years. Enterprise software did offer predictive drive failure notification which allowed drives to be swapped out in raid sets online with minimal chance of data loss (albeit a slowdown as the raid array was rebuilt)
13:46 all these years of advancements and we don't have two tiny LEDs on those drives (one the "back" side of course, where they'd be visible). Have a power and a status LED, and you'd be all good to go
That's extra electronics not built for the process of the component doing its task and wastes power, especially for how many are in a server rack at a time, plus a power and status LED doesn't help as their are multiple power and status states of drive that no one could remember color code to just by memory..
I mean, if it's going to be neglected--maybe do full mirror, those last just about forever. And investigate why so many drive failures--that's too many for a small (in total drive count) setup.
Linus, I love how even though the advertising segments are obviously recorded in a different scene, at a different time, and probably well in advance, you are smiling from ear to ear. Chuffed, just knowing you're gonna have a great segway when the advert is eventually used. It's like past-future smugness. xD
A few years back LTT bought a Tape Drive for backing up the Petabyte server, was that still doing backups? If so, I'm hoping that we didn't lose all those other Linus Pickle pictures in existence.
The tape drive was for business critical files, and to the best of my knowledge is still being used for it. The data LTT lost was non essential archive videos, which have a complete backup on UA-cam. So any further backup was both unneeded and impractical.
I like to imagine that in a few decades, people will be looking back at Petabyte server videos the way we currently look back at videos of the Saturn V computer and how the bits in it's memory also had to be put together by hand, like how they have to put in the drives into this larger case. Very cool thought
I would love to just see or touch some of this tech in person in my lifetime. I'm way too poor for a gaming PC. Can't even afford a console of any kind rn lol. Love watching nonetheless!
He keeps referring to what he had as "bit rot" it's not, it was mostly failed disks combined with a healthy dose of incompetence. I'm not going to say there was *no* bit rot, but it was probably pretty close to none, bit rot is slow and fairly rare, caused over long periods of time by cosmic rays and things, they will flip a bit here and there. What he had was varying degrees of disk failure. ZFS and enterprise/nas class storage devices do not try very hard to read problem areas, they just fail the disk and move on. If you get into a situation where you have an irrecoverable number of disks failed then that array needs to be taken offline, and the disks in an error state either sent to a data recovery house, or, start making an image yourself using things like ddrescue which will try quite hard to create a complete image of a semi functional disk. I've had multiple clients over the years with parity breaking levels of drive failure and yet I've never not been able to get the data back. It's probably too late for them now though if they onlined the disks and ran a scrub.
I don't understand this either. Why they would actually turn on the drives and actually connect them to an array instead of backing them up with dd one by one is beyond me. That's one of the basics of how to recover from a broken disk.
@@entelin Yeah, I’m not so much of a recovery expert, I never had to do it myself, by ‘dd’ I meant ‘disk cloning tool that copies raw device blocks vs file system level’, I’m sure there’s tools out there as you said in your main comment that are designed for this.
Bit rot isn't caused by cosmic rays. It's caused by the magnetic field on the metal disk platter decaying, which is a function of the material, size of the bit, and quantum mechanics. Bit rot is a lot more common as you go up in disk size. Which is why RAID consistency checks are a thing.
I know this stuff is fun and of course is great content, but you would have thought you would learn your lesson by now. When you have this much storage, just get some actual, purpose built enterprise storage. For backups, do actual enterprise class backups with something like a Data Domain, LTO, or even Amazon Glacier as the backup target. At the VERY least, get some hot spares in the array so that if a drive does go down, it can start rebuilding right away.
His last issue was with important mid production videos. Ones not published to UA-cam. This is an archive server, used for back up of old videos. He cares much less about if they fail. Because UA-cam backs them up compressed. He mentioned on the Wan Show that cloud storage, or even a second server costs too much to justify.
@@darrenhjki That's why I mentioned LTO tape backup, those are fairly inexpensive. AWS glacier is also pretty cheap and most of the costs involved come with recalling the data. While not as cheap as LTO, would still be a pretty inexpensive solution. Just seems when you are talking about this much data a DIY solution might not be the best route. Since this is mostly backups anyway, cold storage, like LTO, will always be cheaper than this beast they are building, especially when you factor in electricity, the manpower to build it, and the labor time spent when it ultimately fails in a few years.
@@nospam4chris Linus is just in love with the idea of being able to pull in any footage from his entire online life, instantly. It does drive content and lets him play with big-boy hardware and concepts; however half-assedly.
Just looked at the datasheet for those drives, and it blows my mind to learn that they are CMR, not SMR! Seagate rates them at 285MB/s, which is bananas!
i love these kinds of videos where i cant understand a single thing they say, but i know if i continue to watch their vids for a few months and i will probably have enough knowledge to teach others
Please Please Please make more footage like this, it's actually interesting and makes me want to watch more. You could even create a new channel and incorporate all the LTT behind the scenes server stuff including the new house build ;-)
Dude, I hope my future boss is like Linus, he seems so cool and fun to work with, and in the far future I hope to be like him :) in these challenging times I wish all the good for you and your crew Linus! 😊
As a Storage Architect for a living, allow me to say with the complexity you have in mind and the storage pool sizes, I HIGHLY recommend using Starwind as your virtual SAN software. It's been around for many years and has amazing capability like spanning physical NAS hardware presenting one storage pool. The big thing is it's incredibly reliable and data loss prevention and excellent recovery is their strength guys.
I love this kind of content! Would be cool if Seagate also sponsored this video with like a 2 20tb drive NAS setup so us normal people had a chance at even sniffing this kind of storage solution.
@Monochromatik Seagate Exos is a different beast than standard consumer Seagate or even ironwolf. Same for WD Gold vs WD red, especially considering WD owns Hitachi storage.
Didn't you said that the 20 TB hard drives are rubbish, because if one fail, it takes an extremely long time (several days) until all the data is recovered to a new one?
I've had so many HDD failures over the years that I'd still be terrified of my only backups being on them. I know SSDs are significantly more expensive but it's worth it for the peace of mind, IMO. Not that SSD failures are impossible, but far more unlikely in my experience. Oh, and they're a lot lighter to carry. ;)
This level of data storage should be done with a tape drive to be more secure. Keep the tapes for cold storage (off-site in case of fire), HDD for faster storage and SSD for maximum speed.
These kinds of professional level HDDs are much less likely to fail, and having them in a raid configuration like this means that you have to lose at least 3 disks before you lose data, and they'll be alerted if any of them fail so that they can rebuild from parity data
13 drives in RaidZ2? Can't wait to see this video again in 3 years or so. At least go RaidZ3 for this many drives in a vDEV. Especially with 20TB drives. Resilvering can take long enough for a 13 drive 20TB vDEV that you can lose 2 drives during rebuilding the array, just give up the extra drive.
It's actually even worse than that: they put 15 drives in each vdev and it didn't look as if they set aside some drives as hot spares. At this point they are just begging to lose their data - again...
Wasnt their 16TB drive "cutting edge" when they used it? Sure it failed in the end, but that is also related to the fact that no one checked them ever.
Spread out over multiple servers is actually a good thing. At least when it fails you won't lose everything at once and it will be faster to rebuild when one does fail. Nothing is worse than taking a week to rebuild only to have another failure during that process (usually an unrecoverable catastrophe).
MicroSD aren't dirty cheap MicroSD might perform worse than a hard drive Also, 20TB in that hard disk is a achievement cuz, it's just few years ago that we had low capacity hard drives
@@cry-0432 in terms of storage density by volume it's still way more impressive than spinning rust. If I have a 20 TB 3.5" in front of me I'm really not impressed anymore. Show me 8 trillion bits that fit onto my fingernail and I genuinely have trouble believing that's possible.
I wish I could work at a place like this sometimes. I love everything computers, even the crazy glitches that happen seemingly for no reason until one setting change fixes it.
Linus: Our Seagate drives had a bunch of failures and we lost our data. Also Linus: We got a lot more Seagate drives for our new storage server. Seagate has a long-standing history of making garbage.
6:57 Yvonne Ho has a brain and a adult human brain has the ability to store the equivalent of 2.5 million gigabytes digital memory . Wrong choise Linus
Your storage failed because you failed to monitor it, rather than accept the same process will work again, use a proper enclosure with status LEDs on all the bays. If your regular monitoring notifications fail, atleast you'll still notice when someone is in the server room that there's a red light.
Their storage failed because it was done wrong from the start. And they're doing it wrong again. Don't "use a proper enclosure", don't rely on status LEDs. Buy a real SAN with active monitoring and predictive failure alerting. Hire a real IT staff and do it the right way. Yours is the second comment I've seen about status LEDs and all I can think is ya'll must have had a help desk job in the 1980s working in the stone age before e-mail or CRM existed. Here's how it works today: You buy a Dell EMC SAN. You set up alerts to send an e-mail to your IT mailbox. Your IT mailbox is monitored by a CRM that generates a ticket, sets a priority, and assigns it to an IT tech. For predictive failures and maintenance, the tech sees the ticket in their queue and works it FIFO. For hardware alerts the ticket priority initiates a page to your on-call tech who opens a support case with Dell who dispatches a tech with replacement hardware. The Dell tech arrives within 4 hours (usually under 2) of you opening the ticket and replaces the hardware for you. No downtime, no data loss. If you skip the SAN and just want a storage server, you buy a Dell, install OMSA, and your RMM picks up the hardware alerts in the SYSTEM event log, generates a ticket in your CRM, and all the same stuff happens. A storage server won't give you the scalability or redundancy that a SAN would, but could be a few pennies cheaper. Of course, if you need petabytes of storage you need a SAN. Period. No status LEDs. Reliable automated monitoring and alerting. If you think you need a backup for your monitoring, then your monitoring isn't done right in the first place. When IT is done right, nobody ever sees it. It just works. You might set foot in your dc or mdf once in a year to patch a network port. What good are status LEDs when you're never there to see them?
@@pufthemajicdragon I'm not suggesting to exclusively 'rely' on status LEDs, which is why I said 'if regular monitoring notifications fail'. RMM / SMTP alerts frequently fail (take 365's new anti-malware policies filtering IP whitelisted connectors or SPF record updates - an onsite system such as this could easily be forgotten about and the emails simply fail to deliver). A reliable system has multiple layers of redundancy and regular auditing would also be a part of it too. Dell EMC and HPE are both very expensive options and put a significant premium on their drive prices for the same day onsite warranty service, he's not going spend that or pay to put on IT staff to actively monitor a storage archive that was a cost cutting measure to start with. 'No status LEDs' - Notice the Dell EMC Sans still have status lights? Rather than assume people are lesser for not suggesting the options where money grows on trees, consider the realistic one.
I love this project, its so outrageous for a regular PC user that might have 1 or 2 TB onboard, but I can't say I'm not disappointed. Its called PETAbyte project yet you only get 895TB of writable space. I know theres a difference between RAW and formatted storage, but I want to see a PB of usable storage dammit!
9:35 "Are they SAS or SATA?" Technically both I guess? They are SAS ports and those can pass through SATA connections. You can easily find SAS to SATA breakouts for that reason.
This is pretty decent price. I just ordered an additional shelf for my company's storage with 24x960GB SSDs for 50kEUR :D. Vendor locking and service fees suck.
@@carltonleboss that's not how it works. Having, for example 4 6TB drives rather than 2 12TB drives gives you: - more speed, since you can write to all 4 at once - more redundancy and reliability, more of them can break and you'd lose less or no data - I thought I had more but I don't, unless you count more flexibility in upgrades or something.
Linus in 2016: loses data
Also linus: this will never happen again
Linus in 2022:
see you in 6 years when Linus loses more data
Probably has some serious PTSD
@@supercool_saiyan5670 Might as well call the series “Linus’ Bizarre Server Adventure” given the recurring nature of server loss over there.
And it's always related to Seagate :-D
@@supercool_saiyan5670
12 years later
Linus: guys my entire server is gone and my IT team left me
i hope this doesn't happen fr tho
I get hyped for seeing more of the long-term projects at LMG/LTT, like Petabyte Project/Server Room stuff, Pyramid PC and Linux Challenge.
Yea man same
It's like the LTT lore and the other videos are filler episodes
@@neuro3423 fair, but at least with LTT the filler is entertaining
Always fun when they have trouble...
Nothing will ever live up to ‘Whole room water cooling”. Event if it didn’t actually work.
1 Petabyte of Data: $37k
lacking an IT department: $0
Linus making multiple Petabyte project videos: Priceless
to be fair $37k is like, six months worth of a good IT technician, so...
@@wertigon LPT: just brute force more storage to avoid paying a tech admin.
$37K for a regular person. $0 for him. He got all those drives for free.
@@wertigon Oh gawd not even close.
My company would charge maybe a few hundred a month for server monitoring. It's actually quite inexpensive, you don't need a dedicated employee. It really depends on how many hours of work that's needed.
Usually for management, it's a couple hours a week tops. If there's a failure, just get someone on site to do a swap, press a few buttons and you're good to go.
For heavy errors, there are several alternate methods.
@@lilkittygirl But then you're not paying for an IT staff but for one guy to come over every once in a while, just like an electrician or a plumber.
A full time decent IT guy can easily make $60k a year, and that's just counting what the IT guy gets in his pocket.
Can't wait until 15 years later when there's going to be "The Exabyte Project", which theoretically can fit 1000(or 1024) of these petabyte servers in the same space
The fact that the zettabyte project will probably be a thing in our lifetime just blows my mind.
@@TH_5094 Unless we ditch computers entirely by then
@@drakonua Yep, it's back to rocks and sticks by then.
Linus: *coughs* "well these random coughs sure gets to you when you are old, sorry"
by then we will be using crystals for storage.
Not sure about IT department but what you guys REALLY need is a montage of every hard drive locked in with nice clicking sound like they do on LEGO building channels. 100% satisfaction guaranteed.
I watch a guy tear down engines and he edits his videos so that the cracking loose of the head bolts are ripple-edited for that effect. It's one of my favorite things he does.
@@TechGorilla1987 channel name?
@@muzameela2845 I Do Cars is the channel name. He does teardowns on various engines.
Its on their onlyfans
Too bad fan noise would ruin it
Guys, I'm absolutely loving the fact that you have a new Petabyte project but please get a new IT guy.
As an entire team of people who are well-versed at networking and computers, it doesn't make sense to get an IT guy at first, but in the grand scheme of things, you need someone who's able to attend to that equipment 24/7
Proper IT is a different skill set than "tech stuff" and I don't think anyone we've seen qualified. Yet either way they definitely need someone even if only because of limited time for current employees to fix the servers.
Get a new IT guy? They need to get someone as a dedicated IT guy first, they've never had one, as far as I remember.
They keep going with Seagate drives too... seesh it's like they don't learn.
Ya he went over this in the last video on the subject
Not even a guy, just a monthly up keeper
I can’t wait to see the result of this project.
Also, the bot destroyer seems to be working well.
About bot destroyer, we can't be sure bcz we don't know what comments are being deleted and how many genuine ones are being flagged as spam, but I believe ltx did some monitoring on it so ye just be careful with it I guess
@@ncb4_69 Mine got deleted, and it was just me talking if the drives were sent to LS for free.
So yeah, we will have to deal with it. for the greater good
@@Ave-S But you don't know if it was the bot destroyer or YT itself.
@@Ruhrpottpatriot Oh shit. fair point
@@ncb4_69 we can’t measure the collateral damage, yes, but we can clearly see a night and day difference in the amount of spam in the comment section.
Probably one of my favourite LTT episodes purely because of the back and forth between Linus and Jake 😂 just like my boss and I at work. I love it.
You should marry the boss.
@@fynkozari9271 So true
PSA: zfs also lets you add in a hot spare that will _automatically_ replace a failed drive. You can configure it to be the hot spare for multiple zpools. When a drive fails, it drops itself into the array to restore parity, then when you've replaced/fixed the bad drive it'll go back to being a spare until the next time it's needed.
Not a PRO. Hot Spare additions are not automatic because a complete failure needs to flagged, and its rarely the case.
Really hope Linus can take advantage of this. Even if it’s not automatic, just telling the server to rebuild with the spare already inside could be really convenient!
Question, will it be on while it's sitting there doing nothing?
@@nneeerrrd Linus knows a lot of things. Not necessarily well in any, though.
IT person interviewing for job: What is your data retention policy?
Linus: Yes
IT:How many years?
Linus: Yes
Yes, except when we misconfigure our storage and lose hundreds of TB ¯\_(ツ)_/¯
With single point of failure, both answers are "no"
@@tim3172 I think they determined to NOT do that and just rely on alerting from the software raid controller. Their first video says it's to expensive to back it up and the data isn't that important, so still No.
@@ScottZupek it's on youtube. That's their backup lol. They dont need literally any of this
@@johnathanera5863 Not quite, because your missing all the files needed to make them as well, from adobe templates, to the specific components of the video that get put into their video editor of choice.
Their is a lot more to it than just 'its on UA-cam, why care' because your seeing the finale product, not every part used in the whole process.
I can remember installing hard drives in that form factor -- 1" tall, 3.5" wide -- that could hold 120 MEGABYTES. And I couldn't believe then how much data they contained, considering that there were 80MB drives that were 2.5" tall and 5.25" wide still knocking around. And now these drives will store more than 160,000 TIMES as much data in the same space. Boggles the mind!
Wow that must have been a while ago
@@nesyboi9421 indeed, it was. I’ve been doing PC upgrades/repairs since 1988 or so. The first PC I ever built from scratch was a 386DX running at 25 MHz, 4 MB of RAM, an 80 MB SCSI hard drive, and a VGA card with 512 KB of VRAM on it, plus a 15” VGA color monitor. Cost a stupid amount of money, too - a bit over $3K in April of 1989. That’s the equivalent of about $7K today. 😳
I still have one of those hard drives!
At this rate, i wouldnt be surprised if this amount of data today could fit in a small usb later on the future
I remember paying an additional $700 to have a 20MB HD in my computer instead of a 2nd 3 1/4 inch floppy.
Linus 2 years ago: "Why you DON'T want a 20TB Hard Drive"
Linus now: Buys 60 20TB Drives
And says filling them will take at least 2 weeks
What did he put in those petabytes of hard disk drives??
@@fynkozari9271 They basically save footage, everything, which isn't worth it in my opinion, I would only save the final edit and a few important clips. Especially if they don't take care of it like they did and let the data degrade.
@@farkasambrus5741 lots of film and tv got lost to time with that thinking
@@farkasambrus5741 Yep, not really worth it just for the footage itself. They say it themselves, partly it's more for the fun of it, and as a pretext to getting these absurd servers so they can make videos out of it.
Didn't Linus say in Wanshow at some point that "No one would want 20Tb harddrives?" Because of their sata interface it will take stupidly long to rebuild parity drives in case of a failure. So it would be quite likely that another drive fails during the rebuild. But hey I guess another data loss will make for more content that they don't have to plan.
lmao great point.
That's fine they don't use monitoring or scrubs so they never need to rebuild!
they're using raidZ2 tho, so if another drive fails during a rebuild it's still ok. 2 drives failing while the 1st one rebuilds is still very unlikely
bah, just made that comment myself and then saw yours. So much of what they do is eyerolling but at least it's entertainment.
It made e shudder seeing the 20TB drives, Linus did say you dont want 20TB drives in a regular server, this is archive data and hopefully they will actually manage the the server and swap it out in time.
I deployed 30PB of capacity yesterday. 2 days to install and cable the storage. About 40 minutes to configure with Ansible. Sooooo cool
Ansible is beyond dope!
I love Jake and Linus' banter and chemistry. please do more videos with you two, and only you two. it is hilarious. one of your best videos comedy wise in my opinion.
James & Riley. Alex & Linus. Brian the Electrician and Brian the Electrician.
Now thats a healthy work relationship between a boss and employee..
Where they both rip at each other without whole "Remember who you are talking to"
Hopefully you can recover the data and that it will take even longer until another data loss happens.
absolutely love the videos with Jake...seems like one of the few that can go jab for jab with Linus..fun to watch...more Jake vids
That guy is a joke he knows nothing. This setup will fail after 2 drives. It's a mess.. also he wears shorts to work.
@@xzaz2 what is wrong with shorts?? that comment alone is why you are absolutely so wrong
@@EddyWhitaker it's unprofessional
@@xzaz2 I suppose that depends on if your work says so..but I don't think it's unprofessional at all.. obviously it's not unprofessional at LTT because you see various in employees in shorts
@@EddyWhitaker it is. Ltt are not professionals at this topic. They have no clue what they are doing lol
I remember when the first 10 TB drives came on the market, and now 16 TB drives cost $280. Truly cool to see the rate of progress
Not really huge progress, just making smaller magnets on 20TB floppy disks
$280? Where? I could really use some, and the best sensible deal I've seen has been 14 TB Exos for 280€.
@@jubuttib They are 285 american bucks on amazon. But if you say in Euros it literally depends on where you live. In Germany they sell the 16TB version for 260€ at alternate but only let you buy one (the dumbest thing I've seen for an enterprise drive where nobody ever runs a single drive).
All the banter Linus has with his employees and guests make me realize that he’s the Conan of the tech world
Golden Comment !!
Lol, you get it
That's basically why I watch these videos lol. What the hell so I care about a petabyte server.
You nailed that.
@teamcoco Lets get Conan on LTT
Old Linus: Oh noes, we lost data because we did not have sufficient redundancy to overcome our poor preventive maintenance habits.
New Linus: Lets deploy a new cluster, with even more storage to maintain, but with less redundancy, and then have the same people manage it.
I love when Linus contradicts himself just to put together the LARGEST storage server ever used by a UA-camr ever 😁
With technology, 'ever' is not something that lasts very long.
@@bongosbongos ever just means till that very moment
but not in high performance computing.... :) PBs is small potatoes.
@@jeremygmail oh nah fr?
You are everywhere
45 drives is an amazing company. I had a very specific request for a custom server chassis design and they put it together no problem. Great people over there!
Appreciate the love!
Watching Jake and Linus in videos is hilarious. The roasting between friends is super entertaining.
Just get one of those lifters that are used for motors by car mechanics. Use suspensions, dumpers and springs to minimize vibration and you are set! You can even make your own to carry heavy stuff around the office.
I'm loving the banter between Linus and Jake. Looks like this would have been pretty fun to make
Yup pRettY fUn
Oh man, pickle Linus is such a flashback.
The old "livestream from a garage" days, so much nostalgia. I bet Edzel doesn't miss editing on an X79 ShuttlePC.
For some reason I also just remembered the stream where Linus had to explain how a thrown USB flashdrive shut down the forum for a few hours.
I can hardly believe the fact that I've seen *every single* episode of what we now call the WAN show. It was also the reason I made a Twitter account in the first place.
Lol, I literally remember Linus saying like 4-5 years ago that he would never want a 20TB hard drive because if it fails in a raid it would take way too long to rebuild it.
Also why do they need the raw files to 10,000 videos in 8k? Like hes just a storage hoarder
I think that was before he started recording 8k video
@@Wicked_Carnifex like 30k worth of hard drives as well granted Seagate sponsored but still quite overboard
@@Wicked_Carnifex he actually talked about that in the last video. It’s a mix of ‚nice to have‘ and ‚it makes for good content‘. They get the drives for free, so why wouldn’t they do it
@@Wicked_Carnifex right? It's super stupid and counter productive.
My dream used to be a 1 TB storage server because when I started in IT in the late 70's the data center I worked at was one of only 13 TB data centers in the world. Now I can get that on a postage stamp.
Do you have your own now? I'm currently at around 20tb, but in the future I'd like to kick that up to at least 100.
@@samcolton943 I think it's around 46tb now
@@JoeHusosky nice! yeah, once you start adding more, it's too easy to keep wanting to add to it haha.
I always love when Linus and Jake do videos together
I love their banter now, Jakes turning into a great host
Did he gain weight?
@@Varde1234 probably. Too fuckin cold to go running this time of the year unless you’re a real believer
@@talltale9760 what a lame excuse
I love when Jake and Linus do a video together. Always fun to watch their banter
Linus and Jake poking fun at each other adds quite a bit to the entertainment of this. It makes a workday a lot more pleasant aswell as opposed to some super serious strict work only approach
8:05 Linus is clearly handling the entire motherboard by holding the CPU cooler. Guess we know who bent the fins lol.
But isnt it how you supposed to hold it?
Probably the one who supposedly did a great job XD
I remember the day when 10MB in an old XT computer was about the size of a shoebox and you would have to use programs to low-level format the drive occasionally to keep things "Lined up".
A company I worked for spent $1,000,000 for 1MB of RAM spread over 4, 7 foot by 19" cabinets. Whoa Dude!
We are living in the future!
Ngl this sounds caveman-like. When was this?
@@tanmay4217 One of these days you're going to say, "I remember when they came out with the 25 TB drives!" Some young punk is going to reply, "WHY? That's so small that you can't even put an operating system a drive that small. What did you use it for, still photos? LOLOLOL" When that happens...I want you to remember this day and your comment.
@@tanmay4217 A long time ago but there are many many of these "caveman-like" systems that keep the world running.
@@tanmay4217 Around 30-45 years ago i guess.
@@nobody7817
normal people don't use that much data but in linus case they use that much space for their work which takes a lot of space like you in the video
I'm just really glad Linus was practicing the 3-2-1 data backup rule so he minimized his data loss.
it's a pity that 3-2-1 backup strategy is falling out of favor for some years now.
As usual Linus is badly advised by its systems engineers/partners: nowadays the best/most effective backup strategies are 3-2-1-1-0 and 4-3-2
:-)
The essential data is backed up, such as whnnock server is backed up twice, once offsite at the lab and again in a much farther away backup server in a datacenter.
One of the best videos in a long time. Entertaining to see Jake and Linus mocking each other. Also, great editing transferring the chemistry between them. It never gets boring, even though such a topic has the potential to do precisely that. Thanks!
I agree with Linus, ssd smart readings are almost always inaccurate or no available. And sometimes the things just fail.
I actually had my 1tb Samsung 970 trip SMART and lock itself a couple of weeks ago with no warnings. After reading it in another PC it failed for extended high temps. It made the entire drive Read Only and I have no way to re-flash the SMART status to re-use the drive besides sending it to Samsung (which they may or may not do).
LET"S GO!!! Nice timing LTT team, I'm looking into building a small server rack with a NAS, switch, and a place to locate my modem and router.
linus: "why would anyone buy a 20TB HDD"
also linus: buys 60x 20TB HDDs
Well the first statement was in regards to normal consumers. Linus is an enterprise consumer. There isn't much of a need for normal consumers to have 20TB at the moment. Maybe in 10-15 years file sizes might reach a point where 10s of terabytes make sense.
@@ragefacememeaholic5366 no, first statement was in regards to read/write speeds
Funny even though Linus' critique on R/W Speed vs. Data Density ratio of current HDDs (and thereby it results in compromises to reliability and other stuff) is actually valid (especially for normie consumers)... Most server users don't have much choice is the counter argument which he himself is ironically suffering now. I mean the two other alternative in the extreme is: 1) Pure SSD build (too expensive for 2PB, and an overkill performance for them), or 2) LTO Tapes (cheap and dense, but need a separate infrastructure for to be used effectively... unless they put Jake on 24/7 shifts to load tape back and forth).
Linus has a multimillion dollars business. He's not the one people are talking about when they make that statement.
Linus bought 0 hard drives. Seagate sent them in exchange for advertising. Good thing there was a failure, wink!
These 2 in a video together with the lovely banter, gets me Everytime!! I just love them both!!
5:36 I found that too, I ended up getting EXOS drives cheaper from the US imported to Australia than the NAS grade stuff was anywhere... wut
I feel like it's because of the failure rate of Seagate? May be wrong, though
@@ArensLive It might be economy of scale at work
@@ArensLive so WD do better than Seagate?
@@shawno8253 annoying thing is thats self fulfilling prophecy, "oh it didn't do well in Australia 10 years ago, so we aren't going to ever try again"
I just hate how that is the excuse, cause it does mean as an Aussie tech, its really fricking hard to source parts cause "we don't usually get much call for that"
then again, we also don't get a lot of movies or shows cause apparently "we pirate everything"...its almost like if they released it over here, we wouldn't have the need to consider that?
I've been trying to import Dell constellation ES.3 (3tb) from the states to Australia too as its pretty cheap per tb (about $25). I haven't had a failure from them yet.
Jake casually calling his boss stupid at 12:55 🤣🤣
Linus: "Andy likes it"
Jake: "You pay him to like it"
Linus: "I pay YOU to like it, just one of you is better at their job"
Funniest thing I've heard in ages
Jake has really come into his own. I really didn't like him at first. Now he's one of my favorites.
Jake dgaf. Linus is obviously not that great.
Jake is a treasure. Calm and confident descriptions and excellent advice.
Make sure to double check your ashift is set correctly on the pool! Can't change it without trashing the pool. Usually the auto detect works but I'm always paranoid the SSDs will throw it off.
Not sure what "ashift" is in this context, but the thought of a high end software system having terrible defaults is both scary and familiar as someone planning major server rework today.
LINUS you just need an IT guy 😂 OR AN IT TEAM
But they have a Linus
No they have the other 2 that are Linux tech proficient (mostly as result of last server setup fail) hopefully he is using z3 this time (nope still z2 with 15 disks per vdev) smart scans and zfs scub with all notifications enabled (he using truenas so not a roll your own setup he used last time)
Selecting each disk individually is not needed it can select all the disks automatically when creating the pool
All the IT people who would give anything to configure one petabyte watching LTT mismanage multiple petabytes. ;(
@@user-np7kr4wx7g not only does he not have enough experience and time to deal with servers constantly, but a great IT person generally has a different skill set and background/training than even the most knowledgeable consumer-tech enthusiasts. While we don't see the server-operations much I don't think anyone they have is fully qualified for such a position and those who are 90% there have a lot of other stuff happening.
Oh the irony that would be
i love how they always savage together, damn more of them
I agree 😂😂
Agree, in fact even the new mobo wife is displaced by the Jake Angry Browaifu
Cool stuff. I used to work in a NOC that dealt with a lot of drive repair. Cold & hot spares both have their uses and should be kept available.
More devices failing while you rebuild is what gets you so you do *not* want to waste time on having a replacement shipped if you can avoid it.
Few hundred bucks on an idle spare is way cheaper than data recovery or a catastrophic loss.
It also depends on economy of scale. You need a huge number of drives to make keeping your own stockpile of spares profitable compared to waiting a few days for each new drive.
I bought a 30 drive Storinator cause of you guys. It’s been great. Went with 18TB drives. Put Unraid on it. Stuck with the Intel Xeon Gold 26-core though. 45 Drives that company is awesome to deal with. Thank you guys. Lovin the content as usual.
As an actual IT sys admin I am so, so sorry. You poor soul.
But hey, if it's just for your home Plex server and you're not actually trying to run a business off it, you'll probably be OK.
@@pufthemajicdragon Might want to explain why, im really curious why that would be bad?
@@pufthemajicdragon That's some Plex setup
@@Chakratos It depends on what you're doing with it. How much nuance do you want?
If you're building a home storage server that doesn't host any critical data - like a Plex server - this is fine. If you're running a multi-million dollar media, merchandise, and software company and storing business critical data, this is NOT fine.
This kind of setup has no redundancy and barely any fault tolerance. If more than two drives fail, which this LTT series demonstrates is entirely possible and even likely, you can lose literally everything. Nevermind if your backplane fails or your motherboard fails or you get a power spike that fries the whole thing.
LTT's previous same-as-this archive server failed, which is why we get this video in the first place. That failure means a half dozen of their staff had to spend weeks investigating, troubleshooting, attempting repairs, speccing a replacement, shopping, ordering, sorting, prepping for the build, performing the data migration, checking the data, rechecking the data, attempting to repair faulty data, scrubbing old data, and repurposing the old hardware. That's just the technical staff who will loose months of productivity fixing this problem, many of them working nights and weekends, ignoring their families and personal lives. That does not include the writers, videographers, gaffers, or editors who had to drop what they were doing to make this video. That does not include the accounting staff who has to budget all of that capital and labor expense. That does not include the lost productivity when editors have to twiddle their thumbs for weeks waiting for this recovery to finish so they can access the data they need. The cost for this failure can easily hit half a million dollars before you even count the cost of the hardware they just bought. And hell, we haven't even considered the emotional toll this is taking on everyone involved, including the families of the people LMG employs.
A proper storage server, or better, a SAN, is fully redundant, fully fault tolerant, highly available, and warrantied. It might cost $100k for the 2pb of storage, but it *just run * for at least 7 years. Every drive, every component, covered under a 4 hour onsite replacement warranty.
If your life is the only one that gets ruined when your home media server goes down, do what ya like, this kinda thing is fine (and certainly makes good ego-fodder).
But when you have a couple dozen people whose livelihoods depend upon your servers being reliable, you don't half-ass DIY your IT infrastructure and cross your fingers.
I can't wait for them to hire an actual IT person and they tell Linus & Jake everything was done wrong.
I find that hilarious - when someone buys “enterprise hardware” with much lower performance, capacity and reliability at a much higher price “for the support” which when it comes down to it , may or may not save your data
Till then we can all do it for them. It's amazing how, for a tech channel they can know so little about tech.
Like what? Please tell us.
@@tomorrow6 Any drive going into a Raid array needs to be an Enterprise drive as a standard drive will degrade and fail much faster, Not worth the risk when storing large amounts of data across multiple drives. 20 years of Experience in PC/Server builds I can assure you no one is paying for the support whatsoever!
@@spendy26 I’d suggest that they fail as quickly as consumer drives of similar capacity (or did before SSD’s) - Backblaze published drive failure statistics were a good source for non enterprise disk failure rates which didn’t differ too much from enterprise.
However - enterprise raid controllers with sufficient cpu and battery backed cache did allow for special drive setups for maximum fault tolerance at the cost of more wasted capacity. And of course SCSI drivers (including SAS) failed much more cleanly than consumer IDE drives
Plus the vendors did publish their own firmware updates based on faults experienced at other customers, especially if you paid for the extended support past four years. Enterprise software did offer predictive drive failure notification which allowed drives to be swapped out in raid sets online with minimal chance of data loss (albeit a slowdown as the raid array was rebuilt)
Waiting for the next "We lost our data AGAIN video" for this server.......
Which won't take too long because... Seagate drives
13:46 all these years of advancements and we don't have two tiny LEDs on those drives (one the "back" side of course, where they'd be visible). Have a power and a status LED, and you'd be all good to go
That's extra electronics not built for the process of the component doing its task and wastes power, especially for how many are in a server rack at a time, plus a power and status LED doesn't help as their are multiple power and status states of drive that no one could remember color code to just by memory..
Less than 2 years from now, Linus will be doing another 'we sad' video because this new device was neglected into oblivion just like the last two. 😂
is a feature ... if there's an IT guy, nothing will break and that means lesser videos.
Also any rebuild of 1 20tb failed drive is going to take AGES :D
I mean, if it's going to be neglected--maybe do full mirror, those last just about forever. And investigate why so many drive failures--that's too many for a small (in total drive count) setup.
It's just content all the way down.
"neglected" is a strong word. We prefer 'gently misremembered'
😋
Petabyte projects are my favorite videos. So much storage!
Perfect storage for taking a 1MB selfie every 3 seconds for ...100 YEARS! :/
That's a lot of Pickle Linuses... Linuses? Linusi? Linoos?
I did the math and that would take 1-2 more drive. Just wanted to let you know
Bruh pics take up 2-5 MB space these days.
encode as video to take advantage of compression across frames
@@Caesar512 If "octopus" becomes "octopi" then maybe the "us" is replaced with "i" when going from singular to plural, which would make it "Lini" :)
I can appreciate the knowledge , walkthrough, and banter in these kinds of vids.
What knowledge? Everything they're doing here is wrong.
Linus, I love how even though the advertising segments are obviously recorded in a different scene, at a different time, and probably well in advance, you are smiling from ear to ear. Chuffed, just knowing you're gonna have a great segway when the advert is eventually used. It's like past-future smugness. xD
17:51 Pickle Linus
A few years back LTT bought a Tape Drive for backing up the Petabyte server, was that still doing backups? If so, I'm hoping that we didn't lose all those other Linus Pickle pictures in existence.
yeah i remember that so what happened to it
We refer to those as "Lickle Shots", I thank you.
The tape drive was for business critical files, and to the best of my knowledge is still being used for it. The data LTT lost was non essential archive videos, which have a complete backup on UA-cam. So any further backup was both unneeded and impractical.
They also build an offsite server to replicate to.
The tape take long time for backup archiving and reading data. not their option.
However, their off site choice was Google.
I like to imagine that in a few decades, people will be looking back at Petabyte server videos the way we currently look back at videos of the Saturn V computer and how the bits in it's memory also had to be put together by hand, like how they have to put in the drives into this larger case. Very cool thought
I would love to just see or touch some of this tech in person in my lifetime. I'm way too poor for a gaming PC. Can't even afford a console of any kind rn lol. Love watching nonetheless!
Where do you live?
He keeps referring to what he had as "bit rot" it's not, it was mostly failed disks combined with a healthy dose of incompetence. I'm not going to say there was *no* bit rot, but it was probably pretty close to none, bit rot is slow and fairly rare, caused over long periods of time by cosmic rays and things, they will flip a bit here and there. What he had was varying degrees of disk failure. ZFS and enterprise/nas class storage devices do not try very hard to read problem areas, they just fail the disk and move on. If you get into a situation where you have an irrecoverable number of disks failed then that array needs to be taken offline, and the disks in an error state either sent to a data recovery house, or, start making an image yourself using things like ddrescue which will try quite hard to create a complete image of a semi functional disk. I've had multiple clients over the years with parity breaking levels of drive failure and yet I've never not been able to get the data back. It's probably too late for them now though if they onlined the disks and ran a scrub.
They need someone to do IT I think they basically just ignored it till they noticed a problem and then whoops lol.
I don't understand this either. Why they would actually turn on the drives and actually connect them to an array instead of backing them up with dd one by one is beyond me. That's one of the basics of how to recover from a broken disk.
@@thebaker8637 I'm sure they just don't know for the most part. Btw check out ddrescue, dd will generally fail on sketchy disks.
@@entelin Yeah, I’m not so much of a recovery expert, I never had to do it myself, by ‘dd’ I meant ‘disk cloning tool that copies raw device blocks vs file system level’, I’m sure there’s tools out there as you said in your main comment that are designed for this.
Bit rot isn't caused by cosmic rays. It's caused by the magnetic field on the metal disk platter decaying, which is a function of the material, size of the bit, and quantum mechanics. Bit rot is a lot more common as you go up in disk size. Which is why RAID consistency checks are a thing.
20 years from now: "they really needed all this to get only 1.2 petabytes?"
Well, diamond discs are a thing, an expensive thing.
When you go to some course, in the far future, you will get the slides on a usb 7 thumbstick with a capacity of 1 peta.
Welcome to the Exabyte
Welcome to Exabyte project I guess. :D
I love seeing Jake in these kind of showcase videos. My FAVORITE channel on UA-cam. I love you guys!!!
18:21 Years of watching Linus has trained us to panic when he says "drop it" in the server room.
I know this stuff is fun and of course is great content, but you would have thought you would learn your lesson by now. When you have this much storage, just get some actual, purpose built enterprise storage. For backups, do actual enterprise class backups with something like a Data Domain, LTO, or even Amazon Glacier as the backup target. At the VERY least, get some hot spares in the array so that if a drive does go down, it can start rebuilding right away.
His last issue was with important mid production videos. Ones not published to UA-cam. This is an archive server, used for back up of old videos. He cares much less about if they fail. Because UA-cam backs them up compressed. He mentioned on the Wan Show that cloud storage, or even a second server costs too much to justify.
@@darrenhjki That's why I mentioned LTO tape backup, those are fairly inexpensive. AWS glacier is also pretty cheap and most of the costs involved come with recalling the data. While not as cheap as LTO, would still be a pretty inexpensive solution. Just seems when you are talking about this much data a DIY solution might not be the best route. Since this is mostly backups anyway, cold storage, like LTO, will always be cheaper than this beast they are building, especially when you factor in electricity, the manpower to build it, and the labor time spent when it ultimately fails in a few years.
@@nospam4chris Linus is just in love with the idea of being able to pull in any footage from his entire online life, instantly. It does drive content and lets him play with big-boy hardware and concepts; however half-assedly.
Just looked at the datasheet for those drives, and it blows my mind to learn that they are CMR, not SMR! Seagate rates them at 285MB/s, which is bananas!
i love these kinds of videos where i cant understand a single thing they say, but i know if i continue to watch their vids for a few months and i will probably have enough knowledge to teach others
Please Please Please make more footage like this, it's actually interesting and makes me want to watch more. You could even create a new channel and incorporate all the LTT behind the scenes server stuff including the new house build ;-)
Jake walking away defeated as Linus keeps putting in drives is peak content 👌
You guys really need a dedicated IT team now.
Like 4 years ago... haha
Or train current staff
Nah, its just LTT generating content for YT in 4 years.
or just not use seagate drives
or just not save useless data from years ago in the highest fidelity possible.
the sponsor of this video actually came in clutch, had bought new fans that i could not set up with razer chroma, so thanks lmg!
At this point, can we consider the server videos an entire series? If anything, I’d welcome it.
Dude, I hope my future boss is like Linus, he seems so cool and fun to work with, and in the far future I hope to be like him :) in these challenging times I wish all the good for you and your crew Linus! 😊
Watch the xmas videos where he just showers staff with latest tech and gifts. :)
@@zxc533 I know I know, that is so wholesome!
8:34 "It's exactly how it should be right from the factory!"
"I'm not sure I believe that." -- wisest words ever.
As a Storage Architect for a living, allow me to say with the complexity you have in mind and the storage pool sizes, I HIGHLY recommend using Starwind as your virtual SAN software. It's been around for many years and has amazing capability like spanning physical NAS hardware presenting one storage pool. The big thing is it's incredibly reliable and data loss prevention and excellent recovery is their strength guys.
I love this kind of content!
Would be cool if Seagate also sponsored this video with like a 2 20tb drive NAS setup so us normal people had a chance at even sniffing this kind of storage solution.
@Monochromatik Seagate Exos is a different beast than standard consumer Seagate or even ironwolf. Same for WD Gold vs WD red, especially considering WD owns Hitachi storage.
You guys should probably also make a tape backup at this point since it’s all sequential,
I remember linus explaining to his wife how 20TB doesn't make sense for storage as rebuilding them would take a loong time. What changed ?
Sponsorships
The money, the need of more data in less space, the fact that HE doesn't have to watch over that because somebody else will.
New wife
he's got too much money now :D
Has the technology improved?
i just love the friendly banter between these 2 fellas :D
I feel like Linus should use a tape storage system to make sure all that data is really safe
Yeah, like wrap the whole thing in duct tape, so NO ONE can get to it.
right? ..right? ri- guys, where'd you go
Soon we will see a day when LMG cloud storage will be a thing and can hold more storage than google drive
Considering they lost their own data for at least second time in very amateurish way I don't think I'd trust them with mine.
@@mrmarecki1 If they'd make it cheap, I'd store my stuff on their drives, and take the high-risk high-reward.
They will hire someone better and learn from it and then I could see them opening up the LMG Cloud Storage to get money back off of their drives
@@KingLarbear they get the drives for free
@@mrmarecki1 just have redundant storage in another cloud lol
Didn't you said that the 20 TB hard drives are rubbish, because if one fail, it takes an extremely long time (several days) until all the data is recovered to a new one?
This'll be able to hold like three or four games in 5 years time, pretty solid.
Seriously though, this is freaking amazing.
I've had so many HDD failures over the years that I'd still be terrified of my only backups being on them. I know SSDs are significantly more expensive but it's worth it for the peace of mind, IMO. Not that SSD failures are impossible, but far more unlikely in my experience.
Oh, and they're a lot lighter to carry. ;)
You build these things with the assumption that you *know* HDDs will fail. You just have to monitor, scrub, and replace as necessary.
Yeah losing 2 tb of memes doesnt feel good
This level of data storage should be done with a tape drive to be more secure. Keep the tapes for cold storage (off-site in case of fire), HDD for faster storage and SSD for maximum speed.
Did we ever find out how long ssd data last in cold storage? It’s not forever. Eventually the difference in charge will dissipate and all return to 0s
These kinds of professional level HDDs are much less likely to fail, and having them in a raid configuration like this means that you have to lose at least 3 disks before you lose data, and they'll be alerted if any of them fail so that they can rebuild from parity data
At this point, I'd probably try draid, it looks like the dream scenario for it.
they are clueless
13 drives in RaidZ2? Can't wait to see this video again in 3 years or so. At least go RaidZ3 for this many drives in a vDEV.
Especially with 20TB drives. Resilvering can take long enough for a 13 drive 20TB vDEV that you can lose 2 drives during rebuilding the array, just give up the extra drive.
It's actually even worse than that: they put 15 drives in each vdev and it didn't look as if they set aside some drives as hot spares. At this point they are just begging to lose their data - again...
Linus is so passive aggressive in this scetch, I love it!
In my experience cutting edge high capacity drives ALWAYS have a high failure rate. I hope these work out for you guys though!
Yes, anything that is pushing the envelope is always going to be running on the edge of what is possible, and so, more likely to fail.
Wasnt their 16TB drive "cutting edge" when they used it? Sure it failed in the end, but that is also related to the fact that no one checked them ever.
@@cyjan3k823 Yeah setting up those email alerts is pretty mission critical on something like this.
Spread out over multiple servers is actually a good thing. At least when it fails you won't lose everything at once and it will be faster to rebuild when one does fail. Nothing is worse than taking a week to rebuild only to have another failure during that process (usually an unrecoverable catastrophe).
Linus: It's mind boggling 20TB in a single drive.
1 TB MicroSD: Am I a joke to you?
MicroSD aren't dirty cheap
MicroSD might perform worse than a hard drive
Also, 20TB in that hard disk is a achievement cuz, it's just few years ago that we had low capacity hard drives
@@cry-0432 in terms of storage density by volume it's still way more impressive than spinning rust. If I have a 20 TB 3.5" in front of me I'm really not impressed anymore. Show me 8 trillion bits that fit onto my fingernail and I genuinely have trouble believing that's possible.
@@dgschrei yeah, I agree
I wish I could work at a place like this sometimes. I love everything computers, even the crazy glitches that happen seemingly for no reason until one setting change fixes it.
Linus: Our Seagate drives had a bunch of failures and we lost our data.
Also Linus: We got a lot more Seagate drives for our new storage server.
Seagate has a long-standing history of making garbage.
Looking forward to the upcoming video where they talk about how they lost two drives in one of the VDevs, then a third died during the rebuild.
My guess why he lost all his data
6:57 Yvonne Ho has a brain and a adult human brain has the ability to store the equivalent of 2.5 million gigabytes digital memory . Wrong choise Linus
Your storage failed because you failed to monitor it, rather than accept the same process will work again, use a proper enclosure with status LEDs on all the bays. If your regular monitoring notifications fail, atleast you'll still notice when someone is in the server room that there's a red light.
Their storage failed because it was done wrong from the start. And they're doing it wrong again.
Don't "use a proper enclosure", don't rely on status LEDs. Buy a real SAN with active monitoring and predictive failure alerting. Hire a real IT staff and do it the right way. Yours is the second comment I've seen about status LEDs and all I can think is ya'll must have had a help desk job in the 1980s working in the stone age before e-mail or CRM existed.
Here's how it works today: You buy a Dell EMC SAN. You set up alerts to send an e-mail to your IT mailbox. Your IT mailbox is monitored by a CRM that generates a ticket, sets a priority, and assigns it to an IT tech. For predictive failures and maintenance, the tech sees the ticket in their queue and works it FIFO. For hardware alerts the ticket priority initiates a page to your on-call tech who opens a support case with Dell who dispatches a tech with replacement hardware. The Dell tech arrives within 4 hours (usually under 2) of you opening the ticket and replaces the hardware for you. No downtime, no data loss.
If you skip the SAN and just want a storage server, you buy a Dell, install OMSA, and your RMM picks up the hardware alerts in the SYSTEM event log, generates a ticket in your CRM, and all the same stuff happens. A storage server won't give you the scalability or redundancy that a SAN would, but could be a few pennies cheaper. Of course, if you need petabytes of storage you need a SAN. Period.
No status LEDs. Reliable automated monitoring and alerting. If you think you need a backup for your monitoring, then your monitoring isn't done right in the first place. When IT is done right, nobody ever sees it. It just works. You might set foot in your dc or mdf once in a year to patch a network port. What good are status LEDs when you're never there to see them?
@@pufthemajicdragon I'm not suggesting to exclusively 'rely' on status LEDs, which is why I said 'if regular monitoring notifications fail'.
RMM / SMTP alerts frequently fail (take 365's new anti-malware policies filtering IP whitelisted connectors or SPF record updates - an onsite system such as this could easily be forgotten about and the emails simply fail to deliver).
A reliable system has multiple layers of redundancy and regular auditing would also be a part of it too.
Dell EMC and HPE are both very expensive options and put a significant premium on their drive prices for the same day onsite warranty service, he's not going spend that or pay to put on IT staff to actively monitor a storage archive that was a cost cutting measure to start with.
'No status LEDs' - Notice the Dell EMC Sans still have status lights? Rather than assume people are lesser for not suggesting the options where money grows on trees, consider the realistic one.
I love this project, its so outrageous for a regular PC user that might have 1 or 2 TB onboard, but I can't say I'm not disappointed. Its called PETAbyte project yet you only get 895TB of writable space. I know theres a difference between RAW and formatted storage, but I want to see a PB of usable storage dammit!
Hopefully they're more reliable than the lower capacity Exos drives!
They're Seagate drives... so we'll probably gonna have another 'Server went kaput' video in a year or so...
9:35 "Are they SAS or SATA?" Technically both I guess? They are SAS ports and those can pass through SATA connections. You can easily find SAS to SATA breakouts for that reason.
But on this board they are SATA only. Linus got it wrong. Got this board in my storage server.
I mean SAS wouldn't make a sense nowadays most of those fancy (enterprise) drives have SATA and SAS is really rare...
This is pretty decent price. I just ordered an additional shelf for my company's storage with 24x960GB SSDs for 50kEUR :D. Vendor locking and service fees suck.
Could you not just have bought 2 12TB drives?
@@carltonleboss that's not how it works. Having, for example 4 6TB drives rather than 2 12TB drives gives you:
- more speed, since you can write to all 4 at once
- more redundancy and reliability, more of them can break and you'd lose less or no data
- I thought I had more but I don't, unless you count more flexibility in upgrades or something.
@@devnol rack space is expensive, so drive makers will always shaft you on the highest capacity drives.
@@devnol I can add another one: lower percentage of parity data.
Service fees suck until you need them, if you dont have them you end up losing your data, like linus does all the time.
Jake's tiny little giggles are everything