An update on the server and its performance since I made this video: 50% slower is a significant worst case.
I use visual noise reduction as a post-processing step for the video when I render. It is a fairly computationally intensive process that deals with a lot of data. I hadn't really tried to "tune" it for effect vs. performance before and just shrugged it off as something that takes forever and uses a ton of VRAM. After tweaking it though, I was able to get very similar render performance on the 1070 as I do on the 1080 Ti. It mostly comes down to keeping the VRAM needed below what the card has. Doing that massively increased render performance. So the server is actually an extremely viable option for rendering now on all fronts.
This project has been going on for too long, so I didn't want to have to re-record the ending again just to update for that. So it's slightly wrong.
Computers are cursed? You're telling me
mhh i wonder if you would be down to collab with linus tech tips :P let him build a render station with tons of storage for you :D
Have you run the math to see if a prebuilt Dell/HP would have saved you money considering 3 weeks of troubleshooting is quite the expensive time vampire? How do you feel about the lack of redundant power supplies?
@techtangents, were you just reducing the look-ahead value so fewer full frames have to sit in VRAM to be analyzed? Also, you mention not thinking you'd be able to get by if you swapped the 1070 for the 1080 Ti... Is that because you're doing other things on your primary workstation (other than just Resolve)? I'd think if you edit off proxies you're barely taxing your GPU at all...
Your vid feels like an infomercial, sthap!
Hey man, good to see you again! Akkbkku! 😬
@@gustavgurke9665 his old name was better
@@bitterjames agreed! it was kinda unique one 😀
@@bitterjames Sure, I actually liked it and I was quite good at remembering how to spell and pronounce it, but I get why he changed it, as "AkBKukU" is not easy to remember and sounds kind of weird.
So very helpful! Thank you. I really need to sort out my situation and get some kind of remote rendering working. And then storage... Ugh. Yep, need to sort it all out!
Hit me up, I'll help you if you want, I enjoy your content as much as TT's.
@@Gartral Thanks! Wasn't sure how to hit you up -- could you drop me a line at the email address here? ua-cam.com/users/craig1blackabout
9:24 ! THANK YOU so much for covering the all-the-same-drives issue. I don't know how many customers in how many data centres I've worked in have run into issues, whether it's firmware updates on those drives to prevent issues or bad rebuilds that kill an entire array; it's something very few people actually know about or plan for!
I pounded that issue home with him, but he's prepared for the eventuality of the array crapping out on him! I'm in the same boat... soo many clients saying "But! We bought them based on your spec!"... Yea... and you ignored me when I said that you need to WAIT A MONTH and buy more so they aren't the same batch! "But, you said it was reliable!"... Yea... WITH BACKUPS!.... *grumble mumble....*
Nobody:
Tech Tangents: *Computers are cursed*
ESPECIALLY if you have to channel John Moschitta to explain what went wrong in a tech job. (ouch)
EVERY TECH SAYS THAT! Rightfully so. Fuck computers. Love/hate.
He’s not wrong. They seem to work 90% of the time but that 10% of the time they are on the fritz is usually right when a project is due lol
Well I mean that SysV systemd bullshit in Debian _retches_
Man AkBkuku ur looking good as lately dude, ur skin's cleaned up alot man I'm really pleased for u dude fr
I’m lovin the new and innovative camera angles
I feel so sorry for you dude. I used to do server admin and tech work for a living, and it was a headache. Kudos to you for doing this project.
@1:10 i was waiting for "If it doesn't say Micro Machines, it's not the real thing!"
I will say, you reading the script... if someone puts a beat in the background, you'd be a rapping god!
Now you have to put a beat in the background
Your rant at the beginning alone deserves a like.
Oh my god I have that razor
I think I got halfway through the video before I realized what your shirt says. Nice!
the quick spoken summary of problems was so intensely fast I felt my heart rate rise lol
Welcome to /r/datahoarder.
Top Ten Rappers Eminem was too afraid to diss
😂😂😂
I'm beginning to feel like a (server) rack god
Even though it was hard. You succeed. Remember it's the journey not the destination that's important here. Keep your head up and keep doing what you do.
Everyone in this life wants to be somebody. I grew up with computers around me, tech all over, and I've spent my entire life since I was 7 in front of computers. I am 33 right now, still working and living in front of machines, but with the knowledge Tech Tangents has reached, I would say he's one of the most powerful men in the entire world. They can take EVERYTHING from you: money, cars, house, COMPUTERS... but they can never take your knowledge.
Great Micro Machines commercial at the beginning!
Oh yeah, that intro. I had an issue today where grub decided to add "nomodeset" to the updated kernel. That was such a weird issue, and it managed to annoy the heck out of me for 20 minutes before I figured out why X wouldn't start. Not sure I would have had the patience to deal with that server. Hats off!
I'm running into issues with full drives, too! Right now I'm using external drives, but I know this is not a long-term solution. My buddy has been trying to talk me into buying a used Xeon server with a bunch of drive bays for remote rendering and storage. Your three-weeks-of-agony script SCARED me, but I'm glad you suffered the server woes so I don't have to! I will be bookmarking this vid and coming back to it later when I do my next upgrade.
his woes were more to do with being unprepared than with the tech itself.
the 3.3v pin that puts the disks to sleep is there on purpose to stop people from shucking drives
I would add a couple of things: MTBFs for drives vary immensely even for identical drives because they are just a mean, and they don't specify any other statistically important value. Also, reading a drive sequentially, as the recovery process does, is not as bad as reading all over the disk. Thanks for the great video!
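To make the "just a mean" point concrete, here's a rough back-of-the-envelope conversion from a quoted MTBF to an annualized failure rate; the 1,000,000-hour figure is only an example for illustration, not a number from the video or a datasheet:

# rough sketch: turn a quoted MTBF into an annualized failure rate, assuming 24/7 operation (8760 h/year)
echo "scale=4; 8760 * 100 / 1000000" | bc    # prints .8760, i.e. roughly 0.9% of such drives failing per year on average

A population average like that still says nothing about whether your particular handful of same-batch drives all die in the same year.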
I've never seen you this pissed!
I would love to watch you doing all the job
Trust me... it was mostly spent watching a terminal window and banging his head into the desk... lol
I was thinking who is this and why am I subbed to someone I don’t know. But it’s just you. I’ll watch.
Good thing you went with 8 TB drives; WD's 6 TB and smaller drives are apparently all SMR, which is no bueno for RAID. Apparently the WD80EZAZ is CMR, so they're fine.
(I'm talking about Shingled Magnetic Recording. It's a way to cram more data onto drive platters by laying the rust particles on top of each other like roof shingles. It works, but a side effect is that *rewrites* become destructive. In order for an SMR drive to rewrite old data, it has to read off the data it's going to destroy, write the intended data, then write back the data it read off so the overlapping tracks are restored, and so on. This can cause long lag times that make RAID controllers assume the drive has failed. Western Digital marketed NAS drives that are intended for RAID arrays but didn't specify that they're SMR, and it's become a scandal/crisis in the data storage world.)
Sounds like you've been having the same problem with cursed servers over the last month as I have. It's always disappointing because it quickly saps your joy of working with computers when things go wrong over and over and over again. But I'm glad you managed to work through it and get things into a workable state.
Heck yeah!
In this video you look a little like I would imagine Hoagie from DOTT would look irl. I like it :)
Thanks for the tips on Davinci Resolve, I collect them as a hobby.
Learning that system is _fun_ (why did I choose it... why?)
I did not know about X2Go; I use MobaXterm myself at home and at work, constantly logging onto Linux VMs and running GUI apps like editors and Xilinx Vivado...
It's pretty good, with lots of connection options and a fairly painless local X session on Windows. I ssh into the VM, then just launch the GUI on it and Moba picks it up on Windows.
Cheers,
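For anyone wanting to copy that workflow, a minimal sketch of what it looks like in practice; the hostname and the app launched are placeholders, not anything from the comment above:

# from MobaXterm (or any X-capable terminal) on the Windows side
ssh -X user@linux-vm     # -X enables X11 forwarding; -Y for trusted forwarding if apps misbehave
vivado &                 # any GUI app launched here shows up in the local X session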
You know shit was wild when he shows up looking like the Big Lebowski
I have some older NAS machines with 2TB Seagate drives that have 8.5 years of continuous on time, and I can assure you that when a drive fails has little to do with when it was put into service. Temperature can certainly affect drives, but I observed that failure rates were inconsistent and not linked to ambient temperature or use. There are short-life failures and long-life failures. None of the drives, out of 53, were complete failures, only SMART errors. I have replaced all the drives with SMART errors and written a Nagios check that looks for these failures.
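Not the commenter's actual Nagios plugin, just a minimal sketch of the idea using smartctl's overall health flag and standard Nagios exit codes; the script name is hypothetical and the device path is passed as an argument:

#!/bin/bash
# usage: check_smart.sh /dev/sda   (sketch only)
out=$(smartctl -H "$1" 2>&1)
if echo "$out" | grep -q "PASSED"; then
    echo "OK - $1 SMART overall health passed"; exit 0
else
    echo "CRITICAL - $1 SMART overall health not passed"; exit 2
fi

A real check would also look at individual attributes like reallocated or pending sectors, not just the pass/fail flag.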
random question why does every lto drive i see on ebay have the one bezel thing missing? do they remove them for datacenter use or something?
Hey Shelby, I'm curious if setting the nice value high enough on the render job would allow you to use your desktop while it's rendering? I don't know if you can render one project on two graphics cards at once, but that would open up rendering two different ones potentially. I only bring this up because when I was running F@H, because it has the high nice value, I was able to game at the same time with only a small amount of FPS loss, and regular web browsing and Discord felt totally normal, even with F@H pegging my system.
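Worth noting that nice/ionice only touch CPU and disk I/O scheduling, not the GPU, so a GPU-bound Resolve render may not behave the way F@H did; still, a rough sketch of the idea, with the render command path as a placeholder:

# launch the render at the lowest CPU priority and idle I/O priority
nice -n 19 ionice -c 3 /path/to/render_job
# or demote an already-running render by PID
renice 19 -p <pid> && ionice -c 3 -p <pid>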
4K really isn't worth it.
Strong disagree
@The Lavian Oh, on a CRT, now you're talking.
Fr tho "4k isn't worth it" is the pc gamer version of "the eye can't see anything beyond 30fps"
@@onometre My point is more about the effort and expense of setting up a render farm just to go 4K. In time it'll be easier and cheaper once the hardware catches up.
Even most movies aren't real 4K yet. And most people don't watch YouTube on a 4K monitor, though YouTube has a really low bitrate for 1080p, so 4K looks a lot better even on a 1080p monitor.
I've had the same problem with my Waifu Wallpaper collection.
Yea imma need the magnet link or Google drive address to that
It's a shame you do not live closer, I'd be happy to share my server setup and how I tackled ZFS and other issues. Drop me a PM if you'd like to nerd out.
we decided against ZFS for this as his backup scheme is fairly robust as is, and we were mainly limited by SATA speeds on the SSDs and regular read/write speeds on the HDDs.
Besides that, being on Ubuntu the ZFS route was cagey at best, as a distro upgrade might have ruined the ZFS Pool. Trust me, we explored this in depth and we went back and forth for days before deciding that MDADM RAIDs were the easiest and most robust route.
Gartral cool, I stayed away from Linux for ZFS and went the FreeBSD route, seeing how ZFS's native OS Solaris is dead.
hey dude you have awesome content man! keep it up
Great video! What about setting up 2 or 3 render systems for your workstation to increase the speed of your projects? Maybe write a Python script to automate the editing and have cloud processing as a backup?
Off-topic but I'm impressed by the quality of your audio setup - is that a ribbon mic you're using?
do you _really_ need 4K 60 fps for the type of content you do? Lots of resources could be saved/freed by going Full HD "only".
self inflicted problems, unless AkBKukU produces for Brazzers on the side
That's what I think too. I always watch at 1080p30. But I get that lots of tech enthusiasts like it that way, even if just for the sake of it. And I guess it's a fun challenge to produce at that quality. Gotta justify that rendering setup, plus the higher fidelity the picture is, the more care you naturally take to produce a clean image. I see the appeal in trying to achieve this.
The vast majority of people that record in 60 FPS don't even produce content that would benefit from being recorded in 60 FPS. The same applies for 4K.
I'm interested in your LTO backup workflow. How cost effective is that? What model did you get?
Thumbs up to that RAID 5; I used RAID 5 for YEEEEEEAAAAARs in enterprise environments, even on commodity HW, totally fine
I'm interested to hear what specific issues you had with remote rendering without running X. If it's just ALSA-related, there are ways to create a null .asoundrc to give you an ALSA device that does nothing. You can do the same for PulseAudio on top of ALSA. Kudos for reaching for x2go. Remote desktop, despite being primarily a Microsoft technology, is far superior to VNC.
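For reference, a minimal sketch of the null ~/.asoundrc that comment describes, written as a heredoc so it can be pasted straight into a shell; it just gives ALSA a do-nothing default device so headless apps stop erroring out:

cat > ~/.asoundrc <<'EOF'
pcm.!default {
    type null
}
ctl.!default {
    type null
}
EOF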
For remote rendering, does it matter much if you have the gpu in a x16 or x8 slot?
X2Go, huh? I've been trying to use a Linux VM from macOS via X forwarding, and it's quite clear that it doesn't get quite the same QA love that it used to. KDE isn't exactly a bastion of code stability, but forwarded over SSH, every proper shutdown is a blessing.
Would like to see the raid performance tips for AF drives etc.
Happy to post them here for you:
AF is the "logical" version of SMR... they're 4kB sectors on the platter but broken up into eight 512B sectors by the controller... which is fine if you're AWARE of that before smacking a FS on them! If the FS isn't aligned and tuned for it, then the drive has to read the 4kB sector, modify just the 512B chunk that changed, and write the whole sector back to the drive, tanking performance, but READS aren't affected!
I avoid AF like the plague, opting for 4kb native (4kn) when and where I can get them, and 512n when the controller I'm pairing them with calls for it
Don't use a partition on a RAID, as it misaligns the FS to the block devices and slows everything down significantly; just put the FS on the bare array with
"mkfs.ext4 /dev/mdX -O sparse_super -E lazy_itable_init=0,lazy_journal_init=0 -m 0 -T largefile"
sparse_super puts fewer superblocks on the FS, saving a few hundred MB of storage for this array; the lazy_*_init=0 options force all initialization to be handled at creation; -m 0 removes the root reserved space, saving us 300GB of space; and finally -T largefile tunes the array for files over 12GB.
@@Gartral cheers for that. I never understood why fdisk/gdisk etc doesn't auto align partitions to avoid that whole mess. Would the zfs raidz5 or whatever it's called not be better? Although Shelby said there was a hardware raid card so i guess you lose that write cache...
@@TheErador we aren't using the hardware raid card because it's not something that's easily portable between servers... also, while a HW raid card is rebuilding the array the ENTIRE server is down till it's done, and it's MUCH harder to recover from a URE.
ZFS has its place for sure... but this use case was very much more suited to MDADM
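To illustrate the portability point: with mdadm the array metadata lives on the member disks, so moving them to another box is basically one command. A sketch with placeholder device names and layout, not the actual array from the video:

# create a 4-disk RAID 5 (device names are examples only)
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[bcde]
# on a different server, after moving the disks over, the on-disk metadata is enough to bring it back
mdadm --assemble --scan
mdadm --detail /dev/md0     # state, sync progress, member disks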
@@Gartral fair. I generally prefer software raid, got bitten by dodgy raid card firmwares in the past.
Table goes wobble wobble. Server sweating he might fall.
As far as RAID goes, I see no problems with RAID 5 as long as the user in question goes in without preconceived notions and with full understanding of the limitations. It has its problems (especially with high capacity drives and rebuild times), but it IS still an effective means of providing primary redundancy.
LOL you're awesome, looking forward to more.
Something that I like to do (which might be a total waste of time, I don't know), is if I have a RAID5-7/RAIDZx of identical brand new drives, I grab one drive and leave it plugged in for a week and constantly write random data to it for the whole week (while true; do dd if=/dev/urandom of=/dev/whatever bs=4M status=progress; done ... in screen so I can close the shell). Given the bell curve of MTBFs, this one should become unhealthy earlier than the others, and that will be an indicator to take immediate corrective action to save the array (like maybe do a full tape backup and order new drives)
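If you do that burn-in, a follow-up sketch for checking whether the abused drive picked up any early warning signs afterwards; the attribute names are the common ATA ones and vary by vendor:

# look for reallocated/pending sectors after the week of writes
smartctl -A /dev/whatever | grep -Ei 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
# optionally run a long self-test, and read the result later with: smartctl -l selftest /dev/whatever
smartctl -t long /dev/whatever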
Raid = Redundant Array of Independent Discs .. yes I'm sure others have posted it, but here is my posting Shelby...lol
The joke is that it also means Redundant Array of Inexpensive Disks, so he said "something or other"
I'll never understand why people are still bothering with conventional raid when ZFS has basically become mainstream. For years! And NO, never mind the low-hanging fruit usually thrown in your face when it comes to raid 5 and large capacity drives. Personally I just can't stand the insane times it needs to initialize an array before you can do something with it. Or fast-init it but then know it will grind the drives for EVEN LONGER in the background, for goddamn zeroed drives. Or days spent resyncing when the system is otherwise filled at 30% capacity. I also like the flexibility of ZFS: I can take my drives, put them in a bone stock install, import the ZFS pool and have access to my data in a second. I don't need to care about the hardware spec, firmware revisions, config files or whatever. But hey, maybe that's just me.
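That "import and go" flexibility looks roughly like this; "tank" is just a placeholder pool name:

# on the old box: cleanly detach the pool
zpool export tank
# on the new box, after moving the disks over
zpool import          # with no arguments, lists pools found on attached disks
zpool import tank
zpool status tank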
ZFS on Linux could get in the way of Shelby's situation knowing his luck lol. BSD-based ZFS means no DaVinci Resolve, so it's a no-go. I do agree that it's a better solution for a file server like this. RAID imo is best for pure file servers with a small number of disks (like those 4-bay NAS enclosures). RAID 5-7 are just meh
when we initialized the arrays we tuned it so that creating the arrays and FS only took about 10 minutes. Yes, ZFS is nearly instant, but there were other factors to account for. And it would have been MORE work to turn off the snapshot journal and metadata duplication for data deduplication.
Not to mention the ZFS IO driver on Linux is kernel-specific and has been known to change and not mount a pool between kernels... not a good fit for long-term storage.
ZFS is a good choice for a NAS... it's not the right tool for this particular job.
@@genderender you do realise Ubuntu 18+ includes ZFS standard in their kernel
moosethemucha I didn’t say it didn’t
I'm not gonna argue with you because ZFS under Linux is not my cup of tea. It's just that when I saw this video I saw enough red flags to make me pay attention. I'm gonna be obnoxious enough to reiterate the old adage that RAID is not the proper way to archive data. It should provide data availability and uptime, not long-term persistence. I see tech youtubers getting this wrong left and right, from LMG down across the board; they all seem to get the process skewed. Just because you can cram a lot of drives in a server doesn't mean you should - unless you're selling cloud storage or you're one of those people that light cigars with wads of money. Because you have a timeline with defined cycles of projects, the data you're storing has a short shelf life; you need to flush everything older than, let's say, 6 months maximum into actual archives, and that can mean only one thing: duplicate tapes. Another save in a cloud is just a wonderful bonus. With LTO-2 you will need 120 tapes to flush a filled-up 24TB array. That means TWO SETS of 120 tapes stored in two different locations. That's a lot of tapes! Even if you take a couple of years for a full cycle it's still a lot, and they keep adding up. This is why I would have halved the array size and put my money on getting a newer tape drive. LTO-5 is already 10 years old; it's not like you pay any early adopter premium. LTO-6 would be even better. You see then, when I saw this video, IMHO I saw more "wants" than "needs" and a functional bottleneck. That led me to believe that maybe the choice of raid wasn't sufficiently thought through either.
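A quick sanity check of that tape math, using native (uncompressed) capacities of roughly 200GB for LTO-2, 1.5TB for LTO-5 and 2.5TB for LTO-6:

echo "24000 / 200" | bc     # LTO-2: 120 tapes for a full 24TB array
echo "24000 / 1500" | bc    # LTO-5: 16 tapes
echo "24000 / 2500" | bc    # LTO-6: prints 9 (integer division; really ~9.6, so 10 tapes)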
RAID 5.. into the deep end, brother.. good luck.. and trust me on this: drives don't wear out at the same time just because you bought them on the same date.. for the love of DATA. Love the vids, keep 'em coming. On a personal note, I have been running WD Reds for a long time and they do well. It is a lottery though. I am thinking about IronWolf Pros for my next storage solution.
So, why didn't you use ZFS pools instead of RAID5? (apologies if you mentioned this...commenting while watching)
I was fully expecting "Computers: How do they work?"
You should consider getting a X9DRi-F, X9DRI-F-O, or X9DRi-LN4F+, all of them have more favorable slot configurations than your current board.
How much utilization do you see using nvidia-smi while there is a render going on?
I've fallen in love with my Model M too.
Wow is about all ive got for you
you look bigger lol, but it's an illusion from the beard and the hair in the first minutes
What is currently the best solution for network based rendering with Blender?
I was just thinking about the RAID 5 issues, I don't recommend RAID 5 to clients anymore but it's fine if you're willing to accept the risk.
WITH the proper backups he's doing, and the fact that this is an archive target with size and basic redundancy as the goals, the risks were weighed and we decided it was the right direction. Trust me, he's well apprised of the risks and has a robust plan in place for data safety.
@@Gartral I think that's basically what I said.
@@dangerousmythbuster I was attempting to reinforce, not refute the point. Sorry, that could have been worded better!
@@Gartral no problem
Did you read that with peripheral vision? Or was the paper a pure prop? You looked straight at the camera and not the paper during that lmao 😂
Consider playing Terraria in a livestream or something. I think it could be a good time.
No lego island
*looks at his Mac Pro with 2.2 TB and Linux file server with 1.2 TB* I haven't even filled that up yet and I still want more storage
I wouldn't mind having an hour long video tbh
Ah, I see you learned that servers aren't PCs and are a whole other world of complexity. You should really be using ZFS for everything as it has other benefits.
Damn Shelby, you could read off benchmarks for Gamers Nexus.
I don't understand anything, but... cool! 😅
1:40 I'm beginning to feel like a (server) rack god
pop_OS, Hell yeah.
OK, you really need a lift table, similar to the ones This Old Tony and Marius Hornberger have videos on making (out of metal and wood respectively). Global Industrial, for instance, has this:
www.globalindustrial.com/p/material-handling/lift-tables/mobile-scissor/mobile-scissor-li-table-330-lb-capacity?ref=cat/b/mobile_scissor
Which is a reasonably capable looking mobile lift table for a reasonable looking price, except for the fact that Global Industrial itself has fair to god-awful consumer reviews. Their BBB rating is showing as A+, so they do deal with consumer problems in some manner. (Their products look like standard industrial supply sort of stuff, so it's quite possible regular consumers are just not familiar with how an industrial supplier normally operates and are unhappy with that level of service.)
edit: Poking about a bit further, they also have things like this:
www.globalindustrial.com/p/material-handling/lift-trucks/office-lab/hand-w-operated-office-li-truck-220-lb-capacity
Unpowered light-duty (for this kind of equipment) high-lift lift truck, lifts up to 59 inches, and the platform is 19 inches wide, ideal for rack-mount gear too heavy to comfortably lift.
Seriously, if you're going to use such heavy rack-mount units, get yourself a lift truck or table. It would be disastrous in multiple ways to slip carrying something like that server.
"I should be good for a very long time" .. :: 8k video knocks at the door::
Ya need Linus to Make ya your Own Petabyte Server in a few years :D
Buy a 1650 Ultra it has the TU 106-125 Chip and RTX Cores for Video Rendering.
Man, I had to slow down the video for that opening part; way too fast for me at 2x speed.
yaay! new video!
Did you make sure those drives you got are not shingled?
the 6TB ones aren't; we went over that pretty hard. The 8s... well... they're really meant for archival, so...
@@Gartral ehh, if the 8TB drives die, he says he will have proper archival backups on tape and offsite, so even if they are SMR and he loses the entire array... it should not matter much anyway.
"SATA AF" hehehehe marked it on my drive as a joke
Top tip - for local archival, just give up on using anything fancy and get a QNAP/Syno.
Bang some disks in, set up rsync, and never have to think about it for another five years.
Yeah, I know some of the fun is in building it, but I'm a Linux systems administrator, and I do enough of that at work frankly.... I've been really pleased with the wee Syno DS214+ I've had since 2015, it just keeps on trucking along with zero input required, does everything from FTP to iscsi. Absolute bliss compared to the FreeNAS box I had before (although I hear FreeNAS is waaay better now, too)
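The "set up rsync and never think about it" part is roughly this, assuming the NAS exposes SSH; the paths, user and hostname are placeholders:

# one-way archive push; -a preserves permissions/times, --delete mirrors removals
rsync -a --delete /mnt/projects/ backupuser@syno-nas:/volume1/archive/projects/
# then drop it in cron, e.g.:
# 0 3 * * * rsync -a --delete /mnt/projects/ backupuser@syno-nas:/volume1/archive/projects/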
Anyone ever tell you you look a lot like Weird Al Yankovic lol
As Wendell would say, computers barely work.
Another great one is: Learning sand to think was a mistake.
@@ToTheGAMES I believe it is actually "Teaching sand to think was a mistake".
It’s weird seeing a techtuber not able to put dual 2080ti or titan cards with like dual 28 or 64 cores and over a tb of ram.
As interesting as those videos are, I think you need some help from Linus to get one of those multi-terabyte Storinators
Please tighten the legs of the desk that you use at the beginning of the video.
raid 5 is fine, far as i know
having 50+TB of storage sounds nice, but spinning all this rust 24/7 feels somehow wrong to me.
I see your channel and it's what always amuses me about youtubers.
You have simple projects that you can do on your phone, but you've surrounded yourself with equipment like you were in a space station. ;)
Over-invented, over-invested, too complicated.
If you need HDDs for roughly 11€/TB, I have a merchant in the Netherlands with very attractive offerings. I got 200TB myself recently, running 16 drives in RAID 60 with 4 spares for the future, muhahahaha...
About ZFS vs HW raid: try doing that with a Windows server LOL. I know Linux is superior but sometimes expedience is more important. And I got 2 spare raid controllers, so in case something happens I have a reserve.
it always catches me offguard when americans say "what all". it's something you just don't hear elsewhere.
Dude u got wonderful hair, why don't u make a shampoo commercial in ur free time?
Please cover 486 computers from 1995 era.
❤️🇵🇭
Wasn't Pentium already king in the mid 90s?
x2go - um, where have I been living - thanks for that. ssh -X always sucks and VNC is also shitty
Why do all computer guys grow their hair so long?
Let me just comment before I even watch the damn thing!
just to let you know, the RTX 2060 is now $299
(from what I heard)
I actually thought you looked better with facial hair
Computers were a mistake.
To sum up the first three minutes.... "FUCK"