Awesome repair. Well done. I have to admit that I did a similar oopsie years ago: I put a capacitor so close to the pad for an LCD metal-frame bending hook that it was ripped off by whatever pliers were used to bend the hook. I fixed that years ago, though. Still getting some of them back for service and have to be very careful.
Great three and a half minutes of content. It is precious. Thanks a lot.
mine has the 440i, wonder if it has the same issue
It is not just stupid hardware design but stupid software design as well. Many of these either wouldn't boot if you had a single disk, or you had to configure a RAID 0 stripe on that single drive; and if the disks already had data on them, this crap wiped them. They just could not offer a damn JBOD option on many of these. On some servers we even gave up and just plugged a single USB drive into the internal motherboard USB connector and booted from that.
HP wants to sell the M.2 boot adapters...
Another interesting issue with many RAID adapters is that when a single block goes bad on a disk, the entire disk is marked bad and no longer used.
When you have two disks in RAID-1 and one disk has a bad block, that disk is removed from the array and needs to be replaced.
If the other disk develops a bad block in another location before you can do that, you lose all your data. Even though it is all still there.
I always found that an "interesting design decision"...
And of course most of us know that in cases like this, it usually helps to eject and re-insert the same (bad) disk; the data will be copied over from the other disk and it works again.
When there is such an easy solution, why doesn't the controller software try it by itself?
We'll never know...
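The per-block fallback described above is easy to sketch. Here is a toy illustration in plain Python with hypothetical in-memory "disks" (real controller firmware is obviously far more involved): a RAID-1 read serves each block from whichever mirror can still read it, instead of condemning an entire disk on its first bad block.

```python
# Toy RAID-1 read path: a bad block on one disk is served from the mirror
# instead of failing the whole disk. In-memory "disks" only; this is purely
# illustrative, not any real controller's behavior.

BAD = None  # marker for an unreadable block

def mirror_read(disk_a, disk_b, block):
    """Return the block from whichever mirror can still read it."""
    for disk in (disk_a, disk_b):
        data = disk[block]
        if data is not BAD:
            return data
    raise IOError(f"block {block} unreadable on both mirrors")

# Each disk has one bad block, but in *different* locations...
disk_a = ["a0", BAD, "a2"]
disk_b = ["b0", "b1", BAD]

# ...so every block is still recoverable from one side or the other.
print([mirror_read(disk_a, disk_b, i) for i in range(3)])  # → ['a0', 'b1', 'a2']
```

With the "whole disk is bad" policy from the comment above, the same two disks would mean total data loss, even though every block is still readable somewhere.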
@@Rob2 Shit HP design; they've gone down the drain over the years. No wonder everyone has gone the way of ZFS/BTRFS/Ceph; hell, even LVM has better performance and handling.
Do hardware RAID controllers still make sense? With today's filesystems, such as ZFS, BTRFS, etc., it is better to handle redundancy at a higher level. Not only does it have minimal impact on performance, it is much safer: ZFS, for example, maintains a checksum of the data, something RAID controllers typically do not do (meaning that if you have a disk that is not completely broken but spits out corrupted data, your data will silently corrupt and you will not notice until it's too late). Also, good luck recovering the data on the drives if the controller itself breaks down...
To me there is no sense in using a RAID controller, and if there is one anyway (because the server already has it), configure it as JBOD and do everything on the software side. And if someone says "well, but Windows does not support it": does it still make sense to install a Windows server bare metal rather than on top of a hypervisor like Proxmox?
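The silent-corruption point can be sketched in a few lines. This is a toy model in plain Python, not ZFS itself (ZFS actually keeps fletcher or SHA-256 checksums in its block-pointer tree): storing a checksum alongside each block lets a read detect a disk that returns wrong data without ever reporting an I/O error.

```python
import hashlib

# Toy model of checksummed block storage, loosely in the spirit of ZFS:
# each write records a SHA-256 of the data, each read verifies it.
# A plain RAID controller has no such end-to-end check and would hand
# back the corrupted block as if nothing happened.

store = {}  # block number -> (checksum, data)

def write_block(n, data):
    store[n] = (hashlib.sha256(data).hexdigest(), data)

def read_block(n):
    checksum, data = store[n]
    if hashlib.sha256(data).hexdigest() != checksum:
        raise IOError(f"checksum mismatch on block {n}: silent corruption caught")
    return data

write_block(0, b"important data")
# Simulate a disk that silently flips bits but reports no error:
store[0] = (store[0][0], b"importent data")

try:
    read_block(0)
except IOError as e:
    print(e)  # the corruption is detected instead of being returned as good data
```

In a redundant setup the filesystem can then repair the block from a good copy; without the checksum, neither the controller nor the application has any way to know the data is wrong.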
@@alerighi Well, one reason I can think of is the "just swap the defective drive for a blank one and get an automatic rebuild" support that a RAID controller offers.
I am using BTRFS on my own home system, and while in general I am very satisfied with it, including the checksum support and the good recovery from single-block errors (scrub), I can tell you that the handling of defective drives (and even more so of temporarily disconnected drives) really s*cks!
Apparently the developers see no priority in fixing that, and who are we to complain about free things...
Smart array, dumb design.
These parts may be designed with the expectation that they'll be handled gently, but they do need to be handled.
Stupid design. Why couldn't the two capacitors have been located next to the other two, or a little notch cut out of the plastic when it was moulded?
I think with symmetrical data lines they have to be the same length, which is tricky to achieve. You can't just move the caps, unless the parity is maintained.
10 minutes to fix, 3 months of back and forth between the hypervisor vendor and the server vendor diagnosing it.
In fairness, they're designed by Adaptec (or maybe these by LSI). Either way, Digital/Compaq/HPE don't do it anymore.
That's true unfortunately. I never liked Adaptec RAID controllers but hopefully HP changed the firmware...
It's a shame... HP was once so proud of its Smart Array controllers, and now this.
This is what happens when the mechanical engineers and the electronics engineers don't properly coordinate with each other.
But at least they decided to rewrite every CLI and manual so none of their slave labour in China is offended by the terminology.
wow, all because of a dumb air baffle
HP have come a long way... backwards, compared to the magic they used to create. Such a shame when bean-counters rip the creative and disruptive heart out of an organisation.
When I first came into electronic engineering I really wanted to work for them; now I wouldn't go anywhere near. Very sad.
Yep.
To me HP is just a sleazy company that sells crappy plastic printers that require $16,000/gallon ink cartridges.
The caps could have been soldered in a bit more neatly. 😇
Yeah, but they work nevertheless :)