Error correction codes (Hamming coding)
Embed
- Published Jan 6, 2014
- How do we communicate digital information reliably in the presence of noise? Hamming's (7,4) error correction code demonstrates how parity bits can help us recover from transmission/storage errors. This must be taken into account when thinking about Shannon's idea of channel capacity and information rate. (hamming code, error correction)
Link to series playlist: ua-cam.com/play/PLbg3ZX2pWlgKDVFNwn9B63UhYJVIerzHL.html
Beautifully explained.
thanks, glad you found this series, was thinking of making a full version of it
So clever! Thanks for the explanation video, excellent as always.
I might use this trick in practice in my RF project.
Read about Reed-Solomon codes; that's the modern way to do this, although they're more complicated.
As usual, your content is informative and succinct. I teach many of the CompTIA classes and if I have a choice on videos to recommend, I always use yours.
+Andrew Karaganis Thanks for sharing, Andrew, that's wonderful. I'd love to know how I can help CS teachers even more. Have you checked out the latest series?
Nice explanation and great Venn diagram visuals (Y)
This is an absolutely amazing video and explains the concept so intuitively. 😊😊
Thanks for the feedback, I actually just published a new video on error correction which goes one step further than this one.
@@ArtOfTheProblem Thanks! I'm gonna check it out:)
Awesome Explanation, Thanks
AvinashGz of 6teen ....
There are some small simplifications that made it a bit harder to understand. If the odd/even parity bit is evaluating 4 possible signals, it would be able to detect that ONE of the four signals is wrong, but not if TWO were wrong, UNLESS it had the additional parity bits. So this asks you to suspend your logical disbelief until the next point is made.
My mind first went to this: if there were a parity bit evaluating three spaces, it would always be right, but perhaps locating the positional error would be more difficult.
Also, it makes me wonder whether it might* have helped to start with three spaces rather than four, because at the end, with the three overlapping circles, you are only evaluating odd/even for 3 spaces. Introducing the concept that way might* avoid the problems above.
It might also be cool to go into why, because of the repetition, we can compress the message: why the verification data can go from 4 signals to 3.
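To illustrate the single-parity-bit limitation discussed above, here's a tiny Python sketch (the 4-bit word is made up for the example): one flipped bit is detected, but two flips cancel out and go unnoticed.

```python
def parity(bits):
    # Even-parity bit: 1 if the number of 1s is odd, so data + [parity] has even weight.
    return sum(bits) % 2

data = [1, 0, 1, 1]
p = parity(data)                 # parity bit sent alongside the data

one_error = [0, 0, 1, 1]         # one bit flipped during transmission
two_errors = [0, 1, 1, 1]        # two bits flipped

print(parity(one_error) != p)    # True: a single flip is detected
print(parity(two_errors) != p)   # False: two flips cancel and slip through
```

Note that even in the detected case, one parity bit over four data bits tells you *that* something is wrong, not *which* bit; locating the error is what the overlapping parity groups are for.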
Great Video!
What if the parity or data bits are altered too? Are they not susceptible to interference?
Great channel btw
There is distance between the codewords.
If a parity bit changes, as you say, maximum-likelihood decoding picks the nearest codeword.
ua-cam.com/video/-15nx57tbfc/v-deo.html
ua-cam.com/video/5sskbSvha9M/v-deo.html
Yes, parity bits are susceptible to interference, but it's also possible to know if that's the case and correct it by flipping it. If a data bit has an error, two parity bits will be wrong during the check - in this case, correct the data bit. If only one parity bit has an error, only the parity bit will be wrong during the check - in this case, correct it by flipping the parity bit.
If you have two parity bits that are wrong, it's a pretty noisy transmission that needs more redundancy :)
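To make the decision rule above concrete, here's a minimal Python sketch of a Hamming(7,4) decoder (the bit layout p1 p2 d1 p3 d2 d3 d4 and the function name are my own choices for illustration): two or three failing checks point at a data bit, a single failing check points at that parity bit itself.

```python
def hamming74_decode(cw):
    """Decode a 7-bit list [p1, p2, d1, p3, d2, d3, d4]; corrects any single-bit error."""
    c1 = (cw[0] + cw[2] + cw[4] + cw[6]) % 2  # p1 checks positions 1, 3, 5, 7
    c2 = (cw[1] + cw[2] + cw[5] + cw[6]) % 2  # p2 checks positions 2, 3, 6, 7
    c3 = (cw[3] + cw[4] + cw[5] + cw[6]) % 2  # p3 checks positions 4, 5, 6, 7
    syndrome = c1 + 2 * c2 + 4 * c3           # binary position of the flipped bit (0 = clean)
    if syndrome:
        cw[syndrome - 1] ^= 1                 # flip it back, whether it's a data or parity bit
    return [cw[2], cw[4], cw[5], cw[6]]       # the four recovered data bits

# [0,1,1,0,0,1,1] encodes data 1011; flip the data bit at position 5:
print(hamming74_decode([0, 1, 1, 0, 1, 1, 1]))  # [1, 0, 1, 1]
# Flip the parity bit at position 2 instead; only check c2 fails, so only p2 is flipped:
print(hamming74_decode([0, 0, 1, 0, 0, 1, 1]))  # [1, 0, 1, 1]
```

When a parity bit is the corrupted one, it is the only bit in its own check group that the other two groups don't cover, so the syndrome lands exactly on it.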
And to think...Language of Coins will be over. I really enjoy it though!
interesting, but it seemed that the video got cut off & there's more to be said & learned
I've a test tomorrow, I'm gonna watch this till I can fucking explain every fucking part of this video backwards
stfu
How did it go
@@eli2858 Ended up with an A on it. So the video is good .. :)
the chalk board is supposed to read "The purpose of computing is insight, not numbers" correct? computers/computing
I wonder if the recording was done before he released the book and may have changed the title
Finally another one.
is this the final one? or are you going to upload the 16/16?
andres martinez working on final now
its like waiting for game of thrones season 4.
4:10 Does the trick preserve the order of input?
shouldn't it be, automatically correct errors at the expense of increasing the size, rather than the other way around? A minor point. Thank you for the video.
So if only 1 parity bit is incorrect, the data doesn't have an error, but the parity bit has an error right?
you said: "if the parity bit has an error, [...], the parity bit has an error"
please ask again ;)
Remember that this code can only correct one error; if two or more errors occur, it cannot correct them any more.
if you're interested, google for "hamming code"
No, I mean if only 1 parity check doesn't compute, then we can assume the parity bit has a transmission error.
yep, exactly. still assuming that we only have one error.
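A quick sketch of why two errors break it (Python; the p1 p2 d1 p3 d2 d3 d4 layout and names are assumed for illustration): with two flips the syndrome is still nonzero, but it points at the wrong position, so "correcting" it would make things worse.

```python
def checks(cw):
    # The three parity checks of Hamming(7,4) on [p1, p2, d1, p3, d2, d3, d4].
    c1 = (cw[0] + cw[2] + cw[4] + cw[6]) % 2
    c2 = (cw[1] + cw[2] + cw[5] + cw[6]) % 2
    c3 = (cw[3] + cw[4] + cw[5] + cw[6]) % 2
    return c1 + 2 * c2 + 4 * c3  # syndrome: claimed error position (0 = clean)

good = [0, 1, 1, 0, 0, 1, 1]              # a valid codeword for data 1011
one = good[:]; one[4] ^= 1                # single error at position 5
two = good[:]; two[4] ^= 1; two[6] ^= 1   # errors at positions 5 and 7

print(checks(one))   # 5: correctly points at the flipped bit
print(checks(two))   # 2: nonzero, but it points at the WRONG bit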
What if the error occurs in the parity bits?
PS: what's with the weird music in the background, are you trying to scare us?
Parity bits can have errors, but it's also possible to know if that's the case and correct it by flipping it. If a data bit has an error, two parity bits will be wrong during the check - in this case, correct the data bit. If only one parity bit has an error, only the parity bit will be wrong during the check - in this case, correct it by flipping the parity bit.
If you have two parity bits that are wrong, it's a pretty noisy transmission that needs more redundancy :)
but... but... what if a parity bit gets screwed up? For a given error, it's going to happen about half the time, and if a parity bit is messed up, it will falsely "correct" a non-errored nibble. So this only corrects 4/7 of errors. If an error is going to happen, it's going to happen; the hope here is only that the errored bit lands on the non-parity bits. Can someone explain why error correction works better than this?
Because only one parity bit will have an error. If a data bit has an error, there will always be a pair of parity errors. The only actual problem would be multiple bits all being screwed up. But for that to occur, there would be so much noise that it's considered too difficult to broadcast.
What would happen if a parity bit was wrong? It would "correct" all the other bits and mess up the data. So how is this an improvement?
Add the rule "if only one parity bit is mismatched flip the parity bit".
+JonnyLatte What if two parity bits were flipped?
+kagi95 Then it would mess up the correction. If that's a problem, you need a system with more redundancy, and if errors come in clusters, you need to distribute that redundancy so it's unlikely to be completely overlapped by one chunk. Nothing will be perfect, though, and the more redundancy you add, the more expensive it is to store and transmit the same amount of data. If you know how corrupt your communication channel is, you can work out how much redundancy is needed, and you can use larger checksums to know whether your block of data was corrected, with any level of certainty you need (a 256-bit checksum means a 1 in 2^256 chance of a corruption having the same checksum as the original, for example).
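A minimal sketch of the checksum point above (Python, using the standard `hashlib` library; the sample strings are made up): a 256-bit digest detects corruption with overwhelming probability, though unlike a Hamming code it can't locate or fix the error.

```python
import hashlib

def checksum(data: bytes) -> str:
    # SHA-256 gives a 256-bit digest; an accidental collision has probability ~2^-256.
    return hashlib.sha256(data).hexdigest()

original = b"hello, noisy channel"
received = b"hellp, noisy channel"   # one corrupted byte

print(checksum(original) == checksum(original))  # True
print(checksum(original) == checksum(received))  # False: corruption detected, not located
```

This is the usual split in practice: error-correcting codes fix the common small errors, and a checksum over the whole block tells you whether the result is trustworthy.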
JonnyLatte Thanks!
Are you sure the code isn't p1 p2 d1 p3 d2 d3 d4 at 3:45?
+Christian Moen Yes, good catch; there was an annotation. I'll have to update it.
+Art of the Problem As you see, i can explain this now... too bad i didn't get this at my test at all :/
+Christian Moen ha yes, you don't really know something until you can correct it.
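For reference, a tiny encoder sketch in that p1 p2 d1 p3 d2 d3 d4 order (Python; the function name is my own): each parity bit covers the data bits whose 1-based position has the corresponding binary digit set.

```python
def hamming74_encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4        # covers positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4        # covers positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4        # covers positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

print(hamming74_encode(1, 0, 1, 1))  # [0, 1, 1, 0, 0, 1, 1]
```

Putting the parity bits at the power-of-two positions (1, 2, 4) is what makes the three failing-check pattern spell out the error position in binary.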
Still confused asf help!
This is a really poorly presented explanation. 👎