I will rate this video #1 in teaching concepts. There should be yearly YouTube awards for the best videos in teaching, reporting... all genres! I am sure this video would find an award there.
thank you thank you thank you
brother this video is so good it makes me emotional, like it makes me regain faith in humanity.
wow can't ask for more
Wow, what a great explanation! I have professors who try to explain this, lecture after lecture, using cluttered equations that hide the essence of the idea. Maths is important, but it is useless without context or an understanding of the idea. Sometimes I wonder if they do this deliberately, to make us think that they are really clever for understanding such a long string of equations.
Thanks for the feedback, Bashcode. I think most people do it by accident because it's hard to be both clear and accurate. Most people are one or the other; rarely do you get a Feynman who can do both (I can't). This video is the result of a team of writers working to achieve both (clear and correct). At the beginning we didn't have any sort of 'narrative' in mind for how to teach this. Only after a few months of back-and-forths did we find a 'teachable sequence' which was learner-friendly yet accurate enough for a PhD thesis, no less. A very fun, challenging and rewarding process.
@@ArtOfTheProblem So grateful for this channel and the channel's team! Also, Robert Gallager had an awesome message... I bet AI is going to do the same thing, resurrecting interesting solutions to problems that are suddenly going to be very important.
ditto!
Such a great, high-quality video. Thank you
This video made me recall an ECC lecture at DEC (Digital Equipment Corp) during a maintenance course on the PDP-11/44. Prior to that, tape decks only had vertical parity (the ninth bit of every byte) and longitudinal parity, which was a byte appended to the end of each block of 512 or 1024 bytes. With this scheme, vertical and longitudinal errors would point directly to a single correctable error. Things improved when the longitudinal byte was replaced with a 16-bit ECC implemented as two bytes (many implementations were based upon CRC-11). Getting back to the PDP-11/44, every 32 bits of data were implemented with 39 bits of memory (every 8-bit byte had a parity bit; 3 additional bits were necessary to implement the Hamming code). With this scheme, all single-bit errors were correctable, as well as many double-bit errors.
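For anyone who wants to see that row/column trick concretely, here is a rough Python sketch of the vertical/longitudinal parity scheme described above. The byte values and helper names are made up for illustration; real tape formats differ in the details.

```python
# Vertical parity names the bad byte; the longitudinal XOR difference
# names the bad bit; their intersection is the single correctable error.

def vertical_parity(byte):
    return bin(byte).count("1") % 2   # even parity over the 8 data bits

def longitudinal_parity(block):
    lrc = 0
    for b in block:
        lrc ^= b                      # XOR of every byte in the block
    return lrc

block = [0b10110010, 0b01101100, 0b11100001]
vps = [vertical_parity(b) for b in block]
lrc = longitudinal_parity(block)

corrupted = list(block)
corrupted[1] ^= 1 << 3                # flip one bit in transit

row = next(i for i, b in enumerate(corrupted)
           if vertical_parity(b) != vps[i])                    # failing byte
col = (longitudinal_parity(corrupted) ^ lrc).bit_length() - 1  # failing bit
corrupted[row] ^= 1 << col            # correct the single-bit error
assert corrupted == block
```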
As someone who has done a mathematics and computer science degree, these videos are PERFECT to watch during lunch break at work. Thanks!
Great to hear you enjoy these
that was a brilliant explanation, the reasoning of the naming and how you connected that part was perfect, thank you
Quality-wise, I think AotP easily makes it into the top 10 educational channels. Every single upload is elegant, entertaining, and informative. I miss those explanations done with strings and peas and bowls, though :)
Thanks for your kind feedback, Rares. These information theory videos produced with IEEE have a slightly different style (animation-heavy) compared to my regular episodes (which are mostly live action). I will continue both styles in the years ahead.
You are truly gifted at explaining complicated things in a simple and interesting way, and I just want to thank you for sharing your knowledge in an entertaining manner. Your effort in making those educational videos on your channel is highly appreciated, and I'm looking forward to more videos to watch. I just wish you could write a technical ebook about the contents of your channel, since I really love your elegant style of teaching that gets to the bare essentials of a concept. Cheers and more power to your channel!
Thank you so much for this video, I had trouble understanding LDPC codes but now it's crystal clear.
Thanks for making this video! You made complicated and theoretical concepts easy to grasp.
we really appreciate the feedback
Wonderful explanation!! ❤🎉😊
New video out!! ua-cam.com/video/PvDaPeQjxOE/v-deo.html
You know it's going to be a great day when *Art of the Problem* uploads a new video!
Happy Thanksgiving, everyone!
Probably a stupid question, but how does the receiving computer know which bit is tied to which parity bit if they are random?
Good question, it's defined at the protocol level and so you can assume both sides know the pattern in advance.
Does that mean the message header is sent without error correction codes?
@@ArtOfTheProblem It seems to me that an awful lot of bits would be required to communicate the random allocation between bits and parity bits. Which would of course defeat the purpose, since those would need to be protected as well. How is the allocation communicated efficiently? Is it maybe the same random pattern repeated over and over?
That's what I'm wondering as well; surely there should be some sort of standardised mapping or a pattern to it.
@@charaleet6494 Great questions. The simple answer is "the same random pattern is repeated over and over", as you say. The randomness is defined at the protocol level. They are also correct to note that it's not "so simple" to use just random structure; today it's more subtle. This video only covers the key insight of using randomness (and small sets) in the creation of error correction codes.
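To illustrate one hypothetical way "defined at the protocol level" can work (not what any particular standard actually does): both ends derive the same pseudo-random bit-to-check mapping from a fixed, shared seed, so no mapping bits ever travel over the channel. The `shared_mapping` helper, the seed, and the sizes below are all made up.

```python
import random

def shared_mapping(n_bits, n_checks, w, seed=0xC0DE):
    rng = random.Random(seed)            # same seed baked into both sides
    # each bit joins w distinct parity checks, chosen pseudo-randomly
    return [rng.sample(range(n_checks), w) for _ in range(n_bits)]

sender_map = shared_mapping(n_bits=12, n_checks=6, w=3)
receiver_map = shared_mapping(n_bits=12, n_checks=6, w=3)
assert sender_map == receiver_map        # identical, with zero bits sent
```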
Highly informative, yet very easy to understand. Thank you for this video! The visual elements really help link the examples to the theoretical notions behind them.
appreciate the notes Mizu
Finally I understood the essence of LDPC codes. Thanks
best explanation ever on LDPC
Awesome video.
Thanks for your feedback
@@ArtOfTheProblem 9:25 can you tell me where Peter Alias(?) proved this?
@@NoNTr1v1aL I believe this is the reference: web.mit.edu/6.441/www/reading/hd2.pdf (1956)
Also, a list of important papers by Elias is here (www.nasonline.org/publications/biographical-memoirs/memoir-pdfs/elias-peter.pdf).
@@ArtOfTheProblem thank u very much.
You explained it so incredibly well!
stay tuned for more!
Brit, as a non-computer scientist with aspirations of becoming one, you continually blow my mind with these videos. Remarkable tutes. So clear. Great examples & animations. Zero pretension. I hope Khan Academy is still featuring all your stuff. Will contribute to your Patreon soon.
Thanks for your continued support. Indeed you can find Episode 1 & 2 on KA
I subscribed back when you made the RSA video, superb channel 💪🏻
Nice to know you are still around Andrea. Hope to see you again
This is a superb video!
Thanks for your feedback Emil
Man this was great. Wonderful job.
thanks! stay tuned for more
Big thumb up! Very nice and logical explanation of LDPC. 👍
I learned a lot from this, thanks!
Simple yet informative
Dr. Gallager seems like a cool dude.
Beautiful explanation.
What a great video! So intuitive! Very well done!!
appreciate the feedback
Excellent content. Thank you for the clean work
yes this one came out crisp, but it was a very hard one to make I recall.
There I was bouncing around trying to understand P = NP and finally found a video that wasn't confusing AF... yours.
Now here I am learning all sorts of information theory because I apparently find it very interesting! Lol.
thrilled to hear it
Great explanation, quick question: the video explained the code in terms of bit erasures, but what about bit flips? I'm guessing those are harder to detect. How does LDPC work in that case? How does it detect which message bits were flipped?
What a fantastic explanation ...
this was a hard one!
"Halfway between a modern cell phone, and a coffee machine"
I died.
I think that's still an overestimation considering how powerful modern cell phones are.
This is a really good explanation 👍 definitely going into my bag of training for interns.
excellent glad to hear it
IT IS AN AWESOME VIDEO!! Everything is so clearly explained. Thank you so much!
thanks so much for the feedback glad this was helpful
Such a great video, guys. Good job!!
Glad you enjoyed, thank you!
Great explanatory video, thank you
I've been thinking along the lines of file-wide single checksums. But overlapping, randomly connected single-bit checksums work better.
Interesting, what are you working on?
I just mean my understanding before watching this video vs. after watching it.
I think you should have focused more on switched bits in the beginning, instead of only unknown ones, as you can't correct a switched bit with one parity bit, because you don't know where it sits.
Despite that, great video.
This is really well-done!
appreciate the feedback
Best explanation
Such a simple explanation!
Thank you, we worked hard on this to make it simple, the only one out there :)
The ending note was very inspiring.
Excellent, sir, thank you so much for the video
appreciate the feedback
Very well explained!
really appreciate the feedback
Interesting stuff.
Unfortunately, it just raises more questions in my mind. I mean, if it relies on identifying erasures, then it's specifically suited to analogue channels as you'd find in the real world. But that makes me wonder...
The level of 'erasure' must also exist on a continuum... so surely there have to be more layers to this scheme in practice. For example... let's say we have a set with three 'erasures' that are only slightly below the detection threshold _(so we have some small confidence in the signal, some non-zero level of confidence)_ compared to another set which has two 'hard' erasures _(where we have no confidence in the signal)_.
The theory says to try the set with the fewest erasures first...
But it may make sense, in such scenarios, to try solving the more confident set first despite it actually having more erasures, while simultaneously trying to solve the low-erasure set. In each case, the strategies will have knock-on effects... and the number of knock-on effects may yield some sort of final _"correction distance"_
... the final correction distance of each strategy could then perhaps allow us to favour one solution over another. Could this more probabilistic approach lead to a small but measurable improvement in detection/correction?
I'm guessing it's complicated. It may depend on the type of noise and the modulation used.
I'm not sure how to explain my thinking, except to say that, in the analogue world, not all erasures are created equal. So, treating them as equal may mean missing vital clues.
I might code something up to play with this, simulating noisy messages over various channels... perhaps a QAM channel, an FM channel and a logic-level channel... to see if there is a measurable benefit to 'weighting' erasures under various types of noise - or whether doing so just hinders correction.
Well, I guess there goes MY week... : /
Please post what you find. My guess is that it helps.
That was amazing! Thanks.
@@josephboog thanks stay tuned
A basic concept is somewhat wrong in this vid: a corrupted bit does not turn into a question mark at the receiver; it is simply inverted, so a binary 1 becomes a 0. In a maximum-likelihood receiver, a decision maker is designed to determine the received binaries as 0s or 1s before any error detection/correction algorithm is applied. So one parity bit cannot correct a single-bit error. As an example: if the sent message is 11001(1), where the last bit is the parity bit, and the received message is 10001(1), the receiver only knows that the received message is wrong but cannot tell which bit is wrong.
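A small Python check of the point being made here: one even-parity bit detects a single flipped bit but cannot locate it. This is just an illustrative sketch (the helper names `add_parity` and `parity_ok` are made up, not anything from the video).

```python
def add_parity(bits):
    return bits + [sum(bits) % 2]        # append even-parity bit

def parity_ok(word):
    return sum(word) % 2 == 0

sent = add_parity([1, 1, 0, 0, 1])       # -> [1, 1, 0, 0, 1, 1]
received = sent.copy()
received[1] ^= 1                         # the flip described above

print(parity_ok(sent))                   # True
print(parity_ok(received))               # False: error detected...
# ...but flipping ANY single position of `received` restores valid
# parity, so the receiver cannot tell which bit was actually wrong.
```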
What an awesome video, thank you!
Took 2 years to finish this one, finally live. Would love your feedback: ua-cam.com/video/OFS90-FX6pg/v-deo.html
amazing explanation!
thrilled people are finding this
1:18 Repetition code. That's how the old TI 99/4A saved things on cassette. Every packet was repeated twice. Thus saving took almost twice as long as necessary. Minutes. Ugh.
Just Amazing!
stay tuned for more
new video! ua-cam.com/video/PvDaPeQjxOE/v-deo.html
Aw, I missed the premiere. I got the notification 18 minutes late.
Same, I just got notified
#me2bruh
Sorry I just figured out how to use this feature. It was really fun! Sorry about the late notice, I will give 24 hours next time. Plus we have another video in less than 2 weeks!
+Art of the Problem
Why even give a premiere though? It's like dangling a carrot on a stick for the subscribers, and if you happen to make it to the time when it's premiering, you either have to settle for not being able to speed the video up, or pause it while everyone else in the chat spoils what happens in the later part of the video by its very nature. Or close it and wait for the premiere to finish to finally be able to watch it normally.
I just don't get any reason for it!
@@Architector_4 Good points. I thought it would be good for A. early notification of when a video is coming (as long as you keep it < 24 hours) especially when there are sometimes months between videos. B. chance for live chat (it was fun today) -- I'm still exploring the idea and will only do it if subscribers enjoy the options.
Great visuals.
I dig it.
Stupendous!
Glad you enjoyed this
Well done! subscribing right away :)
Stay tuned for more!
yey! new video!! =)
Great video!
I'm doing an open university course; in it, LDPC's low density was described as the encoding matrix having a tiny number of 1s compared to 0s, like a ratio of 100:1. Is this accurate? It doesn't make sense to me why that would be the case! I much prefer the explanation of low density = each bit connected to fewer parity bits. This way larger codes can be decoded faster. I can share my course text if that is useful.
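For what it's worth, the two descriptions are the same property: if each of the n code bits participates in only w of the m parity checks, then the parity-check matrix H is almost all zeros. A toy sketch with made-up numbers (n, m, and w are just illustrative, not from any standard):

```python
import random

n, m, w = 1000, 500, 3                  # code bits, checks, column weight
H = [[0] * n for _ in range(m)]
for col in range(n):
    for row in random.sample(range(m), w):
        H[row][col] = 1                 # this bit joins this parity check

ones = sum(map(sum, H))
print(ones, n * m, ones / (n * m))      # 3000 of 500000 -> density 0.006
# i.e. roughly a 166:1 ratio of 0s to 1s, the same order of magnitude
# as the 100:1 figure in the course text.
```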
well explained! thanks
Nice message :)
great video keep it up and waiting for more :)
new video coming next week! and more after that
@@ArtOfTheProblem great. I clicked the 🔔 notifications button for the new videos.
😊
A question here:
Code and parity bits are sent in the same channel. What happens when the parity bits get corrupted? Or is this somehow prevented?
How come at 11:52 there are 4 black boxes but only 3 parity bits?
So... about erasures! I'm probably wrong, but it seems to me that an erasure _(as considered here)_ is different from an error, in that you know when an erasure exists.
If the sending channel is analogue and the signal is digital... then perhaps you're ideally looking for a string of +1 and -1 symbols. With channel noise you might consider some threshold such that anything over 0.75 will be a +1, anything under -0.75 will be a -1... and anything closer to zero will be a faded or 'erased' symbol...
But what about noise that introduces or flips symbols in a bold way? Now we're not just filling in known gaps in our knowledge about the message, but we're trying to identify, locate and correct otherwise confidently obtained _(but nevertheless, misleading)_ symbols.
Is this covered in the "erasure" strategy ... or, does an erasure strategy rely on the existence of some underlying transport strategy for identifying weak symbols - and then restrict itself to resolving those 'low-confidence' symbols?
I note that in hamming codes you can clearly identify a single, perfectly flipped, symbol. I'm just wondering if the same is true when discussing erasure - as the video seems to concern itself with the correction of red question marks rather than locating false positives : )
I'm probably just missing something obvious. Anyone?
great catch. Indeed they are different (erasures carry location information) and that simplified this explanation quite a bit...
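To make that distinction concrete, here is a minimal sketch of the kind of soft front end being discussed: samples near zero become erasures ('?') at known positions, rather than being forced to a possibly wrong 0 or 1. The `demodulate` helper and its threshold are assumptions for illustration.

```python
def demodulate(sample, threshold=0.75):
    if sample >= threshold:
        return 1
    if sample <= -threshold:
        return 0
    return "?"                           # erasure: low confidence,
                                         # but its location is known

received = [0.98, -0.91, 0.12, 1.05, -0.30]
print([demodulate(s) for s in received])  # [1, 0, '?', 1, '?']
```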
could randomness relate to sparsity of errors in the data?
How could I correct a 64-bit code (1s and 0s) that has been generated randomly when I don't have a parity check?
How do you deal with conflicting parity bits?
If the parity bits are also in multiple sets, then wouldn't there be cases where the same parity bit needs to be 1 as per one set and 0 as per another? What happens in those cases? And if such a case cannot arise, why is that?
Please clarify.
Basically, the random allocation of the parity bit mappings won't allow this to happen. Each parity bit only ever has to be correct for the data and parity bit sets it's linked to... if that makes sense?
Great video! What's the song at 11:40? Sounds like Geinoh Yamashirogumi.
All sounds are original compositions for AOP by Cameron Murray.
To get a feel for this, using this process, what proportion of the bits of a 1 megabit file can be randomly invalidated before I'm no longer at least 95% sure that the result is fault free?
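One way to get an empirical feel for this is a toy Monte Carlo: erase a random fraction of bits and run the peeling idea from the video over a random sparse code. Everything here (the `make_checks`/`peel` helpers and the parameters) is a made-up sketch, not a production LDPC, so the exact threshold won't match any real standard.

```python
import random

def make_checks(n, m, w=3):
    """Each of n bits joins w of m parity checks, chosen at random."""
    checks = [set() for _ in range(m)]
    for bit in range(n):
        for c in random.sample(range(m), w):
            checks[c].add(bit)
    return checks

def peel(checks, erased):
    """Solve any check with exactly one erased member; repeat to a fixpoint."""
    erased = set(erased)
    progress = True
    while progress and erased:
        progress = False
        for members in checks:
            unknown = members & erased
            if len(unknown) == 1:        # one '?': the check pins it down
                erased -= unknown
                progress = True
    return not erased                    # True if every erasure recovered

n, m, trials = 1000, 500, 50
for frac in (0.1, 0.3, 0.5):
    ok = sum(peel(make_checks(n, m),
                  random.sample(range(n), int(frac * n)))
             for _ in range(trials))
    print(f"{frac:.0%} erased: recovered in {ok}/{trials} trials")
```

In this toy setup, recovery is reliable at low erasure fractions and collapses somewhere below 50%; the precise cutoff for your 95% criterion depends on the actual code used.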
How many bit errors can an LDPC code detect and correct in general?
Thank you very much for the video! Well explained! I would like to know whether LDPC codes are only used for the binary erasure channel. This is the impression I got from your video.
Nope. It's used for bit errors too.
Thank you
But we don't actually know it's 1?1??1. What we received is 111101, so your first recovery step is impossible. In your first step, 1?1 can easily be solved to 101, but the actual number we get is 111, and we don't know whether the answer should be 011, 101, or 110.
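You're right that with plain bit flips this step fails; the video is assuming an erasure channel, where the receiver knows *which* positions were lost. Under that assumption, any parity set with exactly one '?' is solvable. A sketch of that step (the `solve_check` helper is hypothetical, not the video's code):

```python
def solve_check(word, positions, parity=0):
    """Fill in a single '?' so the XOR over `positions` equals `parity`."""
    unknown = [p for p in positions if word[p] == "?"]
    if len(unknown) != 1:
        return False                     # 0 or 2+ erasures: can't peel here
    xor = 0
    for p in positions:
        if word[p] != "?":
            xor ^= word[p]
    word[unknown[0]] = xor ^ parity      # the one missing bit is forced
    return True

word = [1, "?", 1, "?", "?", 1]
solve_check(word, [0, 1, 2])             # 1 ^ ? ^ 1 must equal 0 -> ? = 0
print(word)                              # [1, 0, 1, '?', '?', 1]
```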
I know this is the current standard and all, but it seems quite counterintuitive to use all these checks with this many parity bits. What if you could compress the data into a few bits with a lot more parity checks, instead of using a lot of uncompressed data and parity checks? This would improve reliability and size at the same time. From what I understand, this simply has a ton of low-density bits with a ton of parity checks, which means more bits of data are needed to check all of them. I suppose you would need a fast compression algorithm that could do this, but would that not be the solution to transmitting more data at a reliable rate?
In this video he talks about an "erase" state (the '?'). How does the receiver know there was an "erase" state as opposed to a flipped bit? For signals that are 0 or 1 volt, isn't the signal always going through circuits that force the value to be either 0 or 1? Is there a detector at the receiver that calls "erase" if the value is (say) between 0.4 and 0.6 volts? If not, then the method based on "erasure" would fail. As far as I know, there is no "erase" state, there are only bit flips. If there really are no 'erase' detection circuits, then the entire video is wrong. It should be redone using bit flips.
This video doesn't make much sense to be honest. The channel output (the message received by the decoder) is a sequence of values between 0.0 and 1.0, where the i-th value is the probability that the i-th bit in the original message is 1.
Who does this music? I love it!
Cameron Murray makes all music for AOP videos (cameronmichaelmurray.bandcamp.com/)
A worldie and a half!
if only it could detect corruption inside the bank as well.. would also be useful to check government, but often more than one bit will be corrupt..
Can you tell me what tools you used to make this video?
Just Apple Motion
Repetition is a perfect code. The only problem is the code rate...
Maybe I missed it, but you never explain how the receiver determines which bits have been erased. Hamming codes only provide error correction for a single bit per code word because the receiver cannot know which bits have been erased without knowing the message.
- God has Joined the Server
Can you make a video a week? k thx by.
www.patreon.com/artoftheproblem
The explanations are good; the music is horrible, sorry.