One of the first uses of Reed-Solomon error correcting codes was to transmit data back from the Voyager spacecraft. Once they got beyond Jupiter, the signal to noise ratio became untenable without error correction.
Yeah. In this case he asked a very on-point question -- he asked whether the maximum number of random points that a curve interpolates is also the minimum number of points that defines the curve. The answer is yes, as explained later in the video, but unfortunately it seems like the prof misunderstood his question :(
It was a bit glossed over but properly a Reed-Solomon code can correct up to half the redundant bits. So +1 would only detect the error and +2 would correct up to 1 bit. If you have a million bits then the chances of more than one error is really high. So in practice it's somewhere around 1.5x the information rather than 3x the information.
@@tinglin6121 Unfortunately the end of the explanation is rushed. I think you're right, you construct a polynomial using the sent data as coefficients - these can be integers, because it's the input, you're not trying to achieve any particular curve on a graph, whatever you get is fine. Then you pick a point or two (or more) on the graph, and at that part it could be challenging to pick a point which both x and y are integers. This brings the question of precision and I imagine as many things in life, the actual error correction is full of complex nuances like finding the most optimal precision and number of points. Also there was no example on how it actually looks when you detect a mismatch: you got your 16 values + 2 additional points, but those 2 additional points don't lie on the curve drawn by the polynomial defined by the 16 points - so how do you find which is the offending point, and then how do you use the 2nd value to correct the error? There might be some smart math behind that, but it wasn't presented.
@@tinglin6121 All the important facts are still true when the polynomials are interpreted in a finite field, so you can use that to pull these tricks with just integers and without ever needing to worry about rounding errors or overflow or whatever. The division operation definitely doesn't behave like the one you learned in grade school, though!
Note that when you do these manipulations in binary, they get a whole lot easier and faster to process. Reed-Solomon coding is actually one of the earliest and easiest error correction codes to calculate. It was so easy that even in the limited processing of pagers in the 1980s, they could work quite well. But there are more complex error correction codes that can correct longer strings of errors. There are Bose-Chaudhuri-Hocquenghem (BCH) codes, of which Reed-Solomon is a subset, there are "turbo-codes" and there are Low Density Parity Check codes (LPDC). Each of these methods will enable one to get perfect copy closer and closer to the noise floor.
Do any of those correct for *missing* bits? As in, let's say I send 50 bits of real message encoded in 100 total bits, and a chunk of 10 bits in the middle just never arrives at the destination, can one of these codes allow the receiver to reconstruct the 50 bit real message from the 90 that did arrive?
@@maxbaugh9372 Replace the missing bits with random ones at the receiver and you are back to the original situation. This is not the most efficient way, because by using the information the bits have been "erased" you can optimize the error correcting code and the receiver, but it would still work. You can research about Information Theory to find out all the mathematical limits and awesome stuff you can do.
@@maxbaugh9372usually you know how many bits the message is, so you would add the missing bits as zeroes. But that probably works only if you know where the bits are missing.
This is generally not a problem because signals are clocked, if a link is functioning at all the both sides will agree completely about how many bits were sent.
The “sending it twice” has been the default method of error correction since the invention of the telephone, but it can break down if factors like someone’s accent or a poor connection make the repeat sound similar. For example, “nine” and “one” can sound similar and letters like “B” and “D” can also sound similar, and still sound the same to the hearer upon repeat.
Fun fact: Credit card numbers already have error detection so that the scenario described on the start doesn't happen. It's not error *correction* though
I can't imagine why you'd use forward error correction for a credit card number when you could just detect an error and ask the sender to resend that packet. It's typically only used for very low latency requirements like live video, or for situations like storage or digital TV broadcasts where it's impossible to request a resend. But I guess it's just an example.
@@gdclemo Yes it is just an example, however asking the sender to send again isn't always an option even ignoring latency demands. For example, imagine archival. There, your primary struggle is against temporal corruption: if your archival solution survives longer than the original work's medium, you don't get to ask for the original work again when corruption occurs.
@@gdclemo Well, the website asking for the number might not explicitly do it, but all of the communication between both ends is error-corrected at various levels of he networking stack.
The mathematics of how this is accomplished in real-world applications is very interesting as well. I can't recall if Numberphile has done a lot of videos on finite fields, but they're nicely suited to software and hardware implementations. Nowadays they've largely been replaced by turbocodes although they're still important. The Voyager probes use Reed-Solomon codes, as does the Compact Disc audio standard.
if I'm not mistaken, that's how compact discs that contain data (not audio) function. audio discs don't contain such a mechanism (so, they technically speaking hold more useful data); but instead errors get like "filtered" out. because music is unlike data not just entirely random. so, one single harsh spike won't pass the subsequent filter.
@edwardfanboy oh, okay. but there's definitely a difference in data they contain. maybe a CD for computers contains even more correction methods. but you definitely lose some storage capabilities. (and no, I'm not talking about discs that store more then 650 megabytes which just use more if the available area.) I mean, that's all aside this other encoding method where way more physical "pits" get used to represent digital bits. I think you need fourteen to represent eight. that's because you need to constantly focus the laser (it's about a square millimeter when it hits the surface to be insensitive towards scratches) and can't just have eight consecutive zeros. it *must* change after two spots or the laser will lose focus and alignment. this kinda like provides the same function as the punched sides of analog film, so to speak.
@@HxTurtle CD's, Barcodes, QR codes, even ASDL all have encoding schemes that prevent the continuous zero issue, But we usually don't call that data, it's something extra from the data payload that's being sent and it's below the EC. The PCM audio stream coming out of the CD is identical to the master copy used to make the CD, If the data can be repaired with EC, the issue is most shitty CD rippers put the CD drive in audio playback mode which just ignores errors and keeps playing to maintain tempo, and not data read mode which will try reading the data sector again. Remember Audio CD's were designed in the 80s to be played on affordable hardware with kB of RAM, you don't have 10s of buffering to go back and attempt to reread the data like we do with UA-cam, or can wait a few extra seconds for the game to load like on the PS1.
If I remember right, audio CDs have just enough error detection to detect a bad sample and recreate it from its neighbours, and only protect the most significant bits of the sample. Data CDs need much more error correction to recreate every bit perfectly.
@@vigilantcosmicpenguin8721 agree! when it comes to optimizing density and speed, errors start to show up. so, it requires a lot going on behind the scenes to always just work with virtually no failure rate.
I was always surprised that Jules Verne, in the "In Search of the Castaways (French: Les Enfants du capitaine Grant, lit. 'The Children of Captain Grant')", did not have a letter in a bottle streaked with small print all over the field with the coordinates of the island where the shipwrecked were saved, instead of a long-winded artistic description of the severity of their situation.
Hey, Brady, been watching Numberphipe for more than a decade. Can you please add introductory information about the people you interview and ask them to name the field they talk about so those who are interested can leave more?
3:04 - There's another problem with the tripling the digits solution in the manner executed here. Errors are usually 'bursty'. There'll be a bit of noise and a whole bunch of digits in a row will have a problem. Tripling each individual digit protects very poorly against this. It would be better to send the whole number three times. It took me a long time to wrap my head around Reed-Solomon codes because in their implementation they use Galois fields. Specifically Galois fields of characteristic 2, which are basically strings of binary digits where addition is replaced with xor. But, because I'm me, I had to have a deep understanding of _why_ you could replace addition with xor and still have algebra work exactly the same because otherwise I would never remember how the whole thing worked because it wouldn't fit into a system.
I think you use xor to make (and check) whether an array is odd or even (one of the easiest to understand [and therefore to implement] error checking [and potentially self correcting, if you make it two dimensional] methods.)
You can mitigate against this problem by the sender doing 1) encode 2) permute the coded bits 3) send. The receiver must thus do 1) receive 2) unpermute the received bits 3) decode. One sort of permutation is called interleaving. But permuting bits is independent of encoding...
Do you know of any books that explain the theory in your last paragraph? I learned this from a book about computer networking, and if I recall I was annoyed that this leap was made. In fact, I had this type of annoyance several times during my engineering degree.
@@DF-ss5ep Discrete mathematics, usually a required course for computer science, "Discrete Mathematics with Applications", is the book I would recommend, Will give you an intro to things like fields, especially in the section about encryption, it should give you enough info to actually gain some intuition. The more rigorous Abstract Algebra stuff is a bit above my knowledge.
When I took the error correction code class, the first page of the slides professor showed is to teach us the field, group .... and it's so difficult for a beginner and hard to find the relation btw channel coding. It's so nice to watch this video !
Small correction that I’m sure someone else has pointed out already but I can’t see: At 7:00 3 random points do not ALWAYS lie on a circle, as there is always a chance that the three points end up on a straight line
They pointed it out themselves. Their use of the word "random" is hiding some mathematical details that are too much of a tangent to get into. The tldr is that that those cases are a measure 0 set and so come up with 0 probability so we are mathematically justified in ignoring them (that doesn't say that they are impossible however). If you really want you could say it is a circle with a radius of infinity
It's kind of amusing to me that credit card numbers were chosen, because they already have error detection built-in to the cc# itself (mod10 code) to detect people transposing digits or having a wrong digit when writing them down or saying them over the phone.
And also because sending 48 bytes instead of 16 bytes would make no noticeable difference to the transmission time. It was a pretty bad choice of example.
@@abhijitborah If someone is completely lost it might be a bit much to expect them to be able to jump straight to understanding and applying the maths here, no?
This method is also used for a multi-password encryption (not real term), for example if you want to encrypt a message, and provide 10 people with their unique decryption keys, and require that in order to encode it at least 4 people need to get together and uae their unique decryption keys at the same time.
Even though the 2 extra data points that we need to send for error correction is independent of the number of data we are sending, this however would only work if we assume that we can have at most 1 mistake in the message we're trying to send. Unfortunately the bigger the data we are sending the higher the likelihood of multiple errors at the same time so we'll probably have to send more than 2 extra data points for error correction to work with larger data messages, which of course wouldn't cause any problem in practical terms since the ratio of redundant to the essential data would be close to 0 for large messages.
Well, this video provides the thought process, implementation of course need to solve more problems. But still, imagine you need to send milion digits and expect 10% to get lost on the way. With naive solution, you might need grahams number of digits to send, while with this solution, you actually need to send little over 1,2 milion digits to be reasonably sure the code will still fix itself on the other side.
As a fun addendum, this method can also be used to share a secret between parties. Suppose you want to share the missile launch code among five generals, in such a way that if two go mad, the missiles can't be launched. At least three need to go mad. Choose a polynomial f(x) = a2 x^2 + a1 x + a0, where a0 is the launch code, and a1 and a2 are random numbers. Give one of these pairs to each general: (1,f(1)), (2,f(2)), (3,f(3)), (4,f(4)), and (5,f(5)). Then if any three generals go mad, they can use Lagrange interpolation to calculate f(0), which is the launch code. If only two generals go mad, they can't do it. (Note: As with error correcting curves, you need to work in a finite field for exactly no information to be leaked.)
I realise this is beyond the scope of the video, but the main question this raised for me was the difference between single digits (4 bits) and polynomial coefficients (floating point numbers, which require much more storage, which would defeat the object of the exercise). I'm sure the real algorithm has some optimisations for binary, but that did bother me a bit!
Replacing 16 base-10 digits with 16 floating-point coefficients is basically the same problem that we originally sought to avoid, though. Base-10 can be expressed fairly compactly in BCD, or even more compactly in pure binary. _One_ single-precision IEEE float is 32 bits (4 bytes). 4 bytes can encode 8 base-10 digits in BCD, or about 9.63 base-10 digits in binary. Even if each base-10 digit is repeated 3 times for redundancy and error correction, using floating point numbers instead still results in about 2-3x as much data.
@@jeremykeens3905in my case it's admittedly closer to the former. I'm a software developer for the airline industry, the hardware I work with uses error correction codes for all sorts of things like passport barcode scans and so on. I've known for a long time that polynomials are used for error detection and recovery but I've never really intuitively understood why.
15:04 of course, if there was a degree-15 polynomial through those 17 points then it must be the same as the original one (since it still passes through the 16 correct points, and there's only one degree-15 polynomial that does this) Also, as stated, you could _probably_ correct the error sending just 17 points since the degree-15 polynomials interpolating the wrong sets of 16 points would, in all likelihood, not have integer coefficients. But I assume in practice we'd be doing this over finite fields, in order to reduce the information being sent.
When I was teaching math I loved the idea of smoothly connecting curves, I always taught my students the equation that passes through 3 points in order to make any multiple point graphs look smooth. When they learned calculus I further taught them to make them "connect" smoothly by slopes and peaks. Funtime.
The polynomial used in Lagrange interpolation taken to its limit describes a generating function. Maybe the redundancy of Reed-Solomon codes using Lagrange interpolation explains why generating functions have functional equations such as recurrent relations on their coefficients? Also, using a generating function that is periodic with each period containing the finite set of numbers n (in the same order) would be like the naïve approach on its side taken to its limit. I think finding those periodic generating functions is doable.
I'd prefer Parker Correcting Code which first confirms the nonexistence of errors, then introduces errors in a way which skirts most error correcting algorithms.
Credit card numbers already contain a check digit that tests for errors. Does that mean you only need one extra digit for the polynomial, since the 16th digit is already non-random?
I think technically no, you still need two extra, but I’m not sure it’s terribly important. See, the 16th digit is only non-random if you enter your credit card number correctly. If you make a mistake, it potentially becomes 16 random numbers. Those 16 numbers when transmitted electronically could have further errors, so you still need two extra digits on top to catch and fix. However, if you entered an invalid credit card number in the first place, I’m not sure how important it is to catch any errors that happen afterwards, but from a coding regulation perspective you probably want to just to catch the edge cases.
Depends on implementation. The credit-card code catches any single-digit error, but might fail to catch that there are two errors. So if you get 17 digits that you detect as wrong, there will likely be 2 combinations of 16 digits subsets that produce legal credit card number, so it is likely not enough to be self-repairing, but using it might still be useful with some other tricks.
Just want to point out for pedantry's sake that you couldn't have the credit card number with the 1 instead of 2 in the example, there's already error correction built in to credit card numbers, the last number is a check digit that will be wrong if any other digit changed.
Very reminiscent of Shamir's Secret Sharing Scheme, by which a secret bitstring (such as a private key) can be recovered from any M of N "shares," where no subset of M-1 shares tells you anything about the secret. It's the same deal: you can't find the original polynomial (and thus the secret) unless you know at least a number of points equal to the degree of the polynomial plus 1.
Very smart indeed. But what if there are more than one errors? Trying out all possible combinations and fitting them to a curve seems to be quite an expensive process in itself.
I guess there is still a nonzero albeit small chance that the random errors manage to evade detection by satisfying some other polynomial? Can you quantify this probability?
The precise answer might sound strange to you. If the noise is a continuous random variable, the answer is 0 (think gaussian distribution or bell curve). If the distribution has discrete components, or delta functions (think flipping a coin), and they are perfectly placed, it can be non-zero, but this would never happen in practice. Is it POSSIBLE for a random error to produce a new polynomial and evade detection? Yes, it is an event that exists in the event space (in math jargon this is called the sigma field). But it is probability 0. It’s a quirk of probability theory but these don’t mean exactly the same thing. Essentially, it CAN happen, but it WON’T ever happen even if you repeat the random trial an arbitrary number of times.
Here I was expecting the Luhn algorithm! This can detect a single error in any digit of a credit card or most swapped digits without correcting it. It's an algorithm most credit card numbers actually use.
Clarification: They don't *need to know _just_ the Y-values, they need to know pairs (X1, Y1), (X2, Y2),... (Xn, Yn). Edit: changed 'send' to 'need to know'. Thanks for the correction.
@@rmsgrey Please draw the line that has y-values 1 and 3. How many million options there are? How about one that goes through (1,1) and (37,3). Edit: Okay, I re-read your comment. You are right; you may have standard values. But you need to specify those. No mention of that on the video.
@@LukeSumIpsePatremTe There are a lot of details that are left out of the video - it's a video about the basic theory rather than instructions on how to implement it (for example, you'd use a different number system rather than reals to make calculation more efficient)
While I follow the maths of finding a number of points that uniquely define a curve, I don’t understand how you get from these 2-dimensional points with real numbers x and y to a list of single integers. In terms of computational error detection and correction, I’ve found it more understandable and to view checksums in terms of parity bits. Hamming ECC uses a set of parity bits each of which covers half of the bits of the data and parity itself, so when any of the parity bits are wrong the positions of the incorrect parity bits point to the bit that was in error. I’ve heard of Reed-Solomon ECC since it is used for disc storage, but its algorithm looks very complicated.
That's where this algorithm gets into the really obscure math. The answer is that instead of real (or rational) numbers, you use something called a finite field, which has addition, subtraction, multiplication, and division like rational numbers, but there are only 256 of them and the answers don't agree in any way with the real numbers. They were discovered as a theoretical exercise, but they have all the necessary properties for Lagrange's Interpolation Theorem to be true. Of course, the graphs really don't look like anything, because there isn't anywhere between two points to draw a line.
If you know the x-co-ordinates of the points you're looking at (and for a method like this, you'll generally pick sensible x values in advance, so most of the work only needs to be done once) you can work out the general form of the polynomial at those points (so at x=1, you have something like a+b+c+d+e+f+g+h+i+j+k+l+m+n+o+p) which, when you substitute in the y values you receive will give you a bunch of simultaneous equations. Because of the added structure of these equations all coming from the same underlying polynomial, you can solve them more efficiently by constructing a "divided difference" table - an inverted triangle where the first row is just the original y values, then in each subsequent row, each entry is the difference between the two entries above (right minus left) divided by the difference between the x values corresponding to the base of that entry's triangle (again, right minus left). So, for example, in the second row, below and between y1 and y2, you'd get y1,2 = (y2 - y1)/(x2 - x1), while in the third row, below and between y1,2 and y2,3 you'd get y1,3 = (y2,3 - y1,2)/(x3 - x1). After filling out the divided difference table, the leading diagonal pretty much lets you read off the coefficients for the polynomial with minimal effort. It seems complicated to describe, but if you try actually doing it a couple of times, it should all make sense :)
Nice and clear, right up until 13:24 where "n is the max we cant specify any more points".....What? Then later the presenter specifies more points (17 and 18). I get that you need a minimum of 2 points to specify a line and 3 points for a quadratic etc - where does the max come in?
Based on my understanding, I am pointing out a detail which may have been not explicitly clear from the video. This is how the algorithm would work: 0. Assume that the number of digits to be transferred are 16: a1, a2, ...a16 1. Based on this, the sender software and receiver software have already fixed the x co-ordinates of the points for which f(x) needs to be calculated. One possible solution for sender to agree upon these x co-ordinates would be to always choose x=1, x=2, ...x=16 and x=17 for error detection and x=18 for error correction. Thus in future, if different number of digits need to be transmitted [e.g., lets say 20], the sender software and receiver software can unambiguously know what x co-ordinates to use [x=1, x=2, ...x=20, x=21, x=22] without having to update the softwares. 2. The sender software constructs a polynomial from the 16 digits: f(x) = a1 * x^15 + a2 * x^14 + ... a15 * x + a16. 3. The sender software computes f(x) values for the chosen x=1, x=2, ... x=16, x=17, x=18 4. The sender software only transfers the above f(x1), f(x2), ... f(x16), f(x17), f(x18) values to the receiver software. Note that the actual digits a1, a2, ..., a16 are not transferred here. Nor are the x co-ordinates as the x co-ordinates are already agreed between the sender and receiver software. 5. The receiver software tries to construct a 15-degree polynomial curve from the above received 18 f(x) values. If it's able to construct the polynomial, it gets the co-efficients a1, a2, ... a16 as there can only be one such polynomial. Thus, the receiver software got the required data. 6. If the receiver software is unable to construct such a 15-degree polynomial [due to one of the f(x) values sent being corrupted], it tries to construct a polynomial from 16 f(x) values [since there are 18 f(x) values sent, there are 18C16 [18 choose 16] = 153 permutations in which you can select the 16 of the f(x) values and thus construct 153 such polynomials of degree 15. Of all these 153 polynomials, only the polynomials which used the faulty f(x) value would have different a1, a2, ..., a15 coefficients. Rest other polynomials would have the exact same coefficients a1 to 16 which are equal to the actual data which was to be sent. 7. This way, without transferring the actual data, we can interpolate the actual data! Please correct me if I understood it wrong.
Your step 1 makes a lot of sense - I was confused why you only needed one extra bit of data instead of twice as much since you're using coordinates (i.e. two numbers for each point), but agreeing the x coordinates beforehand seems logical.
Quick question - I can see why adding two extra numbers for the RS code will enable you to identify which number is erroneous, but how is this then used for error correction? Couldn't the true value lie anywhere on the curve - how do we know what to correct it to?
The coordinates are spaced out equally on the x axis. The x value is just the position of the number in the stream of data. There is one unique value generated by the polynomial for each position on the x axis and it is the value of numbers at their chosen x coordinate. To reconstruct the missing number you just put its position in the stream into the polynomial function.
I think you can test each of the 18 points one by one. When testing the i-th point, you take the set of the other 17 points and find their interpolation. If the erroneous point is in this set, then the interpolation will have degree 16. But if the erroneous number isn't included in the set, then the interpolation of the points will be the original polynomial of degree 15. So, when you test the erroneous point, you find that it's in fact errouneous, as well as the original polynomial. From the point you can take its value of x an by evaluating the polynomial you can get the corrected value.
Wouldn't it likely also be enough to send only the original 16 points because in case of an error occurs the probability of the coefficients being integers is incredibly small?
In practice, this is usually done over finite fields. So in other words, your polynomial can only take on a very limitied set of values to begin with. For example only the values 0, 1, 2, 3, 4, 5 and 6. No real numbers, no rational numbers, nothing in between, just these seven values. So if there is an error, it may change a 4 to a 6, or something. This is being done because calculating with these limited set of numbers is quicker than with the entire rational numbers.
But... what actually gets sent? Are the values of that function integers? Floating points? Surely the information needed to describe 16, 17, or 18 points on a curve is a lot more than 16 single digits. But we're supposed to be improving on a two- or threefold redundancy. I understand the abstract reasoning behind it, and I see the point when you're sending a huge amount of digits, but we started with a credit card number and this seems like over- _over-_ *overkill*
The original number is what get sent, the credit card number (in truth it would be encrypted but that's another whole level of complexity that's not necessaryto know about for error correction). This is an error correction technique, it is determining what was received is what was sent, it is not trying to fix what was sent in the first place if the person entered the wrong credit card number, it's just fixing if there has been an error in transmission (I.e. if noise had messed up the message somehow along the way). If the sender typed in a wrong number as part of the credit card number, then there is no way the receiver can know this. However, if in the sending of the credit card number a digit got messed up, then this technique could hello you determine what value was out of place. It is an error correction technique, not an original number verifier. Perhaps this might help as an explanation If someone is saying MY DOG HAS FLEAS out loud but because of the wind, the receiver hears MY DOG HAD FLEAS this error correction technique would help you determine that the word HAD should have been HAS If the original person had accidentally said MY DOGS HAD FLEAS you would not know they said something wrong and this technique cannot fix the wrong original intended message because this is error correction, not original message verification. You need to do other things to verify the original message before the message was ever sent If you want original message verification. Hopefully my crude attempt at explaining things helped
@@stultuses Thank you for trying, but this does not address my confusion. In the original discussion about how to check the credit card number, the naive solution was to double each number. So I understand exactly what was transmitted. Two 8's, two 4's, two 3's, and so on, until you have sent 32 total digits. Then triple-send in case one of a double-sent pair is corrupted. Sent 48 digits. Fine. Then we discuss Reed-Solomon and the functions and all of that. What *actual values* need to be sent for that to work? The credit card number plus _what else?_ A few extra digits? A binary code? Something else?
it feels very generatingfunction-y. like the polynomial you use to encode your numbers is the same as the ordinary generating function for if your numbers were a sequence with the last number being the zeroth value of the sequence
This was fascinating and passionate. But, there seemed to be a whole introduction for the generalist that was missing. What the heck were those curves with all the points? Did you just transmit the credit card numbers assuming they were on a curve, or did you transmit the polynomial? I had absolutely no idea what was going on. Was there an earlier video?
cool. I sort of imagine that if instead of sending 18 points you send 17 points and find the values of the coefficients a1, a2, ….., a15 by choosing 16 random points in those 17 points. perhaps perform this task a few times and the coefficients that duplicate the most number of times are the correct coefficients. Maybe this method may be too complex for its benefit of one less signal to transfer. This is awesome though.
It seems like that would work, but it turns out that including a wrong point messes up all the coefficients, so none of them agree with the correct answer or each other at all.
If you have one errored number, then each set of 16 points will give a different curve with different coefficients - for a toy example, consider sending three points on a line: changing one point gives you a triangle rather than a line, with three different slopes for the three different lines, and, unless one of the points sent is on the y-axis, three different y-intercepts too. So you wouldn't get coefficients repeating, let alone the correct values.
I recently experimented with Reed-Solomon error correction, and to be fair it's a beautiful solution and less complicated than it seems. The first case is for error detection, and as said in the answer above, a single wrong number will make a completely differente curve. So if redundant values are also in the same curve, you can procede without worries. In the other hand, if the redundant values doesn't match the curve, you must test for errors. This is where things can get tricky and some implementations may differ, but a simple first step is to just try skip one value and check if it "fixes" the interpolation. You can also test for bits flipped, and so on. If you really need to recover the data, you can try harder and harder until it fits, and this is where more redundancy can help. A nice property of reed-solomon is also this possibility to simply add more redundancy, or change how it is interleaved between data, without any change in the base algorithm. For computer files, for example, additional redundancy can be stored in another filesystem as a backup. In the end, you are just interpolating points on a curve :o
@@iabervon That makes perfect sense! My 'optimisation' seemed a little too trivial to be original or correct (in this case correct). I would really like to find out though how sensitive these coefficients are to the points. If a single point shifts some dx (or dy) away from its original position what is the corresponding change da1, da2, ....dan in the value of the coefficients. I lack the mathematical tools to really get into this question (and the topic in general) but thank you though! Much appreciations from all the way in South Africa.
@@rmsgrey hey. Thank you for your response. I accidently attended an electrical engineering lecture a few weeks ago (I study Chemical Engineering here in South Africa and I sort of got lost on campus and found my way somehow to that lecture! ) Anyway, they were discussing transferring data and such and I have a vague memory of some of the things you mention being part of that discussion, albeit in a more technical setting. It's interesting how life works!
i get that 3 random points will virtually never fall on a line, but isn’t that only true for analog data, where the number of possible points are infinite? in an extreme case of a grid that was, say, 4x4, 3 random points would create a line relatively often. what’s the resolution threshold for this method to actually be reliable?
The framing about curves lying on "random" points is a little confusing IMO. Obviously, three random points COULD be colinear, so saying that they don't define a line because they're not "random" enough is kind of ambiguous. I think a clearer way to explain it would be: given a curve of a particular type (line, circle, degree-n polynomial), what is the minimum number of points one could choose that uniquely define it from other curves of that type? For example, two unique points lie on infinitely many curves, but only one line; three points lie on infinitely many curves, but at most one of those curves is a circle, etc.
Regarding the comment at 12:40, on knowing 16 points on a degree 16 polynomial determines it uniquely via Reed-Solomon - this seems oddly analogous to the Fundamental Theorem of Elimination Theory. I’m really curious whether this can be made precise.
How can one figure out which point that is wrong given 2 extra points to help correct it? Do you have to do N checks, each check omitting one point that you're checking for, in order to figure out which one that was wrong? Because then with 1 million points, you'd have to do 1 million checks in the RS case, but if you were given 3 million points (in the naive case) by sending the 1 million data points 3 times, you'd only have to look at the wrong data point and do a majority vote of 3 numbers.
likely the switches pre-date the bookcases. you would think just relocate the switches, but there might not be a more suitable location. the most likely answer is that doing so requires union labor and significant cost. it is less expensive and more efficient simply to cut holes in a bookcase. I've heard stories very similar to this.
How does adding the 18th number help us correct the error? i understood fully up to the point where they start adding the additional points. 17th point i still understand, but how does the 18th point help correct the error?
If you don't have a degree 15 polynomial when you look at the 18 numbers, you can look at the eighteen different sets of 17 numbers. One of them will exclude the error, so will be 17 numbers that all lie on the degree 15 polynomial, while the other seventeen will each be the 1 error and 16 correct values. Since the 16 correct values define the correct degree 15 polynomial, which we know the error is away from, none of them will give a degree 15 polynomial. So if there's at most one error, there will only be one degree 15 polynomial that fits 17 of the points given, and that will be the correct curve.
Adding the 18th point means you now effectively have 17 different sets of 17 different points, each excluding a single point. If there's one error, all the sets except the one that doesn't have the bad value will detect the error. You can then use the set that doesn't have the error to determine the correct value for the corrupted piece of data.
SpaceX actually does the 3x redundancy thing. Above the atmosphere, radiation is a bigger problem than here on the surface and regular computers get a lot more random errors from that. The traditional method of dealing with this is to harden the systems against radiation to prevent the errors. That makes them more expensive and heavier. SpaceX just uses three more or less normal computers. Usually, the computers agree, but when there's a radiation-induced error, two computers outvote the third. You can't use error correction algorithms because you don't know what the result is supposed to be ahead of time. Of course, the insides of spacecraft are protected against radiation for the benefit of the humans inside, so astronauts on the ISS can use their laptops for everyday use without issue.
Took a class with Professor Vogt my freshman year of college and absolutely loved it, she’s a fantastic lecturer. So happy to see her on numberphile!
Funny seeing you here
I'm a fan.
@@aidanhennessey5586haha i was wondering if this would happen
What class was it?
I love lecturers who get excited about their subject and excited that you get excited too! :D
One of the first uses of Reed-Solomon error correcting codes was to transmit data back from the Voyager spacecraft. Once they got beyond Jupiter, the signal to noise ratio became untenable without error correction.
I thought they used binary and not decimal encoding. would it not be untwoable?
@@TaohRihze Reed-Solomon in binary is simpler to understand and implement
@@TaohRihzeBa-dum tish!
@@TaohRihze You have to account for inflationary language. Without that things would be twoderful.
@@allanolley4874 That's a deep cut from the 1940s.
I always love the interaction with Brady. He's questions and thoughts are always on point and helpfull.
*His questions
Yeah. In this case he asked a very on-point question -- he asked whether the maximum number of random points that a curve interpolates is also the minimum number of points that defines the curve.
The answer is yes, as explained later in the video, but unfortunately it seems like the prof misunderstood his question :(
It was a bit glossed over but properly a Reed-Solomon code can correct up to half the redundant bits. So +1 would only detect the error and +2 would correct up to 1 bit. If you have a million bits then the chances of more than one error is really high. So in practice it's somewhere around 1.5x the information rather than 3x the information.
Also, the extra numbers need to be float instead of integers, no? I can't see how it can be done with only integer y's.
@@tinglin6121 Unfortunately the end of the explanation is rushed. I think you're right, you construct a polynomial using the sent data as coefficients - these can be integers, because it's the input, you're not trying to achieve any particular curve on a graph, whatever you get is fine. Then you pick a point or two (or more) on the graph, and at that part it could be challenging to pick a point which both x and y are integers. This brings the question of precision and I imagine as many things in life, the actual error correction is full of complex nuances like finding the most optimal precision and number of points. Also there was no example on how it actually looks when you detect a mismatch: you got your 16 values + 2 additional points, but those 2 additional points don't lie on the curve drawn by the polynomial defined by the 16 points - so how do you find which is the offending point, and then how do you use the 2nd value to correct the error? There might be some smart math behind that, but it wasn't presented.
@@tinglin6121 All the important facts are still true when the polynomials are interpreted in a finite field, so you can use that to pull these tricks with just integers and without ever needing to worry about rounding errors or overflow or whatever. The division operation definitely doesn't behave like the one you learned in grade school, though!
Note that when you do these manipulations in binary, they get a whole lot easier and faster to process. Reed-Solomon coding is actually one of the earliest and easiest error correction codes to calculate. It was so easy that even in the limited processing of pagers in the 1980s, they could work quite well. But there are more complex error correction codes that can correct longer strings of errors. There are Bose-Chaudhuri-Hocquenghem (BCH) codes, of which Reed-Solomon is a subset, there are "turbo-codes" and there are Low Density Parity Check codes (LPDC). Each of these methods will enable one to get perfect copy closer and closer to the noise floor.
Do any of those correct for *missing* bits? As in, let's say I send 50 bits of real message encoded in 100 total bits, and a chunk of 10 bits in the middle just never arrives at the destination, can one of these codes allow the receiver to reconstruct the 50 bit real message from the 90 that did arrive?
@@maxbaugh9372 Replace the missing bits with random ones at the receiver and you are back to the original situation. This is not the most efficient way, because by using the information the bits have been "erased" you can optimize the error correcting code and the receiver, but it would still work. You can research about Information Theory to find out all the mathematical limits and awesome stuff you can do.
@@maxbaugh9372 if I am not wrong, they can, since bits comes in packages, also, missing bits are registered as bits with 0 value.
but I am not sure
@@maxbaugh9372usually you know how many bits the message is, so you would add the missing bits as zeroes.
But that probably works only if you know where the bits are missing.
This is generally not a problem because signals are clocked, if a link is functioning at all the both sides will agree completely about how many bits were sent.
The “sending it twice” has been the default method of error correction since the invention of the telephone, but it can break down if factors like someone’s accent or a poor connection make the repeat sound similar. For example, “nine” and “one” can sound similar and letters like “B” and “D” can also sound similar, and still sound the same to the hearer upon repeat.
Extra footage from this interview over on Numberphile2: ua-cam.com/video/eLPfRY4NATw/v-deo.html
Tq
MORE MATH.
you should pin this
Glad I checked the comments. Is there a reason to post this here and not in the description?
this seems like a comment that traveled through time as the comment is from 1 day ago, but the video got published 13 hours ago
Reed-Solomon Codes are very prominently used for CDs. That way a tiny scratch can be accounted for and doesnt make your data unrecoverable.
The real magic is the interleaving. Reed Solomon can't help you if the whole code is unreadable
Fun fact: Credit card numbers already have error detection so that the scenario described on the start doesn't happen. It's not error *correction* though
thank you.
I can't imagine why you'd use forward error correction for a credit card number when you could just detect an error and ask the sender to resend that packet. It's typically only used for very low latency requirements like live video, or for situations like storage or digital TV broadcasts where it's impossible to request a resend. But I guess it's just an example.
@@gdclemo Yes it is just an example, however asking the sender to send again isn't always an option even ignoring latency demands. For example, imagine archival. There, your primary struggle is against temporal corruption: if your archival solution survives longer than the original work's medium, you don't get to ask for the original work again when corruption occurs.
@@mark.panghyUzcard does, it is from Uzbekistan. Also, some credit cards from the communications industry or used in healthcare may start with 8.
@@gdclemo Well, the website asking for the number might not explicitly do it, but all of the communication between both ends is error-corrected at various levels of he networking stack.
The mathematics of how this is accomplished in real-world applications is very interesting as well. I can't recall if Numberphile has done a lot of videos on finite fields, but they're nicely suited to software and hardware implementations. Nowadays they've largely been replaced by turbocodes although they're still important. The Voyager probes use Reed-Solomon codes, as does the Compact Disc audio standard.
And ata disks and fpgas have a niche for finite fields calculations
QR codes also use Reed-Solomon.
this seems like a very easy crossover with computerphile to look at the actual implementation of these codes and what kind of algorithms they're using
File:Okinawan_last_name_Hanko_Stamps_in_Tsurumi_Okinawa_Street.jpg@@petertaylor4980
Isabel Vogt is an amazing teacher. Great video!
Yeah it's okay
Nah
Is this Isabel's Numberphile debut? She's awesome!
Great explanation of Error-Correcting Codes! I never looked at Reed-Solomon codes in this way.
if I'm not mistaken, that's how compact discs that contain data (not audio) function.
audio discs don't contain such a mechanism (so, they technically speaking hold more useful data); but instead errors get like "filtered" out. because music is unlike data not just entirely random. so, one single harsh spike won't pass the subsequent filter.
@edwardfanboy oh, okay.
but there's definitely a difference in data they contain.
maybe a CD for computers contains even more correction methods. but you definitely lose some storage capabilities. (and no, I'm not talking about discs that store more then 650 megabytes which just use more if the available area.)
I mean, that's all aside this other encoding method where way more physical "pits" get used to represent digital bits. I think you need fourteen to represent eight. that's because you need to constantly focus the laser (it's about a square millimeter when it hits the surface to be insensitive towards scratches) and can't just have eight consecutive zeros. it *must* change after two spots or the laser will lose focus and alignment. this kinda like provides the same function as the punched sides of analog film, so to speak.
@@HxTurtle CD's, Barcodes, QR codes, even ASDL all have encoding schemes that prevent the continuous zero issue, But we usually don't call that data, it's something extra from the data payload that's being sent and it's below the EC.
The PCM audio stream coming out of the CD is identical to the master copy used to make the CD, If the data can be repaired with EC, the issue is most shitty CD rippers put the CD drive in audio playback mode which just ignores errors and keeps playing to maintain tempo, and not data read mode which will try reading the data sector again. Remember Audio CD's were designed in the 80s to be played on affordable hardware with kB of RAM, you don't have 10s of buffering to go back and attempt to reread the data like we do with UA-cam, or can wait a few extra seconds for the game to load like on the PS1.
If I remember right, audio CDs have just enough error detection to detect a bad sample and recreate it from its neighbours, and only protect the most significant bits of the sample. Data CDs need much more error correction to recreate every bit perfectly.
The more you learn about data mechanisms, the more wonderful it is that all of it can be stored so flawlessly.
@@vigilantcosmicpenguin8721 agree! when it comes to optimizing density and speed, errors start to show up.
so, it requires a lot going on behind the scenes to always just work with virtually no failure rate.
I was always surprised that Jules Verne, in the "In Search of the Castaways (French: Les Enfants du capitaine Grant, lit. 'The Children of Captain Grant')", did not have a letter in a bottle streaked with small print all over the field with the coordinates of the island where the shipwrecked were saved, instead of a long-winded artistic description of the severity of their situation.
Hey, Brady, been watching Numberphipe for more than a decade. Can you please add introductory information about the people you interview and ask them to name the field they talk about so those who are interested can leave more?
It's linked in the description.
Lagrange Interpolation is a huge part of the research I’m doing for my masters, such a cool tool
3:04 - There's another problem with the tripling the digits solution in the manner executed here. Errors are usually 'bursty'. There'll be a bit of noise and a whole bunch of digits in a row will have a problem. Tripling each individual digit protects very poorly against this. It would be better to send the whole number three times.
It took me a long time to wrap my head around Reed-Solomon codes because in their implementation they use Galois fields. Specifically Galois fields of characteristic 2, which are basically strings of binary digits where addition is replaced with xor.
But, because I'm me, I had to have a deep understanding of _why_ you could replace addition with xor and still have algebra work exactly the same because otherwise I would never remember how the whole thing worked because it wouldn't fit into a system.
I think you use xor to make (and check) whether an array is odd or even (one of the easiest to understand [and therefore to implement] error checking [and potentially self correcting, if you make it two dimensional] methods.)
You can mitigate against this problem by the sender doing 1) encode 2) permute the coded bits 3) send. The receiver must thus do 1) receive 2) unpermute the received bits 3) decode. One sort of permutation is called interleaving. But permuting bits is independent of encoding...
Do you know of any books that explain the theory in your last paragraph? I learned this from a book about computer networking, and if I recall I was annoyed that this leap was made. In fact, I had this type of annoyance several times during my engineering degree.
@@DF-ss5ep Discrete mathematics, usually a required course for computer science, "Discrete Mathematics with Applications", is the book I would recommend, Will give you an intro to things like fields, especially in the section about encryption, it should give you enough info to actually gain some intuition. The more rigorous Abstract Algebra stuff is a bit above my knowledge.
Isabel reminds me of Tom Crawford so much! Same look, same smile, and same enthusiasm. Does anyone else see that?
A variant maybe? Who knows?😂
When I took the error correction code class, the first page of the slides professor showed is to teach us the field, group .... and it's so difficult for a beginner and hard to find the relation btw channel coding.
It's so nice to watch this video !
Small correction that I’m sure someone else has pointed out already but I can’t see: At 7:00 3 random points do not ALWAYS lie on a circle, as there is always a chance that the three points end up on a straight line
They pointed it out themselves. Their use of the word "random" is hiding some mathematical details that are too much of a tangent to get into. The tldr is that that those cases are a measure 0 set and so come up with 0 probability so we are mathematically justified in ignoring them (that doesn't say that they are impossible however). If you really want you could say it is a circle with a radius of infinity
s/random/generic/
A straight line is also often considered as a circle with basically infinite radius.
Then even if they are on a line, we can fit such a circle
It's kind of amusing to me that credit card numbers were chosen, because they already have error detection built-in to the cc# itself (mod10 code) to detect people transposing digits or having a wrong digit when writing them down or saying them over the phone.
And also because sending 48 bytes instead of 16 bytes would make no noticeable difference to the transmission time. It was a pretty bad choice of example.
@@RFC3514Perhaps, but this isn't computerphile.
Extremely well explained. Great video.
I wish I could say I followed this, but I'm getting an error between my neurons.
Knowing Lagrange's Interpolation and working out a few programming examples may help in understanding the original problem.
@@abhijitborah If someone is completely lost it might be a bit much to expect them to be able to jump straight to understanding and applying the maths here, no?
If you also look at the second video, you have two new pieces of data, and can correct the error between your neurons. :)
All you need to do is add another connection of neurons
@@abhijitborahI think that was a joke, fyi
This method is also used for a multi-password encryption (not real term), for example if you want to encrypt a message, and provide 10 people with their unique decryption keys, and require that in order to encode it at least 4 people need to get together and uae their unique decryption keys at the same time.
Even though the 2 extra data points that we need to send for error correction is independent of the number of data we are sending, this however would only work if we assume that we can have at most 1 mistake in the message we're trying to send. Unfortunately the bigger the data we are sending the higher the likelihood of multiple errors at the same time so we'll probably have to send more than 2 extra data points for error correction to work with larger data messages, which of course wouldn't cause any problem in practical terms since the ratio of redundant to the essential data would be close to 0 for large messages.
Well, this video provides the thought process, implementation of course need to solve more problems. But still, imagine you need to send milion digits and expect 10% to get lost on the way. With naive solution, you might need grahams number of digits to send, while with this solution, you actually need to send little over 1,2 milion digits to be reasonably sure the code will still fix itself on the other side.
Great video! Always amazed by Brady's ability to ask insightful questions. Professor Vogt was so engaging!
As a fun addendum, this method can also be used to share a secret between parties.
Suppose you want to share the missile launch code among five generals, in such a way that if two go mad, the missiles can't be launched. At least three need to go mad.
Choose a polynomial f(x) = a2 x^2 + a1 x + a0, where a0 is the launch code, and a1 and a2 are random numbers.
Give one of these pairs to each general: (1,f(1)), (2,f(2)), (3,f(3)), (4,f(4)), and (5,f(5)).
Then if any three generals go mad, they can use Lagrange interpolation to calculate f(0), which is the launch code. If only two generals go mad, they can't do it.
(Note: As with error correcting curves, you need to work in a finite field for exactly no information to be leaked.)
Curious analogy. 😐
@@warlockpaladin2261 If you'd prefer to substitute a secret chicken or soft drink recipe, that's totally fine.
I realise this is beyond the scope of the video, but the main question this raised for me was the difference between single digits (4 bits) and polynomial coefficients (floating point numbers, which require much more storage, which would defeat the object of the exercise). I'm sure the real algorithm has some optimisations for binary, but that did bother me a bit!
Love the cute paper change intermissions!
Replacing 16 base-10 digits with 16 floating-point coefficients is basically the same problem that we originally sought to avoid, though. Base-10 can be expressed fairly compactly in BCD, or even more compactly in pure binary.
_One_ single-precision IEEE float is 32 bits (4 bytes). 4 bytes can encode 8 base-10 digits in BCD, or about 9.63 base-10 digits in binary. Even if each base-10 digit is repeated 3 times for redundancy and error correction, using floating point numbers instead still results in about 2-3x as much data.
In real applications the computation is usually done over a finite field rather than the real numbers, which does not have this problem
That was an absolutely brilliant explanation!
Of what? Is numberphile aimed at cognoscenti or ordinary curious people? If the second we need more explanation
@@jeremykeens3905in my case it's admittedly closer to the former. I'm a software developer for the airline industry, the hardware I work with uses error correction codes for all sorts of things like passport barcode scans and so on. I've known for a long time that polynomials are used for error detection and recovery but I've never really intuitively understood why.
You should be the guest
She's great at explaining
Another video about error correction code in a short while, this is the third one now. Coincidence??
That way if there was an error in any of the videos we can use the other two videos to correct it
I really like the B-roll of the landscape around there.
15:04 of course, if there was a degree-15 polynomial through those 17 points then it must be the same as the original one (since it still passes through the 16 correct points, and there's only one degree-15 polynomial that does this)
Also, as stated, you could _probably_ correct the error sending just 17 points since the degree-15 polynomials interpolating the wrong sets of 16 points would, in all likelihood, not have integer coefficients. But I assume in practice we'd be doing this over finite fields, in order to reduce the information being sent.
I'd love to see more videos featuring Isabel!
In the same league of a Parker Square, this may be a Vogt Circle :D
Now I understand how RAID6 works a little better :) Great video
When I was teaching math I loved the idea of smoothly connecting curves, I always taught my students the equation that passes through 3 points in order to make any multiple point graphs look smooth.
When they learned calculus I further taught them to make them "connect" smoothly by slopes and peaks.
Funtime.
Great questioning from Brady!
The polynomial used in Lagrange interpolation taken to its limit describes a generating function. Maybe the redundancy of Reed-Solomon codes using Lagrange interpolation explains why generating functions have functional equations such as recurrent relations on their coefficients? Also, using a generating function that is periodic with each period containing the finite set of numbers n (in the same order) would be like the naïve approach on its side taken to its limit. I think finding those periodic generating functions is doable.
I'd prefer Parker Correcting Code which first confirms the nonexistence of errors, then introduces errors in a way which skirts most error correcting algorithms.
Credit card numbers already contain a check digit that tests for errors. Does that mean you only need one extra digit for the polynomial, since the 16th digit is already non-random?
yeah but websites would want to distinguish between transmission errors and people trying to guess a credit card number
I think technically no, you still need two extra, but I’m not sure it’s terribly important. See, the 16th digit is only non-random if you enter your credit card number correctly. If you make a mistake, it potentially becomes 16 random numbers. Those 16 numbers when transmitted electronically could have further errors, so you still need two extra digits on top to catch and fix. However, if you entered an invalid credit card number in the first place, I’m not sure how important it is to catch any errors that happen afterwards, but from a coding regulation perspective you probably want to just to catch the edge cases.
Depends on implementation. The credit-card code catches any single-digit error, but might fail to catch that there are two errors. So if you get 17 digits that you detect as wrong, there will likely be 2 combinations of 16 digits subsets that produce legal credit card number, so it is likely not enough to be self-repairing, but using it might still be useful with some other tricks.
Just want to point out for pedantry's sake that you couldn't have the credit card number with the 1 instead of 2 in the example, there's already error correction built in to credit card numbers, the last number is a check digit that will be wrong if any other digit changed.
Very clever.
Whoa…that is WILD!
Reminds me of Reed-Solomon error correction in link 16
I guess it wasn’t Brady’s day as he sounded a bit annoyed in contrast to the pure enthusiasm of miss Vogt.
Something does appear to be a bit off in comparison to other Numberphile videos. I am curious to the reason.
Maybe he had explosive diarrhea
man, that was explained a lot better than in my coding theory course
Very cool! Great video :)
Problem is points consist of two numbers. So unless our points are all roots, we have to send 2*(n+2) instead of 3n for error correction.
Love the geometrical earrings
That is very clever
Credit card numbers have error detection built into them. The last digit is a checksum of sorts.
Yes, and the number is FAR from random.
They have error detection, but not error correction.
6:45 was a Parker circle. They gave it a go.
fantastic video
Very reminiscent of Shamir's Secret Sharing Scheme, by which a secret bitstring (such as a private key) can be recovered from any M of N "shares," where no subset of M-1 shares tells you anything about the secret. It's the same deal: you can't find the original polynomial (and thus the secret) unless you know at least a number of points equal to the degree of the polynomial plus 1.
So cool!!!! Thanks for the explainer!
Very smart indeed. But what if there are more than one errors? Trying out all possible combinations and fitting them to a curve seems to be quite an expensive process in itself.
Error detection is built in TCP protocol, part of Internet, used by the web.
The Lagrange Interpolation Theorem is pretty lit if I do say so myself.
I guess there is still a nonzero albeit small chance that the random errors manage to evade detection by satisfying some other polynomial? Can you quantify this probability?
The precise answer might sound strange to you. If the noise is a continuous random variable, the answer is 0 (think gaussian distribution or bell curve). If the distribution has discrete components, or delta functions (think flipping a coin), and they are perfectly placed, it can be non-zero, but this would never happen in practice. Is it POSSIBLE for a random error to produce a new polynomial and evade detection? Yes, it is an event that exists in the event space (in math jargon this is called the sigma field). But it is probability 0. It’s a quirk of probability theory but these don’t mean exactly the same thing. Essentially, it CAN happen, but it WON’T ever happen even if you repeat the random trial an arbitrary number of times.
Thanks!
Here I was expecting the Luhn algorithm! This can detect a single error in any digit of a credit card or most swapped digits without correcting it. It's an algorithm most credit card numbers actually use.
Brady: construct a circle
Isbel: freehand a circle
Me: freehand the construction of a circle
Imma guess that 3B1B's Hemming Code videos are gonna be useful in my being able to follow along with this video lol
Some of the basic ideas are the same, but the details of Hamming Codes are different enough that they'll be strictly limited in their utility.
Man Brady, Bring more paper for our girl here...
Clarification: They don't *need to know _just_ the Y-values, they need to know pairs (X1, Y1), (X2, Y2),... (Xn, Yn).
Edit: changed 'send' to 'need to know'. Thanks for the correction.
Why would you send pairs rather than having previously agreed standard x values?
@@rmsgrey
Please draw the line that has y-values 1 and 3. How many million options there are? How about one that goes through (1,1) and (37,3).
Edit: Okay, I re-read your comment. You are right; you may have standard values. But you need to specify those. No mention of that on the video.
@@LukeSumIpsePatremTe There are a lot of details that are left out of the video - it's a video about the basic theory rather than instructions on how to implement it (for example, you'd use a different number system rather than reals to make calculation more efficient)
While I follow the maths of finding a number of points that uniquely define a curve, I don’t understand how you get from these 2-dimensional points with real numbers x and y to a list of single integers.
In terms of computational error detection and correction, I’ve found it more understandable and to view checksums in terms of parity bits. Hamming ECC uses a set of parity bits each of which covers half of the bits of the data and parity itself, so when any of the parity bits are wrong the positions of the incorrect parity bits point to the bit that was in error. I’ve heard of Reed-Solomon ECC since it is used for disc storage, but its algorithm looks very complicated.
That's where this algorithm gets into the really obscure math. The answer is that instead of real (or rational) numbers, you use something called a finite field, which has addition, subtraction, multiplication, and division like rational numbers, but there are only 256 of them and the answers don't agree in any way with the real numbers. They were discovered as a theoretical exercise, but they have all the necessary properties for Lagrange's Interpolation Theorem to be true. Of course, the graphs really don't look like anything, because there isn't anywhere between two points to draw a line.
If you know the x-co-ordinates of the points you're looking at (and for a method like this, you'll generally pick sensible x values in advance, so most of the work only needs to be done once) you can work out the general form of the polynomial at those points (so at x=1, you have something like a+b+c+d+e+f+g+h+i+j+k+l+m+n+o+p) which, when you substitute in the y values you receive will give you a bunch of simultaneous equations.
Because of the added structure of these equations all coming from the same underlying polynomial, you can solve them more efficiently by constructing a "divided difference" table - an inverted triangle where the first row is just the original y values, then in each subsequent row, each entry is the difference between the two entries above (right minus left) divided by the difference between the x values corresponding to the base of that entry's triangle (again, right minus left). So, for example, in the second row, below and between y1 and y2, you'd get y1,2 = (y2 - y1)/(x2 - x1), while in the third row, below and between y1,2 and y2,3 you'd get y1,3 = (y2,3 - y1,2)/(x3 - x1).
After filling out the divided difference table, the leading diagonal pretty much lets you read off the coefficients for the polynomial with minimal effort.
It seems complicated to describe, but if you try actually doing it a couple of times, it should all make sense :)
Nice and clear, right up until 13:24 where "n is the max we cant specify any more points".....What? Then later the presenter specifies more points (17 and 18). I get that you need a minimum of 2 points to specify a line and 3 points for a quadratic etc - where does the max come in?
Based on my understanding, I am pointing out a detail which may have been not explicitly clear from the video. This is how the algorithm would work:
0. Assume that the number of digits to be transferred are 16: a1, a2, ...a16
1. Based on this, the sender software and receiver software have already fixed the x co-ordinates of the points for which f(x) needs to be calculated. One possible solution for sender to agree upon these x co-ordinates would be to always choose x=1, x=2, ...x=16 and x=17 for error detection and x=18 for error correction. Thus in future, if different number of digits need to be transmitted [e.g., lets say 20], the sender software and receiver software can unambiguously know what x co-ordinates to use [x=1, x=2, ...x=20, x=21, x=22] without having to update the softwares.
2. The sender software constructs a polynomial from the 16 digits: f(x) = a1 * x^15 + a2 * x^14 + ... a15 * x + a16.
3. The sender software computes f(x) values for the chosen x=1, x=2, ... x=16, x=17, x=18
4. The sender software only transfers the above f(x1), f(x2), ... f(x16), f(x17), f(x18) values to the receiver software. Note that the actual digits a1, a2, ..., a16 are not transferred here. Nor are the x co-ordinates as the x co-ordinates are already agreed between the sender and receiver software.
5. The receiver software tries to construct a 15-degree polynomial curve from the above received 18 f(x) values. If it's able to construct the polynomial, it gets the co-efficients a1, a2, ... a16 as there can only be one such polynomial. Thus, the receiver software got the required data.
6. If the receiver software is unable to construct such a 15-degree polynomial [due to one of the f(x) values sent being corrupted], it tries to construct a polynomial from 16 f(x) values [since there are 18 f(x) values sent, there are 18C16 [18 choose 16] = 153 permutations in which you can select the 16 of the f(x) values and thus construct 153 such polynomials of degree 15. Of all these 153 polynomials, only the polynomials which used the faulty f(x) value would have different a1, a2, ..., a15 coefficients. Rest other polynomials would have the exact same coefficients a1 to 16 which are equal to the actual data which was to be sent.
7. This way, without transferring the actual data, we can interpolate the actual data!
Please correct me if I understood it wrong.
Your step 1 makes a lot of sense - I was confused why you only needed one extra bit of data instead of twice as much since you're using coordinates (i.e. two numbers for each point), but agreeing the x coordinates beforehand seems logical.
I think in step 6 you can construct 18 polynomials from 17 f(x) values and see which polynomial is of degree 15.
I want to see how to create the ECC values using the input data... and how to verify/correct the received signal.
Quick question - I can see why adding two extra numbers for the RS code will enable you to identify which number is erroneous, but how is this then used for error correction? Couldn't the true value lie anywhere on the curve - how do we know what to correct it to?
I suppose as you know the polynomial, you just correct this erroneus number to the output of the polynomial at the exact same input
The coordinates are spaced out equally on the x axis. The x value is just the position of the number in the stream of data. There is one unique value generated by the polynomial for each position on the x axis and it is the value of numbers at their chosen x coordinate. To reconstruct the missing number you just put its position in the stream into the polynomial function.
I think you can test each of the 18 points one by one.
When testing the i-th point, you take the set of the other 17 points and find their interpolation. If the erroneous point is in this set, then the interpolation will have degree 16. But if the erroneous number isn't included in the set, then the interpolation of the points will be the original polynomial of degree 15.
So, when you test the erroneous point, you find that it's in fact errouneous, as well as the original polynomial. From the point you can take its value of x an by evaluating the polynomial you can get the corrected value.
@@Unifrog_ aah thank you! This was the key piece of info I'd missed - that the x value is just the position in the stream!
What if the impostor is one of the two extra numbers?
Wouldn't it likely also be enough to send only the original 16 points because in case of an error occurs the probability of the coefficients being integers is incredibly small?
In practice, this is usually done over finite fields. So in other words, your polynomial can only take on a very limitied set of values to begin with. For example only the values 0, 1, 2, 3, 4, 5 and 6. No real numbers, no rational numbers, nothing in between, just these seven values. So if there is an error, it may change a 4 to a 6, or something. This is being done because calculating with these limited set of numbers is quicker than with the entire rational numbers.
But... what actually gets sent? Are the values of that function integers? Floating points? Surely the information needed to describe 16, 17, or 18 points on a curve is a lot more than 16 single digits. But we're supposed to be improving on a two- or threefold redundancy.
I understand the abstract reasoning behind it, and I see the point when you're sending a huge amount of digits, but we started with a credit card number and this seems like over- _over-_ *overkill*
The original number is what get sent, the credit card number (in truth it would be encrypted but that's another whole level of complexity that's not necessaryto know about for error correction).
This is an error correction technique, it is determining what was received is what was sent, it is not trying to fix what was sent in the first place if the person entered the wrong credit card number, it's just fixing if there has been an error in transmission (I.e. if noise had messed up the message somehow along the way).
If the sender typed in a wrong number as part of the credit card number, then there is no way the receiver can know this.
However, if in the sending of the credit card number a digit got messed up, then this technique could hello you determine what value was out of place.
It is an error correction technique, not an original number verifier.
Perhaps this might help as an explanation
If someone is saying MY DOG HAS FLEAS out loud but because of the wind, the receiver hears MY DOG HAD FLEAS this error correction technique would help you determine that the word HAD should have been HAS
If the original person had accidentally said MY DOGS HAD FLEAS you would not know they said something wrong and this technique cannot fix the wrong original intended message because this is error correction, not original message verification.
You need to do other things to verify the original message before the message was ever sent If you want original message verification.
Hopefully my crude attempt at explaining things helped
@@stultuses Thank you for trying, but this does not address my confusion. In the original discussion about how to check the credit card number, the naive solution was to double each number. So I understand exactly what was transmitted. Two 8's, two 4's, two 3's, and so on, until you have sent 32 total digits. Then triple-send in case one of a double-sent pair is corrupted. Sent 48 digits. Fine.
Then we discuss Reed-Solomon and the functions and all of that. What *actual values* need to be sent for that to work? The credit card number plus _what else?_ A few extra digits? A binary code? Something else?
Fascinating.
it feels very generatingfunction-y. like the polynomial you use to encode your numbers is the same as the ordinary generating function for if your numbers were a sequence with the last number being the zeroth value of the sequence
"If I send it three times..." that made me think of a movie, "Minority Report".
This was fascinating and passionate. But, there seemed to be a whole introduction for the generalist that was missing. What the heck were those curves with all the points? Did you just transmit the credit card numbers assuming they were on a curve, or did you transmit the polynomial? I had absolutely no idea what was going on. Was there an earlier video?
Yes!
yay new video
Awesome!
Unit based subscripts? Omega Yikes. Much Based.
cool. I sort of imagine that if instead of sending 18 points you send 17 points and find the values of the coefficients a1, a2, ….., a15 by choosing 16 random points in those 17 points. perhaps perform this task a few times and the coefficients that duplicate the most number of times are the correct coefficients. Maybe this method may be too complex for its benefit of one less signal to transfer. This is awesome though.
It seems like that would work, but it turns out that including a wrong point messes up all the coefficients, so none of them agree with the correct answer or each other at all.
If you have one errored number, then each set of 16 points will give a different curve with different coefficients - for a toy example, consider sending three points on a line: changing one point gives you a triangle rather than a line, with three different slopes for the three different lines, and, unless one of the points sent is on the y-axis, three different y-intercepts too.
So you wouldn't get coefficients repeating, let alone the correct values.
I recently experimented with Reed-Solomon error correction, and to be fair it's a beautiful solution and less complicated than it seems.
The first case is for error detection, and as said in the answer above, a single wrong number will make a completely differente curve. So if redundant values are also in the same curve, you can procede without worries.
In the other hand, if the redundant values doesn't match the curve, you must test for errors. This is where things can get tricky and some implementations may differ, but a simple first step is to just try skip one value and check if it "fixes" the interpolation. You can also test for bits flipped, and so on. If you really need to recover the data, you can try harder and harder until it fits, and this is where more redundancy can help.
A nice property of reed-solomon is also this possibility to simply add more redundancy, or change how it is interleaved between data, without any change in the base algorithm. For computer files, for example, additional redundancy can be stored in another filesystem as a backup. In the end, you are just interpolating points on a curve :o
@@iabervon That makes perfect sense! My 'optimisation' seemed a little too trivial to be original or correct (in this case correct). I would really like to find out though how sensitive these coefficients are to the points. If a single point shifts some dx (or dy) away from its original position what is the corresponding change da1, da2, ....dan in the value of the coefficients. I lack the mathematical tools to really get into this question (and the topic in general) but thank you though! Much appreciations from all the way in South Africa.
@@rmsgrey hey. Thank you for your response. I accidently attended an electrical engineering lecture a few weeks ago (I study Chemical Engineering here in South Africa and I sort of got lost on campus and found my way somehow to that lecture! ) Anyway, they were discussing transferring data and such and I have a vague memory of some of the things you mention being part of that discussion, albeit in a more technical setting. It's interesting how life works!
I think this is related to CRC (cyclic redundancy check) codes sent by computers, but I would have liked to see the actual algorithm explained.
not at all related
Classic Parker circle. It has no corners, except one. :D 6:47
Ok that's great! Now can you explain Quantum computer error correction please?
Yeah sure it's like quite similar but also different in some aspects and a lot smaller
I am currently writing my thesis on error correcting codes... I feel weirdly targeted :D
Oh well, pure chance when there are lots of viewers.
i get that 3 random points will virtually never fall on a line, but isn’t that only true for analog data, where the number of possible points are infinite? in an extreme case of a grid that was, say, 4x4, 3 random points would create a line relatively often. what’s the resolution threshold for this method to actually be reliable?
The framing about curves lying on "random" points is a little confusing IMO. Obviously, three random points COULD be colinear, so saying that they don't define a line because they're not "random" enough is kind of ambiguous. I think a clearer way to explain it would be: given a curve of a particular type (line, circle, degree-n polynomial), what is the minimum number of points one could choose that uniquely define it from other curves of that type? For example, two unique points lie on infinitely many curves, but only one line; three points lie on infinitely many curves, but at most one of those curves is a circle, etc.
Are Eric and Hannah Larson siblings? EDIT: Meant to type that into google lol. Brain fried. Although Isabel would probably know.
Regarding the comment at 12:40, on knowing 16 points on a degree 16 polynomial determines it uniquely via Reed-Solomon - this seems oddly analogous to the Fundamental Theorem of Elimination Theory. I’m really curious whether this can be made precise.
How can one figure out which point that is wrong given 2 extra points to help correct it?
Do you have to do N checks, each check omitting one point that you're checking for, in order to figure out which one that was wrong?
Because then with 1 million points, you'd have to do 1 million checks in the RS case, but if you were given 3 million points (in the naive case) by sending the 1 million data points 3 times, you'd only have to look at the wrong data point and do a majority vote of 3 numbers.
she's so cool
I think this has some similiarity to Elliptic Curve Cryptography
I wished she went further to show how the error was corrected. I felt all the explanation went to waste without demonstrating that.
Why are there light switches behind the books in the bookcase?
likely the switches pre-date the bookcases. you would think just relocate the switches, but there might not be a more suitable location. the most likely answer is that doing so requires union labor and significant cost. it is less expensive and more efficient simply to cut holes in a bookcase. I've heard stories very similar to this.
How does adding the 18th number help us correct the error? i understood fully up to the point where they start adding the additional points. 17th point i still understand, but how does the 18th point help correct the error?
If you don't have a degree 15 polynomial when you look at the 18 numbers, you can look at the eighteen different sets of 17 numbers. One of them will exclude the error, so will be 17 numbers that all lie on the degree 15 polynomial, while the other seventeen will each be the 1 error and 16 correct values. Since the 16 correct values define the correct degree 15 polynomial, which we know the error is away from, none of them will give a degree 15 polynomial.
So if there's at most one error, there will only be one degree 15 polynomial that fits 17 of the points given, and that will be the correct curve.
Adding the 18th point means you now effectively have 17 different sets of 17 different points, each excluding a single point. If there's one error, all the sets except the one that doesn't have the bad value will detect the error. You can then use the set that doesn't have the error to determine the correct value for the corrupted piece of data.
SpaceX actually does the 3x redundancy thing. Above the atmosphere, radiation is a bigger problem than here on the surface and regular computers get a lot more random errors from that. The traditional method of dealing with this is to harden the systems against radiation to prevent the errors. That makes them more expensive and heavier. SpaceX just uses three more or less normal computers. Usually, the computers agree, but when there's a radiation-induced error, two computers outvote the third. You can't use error correction algorithms because you don't know what the result is supposed to be ahead of time. Of course, the insides of spacecraft are protected against radiation for the benefit of the humans inside, so astronauts on the ISS can use their laptops for everyday use without issue.