Great video! Now it all makes sense. When I studied Lagrange polynomials, I only understood enough to work out the interpolation polynomials but never really grasped the intricate details behind them. Your video - like your other video of Pade approximations - has made the whole concept a lot clearer to me. Thank you!
Awesome, thanks!
Dude you make stuff so clear, watching your videos feels like cheating.
Thanks a lot! that's exactly what I'm aiming for!
I remember developing the interpolation polynomial formula in college for fun. My discrete math professor gave a puzzle to the class to come up with polynomials that for many integer inputs return prime number outputs. I switched the problem a little bit for myself in that I asked "how can I make a polynomial that passes through each of these arbitrary points?" And as I solved this puzzle on my own over the next two weeks I developed this interpolation method on my own and it felt so empowering.
I showed it to a few of my other professors and we proved that there is only one line passing through two points, only one parabola passing through three points, only one cubic passing through four points, only one quartic passing through five points, etcetera for all natural numbers of points.
Anyway, some of my most enjoyable days of college were solving these puzzles.
Remember: when you come up with something new, you can be sure somebody else did it before you :)
I found this video from Advent of Code 2023, because one of its problems needed Lagrange polynomials to solve (without telling you). Maybe you'd enjoy the problems in Advent of Code as well? It's free and open, so you could just check it out. One year people also discovered the Chinese Remainder Theorem that way, for example...
1 minute in and the visualization already helped more than hours of uni lectures
This can be used to create error correcting codes. The design of such a code is left as an exercise to the reader :P
Best comment ever! Really though, it is such a fun journey to figure out error correcting codes. Might I suggest my favorite, the Reed-Solomon Vandermonde variant, as the most challenging IMHO. For a cherry on the cake, implement a real finite field of any scalar bit width and don't cheat with dog-slow 8-bit toy implementations :). Taught me everything I'll ever want to know about applied mathematics in computer science.
I created such codes. Proof is left as exercise for reader :P
Not allyourcode
I swear this Lagrange guy is everywhere. Every field of math I'm trying to understand, something Lagrange pops up.
He's one of the infamous "Three L's": Laplace, Lagrange and Legendre
This was perfect. Why can't professors explain like this? Easy to understand with nice graphics for the visual learners + soothing voice that helps you concentrate. Thank you!
I've been using this video as a reference for quite a while and just realized it was released on my birthday. Thanks for all the guidance Dr. Wood ❤
I love the way you explain things, it's so straightforward and intuitive that it makes it feel like it's something I could've come up with. Really good explanations in this video and the Padé approximations one, which I had never heard of before. Thanks for making great content
I love it when the video drops all the necessary knowledge in the first 30 seconds. Well done.
I'm a French student and I have to do research on the Lagrange interpolation theorem. Your video is the only one that helped me understand the subject, not a single French video helped me with this.
Wow! Awesome explanation! I particularly love how you broke down Lagrange polynomials into the smaller order polynomials that compose it and how those were "guessed" in the first place. Very intuitive explanation. Thank you!
Thank you very much for the kind words!
well, i'm actually dealing with maths in French, and it is my second language, but I swear that no French-speaking professor on YouTube was able to transmit this idea as smoothly as you did, sir. I appreciate that, tysm...
This is excellent. I've reviewed my materials and looked at a bunch of others online but this is far and away the best explanation I've found. Thank you!
thank you very much!
I can tell this channel will be great for topics related to computational physics. Please keep it up!
I cannot thank you enough for this heartwarming explanation! May God bless you. Much love and respect!
Dr. Will THANK YOU! I'm getting all worked up over the bad explanation of my teacher and book, and I watch the first minute of your video and I get it. You rock!
That's awesome! thanks a lot!
Best explanation I've ever seen concerning Lagrange polynomials. I can now visualize proofs that use Lagrange interpolation. tysm
Thank you! The visual representation really makes it clear how the interpolated polynomial gets the correct values. Also, splitting it into steps rather than just giving us the entire equation all at once!
The graphics were super insightful! You made every nuance of the concept extremely easy to grasp! Thank you.
Thank you so much! Being able to present math in such a beautiful and crystal clear way is a true art, and you are a true artist!
Thanks so much. very kind words!
Great video. Just connected all the dots in my head. Thank you!
I cannot believe that some random guys with far lesser skills in teaching dominate the search results...
Magic! interpolation shows hints of the underlying mathematical /musical structure of perception. Thank you!
Been putting off studying these for an upcoming test as they looked really complicated. Thank you so much for this video!
This Lagrange guy is really smart. I see a bright future ahead for him
Awesome video, got this in my recommendations just as the professor covered in my metrology class
Thank you for educating me, Sire. You have my eternal gratitude
This is so beautiful
One aha moment after the other, effortlessly
I feel blessed to have found this UA-cam channel
This is excellent. You've helped me in Optimal Control Theory :)
Thank you very much! glad it was useful :-)
Students if you are watching this make note that this is the most brilliant explanation of this concept.
Thank you very much!
Just found your channel and I really enjoy your videos!
Thanks a lot!
This is the best educational video i've ever seen
Thank you very much!!
fantastic video, thank you for taking the time to explain this! I love the music too
Loved the video! Explanation was clear and straightforward. Lagrange interpolation is a wonderfully useful gadget.
I'm a chemist, and have no idea what a Legendre transformation is - any chance you could do a video on them?
A music teacher told me that a really good music teacher is always a really good musician.. I wonder how true that is of mathematicians. I took a BA in math at UCSC and only had a couple really good teachers.. Tnx Dr Wood for these very helpful videos.. Almost want to take out my old Burden and Faires and write some code!
*Insert* Obama giving Obama a medal meme :D
I agree though these videos are amazing.
the explanation are truly mesmerising
omg. two minutes in and i understand what days of a course couldn't.
This was an excellent, clear and succinct explanatory video, thank you!
Great material. In comms, we use Lagrange interpolators in control loops doing timing recovery. Typically, we refer to them as Farrow filters. But they are, in essence, just a Lagrange polynomial. A matrix multiply derives the coefficients, and then we evaluate the polynomial at the desired sample location. In modern FPGAs, we have resolutions of 1/2^26 evaluation points between samples. They're flexible as resamplers, interpolators, or phase shifters for phased antenna arrays. So many great applications.
Also, in FPGAs, where multiply and accum (MAC) HW is prevalent, the Horner decomposition is really useful.
Y = c0 + c1*x + c2*x^2 + c3*x^3 + c4*x^4
Y = c0 + x(c1 + x(c2 + x(c3 + c4*x)))
It can be decomposed this way into a series of MACs for which we have fast hardware. If you have enough clocks, you can even get it done reusing 1 MAC (DSP48e2).
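In Python terms, that nesting is just one multiply-accumulate per coefficient (a quick sketch; the coefficients here are made up for illustration):

```python
def horner(coeffs, x):
    """Evaluate c0 + c1*x + ... + cn*x^n by Horner's scheme.

    coeffs is [c0, c1, ..., cn]; each loop step is one multiply-accumulate,
    which is exactly the operation a hardware MAC unit performs.
    """
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# c0 + c1*x + c2*x^2 with c = [1, 2, 3] at x = 2: 1 + 4 + 12
print(horner([1.0, 2.0, 3.0], 2.0))  # → 17.0
```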
Awesome! thanks for sharing!
I'm really unfamiliar with that topic could you give me some hand holds/ search terms to look this up? I'm really curious about the application of this method.
@drvanon Sure. Happy to comment. First, a fantastic reference on the subject is "Digital Signal Processing with Field Programmable Gate Arrays (Fourth Edition)" by Uwe Meyer-Baese. He covers the topic in section 5.6, Arbitrary Sample Rate Converters. Subsection 5.6.2, Polynomial Fractional Delay Design, specifically treats Lagrange polynomial interpolators.
If you're a MATLAB user, Farrow filters are mathematically equivalent to cubic spline interp. But if you have the comms or DSP toolbox (can't remember which), there's a Fractional Rate Converter that is a Farrow filter and will actually give you the matrix of coefficients you need for the matrix mult to derive the polynomial coefficients.
In HW, you only have to do one multiply for that matrix, as the other coefficients are divisions by powers of 2, so shifts work. So you do one multiply, and then a bunch of shifts and some adds. Then, for evaluating the polynomial, it's a bunch of recursive MACs. There's a bunch of bookkeeping to handle. You'll need a phase accumulator that will need to wrap. You can use this for interpolation or decimation. In the decimation case, you'll occasionally need to slip a sample. In the interp case, you'll potentially have multiple polynomial evaluations per sample location.
One nuance to add is that you can use a Lagrange polynomial of arbitrary order. Typically a 3rd or 4th order polynomial is used in comms work, as going beyond that doesn't give much of a return on investment. Meyer-Baese mentions that in some audio work in SW processing, they've used polynomials of very high order. I'm not sure what the motivation is.
But with a 3rd or 4th order polynomial, it's still very efficient in HW. If you're working in something like a Zynq Ultrascale+, and are running at 350 MHz for your sysclk, and have a signal in the 30 MSPS range, this is VERY realizable with 1 DSP for real and 1 DSP for imag. (I think I use a third DSP for my phase accumulator so I can hit a good fmax).
I do a lot of OFDM work. Typically, when I'm running my timing control loops, I'll measure the accumulated phase angle of the channel estimates and run that into a PID which is then driving the Farrow Filter resample rate. Driving the error to zero essentially locks you "on-baud" in the OFDM sense that you're triggering your FFTs precisely on the instant that you need to.
That helps in tracking the signal timing drift, but it also means your equalizer doesn't have to work very hard... i.e. the sinusoid in the frequency domain on your channel estimate isn't there, and you have more significant bits for equalizer precision. It's worth the effort. It translates to multiple dB SNR improvement on the recovered constellation.
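For anyone curious what the polynomial evaluation looks like, here is a toy floating-point sketch of the cubic (4-tap) case, with the four Lagrange basis weights for nodes -1, 0, 1, 2 written out explicitly (a real Farrow structure restructures this in Horner form with fixed-point MACs; the function name is mine):

```python
def lagrange_frac_delay_cubic(window, mu):
    """Cubic (4-tap) Lagrange fractional-delay interpolation.

    window holds samples at integer times -1, 0, 1, 2 relative to the
    current sample; mu in [0, 1) is the fractional position between the
    middle two samples. The weights are the four Lagrange basis
    polynomials for nodes -1, 0, 1, 2 evaluated at mu.
    """
    s_m1, s0, s1, s2 = window
    w_m1 = -mu * (mu - 1.0) * (mu - 2.0) / 6.0
    w_0 = (mu + 1.0) * (mu - 1.0) * (mu - 2.0) / 2.0
    w_1 = -(mu + 1.0) * mu * (mu - 2.0) / 2.0
    w_2 = (mu + 1.0) * mu * (mu - 1.0) / 6.0
    return w_m1 * s_m1 + w_0 * s0 + w_1 * s1 + w_2 * s2

# Samples of f(t) = t^3 at t = -1, 0, 1, 2; a cubic is reproduced exactly,
# so evaluating at mu = 0.5 gives f(0.5) = 0.125.
print(lagrange_frac_delay_cubic([-1.0, 0.0, 1.0, 8.0], 0.5))  # → 0.125
```

At mu = 0 the weights collapse to (0, 1, 0, 0), i.e. the middle sample passes through unchanged, which is a handy sanity check.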
best way of introducing Lagrange interpolation
Superb video. Great explanation. Covers everything, without any bs. Thank you
I feel lucky to have access to such content
Thank you so much! I stumbled upon that when researching Shamir's Secret Sharing Algorithm but at that point the formulas just went straight over my head but this is explained so clear, I love it
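For anyone else arriving from Shamir's scheme: the secret is the polynomial's value at x = 0, and Lagrange interpolation over a prime field recovers it from enough shares. A toy sketch (tiny parameters for illustration, not production crypto; the function name is mine, and the 3-argument `pow` modular inverse needs Python 3.8+):

```python
def recover_secret(shares, p):
    """Reconstruct the secret f(0) from (x, y) shares via Lagrange
    interpolation over the prime field GF(p)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p        # factor (0 - xj)
                den = den * (xi - xj) % p    # factor (xi - xj)
        # Add yi * l_i(0) mod p, using the modular inverse of the denominator
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

# Secret polynomial f(x) = 7 + 3x + 2x^2 over GF(11), so the secret is 7.
# Shares are f(1) = 12 ≡ 1, f(2) = 21 ≡ 10, f(3) = 34 ≡ 1 (mod 11).
print(recover_secret([(1, 1), (2, 10), (3, 1)], 11))  # → 7
```

Any 3 of the shares of this degree-2 polynomial reconstruct the same secret, which is the whole point of the scheme.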
As an 11th grader this video was a trip
what a brilliant video mate, saving my degree haha keep it up
Ugh, I just used Lagrange interp to make a camera fly about between points 5 minutes ago. If I was asked whether I understood it, I'd have nodded my head and said "pretty decent understanding". After this video I'm revising my self-appraisal to "I'm effectively clueless".
Unreal or Unity?
Great video, thanks helped me understand the concept.
this video is perfection
Wonderful, thanks for the clear and concise video
Is this method unsuitable if there's any noise in the original observation of the "nodes"? If this method guarantees the lowest order polynomial, then I guess it could give a polynomial of order 1 million that *exactly* fits the nodes, instead of a very low order polynomial that *almost* fits the nodes.
My background is in physics and this is exactly what I was wondering too! My thought was that it introduces a new variable for each data point, which practically never works with "real life data".
I'm curious if that means this is not suited for fitting then, or if it can be modified to be more suitable. If the former, what then are the practical uses of this?
The higher the order of the polynomial, the more even small noise yields vastly different results. It gets more unstable as you add more points.
The polynomial will still always fit the intended points exactly, but the interpolant between them becomes wildly different as more and more points are added.
If approximations robust to noise are what you need, use a low-order least-squares fit instead of exact interpolation, or try other interpolation methods, like Fourier.
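A quick plain-Python way to see the amplification: interpolate perfectly flat data, then the same data with one node nudged by 0.01, and compare between the nodes (all names here are illustrative):

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# 11 equally spaced nodes with perfectly flat data, then the same data
# with a single node nudged by 0.01 to mimic measurement noise.
xs = list(range(11))
flat = [1.0] * 11
noisy = flat[:]
noisy[5] += 0.01

# Between the nodes, the degree-10 interpolant of the noisy data deviates
# several times more than the 0.01 nudge (worst near the interval ends),
# and the amplification grows rapidly with the number of nodes.
err = max(abs(lagrange_eval(xs, noisy, k / 10) - 1.0) for k in range(101))
print(err)
```

The flat data is reproduced exactly (the interpolant of constant data is that constant), so all of `err` comes from amplifying the single 0.01 perturbation.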
You make this so easy to understand.
Wow, really well explained! Thanks
What did Scorpion say, when he found his family dead? 4:32
Well....I'm here only to say "beautiful explanation" and, thank you so much!!
Thank you!
You don't know how unnecessarily complicated this material was made in university classes. I'm amazed how clearly and simply you explain what others couldn't. Thanks a lot
Thanks a lot ❤
Please can you make a video teaching us how we can draw and animate graphs like the animation in this video?
5:23 Why use L(x) here and not p(x)?
Wonderful illustration! Thank you!
Amazing video! Thank you for this.
This really cleared things up for me. Thanks!
Very clear! Thank you!
Wonderfully explained!
How didn't I find this channel earlier? Amazing content
Thank you!
BEAUTIFUL THANK YOU SO MUCH
How is it guaranteed that the basis functions all add to 1 at any point over the domain?
7 months late to the party but here's an answer.
Recall that every nonzero polynomial can be written (over the complex numbers) as a product of linear factors multiplied by some constant factor c:
c(x-r_0)(x-r_1)...(x-r_n)
In particular, a nonzero polynomial of degree n has at most n roots. So if you take the sum of the Lagrange basis polynomials and subtract 1, you can show it must be the constant 0 polynomial, proving that the Lagrange basis always sums to one, everywhere.
Here is why:
By design, the Lagrange basis interpolates a degree-n polynomial using n+1 basis polynomials, each of degree n: each one has n roots (guaranteed by its n linear factors) and is defined to equal 1 at its own node. But subtracting 1 from the sum of the basis polynomials gives a polynomial of degree at most n that vanishes at all n+1 nodes (the same n+1 points where the sum equals 1 are now roots):
sigma(L_i(x)) - 1 = P(x) - 1
On the left hand side, each basis polynomial is of degree n with n roots, yet the right hand side is a polynomial of degree at most n with n+1 roots somehow. Weird, right? This would contradict the Fundamental Theorem of Algebra (a nonzero degree-n polynomial has exactly n complex roots, counted with multiplicity) UNLESS the constant c is zero, in which case the whole polynomial is the constant 0.
So sigma(L_i(x)) - 1 is the constant zero polynomial, which means the Lagrange basis polynomials are guaranteed to sum to the constant 1 polynomial at all points.
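The claim is easy to sanity-check numerically with plain Python (the node values below are arbitrary):

```python
def lagrange_basis(xs, i, x):
    """The i-th Lagrange basis polynomial for nodes xs, evaluated at x."""
    prod = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            prod *= (x - xj) / (xs[i] - xj)
    return prod

nodes = [0.0, 1.0, 2.5, 4.0]
# The basis polynomials sum to 1 at *any* x, not just at the nodes.
for x in [-3.0, 0.7, 3.14, 7.3]:
    total = sum(lagrange_basis(nodes, i, x) for i in range(len(nodes)))
    print(x, total)  # total is 1.0 up to rounding every time
```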
Wow, this is sweet!
broooooooooooo this was freakin genius
Nice video, question: if you want to find an interpolation with a polynomial of order n+1 for n+1 nodes, what is a good method of going about it? (I assume there must be some extra degree of freedom, since this will underdetermine the system of equations)
The graph of them all together reminds me of the influence of the different points in a bezier curve, any relation?
Fantastic explanation, thank you!
Great video!! Bumped into it accidentally but found a gem. Also, which software are you using for these amazing animations?
very cool, would have loved to have seen an example at the end where the polynomial is of lower order than the number of points
Isn't this equivalent to having a single polynomial of degree n and just solving the set of equations given by the points we wish to interpolate?
yes. for the Lagrange method, no inverse matrix or Gauss-Jordan procedure is needed. there is also another method: Newton interpolation via divided differences (which simplifies further for equally spaced x-coordinates). all methods will yield the same result, because the interpolation polynomial is unique.
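A compact Python sketch of Newton's divided-difference form (the helper name is mine); note it works for arbitrarily spaced nodes, not only equally spaced ones:

```python
def newton_interp(xs, ys):
    """Return an evaluator for the Newton-form interpolant through (xs, ys).

    Builds the divided-difference coefficients in place, then evaluates
    with a Horner-like nested scheme. Works for arbitrarily spaced nodes.
    """
    n = len(xs)
    coef = list(ys)
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])

    def p(x):
        acc = coef[-1]
        for i in range(n - 2, -1, -1):
            acc = acc * (x - xs[i]) + coef[i]
        return acc

    return p

# The unique parabola through (0, 1), (1, 2), (3, 10) is x^2 + 1 —
# exactly what the Lagrange formula would give for the same points.
p = newton_interp([0.0, 1.0, 3.0], [1.0, 2.0, 10.0])
print(p(2.0))  # → 5.0
```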
The barycentric Lagrange method is algorithmically superior to the Vandermonde method.
Adding new data points is more efficient in the barycentric form; the Vandermonde approach unfortunately has to be recomputed entirely as new data points are entered.
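A minimal Python sketch of the second barycentric formula (function names are mine): the weights are built once, and admitting a new node only multiplies each old weight by one extra factor rather than redoing everything:

```python
def bary_weights(xs):
    """Barycentric weights w_i = 1 / prod_{j != i} (x_i - x_j)."""
    ws = []
    for i, xi in enumerate(xs):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w /= xi - xj
        ws.append(w)
    return ws

def add_node(xs, ws, x_new):
    """Admit one new node: update every old weight with one extra factor
    (O(n) work) instead of recomputing everything from scratch."""
    w_new = 1.0
    for i, xi in enumerate(xs):
        ws[i] /= xi - x_new
        w_new /= x_new - xi
    xs.append(x_new)
    ws.append(w_new)

def bary_eval(xs, ws, ys, x):
    """Second (true) barycentric formula for the interpolant at x."""
    num = den = 0.0
    for xi, wi, yi in zip(xs, ws, ys):
        if x == xi:
            return yi  # exactly on a node
        t = wi / (x - xi)
        num += t * yi
        den += t
    return num / den

xs, ys = [0.0, 1.0], [1.0, 3.0]
ws = bary_weights(xs)
add_node(xs, ws, 3.0)  # a new data point (3, 19) arrives
ys.append(19.0)
print(bary_eval(xs, ws, ys, 2.0))  # ≈ 9.0, the parabola 2x^2 + 1 at x = 2
```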
Fantastic explanation
great and clean video!
Thanks!
Really well explained!
Thanks!
Thank you for your video!
good video. liked the music. concise.
So p(x) becomes the polynomial that passes through the entire set of data points?
absolutely beautiful video
Thanks a lot! appreciate the kind words
Thank you, very good explanation! :)
Can't you combine Padé approximants with this if you just substitute the Taylor series with a lagrange Polynomial?
A very important point is that the interpolation formula (another polynomial) can be doing something totally different from the underlying function outside the selected points, and even between the selected points.
Very good explanation!
And this technique can be used to interpolate functions of more than one variable.
Yes! Of any variable count, and in any configuration you can imagine (degree of each variable). As long as you don't run out of data points. I like to think of it as a fractal reduction of functions.
Very well explained
It brings back memories of my glorious days when I used this thing to predict the pattern of the prime numbers. No matter how hard I tried, everything just failed successfully.
Prime generating curve?
Loved it!
In the very end of the video, should not it be that m is less than or equal to n (not strictly less than)?
because if m is strictly less than n, then the proof misses the case when the interpolating polynomial is of degree exactly n. That is when the coefficients of the highest order term do not add up to zero.
1:13 I can think of a way to make polynomial with same property but not of minimal degree
Wait
Nvm
Polynomial is something like x(x-2)(x-3)/2
2:34 let's see
Oh it is
This is great!
Very nice explanation.
Thank You Soooo Much Sir!
what a great video!
wow dude, understanding something so thoroughly doesn't require you to actively memorise facts/formulae - i can feel the equations now
at 3:39 , why did you write "n+1" and not just "n"
Because the set of points given runs from 1 to n, in addition to the point 0, making it n+1 points in total
What if there is a repeated desired output value in the y_i list? Dividing by l_i*(x_i) won't work in that case haha
Yes you're right! polynomial interpolation is provably impossible if there are 2 distinct y_i values for a repeated x_i node. The best proof IMO uses the Vandermonde matrix which I am writing a video about at the moment :-)
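The Vandermonde view makes the obstruction concrete: a repeated x_i duplicates a row of the matrix, so its determinant, which factors as the product of pairwise node differences, is zero and the system is singular. A tiny sketch for the 3x3 case (helper name is mine):

```python
def vandermonde_det_3(xs):
    """Determinant of the 3x3 Vandermonde matrix [[1, x, x^2] for each node].

    Known closed form: the product of (x_j - x_i) over all pairs i < j.
    """
    a, b, c = xs
    return (b - a) * (c - a) * (c - b)

print(vandermonde_det_3([1.0, 2.0, 3.0]))  # → 2.0, distinct nodes: unique interpolant
print(vandermonde_det_3([1.0, 2.0, 2.0]))  # → 0.0, repeated node: singular system
```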
Bro, your videos are amazing
Thanks a lot!