Lagrange Interpolation

  • Published 23 Dec 2024

COMMENTS • 229

  • @Peter-bg1ku
    @Peter-bg1ku 3 years ago +230

    Great video! Now it all makes sense. When I studied Lagrange polynomials, I only understood enough to work out the interpolation polynomials but never really grasped the intricate details behind them. Your video - like your other video on Padé approximations - has made the whole concept a lot clearer to me. Thank you!

  • @tombackhouse9121
    @tombackhouse9121 3 years ago +147

    Dude, you make stuff so clear, watching your videos feels like cheating.

    • @DrWillWood
      @DrWillWood  3 years ago +24

      Thanks a lot! that's exactly what I'm aiming for!

  • @einstien311
    @einstien311 3 years ago +116

    I remember developing the interpolation polynomial formula in college for fun. My discrete math professor gave a puzzle to the class to come up with polynomials that for many integer inputs return prime number outputs. I switched the problem a little bit for myself in that I asked "how can I make a polynomial that passes through each of these arbitrary points?" And as I solved this puzzle on my own over the next two weeks I developed this interpolation method on my own and it felt so empowering.
    I showed it to a few of my other professors and we proved that there is only one line passing through two points, only one parabola passing through three points, only one cubic passing through four points, only one quartic passing through five points, etcetera for all natural numbers of points.
    Anyway, some of my most enjoyable days of college were solving these puzzles. (A minimal sketch of this construction appears after this thread.)

    • @stewartzayat7526
      @stewartzayat7526 3 years ago +21

      Remember: when you come up with something new, you can be sure somebody else did it before you :)

    • @JesperDramsch
      @JesperDramsch 1 year ago

      I found this video from the Advent of Code 2023, because it needed Lagrange polynomials to solve one of the problems (without telling you). Maybe you'd enjoy the problems in Advent of Code as well? It's free and open, so you could just check it out. One of these years people also found the Chinese Remainder Theorem, for example...
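
    To make the construction described in the comment above concrete, here is a minimal Python sketch (an editorial illustration, not code from the video; the sample points in pts are made up):

    def lagrange_interpolant(points):
        """Return p with p(x) = sum_i y_i * l_i(x), where l_i(x_j) = 1 if i == j else 0."""
        def p(x):
            total = 0.0
            for i, (xi, yi) in enumerate(points):
                li = 1.0
                for j, (xj, _) in enumerate(points):
                    if j != i:
                        li *= (x - xj) / (xi - xj)   # vanishes at every node except x_i
                total += yi * li
            return total
        return p

    pts = [(0.0, 1.0), (1.0, 3.0), (2.0, 2.0), (4.0, 5.0)]   # any 4 points -> a unique cubic (or lower)
    p = lagrange_interpolant(pts)
    assert all(abs(p(x) - y) < 1e-9 for x, y in pts)         # passes through every node

    The uniqueness mentioned above follows because the difference of two interpolants of degree at most n would be a polynomial of degree at most n with n+1 roots, hence identically zero.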

  • @ricksoslick
    @ricksoslick 1 year ago +6

    1 minute in and the visualization already helped more than hours of uni lectures

  • @allyourcode
    @allyourcode 3 years ago +160

    This can be used to create error correcting codes. The design of such a code is left as an exercise to the reader :P

    • @justdoityourself7134
      @justdoityourself7134 3 years ago +16

      Best comment ever! Really though, it is such a fun journey to figure out error correcting codes. Might I suggest my favorite, the Reed-Solomon Vandermonde variant, as the most challenging IMHO. For a cherry on the cake, implement a real finite field of any scalar bit width and don't cheat with dog-slow 8-bit toy implementations :). Taught me everything I'll ever want to know about applied mathematics in computer science. (A small erasure-recovery sketch follows this thread.)

    • @23_dhimantsinhsolanki9
      @23_dhimantsinhsolanki9 1 year ago +2

      I created such codes. Proof is left as exercise for reader :P
      Not allyourcode
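
    As a companion to the Reed-Solomon remark in the thread above, here is a minimal erasure-recovery sketch built only on Lagrange interpolation. It works over a small prime field instead of the GF(2^k) fields used in real codecs; the prime, message and evaluation points are illustrative assumptions.

    P = 65537  # a small prime field standing in for GF(2^k) in this sketch

    def lag_eval(points, x):
        """Evaluate, at x (mod P), the unique degree < len(points) polynomial through `points`."""
        total = 0
        for i, (xi, yi) in enumerate(points):
            num, den = 1, 1
            for j, (xj, _) in enumerate(points):
                if j != i:
                    num = num * (x - xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, P - 2, P)) % P   # Fermat inverse of den
        return total

    msg = [104, 105, 33]                                # k = 3 message symbols, seen as values at x = 0, 1, 2
    k, n = len(msg), 6                                  # encode into n = 6 symbols, tolerating n - k erasures
    codeword = [lag_eval(list(enumerate(msg)), x) for x in range(n)]
    survivors = [(x, codeword[x]) for x in (1, 4, 5)]   # any k surviving (position, value) pairs
    recovered = [lag_eval(survivors, x) for x in range(k)]
    assert recovered == msg                             # the message comes back from any k of the n symbols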

  • @whitewalker608
    @whitewalker608 5 months ago +8

    I swear this Lagrange guy is everywhere. Every field of math I'm trying to understand, something Lagrange pops up.

    • @alejandrogm7617
      @alejandrogm7617 24 days ago

      He's one of the infamous "Three L's": Laplace, Lagrange and Legendre.

  • @marcozelioli
    @marcozelioli 2 years ago +2

    This was perfect. Why can't professors explain like this? Easy to understand with nice graphics for the visual learners + soothing voice that helps you concentrate. Thank you!

  • @asiimwemmanuel
    @asiimwemmanuel 4 months ago +2

    I've been using this video as a reference for quite a while and just realized it was released on my birthday. Thanks for all the guidance Dr. Wood ❤

  • @purungo
    @purungo 3 years ago +22

    I love the way you explain things, it's so straightforward and intuitive that it feels like something I could've come up with. Really good explanations in this video and the one on Padé approximations, which I had never heard of before. Thanks for making great content!

  • @justdoityourself7134
    @justdoityourself7134 3 years ago +3

    I love it when the video drops all the necessary knowledge in the first 30 seconds. Well done.

  • @clementp.5984
    @clementp.5984 2 years ago +2

    I'm a French student and I have to do research on the Lagrange interpolation theorem. Your video is the only one that helped me understand the subject; not a single French video helped me with this.

  • @AJ-et3vf
    @AJ-et3vf 3 years ago +4

    Wow! Awesome explanation! I particularly love how you broke down Lagrange polynomials into the smaller order polynomials that compose it and how those were "guessed" in the first place. Very intuitive explanation. Thank you!

    • @DrWillWood
      @DrWillWood  3 years ago +1

      Thank you very much for the kind words!

  • @Angel-oq5bs
    @Angel-oq5bs 7 months ago

    Well, I'm actually dealing with maths in French, and it is my second language, but I swear that no French-speaking professor on YouTube was able to transmit this idea as smoothly as you did, sir. I appreciate that, tysm...

  • @muess
    @muess 2 years ago +2

    This is excellent. I've reviewed my materials and looked at a bunch of others online but this is far and away the best explanation I've found. Thank you!

  • @jaminkidd6285
    @jaminkidd6285 3 years ago +5

    I can tell this channel will be great for topics related to computational physics. Please keep it up!

  • @safiyajd
    @safiyajd 21 days ago

    I cannot thank you enough for this heartwarming explanation! May God bless you. Much love and respect!

  • @OneShotKill3r
    @OneShotKill3r 2 years ago

    Dr. Will, THANK YOU! I was getting all worked up over the bad explanations from my teacher and book, then I watched the first minute of your video and I got it. You rock!

    • @DrWillWood
      @DrWillWood  2 years ago +1

      That's awesome! thanks a lot!

  • @kiat4797
    @kiat4797 3 years ago +1

    Best explanation I've ever seen concerning Lagrange polynomials. I can now visualize proofs that use Lagrange interpolation. Tysm!

  • @itaisod
    @itaisod 2 years ago

    Thank you! The visual representation really makes it clear how the interpolated polynomial gets the correct values. Also, splitting it into steps rather than just giving us the entire equation all at once!

  • @DevonBomer
    @DevonBomer 2 years ago

    The graphics were super insightful! You made every nuance of the concept extremely easy to grasp! Thank you.

  • @vlasnovokhatnii4019
    @vlasnovokhatnii4019 2 years ago

    Thank you so much! Being able to present math in such a beautiful and crystal clear way is a true art, and you are a true artist!

    • @DrWillWood
      @DrWillWood  2 years ago +1

      Thanks so much. very kind words!

  • @waltzofthestars2078
    @waltzofthestars2078 2 years ago

    Great video. Just connected all the dots in my head. Thank you!
    I cannot believe that some random guys with far lesser skills in teaching dominate the search results...

  • @MURPHSOUND
    @MURPHSOUND 1 month ago

    Magic! Interpolation shows hints of the underlying mathematical/musical structure of perception. Thank you!

  • @zoomerb0y752
    @zoomerb0y752 8 months ago

    Been putting off studying these for an upcoming test as they looked really complicated. Thank you so much for this video!

  • @lucassamuel6069
    @lucassamuel6069 8 months ago

    This Lagrange guy is really smart. I see a bright future ahead for him

  • @max_kl
    @max_kl 3 years ago +1

    Awesome video, got this in my recommendations just as the professor covered it in my metrology class.

  • @busycow8334
    @busycow8334 10 months ago

    Thank you for educating me, Sire. You have my eternal gratitude

  • @kangalio
    @kangalio 3 years ago +4

    This is so beautiful
    One aha moment after the other, effortlessly

  • @takyc7883
    @takyc7883 3 years ago

    I feel blessed to have found this YouTube channel

  • @jonasdegrave5916
    @jonasdegrave5916 3 years ago +3

    This is excellent. You've helped me in Optimal Control Theory :)

    • @DrWillWood
      @DrWillWood  3 years ago

      Thank you very much! glad it was useful :-)

  • @doggydoggywho
    @doggydoggywho 3 years ago

    Students, if you are watching this, take note that this is the most brilliant explanation of this concept.

  • @redroth57
    @redroth57 3 years ago +3

    Just found your channel and I really enjoy your videos!

  • @restacks8777
    @restacks8777 2 years ago

    This is the best educational video I've ever seen.

  • @juliaclaire68
    @juliaclaire68 1 month ago

    fantastic video, thank you for taking the time to explain this! I love the music too

  • @bentationfunkiloglio
    @bentationfunkiloglio 3 years ago

    Loved the video! Explanation was clear and straightforward. Lagrange interpolation is a wonderfully useful gadget.

  • @Namerson
    @Namerson 3 years ago +6

    I'm a chemist, and have no idea what a Legendre transformation is - any chance you could do a video on them?

  • @paulmitchell6602
    @paulmitchell6602 3 years ago +1

    A music teacher told me that a really good music teacher is always a really good musician. I wonder how true that is of mathematicians. I took a BA in math at UCSC and only had a couple of really good teachers. Thanks, Dr Wood, for these very helpful videos. Almost want to take out my old Burden and Faires and write some code!

    • @hagengo
      @hagengo 3 years ago

      *Insert* Obama giving Obama a medal meme :D
      I agree though these videos are amazing.

  • @ahmedmalaq7410
    @ahmedmalaq7410 1 month ago

    The explanation is truly mesmerising.

  • @nikhilsrajan
    @nikhilsrajan 3 years ago

    omg, two minutes in and I understand what days of a course couldn't.

  • @robertwalkley4665
    @robertwalkley4665 2 years ago

    This was an excellent, clear and succinct explanatory video, thank you!

  • @gironic
    @gironic 3 years ago

    Great material. In comms, we use Lagrange interpolators in control loops doing timing recovery. Typically, we refer to them as Farrow filters. But they are, in essence, just a Lagrange polynomial. A matrix multiply derives the coefficients, and then we evaluate the polynomial at the desired sample location. In modern FPGAs, we have resolutions of 1/2^26 evaluation points between samples. They're flexible as resamplers, interpolators or phase shifters for phased antenna arrays. So many great applications.

    • @gironic
      @gironic 3 years ago

      Also, in FPGAs, where multiply and accum (MAC) HW is prevalent, the Horner decomposition is really useful.
      Y = c0 + c1*x + c2*x^2 + c3*x^3 + c4*x^4
      Y = c0 + x(c1 + x(c2 + x(c3 + c4*x)))
      It can be decomposed this way into a series of MACs for which we have fast hardware. If you have enough clocks, you can even get it done reusing 1 MAC (DSP48e2). (A small Python rendering of this appears after this thread.)

    • @DrWillWood
      @DrWillWood  3 years ago

      Awesome! thanks for sharing!

    • @drvanon
      @drvanon 3 years ago

      I'm really unfamiliar with that topic; could you give me some handholds / search terms to look this up? I'm really curious about the application of this method.

    • @gironic
      @gironic 3 years ago

      @drvanon Sure. Happy to comment. First, a fantastic reference on the subject is "Digital Signal Processing with Field Programmable Gate Arrays (Fourth Edition)" by Uwe Meyer-Baese. He covers the topic in section 5.6, Arbitrary Sample Rate Converters. Subsection 5.6.2, Polynomial Fractional Delay Design, specifically treats Lagrange polynomial interpolators.
      If you're a MATLAB user, Farrow filters are equivalent mathematically to cubic spline interp. But if you have the comms or DSP toolbox (can't remember which), there's a Fractional Rate Converter that is a Farrow and will actually give you the matrix of coefficients you need for the matrix mult to derive the polynomial coefficients.
      In HW, you only have to do one multiply for that matrix, as the other coefficients are divisions by powers of 2, so shifts work. So you do one multiply, and then a bunch of shifts and some adds. Then, for evaluating the polynomial, it's a bunch of recursive MACs. There's a bunch of bookkeeping to handle. You'll need a phase accumulator that will need to wrap. You can use this for interpolation or decimation. In the decimation case, you'll occasionally need to slip a sample. In the interp case, you'll potentially have multiple polynomial evaluations per sample location. (A minimal fractional-delay sketch appears after this thread.)

    • @gironic
      @gironic 3 years ago

      One nuance to add is that you can use a Lagrange polynomial of arbitrary length. Typically a 3rd or 4th order polynomial is used in comms work, as going beyond that doesn't give much of a return on investment. Meyer-Baese mentions that in some audio work in SW processing, they've used polynomials of very high order. I'm not sure what the motivation is.
      But with a 3rd or 4th order polynomial, it's still very efficient in HW. If you're working in something like a Zynq Ultrascale+, and are running at 350 MHz for your sysclk, and have a signal in the 30 MSPS range, this is VERY realizable with 1 DSP for real and 1 DSP for imag. (I think I use a third DSP for my phase accumulator so I can hit a good fmax).
      I do a lot of OFDM work. Typically, when I'm running my timing control loops, I'll measure the accumulated phase angle of the channel estimates and run that into a PID which is then driving the Farrow Filter resample rate. Driving the error to zero essentially locks you "on-baud" in the OFDM sense that you're triggering your FFTs precisely on the instant that you need to.
      That helps in tracking the signal timing drift, but it also means your equalizer doesn't have to work very hard... i.e. the sinusoid in the frequency domain on your channel estimate isn't there, and you have more significant bits for equalizer precision. It's worth the effort. It translates to multiple dB SNR improvement on the recovered constellation.
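
    A quick Python rendering of the Horner decomposition quoted earlier in this thread (plain floats; the fixed-point DSP-slice details are out of scope for a sketch, and the coefficients are made up):

    def horner(coeffs, x):
        """Evaluate c0 + c1*x + ... + cn*x^n as a chain of multiply-accumulates."""
        acc = 0.0
        for c in reversed(coeffs):      # one MAC per coefficient: acc = acc*x + c
            acc = acc * x + c
        return acc

    coeffs = [2.0, -1.0, 0.5, 3.0]      # c0, c1, c2, c3
    assert abs(horner(coeffs, 1.7) - (2.0 - 1.0*1.7 + 0.5*1.7**2 + 3.0*1.7**3)) < 1e-9

    And to make the "polynomial fractional delay" idea concrete without any FPGA detail, a minimal floating-point sketch of a cubic (4-tap) Lagrange fractional-delay interpolator; the test signal and delay value are arbitrary choices:

    import math

    def frac_delay_cubic(s, n0, mu):
        """Estimate s(n0 + mu), 0 <= mu < 1, from samples s[n0-1 .. n0+2] via the
        Lagrange cubic through the local nodes (-1, 0, 1, 2)."""
        nodes = (-1, 0, 1, 2)
        out = 0.0
        for i, ni in enumerate(nodes):
            li = 1.0
            for j, nj in enumerate(nodes):
                if j != i:
                    li *= (mu - nj) / (ni - nj)
            out += s[n0 + ni] * li
        return out

    f = 0.02                                         # slow sinusoid relative to the sample rate
    s = [math.sin(2 * math.pi * f * n) for n in range(100)]
    est = frac_delay_cubic(s, 50, 0.37)              # the value "between" samples 50 and 51
    true = math.sin(2 * math.pi * f * 50.37)
    assert abs(est - true) < 1e-4                    # cubic Lagrange is very accurate for slow signals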

  • @ar3568row
    @ar3568row 2 years ago

    Best way of introducing Lagrange interpolation.

  • @nitsanbh
    @nitsanbh 3 years ago

    Superb video. Great explanation. Covers everything, without any bs. Thank you

  • @tartaric2040
    @tartaric2040 3 years ago +1

    I feel lucky to have access to such content

  • @conando025
    @conando025 3 years ago +2

    Thank you so much! I stumbled upon this when researching Shamir's Secret Sharing algorithm, but at that point the formulas just went straight over my head. This is explained so clearly, I love it!
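
    Since the comment above mentions it, here is a minimal sketch of Shamir's secret sharing built directly on Lagrange interpolation over a prime field; the prime, threshold and share count are illustrative choices, not a hardened implementation:

    import random

    P = 2**61 - 1                                    # a Mersenne prime used as the field modulus

    def make_shares(secret, k, n):
        """Hide `secret` as f(0) of a random degree k-1 polynomial; return n shares (x, f(x))."""
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        shares = []
        for x in range(1, n + 1):
            y = 0
            for c in reversed(coeffs):               # Horner evaluation mod P
                y = (y * x + c) % P
            shares.append((x, y))
        return shares

    def recover(shares):
        """Lagrange-interpolate the shares at x = 0 to get the secret back."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if j != i:
                    num = num * (0 - xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = make_shares(secret=123456789, k=3, n=5)
    assert recover(shares[:3]) == 123456789          # any 3 of the 5 shares reconstruct the secret
    assert recover(shares[2:]) == 123456789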

  • @arshawitoelar7675
    @arshawitoelar7675 3 years ago +2

    As an 11th grader this video was a trip

  • @SantiagoMorales-w1s
    @SantiagoMorales-w1s 9 months ago

    what a brilliant video mate, saving my degree haha keep it up

  • @davidmurphy563
    @davidmurphy563 2 years ago +1

    Ugh, I just used Lagrange interp to make a camera fly about between points maybe 5 minutes ago; if I was asked whether I understood it, I'd have nodded my head and said "pretty decent understanding". After this video I'm revising my self-appraisal to "I'm effectively clueless".

  • @liqs_xd
    @liqs_xd 1 year ago

    Great video, thanks, it helped me understand the concept.

  • @mattiafiore7568
    @mattiafiore7568 1 year ago

    this video is perfection

  • @triton62674
    @triton62674 2 years ago

    Wonderful, thanks for the clear and concise video

  • @weetabixharry
    @weetabixharry 3 years ago +2

    Is this method unsuitable if there's any noise in the original observation of the "nodes"? If this method guarantees the lowest order polynomial, then I guess it could give a polynomial of order 1 million that *exactly* fits the nodes, instead of a very low order polynomial that *almost* fits the nodes.

    • @drvanon
      @drvanon 3 years ago +1

      My background is in physics and this is exactly what I was wondering too! My thought was that it introduces a new variable for each data point, which practically never works with "real life data".
      I'm curious if that means this is not suited for fitting, or if it can be modified to be more suitable. If the former, what then are the practical uses of this?

    • @f1uc1k1y1o1u
      @f1uc1k1y1o1u 1 year ago

      The higher the order of the polynomial, the more that noise yields vastly different results. It gets more unstable as you add more points.
      The polynomial will still always fit the intended points, but the interpolating polynomials become wildly different for more and more points.
      If approximations robust to noise are what you need, you can try other interpolation methods, like Fourier.
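
    A small numpy/scipy sketch of the instability described in the reply above, comparing the exact degree-14 interpolant of noisy samples with a low-degree least-squares fit; the test function, noise level and degrees are arbitrary illustrative choices:

    import numpy as np
    from scipy.interpolate import lagrange

    rng = np.random.default_rng(0)
    xs = np.linspace(-1.0, 1.0, 15)                                  # 15 equispaced nodes
    ys = np.sin(np.pi * xs) + 0.05 * rng.standard_normal(xs.size)    # noisy observations

    exact = lagrange(xs, ys)                                # degree-14 interpolant (hits every noisy node)
    smooth = np.polynomial.Polynomial.fit(xs, ys, deg=5)    # low-degree least-squares fit

    grid = np.linspace(-1.0, 1.0, 1001)
    err_exact = np.max(np.abs(exact(grid) - np.sin(np.pi * grid)))
    err_smooth = np.max(np.abs(smooth(grid) - np.sin(np.pi * grid)))
    print(err_exact, err_smooth)    # the exact interpolant typically strays far more, worst near the ends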

  • @fakestory1753
    @fakestory1753 3 years ago

    You make this so easy to understand.

  • @coltonbourque693
    @coltonbourque693 1 year ago

    Wow, really well explained! Thanks

  • @gghelis
    @gghelis 3 years ago

    What did Scorpion say, when he found his family dead? 4:32

  • @rodanmuro
    @rodanmuro 2 years ago

    Well... I'm here only to say "beautiful explanation" and thank you so much!!

  • @janurek3050
    @janurek3050 3 years ago

    You don't know how unnecessarily complicated this material was made in university classes. I'm amazed at how clearly and simply you explain what others couldn't. Thanks a lot!

  • @EngMostafaEssam
    @EngMostafaEssam 3 years ago +1

    Thanks a lot ❤
    Please can you make a video teaching us how we can draw and animate graphs like the animation in this video?

  • @mosab643
    @mosab643 1 year ago

    5:23 Why use L(x) here and not p(x)?

  • @ziningwang1807
    @ziningwang1807 3 years ago

    Wonderful illustration! Thank you!

  • @surabhi6576
    @surabhi6576 2 years ago

    Amazing video! Thank you for this.

  • @knaz7468
    @knaz7468 3 years ago

    This really cleared things up for me. Thanks!

  • @brianyeh2695
    @brianyeh2695 8 months ago

    Very clear! Thank you!

  • @everydaySupremacey
    @everydaySupremacey 3 years ago

    Wonderfully explained!

  • @adrianv.v.4445
    @adrianv.v.4445 3 years ago

    How didn't I find this channel earlier? Amazing content!

  • @estebancanizales3303
    @estebancanizales3303 2 years ago

    BEAUTIFUL THANK YOU SO MUCH

  • @robertwalkley4665
    @robertwalkley4665 2 years ago +1

    How is it guaranteed that the basis functions all add to 1 at any point over the domain?

    • @f1uc1k1y1o1u
      @f1uc1k1y1o1u 1 year ago

      7 months late to the party but here's an answer.
      Recall that a nonzero polynomial of degree n can be written as a leading coefficient c times a product of linear factors over its roots, c(x - r_1)(x - r_2)...(x - r_n), so a nonzero polynomial of degree at most n has at most n roots.
      Now look at the sum of the Lagrange basis polynomials minus 1. Each basis polynomial l_i(x) has degree n (its n linear factors give it a root at every node except x_i, where it is defined to be 1), so
      sum_i l_i(x) - 1
      is a polynomial of degree at most n. But at each of the n+1 nodes the sum equals 1 (one basis function contributes 1, the rest contribute 0), so this difference has n+1 distinct roots.
      A nonzero polynomial of degree at most n with n+1 roots would contradict the factorisation above (equivalently, the Fundamental Theorem of Algebra), so the only way out is that the difference is the zero polynomial. Hence sum_i l_i(x) = 1 identically: the Lagrange basis functions sum to the constant 1 polynomial at every point.
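
    The identity argued in the reply above is easy to check numerically; a short sketch with arbitrary nodes:

    import numpy as np

    nodes = np.array([-2.0, -0.5, 1.0, 3.0, 4.5])      # any distinct nodes
    x = np.linspace(-3.0, 5.0, 9)                       # arbitrary evaluation points

    total = np.zeros_like(x)
    for i, xi in enumerate(nodes):
        li = np.ones_like(x)
        for j, xj in enumerate(nodes):
            if j != i:
                li *= (x - xj) / (xi - xj)              # the i-th Lagrange basis polynomial
        total += li

    assert np.allclose(total, 1.0)                      # the basis functions sum to 1 everywhere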

  • @timothymattnew
    @timothymattnew 3 years ago +1

    Wow, this is sweet!

  • @ananditatiwari21
    @ananditatiwari21 2 years ago

    broooooooooooo this was freakin genius

  • @sammyapsel1443
    @sammyapsel1443 1 year ago

    Nice video, question: if you want to find an interpolation with a polynomial of order n+1 for n+1 nodes, what is a good method of going about it? (I assume there must be some extra degree of freedom, since this will underdetermine the system of equations)
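
    One way to make the "extra degree of freedom" in the question above concrete: if p interpolates the n+1 nodes, then so does p(x) + c*(x - x_0)(x - x_1)...(x - x_n) for every constant c, which gives a one-parameter family of degree-(n+1) interpolants. A quick check (the nodes and c are made up):

    def interp(points, x):
        """Lagrange form of the minimal-degree interpolant through `points`."""
        total = 0.0
        for i, (xi, yi) in enumerate(points):
            li = 1.0
            for j, (xj, _) in enumerate(points):
                if j != i:
                    li *= (x - xj) / (xi - xj)
            total += yi * li
        return total

    pts = [(-1.0, 2.0), (0.0, -1.0), (2.0, 4.0)]       # 3 nodes -> minimal interpolant has degree <= 2
    c = 0.7                                            # any value works: this is the free parameter

    def q(x):                                          # a cubic through the same 3 nodes
        prod = 1.0
        for xi, _ in pts:
            prod *= (x - xi)
        return interp(pts, x) + c * prod

    assert all(abs(q(xi) - yi) < 1e-9 for xi, yi in pts)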

  • @typecasto
    @typecasto 3 years ago

    The graph of them all together reminds me of the influence of the different points in a Bézier curve, any relation?

  • @linguamathematica2582
    @linguamathematica2582 3 years ago

    Fantastic explanation, thank you!

  • @ayushagrawal8198
    @ayushagrawal8198 3 years ago

    Great video!! Bumped into it accidentally but found a gem. Also, which software are you using for these amazing animations?

  • @overcomplete
    @overcomplete 2 years ago

    Very cool; would have loved to see an example at the end where the polynomial is of lower order than the number of points.

  • @Dolkarr
    @Dolkarr 3 years ago +1

    Isn't this equivalent to having a single polynomial of degree n and just solving the set of equations given by the points we wish to interpolate?

    • @schrodingerbracat2927
      @schrodingerbracat2927 3 years ago +1

      Yes. For the Lagrange method, no inverse matrix or Gauss-Jordan procedure is needed. There is also another method: Newton interpolation (with simpler formulas for equally-spaced x-coordinates). All methods will yield the same result, because the interpolation polynomial is unique.

    • @f1uc1k1y1o1u
      @f1uc1k1y1o1u 1 year ago

      The barycentric Lagrange method is algorithmically superior to the Vandermonde method.
      Adding new data points is more efficient in the barycentric Lagrange method; the Vandermonde system unfortunately must be recalculated entirely as new data points are entered.
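
    A minimal sketch of the barycentric form mentioned in the reply above, showing the cheap O(n) weight update when a node is added (the data values are illustrative):

    def bary_weights(xs):
        """w_i = 1 / prod_{j != i} (x_i - x_j)."""
        ws = []
        for i, xi in enumerate(xs):
            w = 1.0
            for j, xj in enumerate(xs):
                if j != i:
                    w /= (xi - xj)
            ws.append(w)
        return ws

    def bary_eval(xs, ys, ws, x):
        """Second (true) barycentric form of the interpolant."""
        num = den = 0.0
        for xi, yi, wi in zip(xs, ys, ws):
            if x == xi:
                return yi                              # exactly at a node
            t = wi / (x - xi)
            num += t * yi
            den += t
        return num / den

    def add_node(xs, ys, ws, x_new, y_new):
        """O(n) update: divide each old weight by (x_i - x_new), then compute the new weight."""
        ws = [w / (xi - x_new) for w, xi in zip(ws, xs)]
        w_new = 1.0
        for xi in xs:
            w_new /= (x_new - xi)
        return xs + [x_new], ys + [y_new], ws + [w_new]

    xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
    ws = bary_weights(xs)
    xs, ys, ws = add_node(xs, ys, ws, 4.0, 5.0)        # extend to 4 points without starting over
    assert abs(bary_eval(xs, ys, ws, 2.5) - bary_eval(xs, ys, bary_weights(xs), 2.5)) < 1e-9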

  • @osamaelatfi2583
    @osamaelatfi2583 1 year ago

    Fantastic explanation

  • @clorox6447
    @clorox6447 2 years ago

    great and clean video!

  • @joaofrancisco8864
    @joaofrancisco8864 3 years ago +1

    Really well explained!

  • @tidaimon2149
    @tidaimon2149 1 year ago

    Thank you for your video!

  • @samuelmcdonagh1590
    @samuelmcdonagh1590 3 years ago

    good video. liked the music. concise.

  • @p4rkoursnip3r47
    @p4rkoursnip3r47 10 months ago

    So p(x) becomes the polynomial that passes through the entire set of data points?

  • @sultandaniels1732
    @sultandaniels1732 2 years ago

    absolutely beautiful video

    • @DrWillWood
      @DrWillWood  2 years ago

      Thanks a lot! appreciate the kind words

  • @BetaChri5
    @BetaChri5 1 year ago

    Thank you, very good explanation! :)

  • @Kuratius
    @Kuratius 3 years ago

    Can't you combine Padé approximants with this if you just substitute the Taylor series with a Lagrange polynomial?

  • @jaimeduncan6167
    @jaimeduncan6167 1 year ago

    A very important point is that the interpolating polynomial can behave totally differently from the underlying function outside the selected points, and even between them.
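
    The classic illustration of the point above is Runge's function 1/(1 + 25x^2) interpolated at equispaced nodes; between (and especially near the outermost) nodes the interpolant strays far from the function. A small numpy sketch with the usual textbook setup:

    import numpy as np

    def f(x):
        return 1.0 / (1.0 + 25.0 * x**2)

    nodes = np.linspace(-1.0, 1.0, 11)                 # 11 equispaced nodes -> degree-10 interpolant
    vals = f(nodes)

    def lagrange_eval(xs, ys, x):
        total = np.zeros_like(x)
        for i, xi in enumerate(xs):
            li = np.ones_like(x)
            for j, xj in enumerate(xs):
                if j != i:
                    li *= (x - xj) / (xi - xj)
            total += ys[i] * li
        return total

    grid = np.linspace(-1.0, 1.0, 2001)
    p = lagrange_eval(nodes, vals, grid)
    print(np.max(np.abs(p - f(grid))))                 # roughly 2 here, even though f never leaves (0, 1]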

  • @AntonioLasoGonzalez
    @AntonioLasoGonzalez 1 year ago

    Very good explanation!

  • @paulkolodner2445
    @paulkolodner2445 3 years ago +1

    And this technique can be used to interpolate functions of more than one variable.

    • @justdoityourself7134
      @justdoityourself7134 3 years ago

      Yes! Of any variable count, and in any configuration you can imagine (degree of each variable). As long as you don't run out of data points. I like to think of it as a fractal reduction of functions.
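
    The simplest multi-variable configuration hinted at above is a tensor-product grid, where the 2-D basis is just a product of 1-D Lagrange bases; a minimal sketch with made-up grid data:

    def basis(xs, i, x):
        """1-D Lagrange basis polynomial l_i(x) for the nodes xs."""
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xs[i] - xj)
        return li

    def interp2d(xs, ys, F, x, y):
        """Tensor-product interpolant: sum_{i,j} F[i][j] * l_i(x) * m_j(y)."""
        return sum(F[i][j] * basis(xs, i, x) * basis(ys, j, y)
                   for i in range(len(xs)) for j in range(len(ys)))

    xs, ys = [0.0, 1.0, 2.0], [0.0, 0.5, 1.0, 2.0]
    F = [[xi**2 + 3.0 * yj for yj in ys] for xi in xs]          # samples of g(x, y) = x^2 + 3y
    assert abs(interp2d(xs, ys, F, 1.3, 0.8) - (1.3**2 + 3.0 * 0.8)) < 1e-9   # g is reproduced exactly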

  • @Unknownfor13
    @Unknownfor13 8 months ago

    Very well explained

  • @hongminh4963
    @hongminh4963 2 years ago

    It brings back memories of my glorious days that I used this thing to predict pattern of the prime numbers. No matter how hard I tried, everything just failed successfully.

  • @matematicaspanish8301
    @matematicaspanish8301 2 years ago

    Loved it!

  • @sarabasheer-jc7ws
    @sarabasheer-jc7ws 9 months ago

    At the very end of the video, shouldn't it be that m is less than or equal to n (not strictly less than)?

    • @sarabasheer-jc7ws
      @sarabasheer-jc7ws 9 months ago

      Because if m is strictly less than n, the proof misses the case where the interpolating polynomial has degree exactly n, i.e. when the coefficients of the highest-order terms do not add up to zero.

  • @user-pr6ed3ri2k
    @user-pr6ed3ri2k 1 year ago

    1:13 I can think of a way to make a polynomial with the same property but not of minimal degree.

  • @christianchavez2202
    @christianchavez2202 3 years ago

    This is great!

  • @rcangelosi
    @rcangelosi 1 year ago

    Very nice explanation.

  • @maestro_100
    @maestro_100 1 year ago

    Thank You Soooo Much Sir!

  • @lukasbalmer9137
    @lukasbalmer9137 5 months ago

    what a great video!

  • @nalayak862
    @nalayak862 1 year ago

    Wow dude, understanding something so thoroughly means you don't have to actively memorise facts/formulae; I can feel the equations now.

  • @child7400
    @child7400 1 year ago

    At 3:39, why did you write "n+1" and not just "n"?

    • @mostafagamed
      @mostafagamed 4 months ago

      Because the given points run from 1 to n, in addition to the point 0, making n+1 points in total.

  • @reijerboodt8715
    @reijerboodt8715 3 years ago

    What if there is a repeated desired output value in the y_i list? Dividing by l_i(x_i) won't work in that case haha

    • @DrWillWood
      @DrWillWood  3 years ago

      Yes, you're right! Polynomial interpolation is provably impossible if there are 2 distinct y_i values for a repeated x_i node. The best proof IMO uses the Vandermonde matrix, which I am writing a video about at the moment :-)
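
    A small numpy sketch of the Vandermonde viewpoint mentioned in the reply above: solving V a = y gives the same interpolant as the Lagrange construction, and a repeated node makes V singular, which is exactly the impossibility in question (the node values are illustrative):

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 4.0])
    y = np.array([1.0, 3.0, 2.0, 5.0])

    V = np.vander(x, increasing=True)                  # rows [1, x_i, x_i^2, x_i^3]
    a = np.linalg.solve(V, y)                          # monomial coefficients of the interpolant
    assert np.allclose(V @ a, y)                       # the cubic passes through every node

    x_bad = np.array([0.0, 1.0, 1.0, 4.0])             # repeated node x_i = 1
    V_bad = np.vander(x_bad, increasing=True)
    print(np.linalg.det(V_bad))                        # ~0: singular, so no unique interpolant exists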

  • @ianprado1488
    @ianprado1488 2 years ago

    Bro, your videos are amazing