Padé Approximants

  • Published 16 Jun 2024
  • In this video we'll talk about Padé approximants: what they are, how to calculate them, and why they're useful.
    Chapters:
    0:00 Introduction
    0:33 The Problem with Taylor Series
    2:11 Constructing Padé Approximants
    4:50 Why Padé Approximants are useful
    5:45 Summary
    Supporting the Channel.
    If you would like to support me in making free mathematics tutorials then you can make a small donation over at
    www.buymeacoffee.com/DrWillWood
    Thank you so much, I hope you find the content useful.

COMMENTS • 622

  • @3blue1brown
    @3blue1brown 2 роки тому +6679

    I had never known about Padé approximations, and you did such a good job motivating and explaining them. Also, the way you mentioned how it seems almost unreasonably effective, like getting something for nothing, definitely felt like it was speaking directly to the thought passing through my mind at that moment.

    • @DrWillWood
      @DrWillWood  2 роки тому +1424

      Thanks so much! Padé approximants still feel kind of magic to me and it's definitely that they aren't well known that made me want to make a video on them. Also, does this mean I have bragging rights that "I taught 3b1b some mathematics" :-D

    • @T3sl4
      @T3sl4 2 роки тому +25

      I'm excited!

    • @remicornwall754
      @remicornwall754 2 роки тому +21

      Qualitatively and intuitively, is it because when you truncate a Taylor series at some order O(n) you are doing precisely that, whereas a rational polynomial approximation O(m)/O(n) could really be of order greater than m or n once it is divided out? E.g. for 1/(1-x) the series is infinite, so you must be accruing all that accuracy because O(m)/O(n) is not really truncated at order O(m) or O(n) but is potentially an infinite series, even though you write it compactly as a rational fraction.
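
A quick numeric sketch of this intuition (the function 1/(1-x) and the sample point x = 0.9 are illustrative choices, not taken from the video): a truncated Taylor polynomial is stuck at a finite degree, while the simplest [0/1] rational form, built from only the first two Taylor coefficients, already encodes the entire geometric series.

```python
# Taylor coefficients of 1/(1-x) at x = 0 are all 1; use just the first two.
c0, c1 = 1.0, 1.0

# [0/1] form a0 / (1 + b1*x): matching the x^1 term requires c1 + b1*c0 = 0.
b1 = -c1 / c0
pade_01 = lambda x: c0 / (1 + b1 * x)          # recovers 1/(1-x) exactly

x = 0.9
taylor_5 = sum(x**k for k in range(6))         # degree-5 truncation of the series
print(f"exact: {1/(1-x):.3f}   degree-5 Taylor: {taylor_5:.3f}   [0/1] rational: {pade_01(x):.3f}")
```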

    • @curtiswfranks
      @curtiswfranks 2 роки тому +78

      It is not quite for nothing. You are calculating twice as many interrelated coefficients (minus 1), and the overall function is more complicated (which makes computation for any given x harder and further dependent on precision). And that is AFTER having to calculate the Taylor expansion, so it is triple the amount of work, and is 'refining' an approximation which was already rather nice and effective in some respects. It still seems unreasonably effective to me, a bit, but it is not entirely for free. There probably are also some theoretical costs associated with going from polynomials (extremely nice, highly-constrained functions) to rational functions (which are just slightly less so).

    • @phiefer3
      @phiefer3 2 роки тому +62

      @@curtiswfranks I don't think it's fair to say that you calculate twice as many coefficients; in general the number of coefficients is roughly equal. In the Padé approximation you have m coefficients in the numerator and n coefficients in the denominator, and you construct it from an m+n order Taylor series, so they should literally be equal. In the example the Taylor series only had fewer because some of the coefficients were 0, but you still have to calculate those coefficients anyway.
      As far as the "something from nothing", I think what is meant is that you get this extra precision without taking any more information from the original function. I could literally give you a Taylor series without telling you what it is supposed to be approximating, and you could build a better approximation even though you don't even know what it is you are approximating.
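
A minimal sketch of that "from the coefficients alone" idea (my own code, not the video's): solve the [n/m] linear system directly from a handed-over list of Taylor coefficients, with no other knowledge of the underlying function.

```python
import numpy as np

def pade_from_taylor(c, n, m):
    """Given Taylor coefficients c[0..n+m], return numerator p[0..n] and denominator
    q[0..m] (with q[0] = 1) of the [n/m] rational approximant matching the series."""
    c = np.asarray(c, dtype=float)
    # Denominator: for k = n+1 .. n+m require  c[k] + sum_j b[j]*c[k-j] = 0.
    A = np.array([[c[k - j] if k >= j else 0.0 for j in range(1, m + 1)]
                  for k in range(n + 1, n + m + 1)])
    b = np.linalg.solve(A, -c[n + 1:n + m + 1])
    q = np.concatenate(([1.0], b))
    # Numerator: for k = 0 .. n,  p[k] = sum_j q[j]*c[k-j].
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, m) + 1)) for k in range(n + 1)])
    return p, q

# These five numbers are handed over "anonymously"; they happen to be the order-4
# Taylor coefficients of exp(-x), but nothing below uses that fact.
c = [1, -1, 1/2, -1/6, 1/24]
p, q = pade_from_taylor(c, 2, 2)
print("numerator:  ", p)   # expected [1, -0.5, 0.0833...]
print("denominator:", q)   # expected [1,  0.5, 0.0833...]

x = 3.0
print("rational:", np.polyval(p[::-1], x) / np.polyval(q[::-1], x),
      " plain Taylor:", np.polyval(c[::-1], x),
      " exact exp(-3):", np.exp(-3))
```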

  • @jamesblank2024
    @jamesblank2024 2 роки тому +470

    Padé approximations shine in analog filter design where you have poles and zeros. They are particularly effective in analog delay lines.
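
For readers who haven't seen the delay-line trick: a rough sketch (my own toy example with a made-up 1 ms delay, not from the video) of the standard first-order Padé replacement of a pure delay e^(-sT) by the all-pass transfer function (1 - sT/2)/(1 + sT/2).

```python
import numpy as np

T = 1e-3                                                       # hypothetical 1 ms delay
w = 2 * np.pi * np.array([10.0, 50.0, 100.0, 200.0, 400.0])    # a few angular frequencies, rad/s
s = 1j * w

exact = np.exp(-s * T)                        # true delay e^(-sT) on the imaginary axis
pade1 = (1 - s * T / 2) / (1 + s * T / 2)     # first-order Padé replacement

print("|pade1| (all-pass, should be 1):", np.round(np.abs(pade1), 6))
print("phase error vs true delay (rad):", np.round(np.angle(pade1 / exact), 4))
```

The magnitude is exactly 1 at every frequency; the phase error stays tiny at low frequency and grows as w*T approaches pi, which is where higher-order Padé delay models are normally brought in.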

    • @5ty717
      @5ty717 10 місяців тому +3

      Beautiful

    • @Eizengoldt
      @Eizengoldt 10 місяців тому +8

      Nerd

    • @pbs1516
      @pbs1516 9 місяців тому +10

      We also use them in control theory, for the same reasons (a finite-order modelling of a time delay). It's becoming an old man's trick though, now that most of what we're doing in practice is in discrete time (where delays can be treated as a linear system with the right multiplicity!)

    • @alexanderlea2293
      @alexanderlea2293 9 місяців тому +3

      As soon as he mentioned the fractional polynomial, my mind went to laplace and filters. It's such a natural fit.

    • @yash1152
      @yash1152 9 місяців тому

      Thanks, I was looking for a comment mentioning its use-case areas.

  • @carl8703
    @carl8703 2 роки тому +1385

    In general, expressions of the form at 0:23 are interesting since they have a few nice properties:
    1.) Much like how polynomials are capable of expressing any function composed from addition, subtraction, and multiplication (for a finite number of operations, at the very least), expressions of the form at 0:23 do the same thing, but for addition, subtraction, multiplication, *and division*. Another way of putting it is that they are capable of describing any function defined on a field. This might help towards explaining why Padé approximants can be so much more effective than Taylor series.
    2.) Approximants of that form are used all the time in highly realistic graphics shaders. This is because they can be used to create fast approximations of functions whose real values could not be calculated in the time it takes to render a frame. Unlike polynomials, they can behave very well over their entire domain, and they avoid large exponents that could introduce floating point precision issues, both of which are important when you need to guarantee that a shader will not create graphical artifacts in a limited environment where all you have to work with is 32 bit floating point precision. They also avoid calls to advanced functions like sin() or exp(), which again makes their execution especially fast.
    3.) You don't always need the derivatives of a function to find such an approximant. For instance, if you know that a function has an asymptote, or that it assumes a certain value at 0, or that it's symmetric, or that it tends towards a number at ±∞, then that automatically tells you something about the coefficients within the approximant. It then becomes much easier for you to run an optimization algorithm on a dataset to find good values for the remaining coefficients (see the fitting sketch after this list). Christophe Schlick gives an excellent example of this approach in "An Inexpensive BRDF Model for Physically-based Rendering" (1994).
    4.) Multivariate versions of the approximant are a thing, too. To see how such a thing can be done, simply start from the proof for the statement in 1.) but now instead of working with real values and a variable "x" as elements, you'll also be working with another variable "y". As an example, for 3rd order bivariate approximants you'll wind up with polynomials in your numerators and denominators that have the form p₁xxx +p₂xxy + p₃xyy + p₄yyy + p₅xx + p₆xy + p₇yy + p₈x + p₉y + p₁₀
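
A minimal fitting sketch for point 3 above. Everything here (the target exp(-x), the interval [0, 10], and the chosen degrees) is a made-up illustration, not something from the video or the Schlick paper: once the known constraints fix the form, the remaining coefficients drop out of an ordinary least-squares fit, with no derivatives needed.

```python
import numpy as np

# Target: f(x) = exp(-x) on [0, 10].  Choose the form r(x) = 1 / (1 + b1*x + b2*x**2):
#   - r(0) = 1 is built in, because f(0) = 1;
#   - the denominator has higher degree, so r(x) -> 0 as x -> infinity, like f.
x = np.linspace(0.0, 10.0, 400)
f = np.exp(-x)

# Since r = 1/q, fitting q(x) = 1 + b1*x + b2*x**2 to 1/f(x) is *linear* least squares.
# Weight the residuals by f**2 so that the error in r itself (not in 1/r) is minimised.
w = f**2
A = np.column_stack([x, x**2]) * w[:, None]
rhs = (1.0 / f - 1.0) * w
b1, b2 = np.linalg.lstsq(A, rhs, rcond=None)[0]

r = 1.0 / (1.0 + b1 * x + b2 * x**2)
print(f"b1 = {b1:.4f}, b2 = {b2:.4f}, max |f - r| on [0, 10] = {np.abs(f - r).max():.3g}")
```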

    • @DrWillWood
      @DrWillWood  2 роки тому +318

      Wow, that's so awesome! All news to me as well! Thanks for taking the time to write this post; I appreciate it, as will people watching this video, I'm sure.

    • @Peter-bg1ku
      @Peter-bg1ku 2 роки тому +26

      Thank you @carl. You almost made me shed a tear with your post.

    • @diegohcsantos
      @diegohcsantos 2 роки тому +27

      @@DrWillWood Is there any result about the error estimation for a given Padé approximant? Something like Lagrange's form for the remainder of a Taylor polynomial?

    • @dlevi67
      @dlevi67 2 роки тому +14

      This comment should be pinned - it adds so much extra information to an already excellent video!

    • @TheBasikShow
      @TheBasikShow 2 роки тому +12

      I don’t see how (1) could be true. The Weierstrass function is defined on a commutative field (ℝ) but doesn’t have a Taylor approximation, and therefore (if I understand correctly) doesn’t have a Padé approximation. Maybe you meant that Padé approximations can express any functions which are derived from adding, subtracting, multiplying and dividing the identity f(x) = x?

  • @Elies313E
    @Elies313E 2 роки тому +61

    The algorithm recommended this video to me. I'm so thankful, because this is beautiful and very useful.

    • @Elies313E
      @Elies313E 8 місяців тому

      @@chonchjohnch Which part?

  •  2 роки тому +316

    This is an excellent explanation of something I didn't know existed. Yet it's so simple and elegant. I'm working on a Machine Learning playlist on linear regression and kernel methods and I wish I had seen this video earlier! I'll play around with Padé approximants for a while and see where this leads me.
    Thank you for this interesting new perspective!

    • @helpicantgetoffofyoutube
      @helpicantgetoffofyoutube 9 місяців тому +8

      Hello there! It's been a year. I just watched the video, and now I wonder what you managed to do since then

  • @Sarsanoa
    @Sarsanoa 2 роки тому +93

    Oh nice. e^-x is a very common function to Padé approximate in linear control theory because it's the Laplace transform of a uniform time delay. Notably, x in this context is a complex number, yet it still works. I've never understood how it was computed until now.
    I think the aha moment is realizing we are discarding all higher order terms when we perform the equation balance. This is the key reason why the Padé approximation isn't just equal to the Taylor approximation.
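
Spelling out that balance step for the e^(-x) example (a hand-worked sketch in Python; the [2/2] order is my choice and the notation is not necessarily the video's):

```python
import numpy as np

# With exp(-x) ~ c0 + c1*x + c2*x^2 + c3*x^3 + c4*x^4 and the ansatz
#   (a0 + a1*x + a2*x^2) / (1 + b1*x + b2*x^2),
# multiplying through and matching powers of x gives
#   x^0: a0 = c0               x^3: 0 = c3 + b1*c2 + b2*c1
#   x^1: a1 = c1 + b1*c0       x^4: 0 = c4 + b1*c3 + b2*c2
#   x^2: a2 = c2 + b1*c1 + b2*c0
# and everything of order x^5 and higher is simply thrown away.
c0, c1, c2, c3, c4 = 1, -1, 1/2, -1/6, 1/24

b1, b2 = np.linalg.solve([[c2, c1], [c3, c2]], [-c3, -c4])
a0, a1, a2 = c0, c1 + b1 * c0, c2 + b1 * c1 + b2 * c0

print(f"({a0} + {a1}*x + {a2:.4f}*x^2) / (1 + {b1}*x + {b2:.4f}*x^2)")
# expected: (1 - x/2 + x^2/12) / (1 + x/2 + x^2/12)
```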

    • @mstarsup
      @mstarsup 2 роки тому +14

      The case you're talking about can be computed much more easily actually. Just write e^(-x) = e^(-x/2)/e^(x/2), and use Taylor expansion for e^(-x/2) and for e^(x/2) :)

  • @romajimamulo
    @romajimamulo 2 роки тому +52

    I've never heard of this before, and after judging so many bad entries, this is a breath of fresh air

    • @DrWillWood
      @DrWillWood  2 роки тому +8

      Thanks a lot! It's one of my favourite things about maths YouTube, coming across concepts you wouldn't have otherwise! Also, good luck with SoME1 :-)

    • @romajimamulo
      @romajimamulo 2 роки тому +3

      @@DrWillWood thank you for the good luck. Check out mine if you have time (it's a 5 part series, but for judging, only the first part is required).

    • @ciarfah
      @ciarfah 2 роки тому

      We used these in Control Theory to approximate time delays (exponentials) in the transfer function

  • @tbucker2247
    @tbucker2247 2 роки тому +70

    Mech Eng grad student here. This is my "did you know?!" flex for the next couple weeks. Amazing video, thanks!!

    • @isaackay5887
      @isaackay5887 2 роки тому +10

      Let's be realistic though...you and I both know we're still gonna regard *sin(x)≈x* _for small x_ lol

    • @janami-dharmam
      @janami-dharmam 2 роки тому +1

      Basically you are using M+N terms of the Taylor series but only M and N terms for evaluation. This is computationally very efficient.

    • @oniflrog4487
      @oniflrog4487 2 роки тому +1

      @@isaackay5887 MechEs that work in Rotordynamics: *small* x? what you mean!?
      🤣

    • @cdenn016
      @cdenn016 10 місяців тому +1

      If you want to be a summation god then check out the book by Carl Bender. Elite 💪

  • @eulefranz944
    @eulefranz944 2 роки тому +33

    Finally! I encountered these so often in physics papers. Finally I get it!

    • @MrKatana333
      @MrKatana333 2 роки тому +5

      What physics papers? I was just wondering how this could be applied in physics. Can you give some reference please? Thanks!

    • @felosrg1266
      @felosrg1266 9 місяців тому

      In which fields did you find those Padé approximations in use?

  • @ZakaiOlsen
    @ZakaiOlsen 2 роки тому +163

    Having spent a great deal of time reading up on Padé approximants and struggling to find easy-to-understand introductory examples, it is extremely exciting to see content such as this being put out there for people to learn. Fantastic job motivating the need and demonstrating the utility of these rational approximations. In my personal explorations, I have found multipoint Padé approximations to be very cool; being able to capture asymptotic behaviors for both large and small x, or around poles / points of interest, is very handy. Keep up the awesome work!

    • @DonMeaker
      @DonMeaker 2 роки тому +6

      Ratios of polynomials are used in aircraft flight controls. Normally, flight test attempts to measure aircraft handling qualities in a ratio of two second order polynomials, even though modern digital flight controls may be much higher order functions.

    • @mjmlvp
      @mjmlvp 10 місяців тому +1

      For a detailed description you should also see the video series of lectures by Carl Bender on YouTube, "Mathematical Physics".

  • @rolfexner9557
    @rolfexner9557 2 роки тому +138

    It seems natural to choose M = N. What are the situations where there is an advantage in choosing M > N or N > M, where I have a "budget" of M+N coefficients that I want to work with?

    • @ishimarubreizh3726
      @ishimarubreizh3726 2 роки тому +80

      Having M=N means you can choose a finite non-zero limit for the Padé approximant, according to the behavior of the function you want to capture. In this case the denominator had a larger order and therefore made the fraction go to 0 at infinity. If it were the other way around (N>M) it would blow up and you would lose one of the motivations for using Padé, but having polynomials with roots in the denominator lets you better describe poles located at finite x, I would say, where the Taylor expansion fails to even exist.
      Edit: it seems this reply was liked a few times, so I would like to add that going to infinity is not an actual loss of motivation. It is great to have an expression that also captures the asymptotic behavior even when there is no finite limit, whereas Taylor blows up without following the divergence of the function.

    • @pierrecurie
      @pierrecurie 2 роки тому +26

      M=0 is just an ordinary Taylor series, so that's at least 1 use for N>M. From that perspective, it can also be seen that the Taylor series is a special case of the Padé approximant.

    • @viliml2763
      @viliml2763 2 роки тому +28

      Your function will behave like x^(N-M) asymptotically.
      For example, with a function like sine that stays bounded, you might want M=N. With a function like e^-x that tends to zero, you might want M>N. And so on.
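
A small sketch of that trade-off (my own example; it assumes SciPy's pade helper, which solves the usual Padé linear system): spend the same five Taylor coefficients of exp(-x) three different ways and compare far from the expansion point.

```python
import numpy as np
from math import factorial
from scipy.interpolate import pade

# Order-4 Taylor coefficients of exp(-x), spent three different ways.
c = [(-1)**k / factorial(k) for k in range(5)]

x = 10.0
taylor = np.polyval(c[::-1], x)   # all on the numerator: grows like x^4
p22, q22 = pade(c, 2)             # [2/2]: tends to a non-zero constant (here 1)
p04, q04 = pade(c, 4)             # [0/4]: decays like x^-4

print(f"exp(-10)      = {np.exp(-10):.2e}")
print(f"Taylor, deg 4 = {taylor:.3g}")
print(f"[2/2] at x=10 = {p22(x) / q22(x):.3g}")
print(f"[0/4] at x=10 = {p04(x) / q04(x):.3g}")
```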

  • @DeathStocker
    @DeathStocker 2 роки тому +71

    A hidden gem of a channel! Never really considered other approximations because the Taylor ones are so commonly used in computation. I remember reading about polynomial approximations of trigonometric functions for low-end hardware but maybe those were less general than the Padé approximation.

    • @DrWillWood
      @DrWillWood  2 роки тому +7

      Thanks a lot! Ah yeah that's a really nice application for this sort of stuff (how can we pack the most powerful approximation into the smallest memory/compute). I don't know much about it but definitely cool! I remember a lecturer saying splines were important in this area but I'll be honest I can't remember the details!

    • @DeathStocker
      @DeathStocker 2 роки тому +7

      @@DrWillWood Found the book that I read! It is "The Art of Designing Embedded Systems (2nd ed)" by Jack Ganssle. Chapter 4.4 has floating point approximations for common functions like exponent, log, and trigonometric.

    • @DrWillWood
      @DrWillWood  2 роки тому +5

      @@DeathStocker Awesome!! thanks for that :-)

  • @ichigonixsun
    @ichigonixsun 9 місяців тому +37

    The Padé approximant might be closer to the actual function in the long run, but it actually has a larger relative error than the Taylor series near x=0. Since we only care about approximating sin(x) from x=0 to x=pi/4 (we can then use reflection and other properties to get the value for other angles), the benefits are outweighed by the disadvantages (i.e. you have to do more arithmetic operations, including a division).
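
A rough numeric check of this trade-off (my own sketch; the rational form used is the one quoted elsewhere in the comments, (x - 7x^3/60)/(1 + x^2/20), which I'm assuming matches the video's sin example):

```python
import numpy as np

def taylor5(x):
    return x - x**3 / 6 + x**5 / 120

def rational(x):
    return (x - 7 * x**3 / 60) / (1 + x**2 / 20)

for a, b, label in [(0, np.pi / 4, "[0, pi/4]"), (0, np.pi, "[0, pi]"), (0, 2 * np.pi, "[0, 2pi]")]:
    xs = np.linspace(a, b, 2001)
    err_t = np.abs(np.sin(xs) - taylor5(xs)).max()
    err_r = np.abs(np.sin(xs) - rational(xs)).max()
    print(f"{label:>10}:  max Taylor error {err_t:.2e}   max rational error {err_r:.2e}")
```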

    • @ere4t4t4rrrrr4
      @ere4t4t4rrrrr4 9 місяців тому +2

      That's interesting. Is there any approximation that is locally better than the Taylor series, around some specific point?

    • @ichigonixsun
      @ichigonixsun 9 місяців тому +3

      ​@@ere4t4t4rrrrr4 I know there are algorithms that can compute the value of a function at a given point with (arbitrarily) better precision, but i don't know about any other closed algebraic formula which locally approximates a given function better than the Taylor Series.

    • @beeble2003
      @beeble2003 9 місяців тому +5

      Is that true in general for periodic functions? What about non-periodic functions?

    • @Solution4uTx
      @Solution4uTx 2 місяці тому

      @@ichigonixsun That's interesting, could you please share the names of those algorithms? I want to test them.

    • @ichigonixsun
      @ichigonixsun 2 місяці тому +2

      @@Solution4uTx CORDIC, for example. There are many others with different use cases.

  • @henrikd.8818
    @henrikd.8818 2 роки тому +34

    I really like how fast you managed to explain it! Only a few math videos get a topic like this explained in under 7 minutes.

    • @inigolarraza5599
      @inigolarraza5599 10 місяців тому +3

      Yeah, if you skip the mostly unnecessarily long proofs and theorems, and use more accessible language, most 1st-2nd year college mathematics could be explained this way.
      Concepts like eigenvalues and eigenvectors, Taylor polynomials and series, or Lagrange multipliers could easily be taught in 10-20 minutes (of course, if you already know matrices, determinants, derivatives and some multivariable calculus) but easily take up entire lectures, because of the excessive attention on proofs or "preliminary definitions" that are not necessary to understand the concepts in the first place (only to rigorously define them).
      The sad reality is that most students get lost or mentally exhausted in the theoretical chit-chat and they end up NOT learning the methods or even understanding what they're doing.

  • @cornevanzyl5880
    @cornevanzyl5880 2 роки тому +35

    I really didn't like calculus in University but I find this very interesting. I can appreciate the beauty much more now that I'm not suffering through it

    • @yan.weather
      @yan.weather 10 місяців тому

      Suffering indeed 😂🎉

  • @nikolaimikuszeit3204
    @nikolaimikuszeit3204 2 роки тому +26

    Definitely interesting, but if I get it right: if you decide you need a higher order, you cannot re-use the low-order coefficients that you already have. That I'd consider a disadvantage.

    • @beeble2003
      @beeble2003 9 місяців тому

      Doesn't seem like much of a disadvantage: you calculate the approximant only a few times compared to how many times you use that approximation function.

    • @nikolaimikuszeit3204
      @nikolaimikuszeit3204 9 місяців тому

      @@beeble2003 Well, I'd assume that this is true for many---definitely not all---applications using approximations, and I agree. Then it is "not much" of a disadvantage, but it is one. ;) Cheers.

    • @9WEAVER9
      @9WEAVER9 2 дні тому

      @@nikolaimikuszeit3204 Well, it's not like those new coefficients are impossible to find. You'll have a recurrence relation, at least, for the coefficients of the next higher-order Padé; otherwise you wouldn't even have a Padé to begin with. See it as problematic as you'd like, Padé gives you information in proportion to the work you give it. That's just a commonality of asymptotic analysis.

  • @paris_mars
    @paris_mars 10 місяців тому +3

    This is a great video. Well made, simple, clearly explained, genuinely interesting. Awesome.

  • @dodokgp
    @dodokgp 10 місяців тому +8

    The best and yet the simplest explanation of Padé approximation I have seen! We use it a lot in finite element simulation software in engineering, but I was always in search of a more intuitive explanation of its merits over the default Taylor series. I am happy today.

  • @HELLO-mx6pt
    @HELLO-mx6pt 2 роки тому +1

    Pretty cool video! The general idea for the approximation reminds me a lot of the process one uses to expand a fraction into a p-adic sum, to get its p-adic representation. After years of doing math, this whole idea of actually using rational functions, a thing we commonly ignore because they're "ugly", is shining in a new light. Keep up the good work!

  • @abdulkadiryilmaz4085
    @abdulkadiryilmaz4085 2 роки тому

    Immediately subbed. Definitely great content, keep up the good work.

  • @user-de1td7jh9y
    @user-de1td7jh9y Рік тому

    Thank you very much! All the explanations I found on the Internet were quite difficult for me to understand. You've done really cool work!

  • @robertdavie1221
    @robertdavie1221 10 місяців тому

    Very well explained. Thank you!

  • @vector8310
    @vector8310 4 місяці тому

    Superb introduction. I was browsing through Hall's book on continued fractions and happened upon a section on Padé approximants, which piqued my curiosity and led me to this video. I can't wait to study these further. Thank you.

  • @curiousaboutscience
    @curiousaboutscience 9 місяців тому

    This was fun to watch! Definitely good points on the typical Taylor series divergence too. How cool!

  • @MeanSoybean
    @MeanSoybean 10 місяців тому

    This is absolutely brilliant.

  • @tedburke525
    @tedburke525 2 роки тому

    So clear and concise! Thank you.

  • @knpark2025
    @knpark2025 9 місяців тому +1

    I once needed to look into a dataset that seemed to have an asymptotic line. I remembered rational functions from high school algebra, did a regression analysis with those instead of polynomials, and it worked wonders. I never expected this to also be a thing for approximating existing functions and I am so happy to learn about it here.

  • @MuradBeybalaev
    @MuradBeybalaev 9 місяців тому

    I appreciate that you switched to stressing the correct syllable midway through the video.
    Not only is the man French, but there's even an explicit diacritic in the last syllable of his name to make stress extra clear.

  • @StratosFair
    @StratosFair 10 місяців тому

    Short, clear, and instructive. Congratulations for the great work 👍🏾👍🏾

    • @DrWillWood
      @DrWillWood  10 місяців тому

      Thank you very much!

  • @easymathematik
    @easymathematik 2 роки тому +40

    I've met Padé approximation at university in my 5th semester. The name of the course was - as you can guess - "Approximation". :D There are other very interesting methods as well.
    Nice video from you. :)

    • @algorev8679
      @algorev8679 10 місяців тому +4

      What other interesting methods did you see?

    • @seanleith5312
      @seanleith5312 10 місяців тому

      We know Spanish peple have no contribution to Math or science in General, is this the first one?

    • @ere4t4t4rrrrr4
      @ere4t4t4rrrrr4 9 місяців тому +1

      @@algorev8679 One is Chebyshev polynomials, another is Lagrange polynomials (they are used if you want to minimize the error over a large interval of the function and not just approximate around a point like Taylor series or Padé approximants). Check out the approximation theory article on Wikipedia.

  • @MW-vg9dn
    @MW-vg9dn 2 роки тому

    Very cool channel, thanks for making these videos!

  • @AJ-et3vf
    @AJ-et3vf 2 роки тому

    Awesome video! Thank you!

  • @hrissan
    @hrissan 2 роки тому

    Thanks, never heard of this approximation before.

  • @abelferquiza1627
    @abelferquiza1627 2 роки тому

    I didn't know it and I liked it, so simple and useful. Thanks!

  • @Peter-bg1ku
    @Peter-bg1ku 2 роки тому +2

    This video is way better than the book I read which skipped a lot of the information relating to the Pade approximation. Thank you! This is brilliant!

  • @EconJohnTutor
    @EconJohnTutor 2 роки тому

    This is incredible. I never knew about this, thank you!

  • @energyeve2152
    @energyeve2152 9 місяців тому

    Cool! Thanks for sharing!

  • @philipoakley5498
    @philipoakley5498 2 роки тому

    Excellent. Hadn't known about that method. I like (typically) that the estimate tends to zero in the long term (no mention of NM Effects)

  • @VLSrinivas
    @VLSrinivas 9 місяців тому

    I read about Padé schemes for discretizing a partial differential operator in computational fluid dynamics a while ago, but thanks for making the advantage over the Taylor series more visible. This could be of great use in computational engineering simulations to avoid divergence.

  • @algorithminc.8850
    @algorithminc.8850 2 роки тому +26

    Great explanation. These are the kinds of topics you want to share with everyone, as a scientist, but want to keep quiet, as a company, in order to have an edge. Thank you much.

  • @EngMostafaEssam
    @EngMostafaEssam 2 роки тому

    Thanks a lot, you deserve more than a million subscribers ❤

  • @1997CWR
    @1997CWR 2 роки тому

    Excellent Presentation. Concise, clear, and thoroughly enjoyable.

  • @shubhamg9495
    @shubhamg9495 2 роки тому

    Such an informative video. Thank you so much!

  • @xyzct
    @xyzct 2 роки тому +2

    I'm stunned I've never heard of this before, given how important Taylor series approximations are.

  • @TypicalAlec
    @TypicalAlec 2 роки тому

    This is fantastic, genuinely might use these at work 👍🏻

  • @amitozazad1584
    @amitozazad1584 2 роки тому

    Simply amazing!

  • @editvega803
    @editvega803 2 роки тому

    Thank you very much!! I didn't know about that. Very interesting 😀🤔

  • @mikelezhnin8601
    @mikelezhnin8601 2 роки тому +68

    I'm missing the point. It's cool and all, but there are two buts:
    1) yes, Taylor series do not extrapolate well, but that's not the point of Taylor series; they are specifically used to approximate the function in some small area near some point.
    2) the [N/0] Padé approximant is the same as the Taylor series, and then you have the other N versions of Padé approximants - [N-1/1], [N-2/2], etc.
    It seems unfair to say that Padé approximants work better than Taylor series, since Padé approximants are a direct extension of Taylor series, plus you can cheat by freely choosing how to split N into [N-M, M].

    • @olgittj1507
      @olgittj1507 2 роки тому +27

      Not to mention that the Taylor expansion, if it exists, is guaranteed to be analytic in the complex plane since it's a polynomial.
      [N/M] Padé approximations will introduce poles if M > 0.

    • @theTweak0284
      @theTweak0284 2 роки тому +10

      I agree. I would like to see a better example than sin(x), because if your goal is to extrapolate an oscillating function, you want it to continue to oscillate; that behavior is completely lost and you are left with a much more troubling outcome: expecting some accuracy but getting none.
      The Padé approximation implies that beyond a certain point sin is basically 0, which is not true at all.
      Maybe it works better for some functions, but unless there is some result I'm ignorant of, there's no indication of how much "longer" it stays accurate and what kind of accuracy you are given.
      I'm sure there's a use for it or this approximation would be buried in numerical analysis textbooks and never reach the light of YouTube.

    • @kenansi1624
      @kenansi1624 2 роки тому +8

      I guess the point is that it is a meaningful way to extend the Taylor expansion to a better approximation with the same number of parameters. Often the limiting behavior tells something about the behavior near a point. Choosing among [N - M/M] also depends on the limiting behavior. In the examples given here, e^-x and sin x, the limiting behavior is of constant order, so it's natural to choose N=2M, and I have a feeling that it gives the best local approximation among all combinations, including the Taylor approximation.
      Edit: the asymptotic behavior of e^-x is actually to approach zero faster than any finite polynomial, so the best one should be [0/N].

    • @JamesBlevins0
      @JamesBlevins0 2 роки тому +9

      Worst-case analysis: there are functions for which the Padé approximation is no better than the Taylor series approximation.
      Most functions: Padé approximants are better, especially for functions with singularities.

    • @deathworld5253
      @deathworld5253 2 роки тому +5

      Also, a Taylor series is more suitable for both taking derivatives and integrating because it's just a sum of easy functions. It's much harder to do this with Padé approximations.

  • @frentz7
    @frentz7 2 роки тому

    You have such a nice style, in the communicating (both voice and visual) but also / maybe even more so, a nice "pace" of thinking and a swell difficulty level.

  • @speedsterh
    @speedsterh 2 роки тому

    Very interesting and very well explained ! Thank you

  • @charlessmyth
    @charlessmyth 2 роки тому +50

    This stuff is like the authors you never heard of but who were notable in their time :-)

  • @the_math_behind
    @the_math_behind 10 місяців тому

    Great video with a concise and effective explanation!

  • @ianprado1488
    @ianprado1488 2 роки тому +1

    As a calc 2 TA, I absolutely loved this

  • @tadtastic
    @tadtastic 9 місяців тому

    this is a really neat method of approximation explained very clearly and practically! nice vid

  • @TheQxY
    @TheQxY 2 роки тому +3

    After seeing this video I remembered that I actually did learn about this in an advanced mathematics class during my Master's, although then with much higher-order terms. However, they never explained very well how useful it actually is, for simple approximations as well, so I quickly forgot about it. Thank you for the refresher, very well explained, and I will definitely keep Padé approximation in mind as a useful tool in the future!
    EDIT: I remember now that the assignment was to find a singularity in a polynomial of order 130. Using Mathematica and gradually decreasing the orders N and M of the Padé approximant allowed us to find a function with the same symmetry as the original giant polynomial, but with far fewer terms. The derivative of this approximation could then be used to find the singularity, which did not change location during approximation. Just a cool example of a more complicated application of the Padé approximation method for those who are interested.
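
A toy sketch of that "Padé finds the singularity" idea (my own example with tan(x), not the order-130 assignment; it assumes SciPy's pade helper): the roots of the Padé denominator tend to sit near the poles of the underlying function, even though the Taylor coefficients alone don't show the pole directly.

```python
import numpy as np
from scipy.interpolate import pade

# tan(x) = x + x^3/3 + 2*x^5/15 + ...  has its first poles at +/- pi/2 ~ +/- 1.5708.
taylor = [0, 1, 0, 1/3, 0, 2/15]        # ascending coefficients, up to x^5
p, q = pade(taylor, 2)                  # [3/2] approximant p(x)/q(x), denominator degree 2

print("denominator roots:", np.sort(q.roots))   # ~ +/- 1.581, close to +/- pi/2
print("pi/2             :", np.pi / 2)
```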

  • @karambiout9737
    @karambiout9737 2 роки тому

    Oh, thank you for the explanation, I will try to apply the Padé approximants.

  • @diegocrnkowise3102
    @diegocrnkowise3102 7 місяців тому

    Very interesting explanation, I'm currently studying electrical engineering and I've just been slightly introduced to a Padé approximant during a Control Systems practice, but never learned the algebra behind it in previous calculus classes.

  • @fizzygrapedrink4835
    @fizzygrapedrink4835 10 днів тому

    Such a great video! I'm cramming for exams in my uni right now, and this was super useful and pleasant to listen to! Way more understandable than our professor's notes lol

  • @kongolandwalker
    @kongolandwalker 10 місяців тому

    Rewatching because the thing has a lot of potential and i might apply it in my projects.

  • @drgatsis
    @drgatsis 9 місяців тому

    Nicely done!

  • @pjplaysdoom
    @pjplaysdoom 10 місяців тому +2

    Remarkable that in my university maths degree I didn't meet Padé approximants. You explain the topic very clearly. I believe the B_0 = 1 assumption is fine unless the denominator polynomial vanishes at x = 0, which would imply a vertical asymptote in the overall function.

  • @jacquardscootch8939
    @jacquardscootch8939 2 роки тому +12

    This was really interesting. A professor at my college did a lot of research with approximants known as “Chromatic Derivatives”, and these share a similar motivation.

  • @connorfrankston5548
    @connorfrankston5548 9 місяців тому

    Very nice and very simple, makes perfect sense. I feel that these Padé approximants can greatly improve approximate series solutions to difference and differential equations in general.

  • @Yxcell
    @Yxcell 10 місяців тому

    Nice video, @DrWillWood!

  • @dvir-ross
    @dvir-ross 2 роки тому

    Thanks for sharing! I learned something new today 🙂

  • @samieb4712
    @samieb4712 2 роки тому

    Great video thanks !

  • @modolief
    @modolief 2 місяці тому

    The first time I saw Padé approximants was in a paper proving the transcendence of e. Thanks for the useful discussion!

  • @lamediamond4172
    @lamediamond4172 9 місяців тому

    Nice video, interesting stuff.

  • @luizhenriqueamaralcosta629
    @luizhenriqueamaralcosta629 10 місяців тому

    Amazing job

  • @stefan11804
    @stefan11804 9 місяців тому

    Never heard of Pade Approx. Thank you.

  • @geoffrygifari3377
    @geoffrygifari3377 2 роки тому +44

    Physics often uses power series expansions because it's so easy to just cut off terms higher than some order we want for small values of x, saying that those are "negligible". I imagine picking a polynomial order "just high enough" would be tougher if it's in a ratio like the Padé approximant.

    • @fatcatzero
      @fatcatzero 2 роки тому +3

      Is the Padé Approximant ever less accurate than the same-order Taylor series?

    • @brandongroth4569
      @brandongroth4569 2 роки тому +11

      ​@@fatcatzero By construction, a Pade Approximant is at least as accurate as a Taylor series because we force it to match the coefficients of the Taylor series. It is just a lot more work to find them via a N+M linear system of equations. If the approximant reduces to something nice like sin, they can be very useful, but that is probably a rare case. In numerical analysis, you often trade simplicity for accuracy, which is what is happening here with Taylor vs Pade.

    • @fatcatzero
      @fatcatzero 2 роки тому

      @@brandongroth4569 ah, I misinterpreted the original comment (mistook the point about "just high enough" to mean "bounding it to smaller than O(x^n) from the actual solution", not "do the easiest thing that will get me within O(x^n)").

    • @geoffrygifari3377
      @geoffrygifari3377 2 роки тому

      @@fatcatzero Oh yeah, my point was close to that. What I'm saying is that because a Taylor series results in a *sum*, you can just terminate the sum at the nth power to get high-enough accuracy, using a polynomial of order n. Now how can you do something like that if the approximant is a ratio? Do we terminate the powers at both numerator and denominator? By no means obvious to me.

    • @fatcatzero
      @fatcatzero 2 роки тому

      @@geoffrygifari3377 if we start with our n'th degree Taylor series and set N+M=n for the Padé Approximant, it seems like it's always going to be at least as good of an approximation as O(x^n) since it's equal-to-or-better than our O(x^n) Taylor approximation.
      I literally just learned about this concept by watching this video so I by no means know the specifics, but if it is true that the M+N approximant is always at least as good as the associated n-degree Taylor series, yes it's by definition more work and it could be hard to determine how much better it is, but the upper bound on the deviation from the actual function seems to be the Taylor series, where we know very well how to determine the size of our error.
      Do you think any of that is incorrect and/or am I still missing something about your concern with applying this method?

  • @ianstorey1521
    @ianstorey1521 2 роки тому

    Great explanation

  • @Formalec
    @Formalec 10 місяців тому

    Very nice approximation to know

  • @dr.rahulgupta7573
    @dr.rahulgupta7573 2 роки тому

    Excellent presentation. Wow!!

  • @ericthecyclist
    @ericthecyclist 9 місяців тому +1

    I always regretted not taking a splines class when I was a grad student because I didn't understand NURBS (non-uniform rational B-splines) and how to compute their coefficients. In very few minutes, you made it apparent.

  • @zaccandels6695
    @zaccandels6695 2 роки тому +4

    Very cool. I'm somewhat surprised I was never introduced to this in numerical analysis

  • @antoinebrgt
    @antoinebrgt 2 роки тому

    Very nice, thanks!

  • @alessandro.calzavara
    @alessandro.calzavara 2 роки тому

    Wow thanks! So interesting

  • @Moe_Afkani
    @Moe_Afkani 2 роки тому

    Helpful! Thanks.

  • @vcubingx
    @vcubingx 2 роки тому

    This was very well made. Great job!

    • @DrWillWood
      @DrWillWood  2 роки тому

      I appreciate that, thank you!

  • @sradharamvlogs7862
    @sradharamvlogs7862 2 роки тому

    Nice explanation!

  • @feraudyh
    @feraudyh 9 місяців тому

    I have friends who live in a house in rue Paul Padé near Paris. I'm going to ring them and explain what the video has done so well.
    I'm sure it will make their day.

  • @Sheaker
    @Sheaker 2 роки тому

    Thanks! Nice to know!

  • @Bruh-vp6qf
    @Bruh-vp6qf 10 місяців тому

    Such an efficient video

  • @dhhan3100
    @dhhan3100 9 місяців тому

    Very interesting. I had heard of Padé approximation a few times, but I did not know why we use it.

  • @usernameisamyth
    @usernameisamyth 2 роки тому

    thanks for sharing

  • @Cathal7707
    @Cathal7707 2 роки тому

    Incredibly useful for curve fitting with functions that you know should tend to zero as x-> inf

  • @aniksamiurrahman6365
    @aniksamiurrahman6365 2 роки тому

    Forget about Padé approximants, I'm sold on this guy's accent.

  • @johanekekrantz7325
    @johanekekrantz7325 9 місяців тому

    I really like this video. Very well explained.
    The concept could obviously (so I'm sure someone did it and it has a fancy name) be generalized to the idea of also thinking about how we want the approximation to behave as x -> inf, by modelling things as a sum of functions F(x) (fractions of polynomials here) whose limits as x -> inf define how you want the approximation to behave for extrapolation. That way we could also think about the derivatives of the approximations at infinity.

  • @bbanahh
    @bbanahh 10 місяців тому

    Brilliant!

  • @jim42078
    @jim42078 2 роки тому

    I'm fairly sure this has given me an idea that'll be useful, however indirectly, in a paper I'm working on. Thanks for the video!

    • @DrWillWood
      @DrWillWood  2 роки тому +1

      Awesome! good luck with the paper!

  • @nickallbritton3796
    @nickallbritton3796 9 місяців тому

    This is what I call intuitive and useful math.
    Diagnose a problem: Taylor series approximations always diverge quickly to positive or negative infinity but many important functions you want to approximate stay near or approach 0.
    Find a solution: put the dominant term in the denominator.
    I'll definitely keep this trick in my back pocket, thank you!

  • @wmpowell8
    @wmpowell8 9 місяців тому +1

    When watching this I asked myself, why can’t we just construct the first M+N terms of the Taylor series of x^M*f(x), then divide by x^M?
    Then I realized the extra 1 in the denominator helps us avoid vertical asymptotes by moving those asymptotes into the complex plane.

  • @Aufenthalt
    @Aufenthalt 2 роки тому

    I already knew Padé approximants (see e.g. the Bender-Orszag book) but I never understood why they give an advantage. Thanks for explaining.

  • @pepsithebunny2404
    @pepsithebunny2404 2 роки тому

    Never heard of it; it may enhance the results I am currently getting with Taylor series alone. Thank you.

  • @Impatient_Ape
    @Impatient_Ape 2 роки тому

    Really good video on this topic.

  • @tomctutor
    @tomctutor 2 роки тому +1

    @4:49 You can do this Padé of sin(x) in online Wolfram Alpha:
    Just input: [2/2] pade of sin(x)
    In general you need to use correct syntax *[N/M] pade of f(x)* where f(x) is your target function
    I got [3/3] pade of sin(x) = (x - (7 x^3)/60)/(x^2/20 + 1)
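
The same check can be done offline with SymPy instead of Wolfram Alpha (my own verification sketch of the approximant quoted above):

```python
import sympy as sp

x = sp.symbols('x')
approx = (x - 7 * x**3 / 60) / (1 + x**2 / 20)

print(sp.series(approx, x, 0, 6))               # x - x**3/6 + x**5/120 + O(x**6), same as sin
print(sp.series(approx - sp.sin(x), x, 0, 8))   # first mismatch only at the x**7 term
```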

  • @victorribera5796
    @victorribera5796 2 роки тому +1

    It gives you a lot of improvement for only the extra step of rewriting the coefficients; quite impressive, indeed.

  • @spikypichu
    @spikypichu 9 місяців тому

    Another great thing is that rational functions can be integrated procedurally. We can always factor the denominator into linear and quadratic factors (because of conjugate pairs) and then apply partial fractions. Terms of the form linear/quadratic can be dealt with using a u-substitution and arctan. If the quadratic is a perfect square, then we only need the u-substitution. If there is an (ax+b)^2 - c form with positive c, we can factor further using the difference of squares.
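
A small sketch of that procedure (hypothetical example: it reuses the (1 - x/2 + x^2/12)/(1 + x/2 + x^2/12) approximant of exp(-x) mentioned elsewhere in the thread), using SymPy's partial-fraction and integration routines:

```python
import sympy as sp

x = sp.symbols('x')
r = (1 - x / 2 + x**2 / 12) / (1 + x / 2 + x**2 / 12)

print(sp.apart(r))          # polynomial part plus a proper linear/quadratic fraction
print(sp.integrate(r, x))   # closed form with log and atan terms, as described above
```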

  • @tanmaysinha8138
    @tanmaysinha8138 2 роки тому

    Very amazing video. You really explained the concepts very nicely and concisely. A few things:
    1.) When we say that B_0 can be made 1 WLOG, are we not missing the cases when B_0=0? As such, we may be losing some generality.
    2.) Do we always know that the M+N dimensional linear system is non-singular, i.e. has a solution?
    3.) Since we are using an M+N dimensional Taylor expansion of f to get the [N/M] Padé approximation, would it not make more sense to compare against an M+N term Taylor expansion? For sin(x) the issue of diverging to infinity would still remain but atleast the expansion will agree with the curve for a bit longer.