It's an extremely useful technique in the field of asymptotics (see Carl Bender's lecture series). For example, imagine that you have some function f(x), and you can calculate the derivatives of this function near x=0. However, the Taylor series of this function has zero radius of convergence. For example, its terms might be (-x)^n n!. How do you evaluate this function at x=1? Taylor says that such a limit doesn't exist, but Pade says (by seeing that the main sequence of approximants converges) that the limit is actually 0.596... See "Padé approximation of Stieltjes series" by Allen et al (1975) for more details
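A quick numerical check of that convergence, for anyone curious (a hedged sketch assuming Python with SciPy's scipy.interpolate.pade helper; not taken from the lecture or the paper):

```python
# Hedged sketch: the series sum (-1)^n n! x^n diverges for every x != 0, yet
# its diagonal Padé approximants evaluated at x = 1 drift toward the 0.596...
# value quoted above. Built from the raw Taylor coefficients via SciPy.
from math import factorial
from scipy.interpolate import pade

for N in range(1, 6):                                  # diagonal [N/N] approximants
    coeffs = [(-1) ** n * factorial(n) for n in range(2 * N + 1)]
    p, q = pade(coeffs, N)                             # numerator p, denominator q
    print(f"[{N}/{N}] Padé at x=1: {p(1.0) / q(1.0):.6f}")
# The partial sums of the series itself blow up at x = 1; the approximants don't.
```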
Carl Bender is a god among us.
@@primenumberbuster404 those last two words woke up some deep-seated memories inside of me
ua-cam.com/video/LezYCywDtZs/v-deo.htmlsi=NTQpQlJxjmaTIaXY
This ^ guy did a rough, but interesting, SoME3 entry video showing how to find the Babylonian Root Approximation from the geometric series. Essentially the result is converting a power series into a Padé.
@@primenumberbuster404 Why do you say that?
Do you mean Taylor says such a limit doesn't exist because you have an infinite number of terms whose sizes don't shrink, so the partial sums blow up and the limit therefore doesn't exist? Just making sure I understood what you meant.
18:12 body once told me the world is gonna roll me
One specifically good property of the Padé approximation for exp is that, taking e^ix, while its Taylor polynomials don't satisfy |Pn(x)| = 1 for real x, the Padé rationals do.
An example of its usefulness is in physics numerical computation.
You'd sometimes have to multiply a bunch of e^ix's, and if their magnitudes are not 1, things diverge quickly.
e^(-iHt) ≈ (1-iHt/2)/(1+iHt/2) is pretty great already.
It's also used in creating digital filters from analog filters with the Z-transform
z = e^(s*T) ≈ (1+sT/2)/(1-sT/2)
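To make that unit-magnitude and unitarity point concrete, here is a hedged sketch (assuming NumPy; the 2x2 "Hamiltonian" is an invented toy, and the [1/1] Padé of e^{ix} used below is (1 + ix/2)/(1 - ix/2)):

```python
import numpy as np

# Scalar case: degree-3 Taylor of e^{ix} vs. the [1/1] Padé rational.
x = np.linspace(-3, 3, 7)
taylor3 = 1 + 1j * x + (1j * x) ** 2 / 2 + (1j * x) ** 3 / 6
pade11 = (1 + 1j * x / 2) / (1 - 1j * x / 2)
print(np.abs(taylor3))   # drifts away from 1 as |x| grows
print(np.abs(pade11))    # stays at 1 (up to rounding) for every real x

# Matrix case: the Cayley/Crank-Nicolson step from the comment is unitary.
H = np.array([[1.0, 0.5], [0.5, -0.3]])   # toy Hermitian Hamiltonian (made up)
dt = 0.1
U = np.linalg.solve(np.eye(2) + 1j * H * dt / 2, np.eye(2) - 1j * H * dt / 2)
print(np.allclose(U.conj().T @ U, np.eye(2)))   # True
```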
At the start of the video, the 4th Taylor series displayed is for ln(1+x) rather than arctan(x).
arctan(x) = x - x^3/3 + x^5/5 - x^7/7 + ...
Yes, he should have noticed that mistake. Inserting x=1 would give the series for ln(2), not for pi/4.
Good catch. One quick sanity check on such a series is, in this case, the function in question, tan⁻¹(x), is odd, so all the even-power coeffs must =0.
Fred
Another higher order approximation that is ordinarily skipped in Calculus is Gaussian quadrature. A fun application of fitting polynomials at variable points
Quadrature is such a beautiful and underrated topic!!!
A good place to stop should maybe include a word about what happens for different values of n and m. Is a (1,5) approximation just as good as a (3,3), since they agree with the function up to the same degree?
Maybe you should "include a word" in a video published on your channel?
Michael's videos would be close to an hour long if they included all the viewer's perceived improvements.
In EE it is used quite often in control theory to approximate the transfer function of a delay, exp(-sT).
Yep. I didn't come across Padé approximants until well after (at least 10 years) I finished graduate school.
This is a really good tool to have in one's toolbox for approximating functions. Taylor/Maclaurin series have some well-known pitfalls for numerical approximations.
For sin & cos at larger arguments, those Taylor series involve a mess of huge terms of alternating sign, and the resulting cancellation makes naive computation useless, despite their exact convergence everywhere.
[Although, to be fair, the reflection & translation symmetries of those functions allow us to limit our interval of approximation to where the series is more manageable.]
Fred
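A plain-Python illustration of that cancellation problem (a hedged sketch, not tied to any particular library or the video):

```python
import math

def sin_taylor(x, terms=60):
    # naive partial sum of x - x^3/3! + x^5/5! - ...
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

for x in (1.0, 10.0, 30.0):
    print(x, sin_taylor(x), math.sin(x))
# At x = 1 the two agree to full precision. At x = 30 the intermediate terms
# peak near 1e12, and the alternating cancellation wipes out roughly 12 of the
# ~16 digits double precision carries, even though the series converges exactly.
```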
Just use the fourier series for sin and cos then
I have seen, in 2 or 3 Chinese math books about the higher-math background of public exam questions, that the authors introduce the Padé approximation.
15:00 Note that the numerator has only odd powers and the denominator only even powers, as required by the odd symmetry of sin(x).
Some things that come to mind after watching the video:
1. Does it make sense to take the limit as m or n goes to infinity of the Padé ratio?
2. Does the difference between the largest powers m and n matter? Are there classes of functions for which m and n are separated by a fixed amount in the approximation?
3. How often do we get the largest power of the denominator (n in this case) being larger than that of the numerator (m)? I imagine in that case the ratio is "more different" from the Taylor series than when m is larger than n.
4. How do we make sure we get the right powers m and n for the ansatz at the beginning? What would happen if initially I didn't know what the Padé ratio for sin(x) looks like and I chose (a4 x^4 + a3 x^3 + ...) / (a2 x^2 + a1 x + ...)?
5. What are the advantages of using the Padé approximation compared to a Taylor series, seeing that the coefficients are more cumbersome to calculate?
1) For that to make sense, you would need the series in the denominator and numerator to be convergent power series in their own right, and the limit would show that your original function is a ratio of two functions.
2) I don't know of any formal theorems on this, but you could do some experimental work using Matlab's curve fitting toolbox.
3) Again, I don't think there are any formal results; I think this is an "eyeball" question: you play around with m and n until your approximation is as close as you want.
4) There is no "right" choice in this case, only what gives the closest approximation.
5) A Padé approximant with fewer terms usually gives better results than a Taylor series with more terms.
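A rough illustration of point 5 (a hedged sketch assuming NumPy; "better" here just means smaller error away from the expansion point, not a general theorem). Both approximations are built from the same Maclaurin data of e^(-x) through order 4:

```python
import numpy as np

x = np.array([0.5, 1.0, 2.0, 5.0, 10.0])
exact = np.exp(-x)
taylor4 = 1 - x + x**2/2 - x**3/6 + x**4/24              # degree-4 Maclaurin of e^(-x)
pade22 = (1 - x/2 + x**2/12) / (1 + x/2 + x**2/12)       # [2/2] Padé of e^(-x)

for xi, e, t, p in zip(x, exact, taylor4, pade22):
    print(f"x={xi:5.1f}  exact={e:.5f}  Taylor4={t:10.4f}  Pade[2/2]={p:.5f}")
# The polynomial shoots off toward +infinity for large x, while the rational
# approximant stays bounded (its equal degrees force a finite limit).
```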
@@mathunt1130 Thank you for the answers
@@mathunt1130
_For that to make sense, you would need the series in the demoninator and numerator being power series on their on their own and the limit would demonstrate that your original function would be a ratio of functions._
Can I "manipulate" the function for this to be valid? Let's say I want to write the Padé approximant of e^x
by writing e^x = e^(x/2)/e^(-x/2) so that it's a ratio and I can approximate the r.h.s. For e^x it's kinda obvious, but you get the point.
_There is no "right", in this case, it's only what gives the closest approximation._
I was thinking that if there can be multiple options for the largest powers m & n that give "good enough" results, is there even a _unique_ way to approximate a function as a ratio of two polynomials?
@@GeoffryGifari yes
Thx for this.
Learned about Padé approximants in a numerical analysis textbook by Burden and Faires.
I used least-squares rational approximation for sin(x). Impressed by the precision and generalization.
Neat video. 👍 You should consider doing a short follow-up that compares the Pade vs the Taylor approximations for some functions to explain when you might prefer using Pade over Taylor.
What rules do we want for m and n to make a good approximation? m≥n, m≈n, etc.
I have used these in my line of work, and it was a real guess-and-check method of looking at the error of the approximation over the set of inputs I expected.
A good starting point is to consider what the endpoint behavior should be in the limit: a higher-order numerator goes to infinity, a higher-order denominator goes to zero, and equal orders go to constants (see the sketch just below).
There really isn't a guarantee that you will get the orders you specify (the highest-order term in the numerator or denominator could be zeroed out), or that you will achieve the limiting behavior in your input range.
I'm sure there are more well-defined methods, but probably not worth the effort in a lot of applications vs guess and check.
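A tiny check of that endpoint rule of thumb (a sketch using SymPy; the three rational functions are arbitrary examples, not Padé approximants of anything in particular):

```python
import sympy as sp

x = sp.symbols('x')
higher_num = (x**3 + 2*x + 1) / (x**2 + 1)   # numerator degree > denominator degree
higher_den = (x + 1) / (x**3 + 4)            # numerator degree < denominator degree
equal_deg  = (3*x**2 + x) / (2*x**2 + 5)     # equal degrees

for r in (higher_num, higher_den, equal_deg):
    print(r, "->", sp.limit(r, x, sp.oo))    # prints oo, 0, and 3/2 respectively
```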
Calculus course without Pade = Calclueless. I see what you did there.😂
At 12:00 the contradicting terms for b2 are not in the x^3 coefficient, but in the x^5 coefficient instead.
This method is also useful for analytic continuation. It's insane. But it works because "nice" complex functions are really limited in what they can possibly look like if you already know some values. In other words, you can't just arbitrarily join the known points by whatever you want; it won't work.
Let's say I have a mysterious function f(z), for which I know the following values: f(i), f(2i), f(3i), ..., f(n*i). You can do the Pade approximation of this (for well-chosen numerator and denominator, often informed by known asymptotics) and then simply plug in any complex or real value of z. This is actually used in physics.
There are also recursive formulas for calculating the coefficients if you write Pade in a different form.
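A toy version of that idea, hedged (assuming NumPy; the sample function, the sample points, and the [2/2] degrees are invented for illustration; real applications choose the degrees from known asymptotics, as the comment says): fit a rational function to five samples on the imaginary axis, then evaluate it at a real point it never saw.

```python
import numpy as np

f = np.exp                                    # the "mysterious" function, kept for checking
z = np.array([0, 1j, -1j, 2j, -2j])           # known sample points on the imaginary axis
fz = f(z)

# Solve p(z_k) = f(z_k) * q(z_k) with p = a0 + a1 z + a2 z^2, q = 1 + b1 z + b2 z^2.
A = np.column_stack([np.ones_like(z), z, z**2, -fz * z, -fz * z**2])
a0, a1, a2, b1, b2 = np.linalg.solve(A, fz)

def rational(w):
    return (a0 + a1 * w + a2 * w**2) / (1 + b1 * w + b2 * w**2)

print(rational(1.0), np.exp(1))   # the fit "continues" to the real axis, within a couple of percent of e
```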
3:05 I don't think that arctan x series is correct. Arctan(x) is antisymmetric around x = 0, so there should be no even terms present in its series.
3:01 Well, the series shown for arctan x is actually wrong (it should be x - x³/3 + x⁵/5 - x⁷/7 + o(x⁷)). What's shown is, by the way, the series for log(1+x).
At 7:03 it should be a1 - a0*b1 in the numerator. Since a0 = 1 it doesn't change the result. But still
He already calculated a0, so it wouldn't have made sense to write it again afterwards.
@@xinpingdonohoe3978 It still seems like he substituted in x=0 on the top without yet doing so on the bottom?
(Again, not changing the answer)
In analysis, functions are usually approximated using Taylor series. It would be nice if one could replace the Taylor series with a Padé approximant. Interesting exercise.
Rational functions seem to work better than polynomials.
Another, simpler, approximation to sin x in the same interval [0, \pi] can be constructed just by a quadratic polynomial y=ax^2+bx+c. Finding the coefficients a, b, c is left as an exercise to the reader haha! Hint: Use three points (0,0), (\pi/2, 1), (\pi, 0). This is a really "perfect" approximation, believe me.
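Working the hint through, one way the exercise can go (the algebra below is mine, not the original commenter's):

```latex
% Impose y(0)=0, y(\pi/2)=1, y(\pi)=0 on y = ax^2 + bx + c:
\begin{align*}
c &= 0, \\
a\pi^2 + b\pi &= 0 \;\Rightarrow\; b = -a\pi, \\
a\tfrac{\pi^2}{4} + b\tfrac{\pi}{2} &= 1 \;\Rightarrow\; -a\tfrac{\pi^2}{4} = 1
  \;\Rightarrow\; a = -\tfrac{4}{\pi^2},\quad b = \tfrac{4}{\pi},
\end{align*}
% so the parabola is y = \tfrac{4}{\pi^2}\, x(\pi - x), which stays within about
% 0.06 of \sin x on [0, \pi].
```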
at 3:00 isn’t that the series expansion of the logarithm?
You could also force the rational function through a mesh of known points for a more global approach. 🙂
There's a typo in the title "Caclulus" :)
Always glad to catch up on a bit of "Caclulus". I wonder how you pronounce that?
The way you spelled it, I'd say it: Cack-clue-luss
I first encountered this in a numerical methods course.
I don't get why, in the general case, we truncate the Taylor series to order m+n and not m-n... By truncating to order m-n, we know that the maximum degree of the polynomial resulting from multiplying this truncated series by the denominator will be m (the same degree as the numerator); isn't that what we want in order to identify coefficients of the same power on the left- and right-hand-side polynomials?
Great question!
One reason is that we want the coefficients of the Padé approximation's own Taylor series to match the ones of the original function's Taylor series up to a degree k, generally greater than m.
More importantly, though, if the numerator has degree m and the denominator has degree n, we have m+n+1 unknown coefficients to solve for (if we conveniently choose one of the coefficients to be 1). If we were to truncate the Taylor series to degree m-n, we could only set up m+1 linear equations, n fewer than we need to uniquely determine all the coefficients.
By truncating the Taylor series to degree m+n and then ignoring any higher-degree terms that appear after the multiplication by the denominator, we ensure we have enough equations to solve for the unknown coefficients.
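A concrete instance of that counting argument (a sketch using SymPy; the target e^x and the choice m = n = 2 are just an example): matching through degree m + n = 4 gives exactly the five equations needed for a0, a1, a2, b1, b2.

```python
import sympy as sp

x = sp.symbols('x')
a0, a1, a2, b1, b2 = sp.symbols('a0 a1 a2 b1 b2')

taylor4 = sum(x**k / sp.factorial(k) for k in range(5))   # e^x through x^4
numer = a0 + a1*x + a2*x**2
denom = 1 + b1*x + b2*x**2                                # b0 fixed to 1

# Require numer - taylor4*denom to vanish through order x^4, ignoring higher powers.
resid = sp.expand(numer - taylor4 * denom)
eqs = [sp.Eq(resid.coeff(x, k), 0) for k in range(5)]
print(sp.solve(eqs, [a0, a1, a2, b1, b2]))
# expect a0=1, a1=1/2, a2=1/12, b1=-1/2, b2=1/12, i.e. the [2/2] Padé of e^x
```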
outstanding!
I had it in Calculus II.
How has no one noticed "Caclulus"
Can they be used analogously to power series to solve ODEs?
Look at the Handbook of Differential Equations by Daniel Zwillinger. One of the methods is just the Padé approximation.
I was wondering if there are other series with different function building blocks besides polynomials (Taylor), rational functions (Pade and Laurent) and sin or cos (Fourier) series that can approximate given functions?
Such as a sum of exponentials, logs, abs, square roots or others?
Or maybe rather than a sum of these, a product of the building block functions? Thanks!
Yes. I mean, if your family has constant functions and separates arbitrary pairs of points you are almost done. I forget the other properties.
Hi@@adamnevraumont4027, that's really interesting, thanks! I asked Google Gemini too and it mentioned Wavelet series and Haar series which are pretty interesting, and mentioned that 'approximation theory' has more info about this topic. Is this what you studied about those properties?
@@keithphw No: in an undergraduate real analysis course. It abstractly proved Taylor (and Fourier) series approximation (in the L1 norm).
It was... separates points, contains the constants, closed under addition and scalar multiplication, and some properties of the space and the norm used to measure convergence... maybe a compact support requirement?
Now, both Fourier and Taylor are nice in other ways. Like, they shave off parts of your approximated function incrementally, and the coefficients sort of have meaning. That kind of niceness wasn't implied by the abstract proof we did.
Thanks for the summary! Fascinating, i didn't know that. Much to learn!
What happens if you take n and m tending towards infinity?
Is there any closed form for that like we have for Taylor series?
Nice! But it seems like this technique yields a Padé approximation equal to the truncated Taylor series, which is only an approximation of the original function. Can the Padé coefficients be refined, perhaps by a Nelder-Mead algorithm (or maybe something better?), to yield a better approximation of the original function?
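One hedged sketch of that refinement idea (assuming SciPy; the interval, grid, and target function are arbitrary choices): start from the [2/2] Padé coefficients of e^(-x) and let Nelder-Mead nudge them to shrink the worst-case error on [0, 4]. This is a toy stand-in for a proper minimax (Remez-style) fit, not a recommended production method.

```python
import numpy as np
from scipy.optimize import minimize

xs = np.linspace(0.0, 4.0, 201)
target = np.exp(-xs)

def rational(c, x):
    a0, a1, a2, b1, b2 = c
    return (a0 + a1*x + a2*x**2) / (1 + b1*x + b2*x**2)

def worst_error(c):
    err = np.abs(rational(c, xs) - target)
    return np.max(err) if np.all(np.isfinite(err)) else np.inf

pade_start = np.array([1.0, -0.5, 1/12, 0.5, 1/12])   # [2/2] Padé of e^(-x)
result = minimize(worst_error, pade_start, method='Nelder-Mead')

print("Padé max error on [0,4]:   ", worst_error(pade_start))
print("refined max error on [0,4]:", result.fun)      # should come out smaller
```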
not sure of the exact theory, but in my experience Padé does a lot better than an (n+m) Taylor since rational functions "behave" a lot better. Taylor polynomials can have a frustrating tendency to go off towards infinity faster than I'd like
@@Minskeeeee and Taylor polynomials have gigantic factorial divisions that are annoying to implement in most programming languages, I'd use this instead for such cases
Setting b_0=1 works well until you try to approximate 1/x
Yes but also 1/x doesn't have a Taylor series at 0 at all anyways. (And I believe if you try to do it at say a=1 instead, using powers of x-a=x-1 instead of x in your Padé approximation, then you should get 1/((x-1)+1) with b_0=1, which simplifies to 1/x)
@@shirou9790 fair
@@shirou9790 Yeah, even more briefly, 1/x is already a rational function - why would you try to approximate it with a higher degree rational function?
Fred
Can the degree of denominator be larger using this approximation?
Also, what makes a function “nice” for us to be able to do this?
I guess that the genius of Padé was to put b0 = 1 in order to make this work!
If b0 ≠ 1, you can divide everything top and bottom by b0 to get a rational function that has the same values.
Am I the only one who kept thinking _Padmé_ approximation, à la Star Wars? 😂
Please give it a try:
N = 12345678910111213......2024 (here 1 to 2024 are written in a row). Now the question is: what is the remainder when N is divided by 2025?
324
Why does this even work? Why are m and n different? When does m equal n? Isn't everyone else wondering this??
The Taylor series for arctangent is wrong.
You made a mistake in arctan: instead of (-1,1] it's [-1,1].
Audio is so low.
With x^2+1 I guarantee that the denominator is not zero.
I see what you did there.
At least in physics, this and Gaussian quadrature are often used. They are not skipped at all.
I recently stumbled upon this sum A(x) = x^L\sum_{t=0}^T (T-t)x^{t}\binom{t + L}{t}. It would be cool if you could cover its solutions for x1 in a video. Keep up with the good work!
Nice
Divisions make these approximations less nice for a computer.
I'm glad this is not taught in courses.
Seems pretty difficult to get and not at all as useful as Taylor series.
It's seen in most caclulus classes; I think you switched caclulus and calculus up.