🌟🌟To try everything Brilliant has to offer, free, for a full 30 days, visit brilliant.org/michaelpenn. The first 200 of you will get 20% off Brilliant's annual premium subscription.🌟🌟
A small annoyance that you probably skipped just because it's overly pedantic: the series expansion of the exponential converges uniformly *on bounded subsets of the complex plane*. The series does not converge uniformly on the entire plane because you can't find a universal upper bound for the remainder.
that's because it keeps growing infinitely as |z| grows, right?
@@dexter2392 Yes. Specifically, it grows infinitely large along the positive real axis. You can never satisfy the definition of uniform convergence, i.e. find an n for which the difference between the function and the truncated power series is bounded by an epsilon regardless of z, because the difference grows infinitely large for positive real z.
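A quick numerical sketch of this point (my own illustration, not from the video; the truncation order n = 20 and the sample x values are arbitrary choices): for a fixed truncation, the remainder blows up as x grows, so no single n works for every x.

```python
import math

def exp_partial_sum(x, n):
    """Partial sum of the exponential series: sum_{k=0}^{n} x^k / k!."""
    return sum(x**k / math.factorial(k) for k in range(n + 1))

n = 20  # fixed truncation order (arbitrary choice)
for x in [1.0, 10.0, 30.0, 60.0]:
    remainder = math.exp(x) - exp_partial_sum(x, n)
    print(f"x = {x:5.1f}: remainder = {remainder:.3e}")
# The remainder grows without bound as x grows, so the convergence is
# uniform only on bounded sets, never on the whole line (or plane).
```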
I was convinced you were going to use binomial coefficients to expand (x+dx)^r
Fix p
This is the right way to make the limiting argument with "uniform convergence" rigorous (what is said in the video is wrong; the fact that (f_n) converges uniformly doesn't imply that (f_n') converges uniformly to the derivative of the limit).
But I think your proof of uniform convergence is not enough: your bound doesn't go to 0 with n. So you still need to show that you can bound |x^(a_n)-f(x)| uniformly for x in [p,q]. The easiest way to do that is probably to go back to the exponential form x^(a_n)=e^(a_nln(x)) and f(x)=e^(rln(x)) and use that exp is Lipschitz on all compact subsets (and a_n*ln(x) is bounded independently of n and x since a_n is bounded). This almost defeats the point of this proof though, so maybe there's a better way to show that.
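A numerical sanity check of that Lipschitz-on-compacts bound (my own sketch; the interval [p, q] = [0.5, 2] and the decimal truncations of r = sqrt(2) are arbitrary choices):

```python
import math

r = math.sqrt(2)
p, q = 0.5, 2.0  # a sample compact interval [p, q]
xs = [p + i * (q - p) / 200 for i in range(201)]

for digits in [1, 2, 4, 6]:
    a_n = math.floor(r * 10**digits) / 10**digits  # rational approximation of r
    sup_err = max(abs(x**a_n - x**r) for x in xs)
    print(f"a_n = {a_n}: sup|x^a_n - x^r| = {sup_err:.2e}, |a_n - r| = {abs(a_n - r):.2e}")
# The sup error shrinks in proportion to |a_n - r|, as predicted by
# |e^(a_n ln x) - e^(r ln x)| <= L * |ln x| * |a_n - r| on [p, q].
```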
Sir, what do you think of having a functional analysis course on your second channel? I guess it'll be great!
Funny how for case 1 if n = 0 you get a numerator of x^0 - t^0 = 1 - 1 = 0 from which you can still factor out a (x - t) by writing it as (x - t) * 0.
You should point out that the uniform convergence is on compact sets, since it is not true in general for the exponential function, and in particular also fails for the limiting case x^{a_n} -> x^r.
If the chain rule is used anyway, why not apply it directly to f(x)=x^r=e^(rlnx) => f'=e^(rlnx)*(r/x)=rx^(r-1)
11:23 incorrect: you need the _derivatives_ to converge uniformly to _something_ as well. Which they do.
16:11 *ALL* cases? Where are complex numbers? 😤
The last case holds for complex numbers as well, doesn't it?
Outside the real number world.
Please don't mix real and complex analysis, ok?
The power series for e^x is exactly the same. Can’t see why it wouldn’t hold for r in C
It is an acute observation for sure, but the crime, if any, is hardly egregious, no?
@@ruilopes6638 The problem is with the integral, because it doesn't give one value since Log is multivalued there.
It's all very well differentiating x^(m/n), but how do you define x^(m/n)? Surely we define it as (x^(1/n))^m, where x^(1/n) is the value of the inverse of y -> y^n at x (y -> y^n is monotonic for n ≠ 0, so it has an inverse).
Then x -> x^(m/n) = (y -> y^m)◦(t -> t^n)^-1 (x).
Differentiating using the chain rule, we get (t -> t^(m/n))'(x) = [ (y -> y^m)'◦(t -> t^n)^-1 (x)].((t -> t^n)^-1)'(x)
The first factor on the right is (y -> m.y^(m-1))(x^(1/n)) = m.x^((m-1)/n) = m.x^((m/n)-(1/n))
To evaluate the second factor, we use the inverse function rule, (f^-1)'(x) = 1/f'((f^-1)(x)). For f = t -> t^n, this yields
((t -> t^n)^-1)'(x) = 1/[(y -> y^n)'(((t -> t^n)^-1)(x))] = 1/[(y -> n.y^(n-1))(x^(1/n))] = 1/[n.x^((n-1)/n)] = (1/n).x^((1/n) - 1)
Thus, (t -> t^(m/n))'(x) = [m.x^((m/n)-(1/n))].[(1/n).x^((1/n) - 1)] = (m/n).x^((m/n)-1), as expected
Wrote all that out to the sound of the finale of Dvořák's 8th symphony playing on the radio. Delightful
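For anyone who wants a machine check of that computation, here is a short SymPy sketch (the sample exponents m = 3, n = 5 are arbitrary choices):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
m, n = 3, 5  # arbitrary sample exponents

inner = x**sp.Rational(1, n)   # x^(1/n), the inverse of t -> t^n at x
f = inner**m                   # (x^(1/n))^m = x^(m/n)

derivative = sp.simplify(sp.diff(f, x))
expected = sp.Rational(m, n) * x**(sp.Rational(m, n) - 1)
print(derivative, sp.simplify(derivative - expected) == 0)  # ..., True
```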
ln(x) and chain rule need to be proven by limit definition anyway, so you just do that first and then use log differentiation.
I appreciate these videos keeping all this fresh in my mind. I derived these results long ago and use them regularly, but it's nice to freshen up!
How do you prove it for complex exponents?
I've never understood why some books show this formula only for natural numbers, when it is so simple to prove for every real number:
Calculate the derivative of x^a=e^(alogx), which is e^(alogx)*a/x = x^a * a/x = a*x^a-1
The definition of the exponential function usually comes after the derivative of the power function, which is why.
That is incorrect. It is a*x^(a - 1). The grouping symbols are needed.
@@lame_lexem Well, tbf most calculus books don't give a precise definition of either.
@@lame_lexem I remember that I was taught from the beginning that there were multiple equivalent definitions of the exponential function, one of which was 'the function which is its own derivative and takes the value of 1 at x=0'. Then the teacher went on to prove the equivalences, which took (as I recall) a whole lesson... but made problems such as this one quite trivial, as wojteksocha2002 said. Because we had essentially already proved it from the start :)
What about complex numbers?
What about for complex numbers?
Isn't it circular to apply the power rule to the ln x integral, since this could be irrational too?
He first proved the power rule for integer (and rational) exponents. ln(x) is the base, but the exponent n in the series is always natural.
@@carloseliasmartinez6221 of course thanks!
good point
We must prove the uniform convergence of a_n*x^(a_n - 1), and the pointwise convergence of x^(a_n) to x^r at at least one point; then we can interchange the derivative and the limit.
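For reference, the precise statement being invoked here is the standard theorem on interchanging limits and derivatives (e.g. Rudin, Principles of Mathematical Analysis, Thm. 7.17):

```latex
% If each f_n is differentiable on [a,b], the derivatives f_n' converge
% uniformly on [a,b] to some g, and f_n(x_0) converges at some x_0 in [a,b],
% then f_n converges uniformly on [a,b] to a differentiable f with f' = g:
\[
  f_n' \rightrightarrows g \ \text{on } [a,b]
  \quad\text{and}\quad
  (f_n(x_0)) \ \text{converges}
  \;\Longrightarrow\;
  f_n \rightrightarrows f \ \text{with} \ f' = g .
\]
```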
This is similar to how we were taught the power rule in calculus 1 at university: they went case by case, using more and more of the other derivative rules. But it's cool that one can do all cases from the limit definition too!
E.g. you can find the derivative of y = x^(m/n) by implicit differentiation of y^n = x^m, or the derivative of x^r for a general real r as the derivative of e^(r ln x) with the chain rule.
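A SymPy sketch of that implicit-differentiation route, as a sanity check (the sample exponents m = 3, n = 5 are arbitrary choices):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
m, n = 3, 5  # arbitrary sample exponents
y = sp.Function('y')

# Differentiate y^n = x^m implicitly and solve for y'
eq = y(x)**n - x**m
dydx = sp.solve(sp.diff(eq, x), sp.Derivative(y(x), x))[0]

# Substitute y = x^(m/n) back in; the result is (m/n) * x^(m/n - 1)
print(sp.simplify(dydx.subs(y(x), x**sp.Rational(m, n))))  # 3/(5*x**(2/5))
```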
9:01 it's j and not k in the exponent of the denominator. Is Prof. Penn here as he writes?
Or, you can write x^r = exp(r*ln x), differentiate it with the chain rule and find r*exp(r*ln x)/x = r*x^(r-1)
But is this proven for real r? See p. 456 in section 207 of Hardy's A Course in Pure Mathematics: “So ln a^x = x ln a for real x, by definition.” Michael Penn: what say you?
You forgot the proofs of the chain rule, the derivative of exp, and the derivative of ln...
Applying rules is not proving them...
In the context of a calculus class, you didn't actually prove anything. As Michael mentioned, this derivative is one of the first you see, you can't assume the others to be known in this thought exercise.
@@ach3456 Well, I actually took a calculus class many years ago and that's exactly how the professor did it, at the end, when he had proven everything.
@@felipelopes3171 You mean after the professor had proved the things that Michael proved in the video. That's a circular argument.
No case where r is complex?
Thanks for making us think of irrational exponents
I was thinking of a hand-wavy approach using two rational numbers, one below and one above the irrational number, and then proving that as the rational numbers come closer together, their respective derivatives of the power function also come closer, so the derivative for the irrational exponent is squeezed between these two derivatives, which agree in the limit.
But I will never get any closer than the handwaving, as mathematical rigor has never been my strong point.
Easy: Prove it for the positive integers and just forget to tell people the restriction on your proof.
Hey, Michael! @ 8:08 The upper bound of summation should be m-1, not m.
He corrected that a short time later.
Yes, I know. Thanks.
The limit can also be obtained more easily. First factor out x^r, and we're left with ((1+t/x)^r - 1)/t; we have to show that this limit is r/x. The 1/x comes out by substituting u = t/x, and then we have to show that the limit of ((1+u)^r - 1)/u is r. And that's easy with the standard e^x and ln(x) limits and inequalities.
If r - 1 is supposed to be an exponent, then it needs to be inside grouping symbols.
@@robertveith6383 The u-substitution gives us a 1/x, which produces the r-1 in the exponent.
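A quick numeric check of the claimed limit ((1+u)^r - 1)/u -> r (r = sqrt(2) is an arbitrary sample):

```python
import math

r = math.sqrt(2)  # arbitrary sample exponent
for u in [1e-1, 1e-3, 1e-5, 1e-7]:
    print(u, ((1 + u)**r - 1) / u)
# The ratio approaches r = 1.41421356... as u -> 0.
```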
OMG Thank you! This is the first theorem I can prove from my head, and I did it once when I was a teen :)
How can we show that lim(x->0)[ sin(x)/x ] = 1 *without* using something equivalent to a Taylor Expansion (for which it's easy)? So far what I've got is an appeal to geometry, and the equality of arc length to angle for a unit radius circle.
Knowing this would also permit finding the formula for d/dx(sin(x)) without (again) using a Taylor Expansion (and of course the other usual trig functions).
Using the representation on the unit circle you can get sin x < x < tan x for positive x, or 1 < x / sin x < 1/cos x; by the squeeze theorem the limit is 1 for positive x, and for negative x just observe that both sin x and x are odd functions.
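For completeness, the area comparison behind that chain of inequalities, written out:

```latex
% In the unit circle, for 0 < x < pi/2, compare the inner triangle,
% the circular sector, and the outer triangle:
\[
  \tfrac{1}{2}\sin x \;<\; \tfrac{1}{2}\,x \;<\; \tfrac{1}{2}\tan x
  \quad\Longrightarrow\quad
  \cos x \;<\; \frac{\sin x}{x} \;<\; 1 ,
\]
% and since cos x -> 1 as x -> 0, the squeeze theorem gives sin(x)/x -> 1.
```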
L'Hopital's rule, unless you meant to exclude it as somehow equivalent to a Taylor expansion (but I don't see how that would be).
It depends how you define sin(x). If you use the triangle definition (or unit circle) then the squeeze theorem approach is the easiest. If you use the series definition then it's trivial. If you define it via complex exponential then that also essentially gives you the series.
@@iooooooo1 great idea of course, but it sort of does include the Taylor expansion for the usual proof (or prior knowledge of the derivative of sin(x)).
Show the twice-continuously differentiable solution to the following differential equation is unique: y'' + y = 0, y(0) = 0, y'(0) = 1. This is in fact done in another of Michael's videos. The properties of this solution coincide with what you know as the sine function, i.e. it satisfies the trigonometric identities and so on. In other words, y(x) is basically sin(x).
Then lim(x->0)[y(x)/x] = lim(x->0)[y'(x)] = 1 by L'Hopital's rule. Note that here the use of L'Hopital's rule relies only on the aforementioned defining properties of y(x).
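Here is a small numerical sketch of that characterization, integrating the IVP and watching y(x)/x near 0 (the tolerances and sample points are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

# y'' + y = 0, y(0) = 0, y'(0) = 1, as a first-order system s = (y, y')
sol = solve_ivp(lambda t, s: [s[1], -s[0]], (0.0, 1.0), [0.0, 1.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

for x in [0.5, 0.1, 0.01]:
    y = sol.sol(x)[0]
    print(f"x = {x}: y(x)/x = {y/x:.8f}, sin(x)/x = {np.sin(x)/x:.8f}")
# y(x)/x -> 1 as x -> 0, matching lim sin(x)/x = 1.
```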
What about the complex number?
There is not only a single complex number; there are infinitely many complex numbers. A very small portion of them are real numbers, and a very small portion of those are integers. . . .
I meant, does it work for a complex function?
13:20......... I'm sorry, WHAT??????? HELL NO!
I'm French, so I cannot say for sure whether the definition of uniform convergence is the same over there, but...
For all n, ||Rn|| is not finite, so (Rn) does not converge uniformly to 0, and therefore neither does this series.
Sure, if you restrict yourself to a bounded subset, fine, but YOU HAVE TO WRITE IT (or at least say it).
Question: what's with the ln/integration switch? It has no use here...?
r^n / n! converges to 0 as n goes to infinity for all r. The series for e^x converges to... well, e^x, for all real x. So assuming we are working in the reals it does converge uniformly. Both are always finite.
@@Biggyweezer69 You should check the definition of uniform convergence. One way to prove there isn't one is to find (Xn) such that (Rn(Xn)) does not converge to 0. What to choose...
Xn = exp(n!/(r^n))
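Spelling out the mechanism behind such a choice, with a simpler sequence of my own (x_n = (n+1)!) to keep the computation short:

```latex
% For x > 0 every term of the remainder is positive, so
\[
  R_n(x) \;=\; \sum_{k=n+1}^{\infty} \frac{x^k}{k!} \;\ge\; \frac{x^{\,n+1}}{(n+1)!},
\]
% and along a fast-growing sequence such as x_n = (n+1)!,
\[
  R_n(x_n) \;\ge\; \frac{\bigl((n+1)!\bigr)^{\,n+1}}{(n+1)!} \;=\; \bigl((n+1)!\bigr)^{n} \;\longrightarrow\; \infty,
\]
% which rules out uniform convergence on the whole real line.
```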
Math is fun, no need to be so rigorous
@lupifa4395 Honestly this is too complicated for me to want to discuss in YouTube comments. There are proofs findable on Math Stack Exchange that the Taylor series in question does converge uniformly.
d/dx x^r = d/dx e^(ln(x)*r)
= r*(1/x) * e^(ln(x)*r)
= r/x * x^r
= r * x^(r-1)
Easy!
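And a quick central-difference confirmation of that formula (r = pi, x = 2.0, and the step h are arbitrary samples):

```python
import math

r, x, h = math.pi, 2.0, 1e-6  # arbitrary sample exponent, point, and step
numeric = ((x + h)**r - (x - h)**r) / (2 * h)  # central difference
exact = r * x**(r - 1)
print(numeric, exact)  # both approx 13.86
```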
what about complex exponents!?
Bishop Berkeley would like to point out that when you let t go to the limit of x and divide by x - t, you are dividing by zero. Yes, I know there are rigorous ways around this, but it really annoyed him in the 18th century, and it took nearly a hundred years before mathematicians avoided using this dubious trick.
I don't know which part of 0 < |x-a| < delta your man couldn't understand, but it doesn't require any sleight of hand to rehabilitate the limit definition of the derivative.
It's the limit of the ratio not the ratio of limits. You don't get 0/0.
I thought you were going to do something like
x^r=e^ln(x^r)=e^(r ln x)
d/dx(x^r)=d/dx(e^(r ln x)) = e^(r ln x)*r/x=x^r*r/x=r x^(r-1)
yeah, that is really dumb. I would use the identity f(x) = x^r = exp(r ln x). So d/dx f(x) = exp(r ln x) d/dx (r ln x) = x^r d/dx (r ln x) = x^r * r/x = rx^(r-1). But if you like tedious calculations, I cannot help you.
He wants to give an elementary, transparent proof.
Highly pedantic and anal point: you have to write exp(r ln x) not e^(r ln x) which would be a circular definition.
A bit of symbolism here... You should write d(t^n)/dt.