How does the limit of a curve approach the limit of a concave downwards non differentiable function (z! for decreasing z) in the limit x -> -1 without crossing the graph of the constant z! function on the other side of 0 which is concave upwards?!? Impossibru, this is not math, this is alchemy. Wtf papa
The goal is to find some function f(z) such that lim z!/f(z) (z -> ♾) = 1. Since z! = z^(z + 1)·e^(-z)·[INT from x = -1 to x -> ♾][e^(z·[ln(1 + x) - x])], we can deduce f(z) = z^(z + 1)·e^(-z)/g(z). As such, finding some g(z) such that lim [INT from x = -1 to x -> ♾][g(z)·e^(z·[ln(1 + x) - x])] (z -> ♾) = 1 is equivalent to solving the original problem. Now, we can apply the dominated convergence theorem, so that the above limit is simply equal to [INT from x = -1 to x -> ♾][lim g(z)·e^(z·[ln(1 + x) - x]) (z -> ♾)] = 1. Formally, and without loss of generality, lim g(z)·K(x, z) (z -> ♾) = h(x). If K(x, z) = e^(z·[ln(1 + x) - x]), then we want some h(x) such that [INT from x = -1 to x -> ♾][h(x)] = 1. In other words, we want h(x) = cp(x), where c > 0, and p(x) is a *probability distribution.* This is important for the derivation, because it restricts the number of maps that would give us an approximation by several cardinalities. e^(z·[ln(1 + x) - x]) = (e^[ln(1 + x) - x])^z, so consider K(x, z) = j(x)^z. Therefore, lim g(z)·j(x)^z (z -> ♾) = h(x). We want some K*(x, z) = j*(x)^z such that lim g(z)·j*(x)^z (z -> ♾) = h(x), so that j(x) can be suitably approximated by j*(x). Since c and p(x) are arbitrary, without loss of generality, we can let c = 1 and p(x) = δ(x), where δ(x) is the Dirac delta distribution. Thus h(x) = h(-x) and [INT over R][h(x)] = 1. This implies lim g(z)·j*(0)^z (z -> ♾) = ♾ and lim g(z)·j*(|x| > 0)^z (z -> ♾) = 0. The former occurs if lim g(z) (z -> ♾) = ♾ or lim j*(0)^z (z -> ♾) = ♾; the latter occurs if lim g(z) (z -> ♾) = 0 or lim j*(|x| > 0)^z (z -> ♾) = 0. If lim g(z) (z -> ♾) = ♾, then lim j*(|x| > 0)^z (z -> ♾) = 0, implying j*(|x| > 0) < 1. If lim g(z) (z -> ♾) = 1, then lim j*(0)^z (z -> ♾) = ♾ and lim j*(|x| > 0)^z (z -> ♾) = 0, implying j*(0) > 1 and j*(|x| > 0) < 1, in which case j*(x) is discontinuous at x = 0. If lim g(z) (z -> ♾) = 0, then lim j*(0)^z (z -> ♾) = ♾, implying j*(0) > 1.
δ(x) = lim [z/(2π)]^(1/2)·[e^(-x^2/2)]^z (z -> ♾), suggesting that g(z) = [z/(2π)]^(1/2) with lim g(z) (z -> ♾) = ♾ and j*(x) = e^(-x^2/2), for which j*(x) < 1 whenever |x| > 0. This is suitable, since O(-x^2/2) contains ln(1 + x) - x, and j*(x) = j*(-x). Therefore, f(z) = z^(z + 1)·e^(-z)/[z/(2π)]^(1/2) = z^z·e^(-z)·(2πz)^(1/2) = [(z/e)^z][(2πz)^(1/2)].
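A quick numerical sanity check of the final formula f(z) = (z/e)^z·(2πz)^(1/2) - a sketch in Python; the function name is mine:

```python
import math

def stirling(z):
    """f(z) = (z/e)^z * sqrt(2*pi*z), the approximation derived above."""
    return (z / math.e) ** z * math.sqrt(2 * math.pi * z)

# The ratio z! / f(z) should tend to 1 as z grows (it behaves like 1 + 1/(12z)).
for z in (5, 10, 50, 100):
    print(z, math.factorial(z) / stirling(z))
```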
Did you already have skills like these before university? I'm about to start studying physics, and my mathematical abilities aren't even remotely comparable to yours 😖
Proof by Desmos intuition
if you zoom in enough you can see that your pp does indeed exist. qed.
I'm in maths class rn and my teacher (an engineer) was trashtalking complex numbers, what a coincidence.
ki kus He did say he is an engineer. You would not necessarily be incorrect.
Get him on a 𝕸eality tv show
I'm an EEE dude and in Control Systems/Electrical Circuitry, complex numbers and analysis techniques are used all the time. The concepts of LTI stability, minimum and non-minimum phase systems is embedded deep within the complex plane! Sounds like that teacher is an arse!
Bullshit, even in engineering complex numbers are used.
All you need is + - * /
Now derive a formula to determine the error of Stirling's formula compared to the factorial function.
You need to ask for it more politely friendo.
@@jacoboribilik3253 Papa is so kind that he'll at least consider it this way as well.
@@jacoboribilik3253 Doesn't matter... He never hearts these kinds of requests
"... determine the error of Stirling's formula ..."
That was also due to Stirling - it's called Stirling's Series. It's basically Stirling's Formula times a power series that starts with "1," and it's what I thought Papa Flammy was going to derive here, when he started talking about "asymptotic series," and how they are useful for computation/numerical calculation.
Asymptotic series [the title is specific to a certain property they have] are kind of the opposite of what you might expect to be useful in mathematics.
Take, e.g., the Taylor series for sin(x) - not an asymptotic series. It converges everywhere - sounds ideal for computing, doesn't it? But it isn't, except near x=0, because as you get farther from 0, the series starts with large-valued terms of opposite signs, which largely cancel, leaving large roundoff errors behind. So if you want to compute sines using the Taylor series, you restrict the magnitude of the value going into it, and use reflection & translation formulas to get to the value you're looking for.
An asymptotic series, OTOH, diverges everywhere except at 0 - sounds useless for computing, doesn't it? But for any given x≠0, its terms shrink in magnitude for a while, then grow uncontrollably to diverge. And the farther x is from 0, the more well-behaved the initial terms are, the more of them that are shrinking, and the smaller they get before the blowup. This is where "asymptotic series" get their name - asymptotically, as x→∞, the error of any fixed truncation goes to 0.
So what you do is to judiciously choose your truncation point (how many terms of the series to use) based on the size of |x|.
All that said, there are more efficient ways to compute both sin(x) and ∏(x) = x! = Γ(x+1).
Fred
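The truncation strategy Fred describes - stop at the smallest term - can be sketched for Stirling's Series itself. A hedged Python sketch: the Bernoulli numbers are hardcoded, and `log_gamma_stirling` is my own name, not a standard API:

```python
import math

# First Bernoulli numbers B_2, B_4, ..., B_14 (hardcoded; enough for a demo).
BERNOULLI = [1 / 6, -1 / 30, 1 / 42, -1 / 30, 5 / 66, -691 / 2730, 7 / 6]

def log_gamma_stirling(z):
    """Stirling's Series for ln Gamma(z), truncated at the smallest term.

    The correction terms B_{2n} / (2n*(2n-1)*z^(2n-1)) first shrink, then
    blow up -- the hallmark of an asymptotic series -- so we stop adding
    terms as soon as one would be larger in magnitude than its predecessor.
    """
    total = (z - 0.5) * math.log(z) - z + 0.5 * math.log(2 * math.pi)
    prev = math.inf
    for n, b in enumerate(BERNOULLI, start=1):
        term = b / (2 * n * (2 * n - 1) * z ** (2 * n - 1))
        if abs(term) >= abs(prev):  # terms started growing: truncate here
            break
        total += term
        prev = term
    return total

for z in (2.0, 5.0, 10.0):
    print(z, log_gamma_stirling(z), math.lgamma(z))
```

Even at z = 2 the truncated series lands within about 1e-6 of ln Γ(2), and the accuracy improves rapidly as z grows, just as described above.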
Stat mech ptsd intensifies
Hey Mr Andrew
Hello Andrew
Get a load of this guy!
Yes, the algorithm suggests old videos of both youse guys.
Whether you like it or not, there is always a Gaussian integral lurking in the shadows 16:43
Gauss, Euler ... and some other guys!
"So we can just ignore O(x^3) terms in the limit as they are going to 0"
Papa went full engineer
That's not really engineering, that's actually a legitimate mathematical claim. That's what asymptotic analysis is all about.
Yeah, if it was engineering, his π would have been 3…
14:01 zero dot zero dot zero dot zero ... ?? Papa Flammy uses some new numerical system ...
Hyper-decimal
@@ericthegreat7805 better than this:
0,-1,1,-2,2,-3,3,-4,4,-5,5....
Pausing at 11:21. Is there anyone else on UA-cam who explains advanced maths like this, step by step, as if they were explaining the trimesters of a pregnant elephant? This is gold!
An upper bound for n! can be found without the gamma function; it is pretty similar to Stirling's formula. It uses logarithms and integrals to upper-bound the sum.
Milan Stojanovic Yes, but the point of the video is precisely to derive Stirling's approximation.
@@angelmendez-rivera351 Yes, but the point of his comment was to give additional information that could be useful or interesting to some of the viewers of this video.
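The integral-bound idea from the comment above can be made concrete: since ln is increasing, ∫₁ⁿ ln x dx ≤ ln(n!) ≤ ∫₁^(n+1) ln x dx. A minimal Python sketch (the function name is mine):

```python
import math

def log_factorial_bounds(n):
    """Bound ln(n!) = sum of ln k by integrals of ln x.

    ln is increasing, so integral_1^n ln x dx <= ln(n!) <= integral_1^(n+1) ln x dx,
    with integral_1^t ln x dx = t*ln(t) - t + 1.
    """
    lower = n * math.log(n) - n + 1
    upper = (n + 1) * math.log(n + 1) - n
    return lower, upper

n = 20
lo, hi = log_factorial_bounds(n)
print(lo, math.log(math.factorial(n)), hi)
```

Exponentiating the upper bound gives an n!-bound of the same flavour as Stirling's formula, just without the sqrt(2πn) refinement.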
Imagine waking up to Papa Flammy's intro music staring at you at the end of the bed
no
Very important in statistical physics where n normally represents a huge number like the number of molecules in a volume gas. However, I vaguely recall a far simpler proof from way back in my University days. It's nice to see π getting involved too - this number we associate with circles is playing a part where we wouldn't expect it to.
I never saw a proof of Stirling's approximation formula before, but in the back of my mind I always wanted to see one. The best part of this proof is that it fits with the type of content you make. I think that after watching several of your videos you should be comfortable with the proof because of the appearance of Taylor series, the gamma function and its integral form, and substitutions. Yes, you can always spice things up using real analysis, but this was awesome already!
@@PapaFlammy69 I always do!
A Desmos proof sounds like a phony Numberphile video, but with a computer instead of brown paper.
do a video on quaternions, i just think theyre the coolest :)
I love this result. Thanks for doing this! 🙂
This makes sense. I understand it now after reading a formal explanation after your helpful intuitive explanation.
The memeroids that appear on screen sometimes move in the direction of their flammy tail. This happens when the memespace is extra dank, and so the flames create a low pressure zone which the memeroid is drawn to.
First time I've seen that in decades.
hi, i want to say thanks most of your videos have helped me to understand my college classes, please continue to make this kind of content. uwu
Thank you very much
Is this quantum loss theory? And at 22:41 ...wow I don't know what to say
Would you like to extend the binomial distribution to the Gaussian distribution by taking the limit as n approaches infinity? It would be a great derivation of the bell curve.
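That limit (the de Moivre-Laplace theorem) is easy to check numerically: compare the binomial pmf near its mean with the Gaussian density of matching mean and variance. A Python sketch (helper names are mine):

```python
import math

def binom_pmf(n, k, p=0.5):
    """Binomial probability of k successes in n trials."""
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

def gauss_density(n, k, p=0.5):
    """Gaussian density with the binomial's mean n*p and variance n*p*(1-p)."""
    mu, var = n * p, n * p * (1 - p)
    return math.exp(-(k - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# As n grows, the pmf near the mean approaches the Gaussian density.
for n in (10, 100, 1000):
    k = n // 2
    print(n, binom_pmf(n, k), gauss_density(n, k))
```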
That's possibly the most intricate explanation I've ever seen of the saddle point approximation method #everyoneissmarterontheinternet
Favorite moment in the video 20:17
Yeayy..
Fappable math :'v
@12:09 when x is zero, g(x) is zero, and when x is between -1 and 0, g(x) is negative and approaches negative infinity as x approaches -1.
Great ending my dude
Your videos are the only lectures I don't speed up.
thanks for this great video!
Next semester I'm going to treat myself to "Intuitive Analysis IV"
Ok, thanks. After reflection, it's a good approach. Thank you very much.
Lawa bound = To be legally bounded by nothingbutt some gamma boi and a few asymptotails of a gaussbell
Can you do a video on Ramanujan's summation formula?
I'd recommend looking into the Method of Steepest Descents for evaluating these types of asymptotic integrals. One nice example with different asymptotic behaviours in different regions of the complex plane is the asymptotic behaviour of the Airy functions (of the first and second kind). You will love working through the approach in Asymptotic Analysis (Springer) by J.D. Murray, pages 54-61 and Exercises 3 and 4 in the same chapter. This "switching" of asymptotic behaviours is called the Stokes Phenomenon.
Looking at this is what got me into the field, and I am writing my thesis on an advancement of these sorts of methods. Really interesting to look into if you have time. I really think you'd enjoy it :) Thanks Papa Flammy
How does one integrate from 0 to infinity of I dropped my chalk?
It helped me to understand.
Isn't lim [ln(1+x) - x] = -inf as x goes to inf?
Lol yeah, he meant e^(ln(1+x)-x)
Why not just do the substitution x = y/z, so that the lower bound becomes -z and goes to -infinity as z goes to infinity, and so that ln(1 + y/z) - y/z can be properly Taylor expanded?
There is Putnam problem about the "odd numbers" factorial function that is pretty similar to this, from 1995 or 1996.
FlammyBoi plz send help! Shouldn't it be a Big Omega instead of a Big O in the Maclaurin expansion?
I got lost from 4:20 to 6:30.. that substitution seems incomprehensible.
I love his content, but OMG, wtf are those dolls in the background, Papa?
my figurines lol
@@PapaFlammy69 A part time job, I presume?
No, I'm collecting them ^^
@@PapaFlammy69 oh, ok. Wish thee luck in thy adventure
haha xD
In my opinion, there is an easier and more rigorous intuitive argument for -zx^2/2 being a suitable asymptotic approximation for z[ln(1 + x) - x]. Namely, if you can prove that lim (x -> 0) [ln(1 + x) - x + x^2/2] = 0, and that lim (x -> 0) [ln(1 + x) - x]/(-x^2/2) = 1, then the approximation is valid as z -> ♾. The first limit is necessary because you can reverse engineer it so that lim (z -> ♾) z!/Stuff = 1. The second is necessary so that there is convergence at the boundaries of integration.
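For what it's worth, both limits hold as x -> 0 (the expansion point) and are easy to check numerically; a quick Python sketch:

```python
import math

# As x -> 0, ln(1+x) - x + x^2/2 -> 0 and [ln(1+x) - x] / (-x^2/2) -> 1,
# supporting the replacement of z*[ln(1+x) - x] by -z*x^2/2 near the peak.
for x in (0.1, 0.01, 0.001):
    diff = math.log1p(x) - x + x ** 2 / 2      # leading error is x^3/3
    ratio = (math.log1p(x) - x) / (-x ** 2 / 2)
    print(x, diff, ratio)
```

`math.log1p` is used instead of `math.log(1 + x)` to avoid cancellation for small x.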
I just started college as a mathematical engineer, I’m excited to be able to understand this at some point
I would use Abel's partial summation formula for ln(n!)
Could you do the integral of 1/(x^3-1) from negative infinity to zero?
Not sure whether I have to use complex analysis or not, either way I'm lost
Use partial fraction decomposition, and for the fraction with irreducible quadratic denominator, complete the square and do a tangent substitution. [If you want, you can split off an (x+1/2)/[3*(x^2+x+1)] from that term before completing the square and doing the trig sub on the remaining part, as this part just integrates to a logarithm.]
After you've found the antiderivative, combine the log terms before trying to take the limit for the -infinity end.
You could use partial fractions 🤔
I'm extremely late to this, but the answer to your question is that you do need to use complex analysis. The value of the integral depends on the contour you choose, and on the contour along Im(x) = 0, the integral diverges, because there is a simple pole at x = 1, which is a point that lies in the interval of integration. If you want to choose this contour anyway, then you can only calculate the Cauchy principal value. If this is what you want to calculate, then below is how you do it.
x^3 - 1 = (x - 1)(x^2 + x + 1). Therefore, 1/(x^3 - 1) = A/(x - 1) + B(2x + 1)/(x^2 + x + 1) + C/(x^2 + x + 1). Finding A is trivial, you simply multiply by x - 1 to find that 1/(x^2 + x + 1) = A + [B(2x + 1) + C](x - 1)/(x^2 + x + 1). Let x = 1. Hence A = 1/3. Therefore, [B(2x + 1) + C]/(x^2 + x + 1) = 1/(x^3 - 1) - 1/(3x - 3) = 3/(3x^3 - 3) - (x^2 + x + 1)/(3x^3 - 3) = (-x^2 - x + 2)/(3x^3 - 3), which is equivalent to -(x^2 + x - 2) = 3B(2x + 1)(x - 1) + 3C(x - 1) = 3B(2x^2 - x - 1) + 3Cx - 3C = 6Bx^2 - 3Bx - 3B + 3Cx - 3C, which implies 6B = -1, 3C - 3B = -1, -(3B + 3C) = 2. 6B = -1 implies B = -1/6, and 3C - 3B = -1 implies C - B = -1/3, and -(3C + 3B) = 2 implies C + B = -2/3. Therefore, C = B - 1/3, and B - 1/3 + B = -2/3 = 2B - 1/3 implies B = -1/6 as well. Therefore, C = -(1/3 + 1/6) = -(2/6 + 1/6) = -1/2. Therefore, 1/(x^3 - 1) = (1/3)/(x - 1) - (1/6)(2x + 1)/(x^2 + x + 1) - (1/2)/(x^2 + x + 1). To antidifferentiate this, linearity applies. Once the antiderivatives are known, you can calculate the Cauchy principal value by calculating the respective limits.
The antiderivative of 1/(x - 1) is trivially log|x - 1| + A·sgn(x - 1) + B for any complex numbers A and B, since the constant of integration may jump across the pole at x = 1. Since the Cauchy principal value is being calculated, A = 0. Therefore, the antiderivative of (1/3)/(x - 1) is log|x - 1|/3 + B.
The antiderivative of (2x + 1)/(x^2 + x + 1) can be calculated by letting y = x^2 + x + 1. The reason you can do this is that dy = (2x + 1)·dx. Therefore, the antiderivative is log(x^2 + x + 1) + C for some complex number C. Therefore, the antiderivative of (1/6)(2x + 1)/(x^2 + x + 1) is log(x^2 + x + 1)/6 + C.
x^2 + x + 1 = (x + 1/2)^2 + 3/4 = (x + 1/2)^2 + [sqrt(3)/2]^2 = [sqrt(3)/2]^2·([(2x + 1)/sqrt(3)]^2 + 1). Let y = (2x + 1)/sqrt(3). Hence dy = 2/sqrt(3)·dx, and 1/(x^2 + x + 1) = [2/sqrt(3)]^2/(y^2 + 1). Hence the integrand with respect to y is [2/sqrt(3)]/(y^2 + 1). The antiderivative with respect to y is [2/sqrt(3)]·arctan(y) + D = [2/sqrt(3)]·arctan[(2x + 1)/sqrt(3)] + D for some complex number D. Therefore, the antiderivative of (1/2)/(x^2 + x + 1) is arctan[(2x + 1)/sqrt(3)]/sqrt(3) + D.
Therefore, the antiderivative of 1/(x^3 - 1) in the Cauchy principal value sense is log|x - 1|/3 - log(x^2 + x + 1)/6 - arctan[(2x + 1)/sqrt(3)]/sqrt(3) + K, where K is some complex number. Evaluating this at x = y and x = -y, respectively, taking the difference, and then evaluating the limit as y -> ♾, gives the answer. The resulting expression is equal to log|(y - 1)/(y + 1)|/3 - log[(y^2 + y + 1)/(y^2 - y + 1)]/6 - arctan[(2y + 1)/sqrt(3)]/sqrt(3) - arctan[(2y - 1)/sqrt(3)]/sqrt(3) after all the simplifying algebra. As y -> ♾, the logarithms vanish to 0, and the arctangents each tend to π/2. Therefore, the integral is equal to -π/sqrt(3).
Oh, sorry, I just realized that your comment was asking about the integral from -♾ to 0, not -♾ to +♾. My comment was pertaining to the latter, not the former. To answer your question, and correctly this time: no, you do not need complex analysis for the integral you want, it can be calculated by simply using the fundamental theorem of calculus and linearity, no Cauchy principal value necessary.
To use the information I presented already in my previous comment: 1/(x^3 - 1) = (1/3)/(x - 1) - (1/6)(2x + 1)/(x^2 + x + 1) - (1/2)/(x^2 + x + 1), and for x < 1, the antiderivative is equal to log(1 - x)/3 - log(x^2 + x + 1)/6 - arctan[(2x + 1)/sqrt(3)]/sqrt(3) + K for some complex number K. This is equal to (log[(1 - x)^2] - log(x^2 + x + 1))/6 - arctan[(2x + 1)/sqrt(3)]/sqrt(3) + K = log[(x^2 - 2x + 1)/(x^2 + x + 1)]/6 - arctan[(2x + 1)/sqrt(3)]/sqrt(3) + K.
As x -> -♾, the logarithm in the above expression vanishes, and -arctan[(2x + 1)/sqrt(3)]/sqrt(3) -> π/[2·sqrt(3)]. If x = 0, then log[(x^2 - 2x + 1)/(x^2 + x + 1)]/6 - arctan[(2x + 1)/sqrt(3)]/sqrt(3) = log(1)/6 - arctan[1/sqrt(3)]/sqrt(3) = -π/[6·sqrt(3)]. Therefore, the integral in question is equal to -π/[6·sqrt(3)] - π/[2·sqrt(3)] = -π/[6·sqrt(3)] - 3π/[6·sqrt(3)] = -4π/[6·sqrt(3)] = -2π/[3·sqrt(3)], which is negative, as expected, since the integrand is negative for all x < 1.
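The value can be sanity-checked numerically from the antiderivative derived above. Note the integrand 1/(x^3 - 1) is negative on the whole interval, so the integral comes out negative, with magnitude 2π/[3·sqrt(3)]. A Python sketch, where `F` is my name for the antiderivative:

```python
import math

def F(x):
    """Antiderivative of 1/(x^3 - 1) for x < 1, from the partial fractions above."""
    return (math.log(1 - x) / 3
            - math.log(x ** 2 + x + 1) / 6
            - math.atan((2 * x + 1) / math.sqrt(3)) / math.sqrt(3))

R = 1e6  # large cutoff standing in for -infinity
value = F(0) - F(-R)  # integral of dx/(x^3 - 1) from -R to 0
print(value, -2 * math.pi / (3 * math.sqrt(3)))
```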
@@angelmendez-rivera351 Oh wow, thank you for your answer, even if it's been a while since I asked. Still, I appreciate the effort and the clearness of your explanation
Thanks for helping us pleb physicists understand stat mech, but what about the fugacity?
Bruh, try WKB next, it's a really cool application of asymptotics and divergent series to quantum mechanics DEs. At this point it seems like I'm gonna become a professional asymptotic boi, so this is fun.
Hello FM, I'm fairly new in the world of mathematics, so I wanted to ask about a similar function to the one derived in this video.
TL;DR (x-1/2)! ≈ sqrt(2*pi)*(x/e)^x
I came about this formula when I was messing around with product integrals and product derivatives.
Interestingly, the product derivative of x! was really similar to, and as x went to infinity got increasingly close to, the function x + 1/2.
By product integrating x+1/2, I got C*(x/e)^x, and after taking the limit of (x-0.5)!/(C*(x/e)^x), when x went to infinity, the result was C/sqrt(2*pi) (This was some months ago so I'm not so sure), so, by setting C to sqrt(2*pi), the limit went to 1!
Since this most probably is already common knowledge, I wanted to ask if this function has a name, and how was it first derived.
Thank you for your amazing videos, they have really sparked the flame of maths within me
Much love from Brazil
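The TL;DR formula, i.e. (x - 1/2)! = Γ(x + 1/2) ≈ sqrt(2π)·(x/e)^x, can be checked numerically; a Python sketch, where `approx` is my own name:

```python
import math

def approx(x):
    """The commenter's formula: Gamma(x + 1/2) ~ sqrt(2*pi) * (x/e)^x."""
    return math.sqrt(2 * math.pi) * (x / math.e) ** x

# The ratio should tend to 1 as x grows.
for x in (5, 20, 100):
    print(x, math.gamma(x + 0.5) / approx(x))
```

It is essentially Stirling's formula for Γ(z) evaluated at z = x + 1/2, with the shift by 1/2 absorbing the usual sqrt(z) factor.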
I do not believe such a function has a name other than Stirling's approximation.
I'm going to guess that his father and mother are the proudest parents in Germany!
Who is the author of the book on quantum field theory that appears at the beginning of the video?
So you derive it intuitively and then have to prove it?
Could you send the link to e^x > 1 + x, x > 0?
@10:27 you meant e^[z·g(x)] ~ 0, not g(x) ~ 0
Trivial
What are those figures behind you at the start of the video?
We lost another honorable man to Anime.
I don't want to be that guy, but although I see why you would choose infinity as a reasonable upper limit for your approximation, choosing minus infinity as the lower limit is a bit more subtle. You mentioned the dominated convergence theorem at some point. Did you mean that if you were to use higher-order approximations for g(x), say g_n(x), the theorem implies that the integrals behave well in the limit? Even if that were the case, don't you need to take into account the radius of convergence of the approximation? (I know that the negative exponential decays really fast, which means the error should still be small, but it's kind of a sketchy argument nonetheless.)
The radius of convergence is infinite, so I have no idea what your issue is.
@@angelmendez-rivera351 I don't know what sum you were talking about, but the only power series referred to in this video is that of the logarithm, which does not have infinite radius of convergence.
My question was about the approximating integral, after ln(x)-x has been replaced by -(x^2)/2 (which informally seems to make sense, but I was interested in the detail). On the other hand I was unsure about choosing minus infinity as the lower limit of the new integral, but I had forgotten that he was assuming z to be very large, which ensures that the error becomes arbitrarily small in the limit.
Modesto Rosado I see. I think I misunderstood your comment then.
@@angelmendez-rivera351 I realize now that I was a bit harsh with my reply. I apologize. After all there were a couple of things I misunderstood the first time I watched this video.
Pleeeease, can anyone explain to me how Riemann found the first nontrivial zeros?!!
Don't say by Riemann–Siegel.
I need the explanation!!! 😓😓
Omar gaber That can't be explained unless you understand some complicated mathematics. That is why the answer can only genuinely be Riemann–Siegel. If you don't like it, that's your problem; it just means you need to understand more maths.
He used his Desmos intuition lel
Riemann himself was master at evaluating seemingly impossible things
10:26 This is false. ln(1+x)-x goes to negative infinity as x goes to infinity.
Papa bought a new microphone!
How does the limit of a curve approach the limit of a concave downwards non differentiable function (z! for decreasing z) in the limit x -> -1 without crossing the graph of the constant z! function on the other side of 0 which is concave upwards?!? Impossibru, this is not math, this is alchemy. Wtf papa
Hi ! First viewer from INDIA !
Bollywood extravaganza.
@@sahilbaori9052 Hey brother, whatever you had to say, you shouldn't have said it in English, otherwise this foreigner will understand it.
Is this loss?
yeye
Can anyone name a textbook I can learn this from?
This dude makes a math lecture feel like porn. #hawt
Euler e^iπ = -1
Me floor(π) = ceil(e) 😂😂😂
The goal is to find some function f(z) such that lim z!/f(z) (z -> ♾) = 1. Since z! = z^(z + 1)·e^(-z)·[INT from x = -1 to x -> ♾][e^(z·[ln(1 + x) - x])], we can posit f(z) = z^(z + 1)·e^(-z)/g(z). As such, finding some g(z) such that lim [INT from x = -1 to x -> ♾][g(z)·e^(z·[ln(1 + x) - x])] (z -> ♾) = 1 is equivalent to solving the original problem.
Now, we can apply the dominated convergence theorem, so that the above limit is simply equal to [INT from x = -1 to x -> ♾][lim g(z)·e^(z·[ln(1 + x) - x]) (z -> ♾)] = 1. Formally, and without loss of generality, lim g(z)·K(x, z) (z -> ♾) = h(x). If K(x, z) = e^(z·[ln(1 + x) - x]), then we want some h(x) such that [INT from x = -1 to x -> ♾][h(x)] = 1. In other words, we want h(x) = cp(x), where c > 0, and p(x) is a *probability distribution.* This is important for the derivation, because it restricts the number of maps that would give us an approximation by several cardinalities.
e^(z·[ln(1 + x) - x]) = (e^[ln(1 + x) - x])^z, so consider K(x, z) = j(x)^z. Therefore, lim g(z)·j(x)^z (z -> ♾) = h(x). We want some K*(x, z) = j*(x)^z such that lim g(z)·j*(x)^z (z -> ♾) = h(x), so that j(x) can be suitably approximated by j*(x).
Since c and p(x) are arbitrary, without loss of generality, we can let c = 1 and p(x) = δ(x), where δ(x) is the Dirac delta distribution. Thus h(x) = h(-x) and [INT over R][h(x)] = 1. This implies lim g(z)·j*(0)^z (z -> ♾) = ♾ and lim g(z)·j*(|x| > 0)^z (z -> ♾) = 0. The former occurs if lim g(z) (z -> ♾) = ♾ or lim j*(0)^z (z -> ♾) = ♾; the latter occurs if lim g(z) (z -> ♾) = 0 or lim j*(|x| > 0)^z (z -> ♾) = 0. If lim g(z) (z -> ♾) = ♾, then lim j*(|x| > 0)^z (z -> ♾) = 0, implying j*(|x| > 0) < 1. If lim g(z) (z -> ♾) = 1, then lim j*(0)^z (z -> ♾) = ♾ and lim j*(|x| > 0)^z (z -> ♾) = 0, implying j*(0) > 1 and j*(|x| > 0) < 1, in which case j*(x) is discontinuous at x = 0. If lim g(z) (z -> ♾) = 0, then lim j*(0)^z (z -> ♾) = ♾, implying j*(0) > 1.
δ(x) = lim [z/(2π)]^(1/2)·[e^(-x^2/2)]^z (z -> ♾), suggesting that g(z) = [z/(2π)]^(1/2) with lim g(z) (z -> ♾) = ♾ and j*(x) = e^(-x^2/2), for which j*(x) < 1 whenever |x| > 0. This is suitable, since ln(1 + x) - x = -x^2/2 + O(x^3) near x = 0, and j*(x) = j*(-x). Therefore, f(z) = z^(z + 1)·e^(-z)/[z/(2π)]^(1/2) = z^z·e^(-z)·(2πz)^(1/2) = [(z/e)^z][(2πz)^(1/2)].
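The conclusion f(z) = (z/e)^z·(2πz)^(1/2) is easy to check numerically; the sketch below compares log z! = lgamma(z + 1) against log f(z) in log space to avoid overflow (the helper name is mine):

```python
import math

def stirling_log(z):
    # log of f(z) = (z/e)^z * sqrt(2*pi*z)
    return z * math.log(z) - z + 0.5 * math.log(2 * math.pi * z)

for z in [10, 100, 10000]:
    # z! = Gamma(z + 1); compare in log space so large z doesn't overflow
    ratio = math.exp(math.lgamma(z + 1) - stirling_log(z))
    print(z, ratio)  # -> 1; the relative error behaves like 1/(12z)
```

The ratio approaching 1 is exactly the asymptotic-equality statement lim z!/f(z) = 1.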
2iπ = ? Please tell me.
What graphing website is that?
www.desmos.com/
Desmos
Stirling's Formula? The proof is trivial and left as an exercise to the reader
Just use z!+1. Asymptotically equal by your quotient limit definition
Man, please solve the JEE Advanced 2019 paper. I bet you won't be able to do it.
A better approximation: n! ≈ (n/e)^n·sqrt(π·(2n + 1/3))
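This is Gosper's approximation to the factorial. A quick comparison of its relative error against plain Stirling, sketched in Python (the helper names are mine):

```python
import math

def stirling(n):
    # n! ≈ sqrt(2*pi*n) * (n/e)^n
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

def gosper(n):
    # n! ≈ (n/e)^n * sqrt(pi * (2n + 1/3))
    return (n / math.e) ** n * math.sqrt(math.pi * (2 * n + 1 / 3))

for n in [1, 5, 10]:
    exact = math.factorial(n)
    print(n, abs(stirling(n) / exact - 1), abs(gosper(n) / exact - 1))
    # Gosper's relative error is much smaller, even for tiny n
```

Already at n = 1, Gosper gives about 0.996 versus Stirling's 0.922 for the true value 1.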
I watched this and thought of this sum for the difference between n^n and n!: sum(i=1..n-1)(i·n^i·(n-1-i)!). Maple will reduce it to n^n - n!.
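A minimal numerical check of this identity in Python instead of Maple (the helper name is mine):

```python
import math

def gap_sum(n):
    # sum_{i=1}^{n-1} i * n^i * (n - 1 - i)!
    return sum(i * n**i * math.factorial(n - 1 - i) for i in range(1, n))

for n in range(2, 8):
    assert gap_sum(n) == n**n - math.factorial(n)
print("identity holds for n = 2..7")
```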
Portuguese subtitles, please
Is lis thoss
Dud
Did you already have skills like these before university? I'm about to start studying physics, and my mathematical abilities aren't even remotely comparable to yours 😖
17 ads wtf?!!
In teh geh rahl
Check out Carl Bender's series on Mathematical Physics:
ua-cam.com/video/LYNOGk3ZjFM/v-deo.html
yeye
wow a loss me xD
qwoceant
Friggin haccer give Papa's account back pls :'[
Don't be so happy. This formula is useful when n is extremely large. It isn't for a fraction.
Third!
not convincing
e^x > 1 + x ????
e^0 > 1+0
1 > 1
???
Problem: lim (z->inf) z!/f(z) = 1. Find f(z).
Solution: Elementary. f(z) = z! 😉