Anyone who might be sophisticated enough to do this Feynman trick would already know how to do the contour integral, which gives the result in just a couple of lines.
Yes, I do agree that contour integration and even the Laplace transform will derive the result more efficiently.
However, the purpose of this video is to demonstrate the use, power, and beauty of differentiation under the integral sign.
Yes, residues allow you to do the calculation very quickly.
@@maths_505 I think a better example then is to do the full exercise: cos(ax)/(1+x^2). Since cos is harmonic, it is easy to follow your derivation, if 'a' is far enough from 0.
I personally had no trouble understanding this and potentially could’ve solved it myself. But I have no idea how to even begin understanding complex integration and Cauchy’s residue theorem.
@@hydropage2855 don't worry bro
I'm workin on it
Amazing result, by the way, you can actually get a better one by applying the fundamental theorem of engineering saying that π=e therefore getting I=1.
QED
Another cool integral, presented by the best teacher I‘ve ever had - excellent 👌
Thanks bro but I honestly hate this video
Check out the collab with qncubed3. I solved a similar integral with a much more rigorous solution
A very beautiful demonstration of the power, efficiency, and immense possibilities offered by this ingenious method for getting out of seemingly intractable situations.
Thank you for your work.
AN ABSOLUTELY STUNNING RESULT AND YOU ARE A MATHEMATICAL GENIUS.
Great, thank you. I will check out the sinx/x integral.
Amazing integration! I just want to point out that when solving a differential equation, you must first find the characteristic equation for the homogeneous part and then find the particular solution. In this specific case the equation is strictly homogeneous, so there is only the characteristic equation to solve: r^2 - 1 = 0, hence r = 1 or r = -1, and the solution has the form c1·e^(r1·a) + c2·e^(r2·a), with r1 = 1 and r2 = -1. This form only works when r1 and r2 are distinct and real; for repeated or complex roots the form of the solution changes. It's also important that the solution of the second-order differential equation is not identically 0, because that would imply our original integral equals 0, even though 0 is a valid solution of the differential equation. But when graphing cos(x)/(x^2+1), the area is clearly not 0.
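As a numerical sanity check (a minimal pure-Python sketch, not from the video), one can verify with a finite difference that the bounded branch of that general solution, I(a) = (π/2)e^(-a), really does satisfy I''(a) = I(a):

```python
import math

def I(a):
    # Bounded branch of c1*e^(-a) + c2*e^a, i.e. c2 = 0 and c1 = pi/2
    return math.pi / 2 * math.exp(-a)

def d2(f, a, h=1e-4):
    # Central finite-difference estimate of f''(a)
    return (f(a + h) - 2 * f(a) + f(a - h)) / h**2

# I''(a) = I(a) should hold for every a > 0
for a in (0.5, 1.0, 2.0):
    assert abs(d2(I, a) - I(a)) < 1e-6
```

At a = 1 this is exactly the value π/(2e) discussed elsewhere in the thread.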
I really love this technique
What a crazily beautiful result!
really nice result and nice video.
How do you know you can differentiate under the integral sign when you don't have dominated convergence on the integrand of I'(a)?
8:57 okay nice nice very nice😂
Btw that was very cool integral
Great video Kamal!
Isn't that simply the real part of the Fourier Transform of 1/(1+x^2) evaluated at 1?
This Fourier Transform is well known (exp of abs) 😊
I don’t think you can switch the differential and the integral, because the integral of the derivative doesn’t absolutely converge. In order to do that, you would need to do an integration by parts to increase the degree of the denominator; then, when you differentiate, you get something absolutely integrable.
Honestly this seems like overkill to me; I think it is more straightforward to just solve this using complex analysis + the residue theorem (with a route around the pole at i).
muh resi-jew theorem
I don’t know why i typed that
I will try this question by solving INT exp(iax)/(1+x²).
Excellent work sir
If you apply the fundamental theorem of engineering, the final answer is 1, as pi = e
I was wondering why at 4:58, you can take the constant "a" inside the differential? I've never seen this done before so any explanation for why this works would be much appreciated. Thanks!
No problem
d(ax)=adx if a is a constant
So that's the thought process behind it
@@maths_505 Okay, thank you!
@@maths_505 I also just tried u-substitution, letting u = ax, and was able to get the same result (equals integral of sin(u) / u du). So that works too 👍
You do need to be careful about the bounds of integration when doing this. In this case they didn't change because they were 0 and infinity and we're assuming that a≥0, but in general they will change by a factor of a.
Because it’s the same process as thinking “ax is inside the sine. it would be nice if I had ax on the outside in the numerator so that a u-substitution would get rid of it, but I must multiply the top and bottom by a to produce ax”
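A quick numerical illustration of that substitution (a hypothetical pure-Python sketch; both integrals are truncated at a finite upper limit, since the full Dirichlet integral converges only conditionally): with u = ax, ∫₀^T sin(ax)/x dx should equal ∫₀^(aT) sin(u)/u du.

```python
import math

def midpoint(f, lo, hi, n=200_000):
    # Composite midpoint rule for the integral of f over [lo, hi]
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a, T = 3.0, 50.0
lhs = midpoint(lambda x: math.sin(a * x) / x, 0.0, T)   # ∫_0^T sin(ax)/x dx
rhs = midpoint(lambda u: math.sin(u) / u, 0.0, a * T)   # ∫_0^{aT} sin(u)/u du
assert abs(lhs - rhs) < 1e-3
```

Note the bounds: the upper limit becomes aT after the substitution, exactly as the comment above about the bounds of integration warns.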
How do we know in instances like these to make it match the bottom by multiplying by 1 and adding 0 like it sometimes seems to be necessary? Is there an alternative way that doesn't require this?
Great video again !!
Here's what's bugging me about the derivation. Let's look again at I'(a). Before we manipulate it, it's the integral of -x sin(ax)/(x^2 + 1) dx.
Now, if we take the limit as a --> 0, it looks like we get 0, since sin(0) = 0. But in your derivation, you get I'(0) = -pi/2.
I asked myself the same question. But apparently it doesn't bother anyone...
Teacher: solve this one integral.
Student: casually solves infinitely many integrals.
I actually laughed out loud
Normally, using the dominated convergence theorem, we can justify differentiating under the integral sign by showing for all a in some interval, the partial derivative is dominated by some positive function whose integral converges. But the integral of |(sin ax)*x /(x^2+1)| from 0 to infinity diverges for any a>0. So how do we justify differentiating under the integral sign here?
I'm still very curious to know how we can justify differentiating under the integral sign to get I'(a). I've been thinking about it for over a week, but I still haven't figured it out.
@roderictaylor this is one of my earlier videos and definitely not one of my best. I approached this using a different method in another video which I liked alot more.
@@maths_505 Thank you. I enjoy your channel, and I will check out your other video, but I've been studying differentiation under the integral sign recently, and I'm interested in when it works and doesn't work for its own sake. If we could show it works in this case, I'd be very curious to see it as I'd be learning something new. At this point I don't think it does. Let F(a,x)=cos(ax)/(x^2+1) and let F_a be its partial derivative with respect to a, F_a(a,x)=-x sin(ax) / (x^2+1). To justify differentiating under the integral sign, I believe we'd need to show that the integral from -infty to infty with respect to x of [ (F(a+h,x) - F(a,x))/h - F_a(a,x)] goes to zero as h goes to zero. After some manipulation, I believe this is equivalent to showing the integral from -infty to infty with respect to x of [ x sin(ax) (1 - sin(hx)/(hx))]/(1+x^2) goes to 0 as h goes to 0, and I don't think this is the case.
@@roderictaylor here's the video I was talking about. ua-cam.com/video/S52DapoH17M/v-deo.htmlsi=uYzq-XiIVjkzjCwm
It can be done easily using the Cauchy residue theorem.
9:03 Our solution for I’(a) relies on the Dirichlet integral evaluating to pi/2; however, that doesn’t work for a = 0, which would give \int_{0}^{\infty} sin(0x)/x dx = 0, making I’(a) discontinuous at a = 0.
At 4:54 he doesn't use a = 0 yet; he does the suspicious "bring the constant a into the differential" before a = 0 is ever employed. That's why he got the right answer at the end despite what you're saying being true.
So, since the differential element was d(ax), not dx, and a = 0 had not been used yet, it was just the Dirichlet integral with ax in place of x.
Good work
Why can't I use I(a) = I(-a)? That would mean c1 = c2, right?
Then I(a) = (pi/2)(e^a + e^(-a)) → I(1) = (pi/2)(e + 1/e).
My thoughts exactly! Also, when calculating I'(0), he used I'(a)=-pi/2+ int (0 to inf) sin(ax)/(x(1+x^2)), so he got I'(0) = -pi/2, but I'(a) also equals int (0 to inf) -xsin(ax)/(1+x^2), if you apply this when calculating I'(0), shouldn't I'(0)=0 ???
8:30 You can't take a = 0, because previously you divided by a. Notice that if you take the expression at 2:48, I'(0) = 0.
We're not exactly taking a = 0.
We're actually taking the limits of I(a) and I'(a) as a approaches zero.
As far as the confusion about I'(a) is concerned, it can be proved with more mathematical rigor that the expression for I'(a) at the 2:48 mark isn't defined for a = 0, which is why I pulled out the Dirichlet integral to consider the limit of I'(a) as a approaches zero.
This issue was also raised in another comment, and it got me thinking about uploading an alternate solution that still uses the Leibniz rule. Unfortunately I forgot to upload it... I'll upload that solution tomorrow, as it won't create ambiguities that would force us into being extra rigorous.
Thank you so much for reminding me via this comment.
@@maths_505 But why isn't the expression at 2:38 valid for a = 0? Taking the limit or not, the result is 0, which is the result expected if computed directly.
Yes, indeed, that is quite disturbing.
Here's an article that explains the rigor behind our solution (I can't explain it properly in a UA-cam comment 😂). It's the last example in the text.
kconrad.math.uconn.edu/blurbs/analysis/diffunderint.pdf
@@maths_505 I do like the solution you give here. Just because there are other arguably easier ways to find the answer, doesn't mean we can't also appreciate a solution like this. I'd just like to figure out why it works (and perhaps in the process get a better understanding of when differentiating under the integral sign works).
And I just now discovered the paper you linked above which treats this problem, acknowledges there are several invalid steps, and promises to derive it rigorously. I'll need to spend some time studying that.
After watching this I wonder how many people could actually replicate the solution on their own. I think not many.
By the definition of I we should have I(-a) = I(a), which is not satisfied by I(a) = (pi/2)exp(-a).
Nicely done! But what will learning these integrals of such specific mathematical operators lead us to in the real world? I request you to correlate integrals with real-world applications. That would make this video even more explosive!
the area under the normal curve?
@@quingquinglol.
Just ask Wolfram Alpha
LuRnInG gAy, JuSt FoAr MuNnIeZ
Integration of functions of a complex variable is much more straightforward.
Why is it a partial derivative when it is after the integral sign but a normal derivative when it is before the integral sign? What is the difference in the order in which we put these operators?
Not verifying that we can interchange the integral and the derivative is wrong.
Can anyone suggest a book to start with Feynman's integrals?
Please, which app are you using to solve this problem?
Can you please explain how I(a) = C1•e^(-a) + C2•e^a ?
It's a general solution for this type of ODE ( f"(x) = f(x) )
At 2:35 I’(0)=0 because the integrand is 0. But at 5:56, I’(0) is -pi/2. Please clarify.
Excellent question
We can justify the solution better by considering the behavior of the general solution I(a) = c1·e^(-a) + c2·e^a as a approaches positive infinity. The only way to get a bounded solution for all positive values of a is for c2 to be zero, which agrees with the general result of the integration with the parameter different from 1 (the general result can also be proved using the Laplace transform).
We can actually prove the result more rigorously while still using the Feynman technique, by taking into account the fact that I(a) is not differentiable at a = 0.
So the value I obtained at the 5:56 mark is actually a limit as a approaches zero from the right. That's why I pulled out the Dirichlet integral to make things clearer.
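For anyone who wants to see that the bounded branch really matches the integral, here is a rough numerical check (a minimal pure-Python sketch, not from the video; the improper integral is truncated at T, which is fine since the integrand decays like 1/x²): the claim is ∫₀^∞ cos(ax)/(1+x²) dx = (π/2)e^(-a) for a > 0.

```python
import math

def I_numeric(a, T=400.0, n=400_000):
    # Midpoint rule for the truncated integral of cos(ax)/(1+x^2) over [0, T];
    # the omitted tail is tiny because the integrand decays like 1/x^2
    h = T / n
    return sum(math.cos(a * (i + 0.5) * h) / (1.0 + ((i + 0.5) * h) ** 2)
               for i in range(n)) * h

for a in (0.5, 1.0, 2.0):
    assert abs(I_numeric(a) - math.pi / 2 * math.exp(-a)) < 1e-3
```

At a = 1 this is π/(2e) ≈ 0.578, and doubling it (the full line integral) gives the video's π/e.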
I think the first and last expressions for I’ have equal value for non-zero values of a, but their values are different for a=0. It’s not clear to me how that happened. It’s been almost 30 years since I studied calculus.
Intriguing! 👍
He forgot the dx at the end
this was a journey
If there is another high-intelligence species in this universe, do they know Feynman's technique too?
Your mind must be a very interesting place 😂
And now I’m curious too 😂😂
nice
I am in my first year of a bachelor's degree in mathematics and I love these integrals, but I don't like the way Feynman does these. Not for me; it looks like a physicist thing.
I've uploaded a video on this integral solved using the Laplace transform instead of Feynman's technique.
Check it out, I think you'll like that better.
@@maths_505 Yeah I saw it thanks. Seems a bit difficult for now. I'll learn it in my future lessons
@@amzion The switch-up between the differential and the integral was justified by Leibniz himself.
People afterwards found creative uses for it.
is there not a contradiction between your statement about I'(a)=-pi/2 at 8:55 and your earlier initial statement about I'(a) earlier at 2:30, which would surely collapse to 0 when a=0?
I agree that this is not the best way to tackle this integral using Feynman's trick. I solved it using the same technique but with an adjustment to take care of the irregularities in this video.
Use contour integration: the integral is 1/2 the real part of ∫ e^(iz)/(z^2+1) dz.
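For readers curious about that route, the residue computation this comment refers to goes as follows (closing the contour in the upper half-plane around the simple pole at z = i):

```latex
\int_{-\infty}^{\infty} \frac{e^{iz}}{z^2+1}\,dz
  = 2\pi i \,\operatorname*{Res}_{z=i} \frac{e^{iz}}{z^2+1}
  = 2\pi i \cdot \frac{e^{-1}}{2i}
  = \frac{\pi}{e},
\qquad\text{so}\qquad
\int_{0}^{\infty} \frac{\cos x}{1+x^2}\,dx
  = \frac{1}{2}\cdot\frac{\pi}{e}
  = \frac{\pi}{2e}.
```

This is the "couple of lines" several commenters mention.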
Cool! Now, which definite integral gives pi e?
Scroll through the list of the integrals here and you'll find one that has Rick Sanchez on the thumbnail. That's the result you want.
Maths 505 could you please prove that integral from 0 to inf cos(mx)/(1+x^2)dx = integral from 0 to inf xsin(mx)/(1+x^2)dx
7:08 That is indeed a solution to the differential equation, but how do you know that it's the correct one? Put another way: the desired function I satisfies I''(a) = I(a), but that does not mean that every function satisfying this condition is necessarily I. Here, you would also have to argue that NO function other than the one you give solves I''(a) = I(a). Can this be done? If so, how?
I believe it relies on the existence and uniqueness theorem, which I don't remember how to prove, but it is really important for solving differential equations.
Could someone explain how he solved the differential equation part? How do you solve f''(x) = f(x)?
Not sure how it's normally proved, but thinking about power series it makes sense. You will of course also have f''' = f', f'''' = f'', etc., so when you expand f(x) = f(0) + x f'(0) + (1/2)x^2 f''(0) + ..., all the coefficients are determined by the first two. Then
f(x) = f(0)[1 + (1/2)x^2 + (1/24)x^4 + ...] + f'(0)[x + (1/6)x^3 + (1/120)x^5 + ...]
and it should follow from there.
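The two bracketed series in the comment above are the Taylor series of cosh and sinh, so the general solution can be written in the exponential form used in the video:

```latex
f(x) = f(0)\cosh x + f'(0)\sinh x
     = \underbrace{\frac{f(0) - f'(0)}{2}}_{c_1}\, e^{-x}
     + \underbrace{\frac{f(0) + f'(0)}{2}}_{c_2}\, e^{x},
```

which matches the c1·e^(-a) + c2·e^a form whose constants are fixed by the boundary behavior.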
At 1:10 you show I(a) equal to an integral (from 0 to infinity) with "a" plugged in, but you dropped the factor of 2, which you showed immediately above. As this percolates down to the end, this would make the final solution 2*pi/e rather than pi/e. Right?
Bro, you weren't paying attention. The integral function gave the answer π/(2e). Twice that is π/e.
I = ∫ cos x dx/(1+x^2)
= ∫ cos x d(arctan x)
= arctan x · cos x - ∫ arctan x · (-sin x) dx
= arctan x · cos x + ∫ arctan x d(cos x)
= 2 arctan x · cos x - ∫ cos x d(arctan x).
Then 2 ∫ cos x dx/(1+x^2) = 2 arctan x · cos x.
So I = cos x · arctan x + C.
Incredibly hard question. Even if you know Feynman's method, you need to know how to solve ODEs to solve this question.
To be frank, it’s an extremely simple ODE and a standard result. You can recover the result with a little intuitive thought if you forget it. I guessed it before he wrote it.
@@hydropage2855 bro has mathematical sharingan
I will solve it by Residue Theorem
Except the switch-up of the integration and differentiation was not justified, and the mentioned "trickery" was actually done to avoid getting a nonconvergent integral, without mentioning why one does that.
If I(a) = I' '(a), do you need all that extra working to arrive at e^a as the solution? It's basically the definition of e^a, right?
isn't that I(a)=I'(a) that makes I(a)= e^a ?
@BenDRobinson yes, if
I(a) = I'(a), then
I'(a) = I' '(a), then
I(a) = I' '(a).
@@cottawalla But I don't think the converse holds; that is, I = I'' doesn't necessarily mean I = I'. Hence the more general expression initially deduced in the working.
@BenDRobinson I may be mis-recalling the problem now but I believe I(a) = I' '(a) was effectively given.
That is actually mad
Good day. You are multiplying both the numerator and the denominator by x, and at the same time calculating the integral from 0 to infinity, which means that 0 is included... can you do that?
Excellent question
The limits of integration are limits, so we can divide by x because x approaches zero only as a limit.
@@maths_505 Sorry, it's not only approaching; it is included. You say limits and I say domain of integration... Yes sir, you can; just giving you a headache. Kindest regards.
@@abderrahmanebelazouz1574 You can, because x/x exists for all values of x and is trivially equal to 1.
@@fartsniffa8043 ohhhhh!!!! and what if x=0?
@@abderrahmanebelazouz1574 x is not equal to zero; it can only be very close to zero in this case.
That's a fantastic computation and an amazing result. I wonder if this result could be used to prove something about the number π/e, for example, if it's transcendental or not.
The technique is elegant, no shadow of a doubt about it.
But would it be possible to identify cases where that technique would be the best approach or something like we do when we learn other integration techniques such as integration by parts, variable substitution, trigonometric integration?
WOW! pi / e !!!!!
1. This integral can be found with residues in one line.
2. If you do not know complex integration, you may know Fourier transform. The parametric integral is pi/2 exp(-|a|) by taking the Fourier transform of a symmetric decaying exponential.
Your solution ignores the case a < 0.
The Fourier transform is immediate. Proving the Fourier transform of e^(-|t|) is super easy, and then using the inverse Fourier transform at t = 1 gives exactly the integral in question, constants notwithstanding. The Fourier transform is such an amazing tool; it's a shame none of the math channels use it very much.
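As an illustration (a minimal pure-Python sketch, not from the comment), the identity being invoked, ∫ e^(-|t|) e^(-iωt) dt = 2/(1+ω²), can be checked numerically; by symmetry the imaginary part vanishes and only the cosine part survives:

```python
import math

def ft_exp_abs(omega, T=50.0, n=500_000):
    # By symmetry, the transform reduces to 2 * ∫_0^∞ e^{-t} cos(ωt) dt;
    # midpoint rule, truncated at T (the omitted tail is below e^{-T})
    h = T / n
    return 2.0 * sum(math.exp(-(i + 0.5) * h) * math.cos(omega * (i + 0.5) * h)
                     for i in range(n)) * h

for w in (0.0, 1.0, 2.5):
    assert abs(ft_exp_abs(w) - 2.0 / (1.0 + w * w)) < 1e-6
```

Inverting this pair at ω = 1 is what recovers the π/e result, up to the constant conventions of the chosen Fourier transform.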
Great trickery, but you really need to work on the readability of your π.
If you had something like "cosaxsinax", then would you treat it as "cos(ax sin (a)x)", "cos(a) x sin(ax)", "cos(a) sin(a) x^2" etc. or something else?
Even if here one can 'guess' that you secretly mean that "sin ax = sin(ax)", it's often problematic, if people just skip brackets like this and we are working with many different variables or expressions that are multiplied together...
I never understood why some physicists purposefully keep skipping brackets to create ambiguous expressions and sometimes even technically invalidate correctness of what they do, as if they loved to see the world BURN! :D :D
Edit: 5:15 why not just "dax" instead of d(ax)? Let's be consistent in skipping the brackets 🙂
This is possibly the most pedantic content I've ever seen on a math video and a super useless one to boot.
Very cumbersome to say the least.
How is x^2+1 an even function? If x=2, the result is 5.
Bro that's not what we mean by even functions...
An even function means that f(-x)=f(x)
@@dacomputernerd4096 yup
Lovely. All the BS about mathematics not having any of the ambiguities of natural languages.
An even function is just a function that gives the same value for x as well as -x.
@@florisv559 if even meant what they thought it meant, only a small portion of functions would be even, and a general word like even wouldn’t be used for them