Feynman DESTROYS Dirichlet Integral with FACTS and LOGIC
😢
*Leibniz
Yeah but it relies on the fact that differentiating an integral (with respect to a parameter) is the same as integrating the partial derivative :) which is a god level tool (and it's hard to prove)
@@Juan_Carl0s plug in the upper limit multiplied by the derivative of the upper limit, minus plug in the lower limit multiplied by the derivative of the lower limit
@@Juan_Carl0s Thank you. This "Feynman technique" thing is obnoxious. Feynman was great enough on his own merits, he doesn't need any help.
Trefor: this trick requires that the order of differentiation and integration can be interchanged, because your F'(s) is actually d/ds [ int_0^infty e^(-sx) sin(x)/x dx ]. You nonchalantly swapped the order of these two operations. This is thoroughly valid provided the integrand satisfies certain integrability conditions etc. Perhaps it would be worth noting this (and going over these conditions), so if people are looking to use Feynman's trick, they will be on the lookout for potential tripping points.
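For anyone who wants a quick numerical sanity check of that swap for this particular integral, here is a rough sketch (not a proof): it compares a finite-difference derivative of F(s) with the integral of the s-partial derivative and with the closed form -1/(1+s^2) from the video. The function calls are standard numpy/scipy; the value of s and the step size are just illustrative choices.

```python
import numpy as np
from scipy.integrate import quad

def F(s):
    # F(s) = int_0^inf e^(-s x) sin(x)/x dx ; np.sinc(x/pi) = sin(x)/x and handles x = 0
    val, _ = quad(lambda x: np.exp(-s * x) * np.sinc(x / np.pi), 0, np.inf, limit=200)
    return val

def dF_under_the_integral(s):
    # int_0^inf d/ds [ e^(-s x) sin(x)/x ] dx = -int_0^inf e^(-s x) sin(x) dx
    val, _ = quad(lambda x: -np.exp(-s * x) * np.sin(x), 0, np.inf, limit=200)
    return val

s, h = 0.7, 1e-5
finite_difference = (F(s + h) - F(s - h)) / (2 * h)   # derivative taken outside the integral
print(finite_difference, dF_under_the_integral(s), -1 / (1 + s**2))  # all three agree closely
```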
I just (today) learnt this integral in Fourier transforms, and here you come up with a video to make it permanent in my memory.
Beautiful! Funny that the Laplace transform shows up. I only knew how to do this integral by changing sin(x)/x into sin(xt)/x and then taking the Laplace transform of the entire thing, but your solution seems much easier :)
I suppose this method is mostly equivalent. I'd actually suggest yours is more true to the spirit of Feynman's trick and a bit more generalizable, but perhaps the way I showed is slightly more efficient for 1st year calc students
@@DrTrefor But why and how did Feynman or whoever else come up with this? I could never admit I couldn't, or that I'm not a math whiz who would come up with this or something similar--why not just have e^x or e^(sx), where s is just a constant, and make it a function of x still--was this tried? Thanks for sharing and hope to hear from you.
@@leif1075 your integral still needs to converge, so plugging in e^x won't do you any good
I directly tried to solve the integral of cos(x)/x with the same method and found out that this one diverges. Great video!
The reason is the behavior as x tends to 0, where the function behaves approximately like 1/x. Perhaps you should consider the integral from 1 to infinity in order to get a more interesting result.
You have no idea how many videos I've watched about Feynman's technique. I finally understand it. Thank you so much!
Since you briefly mentioned the Laplace transform, I feel like it'd be a waste not to mention the super important Fourier transform in this context, because the Fourier transform lets you solve the Dirichlet integral almost immediately. It turns out, the Fourier transform of a window function from -1 to 1 is sin(w)/w, so using the inverse Fourier transform, you get the value of the Dirichlet integral.
I learnt both the Fourier transform and inverse transform and immediately knew sin(x)/x. I wasn't aware of the link between them lol.
I'd add that if you view sin(x)/x as a spectrum, its energy across actual time is indeed related to the integral of sin(x)/x from 0 to infinity.
@@donaldmcronald2331
Your second sentence restates Parseval's equation. It physically means that no matter in which domain (frequency or time) you integrate, the energy is the same.
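To make the Fourier-transform route above concrete, here is a small sympy sketch. It assumes the convention FT(f)(w) = int f(t) e^(-iwt) dt with the 1/(2*pi) on the inverse; under this convention the window on [-1, 1] transforms to 2*sin(w)/w, i.e. proportional to the sin(w)/w quoted above.

```python
import sympy as sp

t, w = sp.symbols('t w', real=True)

# Fourier transform of the window that equals 1 on [-1, 1] and 0 elsewhere:
window_ft = sp.simplify(sp.integrate(sp.exp(-sp.I * w * t), (t, -1, 1)))
print(window_ft)  # 2*sin(w)/w (sympy may wrap it in a Piecewise for w = 0)

# Inverting at t = 0 says (1/(2*pi)) * int_{-inf}^{inf} 2*sin(w)/w dw = 1,
# i.e. int_0^inf sin(w)/w dw = pi/2. Sympy confirms the Dirichlet integral directly:
print(sp.integrate(sp.sin(w) / w, (w, 0, sp.oo)))  # pi/2
```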
I always love to watch different people's take on the Dirichlet integral. It's second only to the Gaussian integral for me. :)
The interesting thing about the Dirichlet integral is that it's not Lebesgue-integrable. Put some of that stuff in your pipe and smoke it!
Dr Bazett's take is basically a Laplace transform, and I think it's cute!
That's a neat trick!
i came across this integral yesterday, interestingly enough - although now I know how to do it faster!
Isn't it cool!?
@@DrTrefor I don't see why you take the limit at 5:52 instead of say plugging in the value of zero for s to get C? Isn't that more logical and intuitive?
@@DrTrefor and I don't see why anyone would take the limit as S goes to infinity--isn't there some other way to get C--if you set s to zero you get F(0)=C
@@leif1075 Unfortunately that doesn't work here as it's the value at s=0 we're trying to find.
The beauty of Laplace transforms! Amazing video❤
I am really happy to see the Differentiation Under the Integral Sign rule used here to calculate the integral of sin(x)/x.
Actually, today in class I taught this rule, and right after that I saw your video.
I was amazed at how you start by combining the exponential term into the integral. In real life there are so many situations where you can use this, because there is always a parameter with your function.
You remind me of my tuition teacher who is also a big mathemagician and you both are my ideals 🙌
So kind!
I always love your videos Dr. B. Thank you for sharing. I’m now going to utilize the Feynman trick in my calc 3 class
The terminology related to this method is itself somewhat interesting - when I first came across it (about 40 years ago), I don't think it was given any specific name (except differentiating under the integral sign), then the name "Leibniz rule" seemed to become more popular, and in the last 5 years or so, the "Feynman method" began to reign supreme - a tribute to the continuing popularity of Mr Feynman, I guess.
And it's worth pointing out that there are conditions required for the method to work: continuity of f(x,s) in both x and s and (partial) df/ds over the region of integration, IIRC. (corrections gratefully accepted if I misremember).
Also, there's a generalisation of the method that takes account of variable limits depending on s.
I guess Feynman's contribution to the Leibniz method was "to hell with boring mathematical justifications".
there are 4 conditions
1) continuity (or more generally, the function has to be measurable)
2) continuous partial derivatives
3) very important: you have to find a majorant of the derivative of the integrated function with respect to the parameter, which must have a finite value when integrated
4) you have to find at least one parameter value for which you can calculate the integral
Overall, the most important part about this ''trick'' is figuring out whether you can even use it!
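As a concrete illustration of condition 3 for the integral in the video, here is a minimal sketch, assuming we restrict to s >= s_0 for some fixed s_0 > 0 (which is exactly why s = 0 needs the separate continuity argument discussed elsewhere in this thread):

```latex
\left|\frac{\partial}{\partial s}\!\left(e^{-sx}\,\frac{\sin x}{x}\right)\right|
  = e^{-sx}\,\lvert\sin x\rvert
  \;\le\; e^{-s_0 x}
  \quad\text{for all } s \ge s_0 > 0,
\qquad
\int_0^\infty e^{-s_0 x}\,dx = \frac{1}{s_0} < \infty .
```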
We live at a weird time. When doctors make videos with clickbaity titles. *Roll eyes*.
Fun fact: the integral of sin(x)^2/x^2 from 0 to infinity is also pi/2
But go to the third power and then it breaks down. That integral will be 3*pi/8
wow
@@Ninja20704 Is there a generalized closed formula for the integral of the n-th power of sin(x)/x? Like with the Dirichlet series, or the Zeta function for the even positive integers
@@OmegaQuark yes, but it's a bit complicated
@@OmegaQuark for the zeta function at even positive integers, it's been proven that zeta(2n) will always be some rational multiple of pi^(2n). Figuring out that rational multiple is pretty complicated but doable.
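A rough numerical check of the values quoted in this thread, using standard numpy/scipy (the cutoff of 500 for the higher powers is an arbitrary choice, so expect agreement only to a few decimal places):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

# n = 1 is only conditionally convergent, so use the sine integral
# Si(x) = int_0^x sin(t)/t dt, which tends to pi/2:
print("n=1:", sici(1e6)[0], "vs", np.pi / 2)

# n = 2 and n = 3 converge absolutely, so a large finite cutoff is enough:
for n, expected in [(2, np.pi / 2), (3, 3 * np.pi / 8)]:
    val, _ = quad(lambda x: np.sinc(x / np.pi) ** n, 0, 500, limit=2000)
    print(f"n={n}:", val, "vs", expected)
```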
Hi Dr. Bazett!
Feynman's technique is one of my favorite in all of integral math.
So long as you are aware it owes nothing to Feynman.
@@frenchimp No. I'm not aware of that. Why don't you explain it to me?
@@General12th Feynman is on video on youtube saying how he read it in a book, and anyway it's all just based on work done by Leibniz 100s of years ago. Feynman did re-popularise it though.
You can't plug in s = 0 as the Feynman trick can only be applied with s > 0, due to the absolute convergence requirement, though the limit as s goes to 0 indeed equals pi/2
Yeah you need to be careful when limiting to zero here. Probably there’s a nice dominated convergence argument to say that F is continuous at 0 from the right.
@@miloweising9781 it is actually continuous at s=0, and here is a complete proof of this fact: (let f(s,x) be the function inside the integral)
-for all x>0, the function s --> f(s,x) is continuous.
-for all s in R: x --> f(s,x) is continuous.
- there exists a continuous, integrable, positive function g : R+ --> R+ such that for all x>0 and for all s in R we have:
|f(s,x)| <= g(x)
@@xaxuser5033 That's not correct, as e^(-sx) -> 1 when s -> 0, which is not integrable on R+
@@giovanni1946 i didn't understand what u want to say , where did i write e^(-sx) ?
@@xaxuser5033 This theorem cannot be applied here, you can't choose g(x) to be e^(-x) as e^(-sx) gets bigger when s -> 0
3:10 This can also be solved using complex numbers by rewriting this as Im(e^(-sx-ix)).
Thanks!
Thank you so much!!
You can also view the integral as the imaginary part of the integral of e^(izx)/x, evaluated at z=1. Now differentiating under the integral sign yields I'(z)=[e^(izx)/z]_0^\infty = -1/z for complex z with positive imaginary part. Hence I(z)=c-log(z). Remember that we are only interested in the imaginary part of I(z), therefore we only need the imaginary part of c. Now let the imaginary part of z approach infinity, while the real part remains constant. Then the imaginary part of log(z) will approach pi/2, while the integral of e^(izx)/x approaches zero. Hence Im(c)=pi/2 and the limit of Im(c-log(z)) as z approaches 1 is pi/2.
Great video! I just wanted to point out that even though the result is correct, the reasoning here wasn't completely rigorous, or was at least incomplete: this reasoning, which uses the dominated convergence theorem and its equivalents to differentiate under the integral, can't directly prove the formula for all s greater than or equal to zero; the formula is only justified for s strictly bigger than zero, meaning you can't directly plug in 0 at the end (this is because you can only dominate the function you want to differentiate under the integral for all s>0). But the formula still holds for all s>0. So the correct way to finish is to show that the limit as s goes to zero of F(s) is indeed the integral of sin(x)/x from 0 to infinity, and then use the fact that the right side of the equation is continuous at zero to give the final result. However, showing that you can interchange the limit and the integral as s goes to zero is not that trivial, since you can't dominate the function properly. To do that, you first need to integrate by parts, and only after that can you dominate the function properly, do a valid interchange of limit and integral, and get the final result.
For the continuity at 0, you can also show that if you take the sequence of functions F_n(s) = integral from 0 to n of (...), the sequence converges uniformly on [0, infty), and each F_n is continuous.
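For anyone who wants to see the continuity claim being discussed here numerically, a small illustration (not a proof): it checks that F(s) matches pi/2 - arctan(s) for s > 0 and creeps up to pi/2 as s -> 0+. mpmath's quadosc is used because the integrand oscillates with slow decay when s is small.

```python
import mpmath as mp

def F(s):
    # F(s) = int_0^inf e^(-s x) sin(x)/x dx ; mp.sinc(x) = sin(x)/x
    return mp.quadosc(lambda x: mp.exp(-s * x) * mp.sinc(x),
                      [0, mp.inf], period=2 * mp.pi)

# F(s) should agree with pi/2 - arctan(s) and tend to pi/2 as s -> 0+:
for s in [mp.mpf(v) for v in ("2", "1", "0.5", "0.1", "0.01")]:
    print(s, F(s), mp.pi / 2 - mp.atan(s))
```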
Some bits of maths are like P vs NP: really hard to first find the method (NP), but manageable to verify that the solution works (P). Actually it isn't "some bits", it is a lot of the bits, and it is cumulative - Newton's "If I have seen further than other men it is by standing on the shoulders of giants".
Another way:
I = Im[int(0,inf) e^(iz)/z dz]
J = int(0,inf) e^(iz)/z dz
Draw a quarter-circle contour of radius R in the top right quadrant of the complex plane, which goes around the simple pole at z = 0.
The bottom contour (along the real axis) -> J as R -> inf.
By the indentation lemma, the contribution around z = 0 pole is -i*pi/2 as epsilon goes to 0, where epsilon is the distance away from the pole of the contour going around 0.
The contour from (infinity)i to 0 gives 0 contribution, as the integrand tends to 0.
The circular contour joining R and iR tends to 0 as R tends to infinity by Jordan’s Lemma.
The full contour contribution is 0 since it encloses no singularities. Thus, we have:
J - i*pi/2 = 0
==> J = i*pi/2
==> I = int(0,infinity) sin(x)/x dx = pi/2
N.B sin(x)/x has a removable singularity at x = 0, and hence the integral converges. The same cannot be said for cos(x)/x; that integral diverges.
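A quick scipy sanity check of the N.B. above (this is just a numerical illustration of the two convergence claims, not part of the contour argument): Si(x) = int_0^x sin(t)/t dt tends to pi/2, while Ci(x) behaves like log(x) near 0, reflecting that the integral of cos(t)/t near the origin diverges.

```python
import numpy as np
from scipy.special import sici  # returns the pair (Si(x), Ci(x))

# Si(x) approaches pi/2, so int_0^inf sin(x)/x dx converges:
for x in [10.0, 1e3, 1e6]:
    print(x, sici(x)[0], "->", np.pi / 2)

# Ci(x) -> -infinity (like log x) as x -> 0+, reflecting that
# int_eps^1 cos(t)/t dt blows up as eps -> 0:
for eps in [1e-2, 1e-6, 1e-12]:
    print(eps, sici(eps)[1])
```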
Chapter 17 of Vladimir A. Zorich’s Mathematical Analysis II goes into detail justifying both differentiating under the integral sign for s>0 and that F(s) is continuous at s=0.
7:14 you can only use s>0, because only then does the boundary part at 3:18 disappear.
@ 3:31 Personally I would have used the formula for sine in terms of complex exponentials
Thank you so much Dr. Trefor
You're most welcome!
Did I miss something? At 1:29 if you set s = 0, you do not get back what you started with, since the entire exponent is zero if s = 0, and e^0 = 1.
This is just spiced-up Laplace transforms 😂. Nice video though, always happy to learn a bit more math.
I don't want any more people to know about it,
Sometimes good things are not good to be shared ❤
Great explanation. My follow up question would be, why does this work and when should one use this trick? I only knew the Double integral solution for this Problem.
With a lot of these integration tricks, ultimately it works when it works. With some practice you can gain some intuition for when you can parameterize an integral and do this differentiate-with-respect-to-the-parameter trick, but there isn't some general rule for when it always works.
Wow that was so satisfying.
It's more common to think that when applying Feynman's trick you would put the parameter s inside the sin(x) function, i.e. sin(sx). The only reason to use this decreasing exponential function instead is so that the integral converges on the interval 0 to infinity, given that the original integral converges as well. If you use the first method then you will obtain cos(sx) after differentiating with respect to s, which does not converge. You should explain it this way to the students on YouTube.
True, when I did this method I ended up getting sin of infinity, which I don't even know how to handle.
Thanks, Mr. Obvious
@ that’s not obvious one bit.
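To spell out the comparison a couple of comments up in symbols (G here is just a label for the regulator-free parameterization; F is the one from the video):

```latex
G(s)=\int_0^\infty \frac{\sin(sx)}{x}\,dx
\;\Longrightarrow\;
G'(s)\;\stackrel{?}{=}\;\int_0^\infty \cos(sx)\,dx \quad\text{(divergent)},
\qquad\text{whereas}\qquad
F(s)=\int_0^\infty e^{-sx}\,\frac{\sin x}{x}\,dx
\;\Longrightarrow\;
F'(s)=-\int_0^\infty e^{-sx}\sin x\,dx=-\frac{1}{1+s^{2}},\quad s>0 .
```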
Dr. You are amazing❤
Thank you!
Wow! Loved it!
Contour integration works great for this problem too. Using a semicircle contour and taking the imaginary part of the integral of e^iz/z is maybe even an easier method.
Interestingly yes😀
Dirichlet Integral, good explanation
Pi divided by 2 amounts to 1/3 of a Cubical measure.
All measures are 3D so must be no more or less than Cubical or "Powered to 3"; before and/or after that they are simply sizes that are fractions, positive or negative to the control body size.
Brilliant !! Thanks for making this video :D !!!
Fun, so the area under sin(x)/x over R is pi.
This function is just the typical sinusoidal sin(x), but as x increases it is divided by the factor x, so over each period 2pi it's just the sine wave getting "linearly" smaller.
On the other hand, this scaling over each x makes it such that the area over the whole domain is pi, which is the area of a circle of r=1.
Which clearly has some meaning: the sine is constantly alternating its sign, and x is either x>0 or x<0.
Excellent Math problem
Is there a video for when it is allowed to change the order of an integral sign followed by a summation notation? Thank you.
I really should make this video
Always a good day when Dr trefor bazette shows up in my UA-cam recommended
Hello Dr. Trefor Bazett,
I wanted to take a moment to thank you for all the beautiful content you're creating. They're awesome.
By the way, I'm supposed to do one of my class projects using Maple, but I haven't gotten used to it. Do you suggest any particular tutorial teaching how to use Maple?
Thank you! Sorry I don’t have any particularly great resources at my fingertips, mostly because it can do so much it really depends what you need to use it for!
@@DrTrefor I am supposed to solve differential equations for a dynamical systems course.
Anyways, I think if I google every step, it will be somehow manageable.
What a fantastic method
Glad you think so!
Although this trick is named after Feynman, I believe he found it in an advanced calculus book his high school physics teacher gave him. It was apparently developed, at least partially, by Leibniz:
“I had learned to do integrals by various methods shown in a book that my high school physics teacher Mr. Bader had given me. [It] showed how to differentiate parameters under the integral sign - it’s a certain operation. It turns out that’s not taught very much in the universities; they don’t emphasize it. But I caught on how to use that method, and I used that one damn tool again and again. [If] guys at MIT or Princeton had trouble doing a certain integral, [then] I come along and try differentiating under the integral sign, and often it worked. So I got a great reputation for doing integrals, only because my box of tools was different from everybody else’s, and they had tried all their tools on it before giving the problem to me.” (Surely you’re Joking, Mr. Feynman!)
www.cantorsparadise.com/richard-feynmans-integral-trick-e7afae85e25c
Cool!
Must reread my Feynman books - nice shout out by Feynman to his teacher.
damn this is basically a laplace transform trick :O
What a coincidence that I'm learning the Laplace transform right now
Laplace transform is SO useful!
Hi Dr Bazett. Wonderful video.
But I have a question here. Why is this method known as Feynman's technique? In fact the idea of Differentiating Under the Integral Sign (DUIS) was due to Leibniz, and also the idea of introducing a parameter 's' in the form e^(-sx) is the Laplace transform, or can even be thought of as the Gamma function?
Why do you then still call this the Feynman trick? Just wanted to know.
Because, at least in the physics community in the USA, it has been popularized by Feynman
Sometimes good things are better not shared.
it must be weird to look and point at a green screen when saying "Like this one" at 0:04
Hello Professor,
Thanks for all your brilliant videos.
It was a really nice technique (a bit similar to the Laplace technique),
although I couldn't find a proof or counterexample for this technique on the internet.
I am very curious to know whether we could use this technique with any function other than exponential terms, or is it because of the uniform continuity of the Laplace transform that we can use this trick?
The problem can also be thought of as a differential equation / a dynamical system bifurcating on the parameter s.
I really would like to know more about it.
Thank you.
Br,
When I listen to a Feynman lecture I hear a smart version of Ed Norton.
3:22 If s is 0 and not positive, then why do you consider it a negative exponential?
Nice, I think I would have inserted sin(x) = (e^(ix) - e^(-ix))/(2i) instead of integrating by parts, but I'm not sure if that makes things easier or not..
Yes similarly! Using the Feynman trick here reflects the relation to complex analysis: Choosing the function f(z) = exp(iz)/z and integrate over the (positive) indented semicircle and some appropriate contour avoiding 0.
Complex analysis ON TOP
It's just very elegant
Please make a mathematics books recommendation video
Well, Feynman was the first "hacker" in history... I'm not surprised about his ability to tackle Dirichlet integrals..
Amazing sir 😮😮😮😮😮
But (3:25) if s=0, the exponential doesn't kill the cos term. This is not strictly valid for the value of s we care about.
What doesnt Feynman destroy? Guy was a beast.
Pretty cool trick. I'm okay with all the steps except perhaps the interchange of limit and the integral as s->infinity. Is there some theorem in real analysis that justifies this?
Very good video, genius!
Isn't Feynman's trick just using Leibniz's rule on the Laplace transform? Also, the DI method works well for these IBPs.
Good prof, thanks!
I love your vids. Could you do some on statistics and probability?
That's insane
If we expand sin(x) in a Taylor series and divide by x, we obtain arctan(x) in just one step and get the answer much faster
What if, instead of doing the integration by parts, you apply the Feynman technique again and then (probably) get a differential equation and solve it?
Would it work?
Is 2:06 even legal? I suppose it must work in most common cases but can you prove it generally? I guess you can't and it is indeed strange. Anyway I really enjoyed that 🙂 It's a bit like with platonic solids, you differentiate its Volume formula and get its Surface Area formula ... and most people are convinced ...except that it only works for Platonic Solids not the general mish mash you'll meet in Real Physics.
Can't we use the exponential definition of sine, (exp(ix)-exp(-ix))/(2i), to solve it?
This can replace part of the computation (the double integration by parts) but you still need to use some trick at the beginning before you can use this.
Thanks! I'll try.
And you use the same Laplace Transform / Feynman's Technique at start right?
If yes, I recommend you use these rules instead:
sin(x) = Im(e^(ix))
k * Im(z) = Im(kz) (for real k)
∫ Im(z) dx = Im(∫ z dx)
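For what it's worth, here is what those three rules give on the inner integral from the video (this is the step that otherwise needs the double integration by parts mentioned above):

```latex
F'(s) = -\int_0^\infty e^{-sx}\sin x\,dx
      = -\operatorname{Im}\int_0^\infty e^{-(s-i)x}\,dx
      = -\operatorname{Im}\frac{1}{s-i}
      = -\operatorname{Im}\frac{s+i}{s^{2}+1}
      = -\frac{1}{s^{2}+1}, \qquad s>0 .
```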
Easier to note that this is the imaginary part of e^(iz)/z over the interval [-infty,infty] in the complex plane, and work over a semi-circle contour in the upper half plane.
Please which app are you using to plot those functions?
Would that mean integrating sin(x)/x without limits gives us pi? I'm very interested in using Feynman's trick because I'm currently in the process of learning Fourier analysis.
To justify the answer, we need to show that F(s) is continuous at s=0. Our integrand is not dominated in absolute value by an integrable function for s near 0, so this cannot be justified using the usual argument. I was able to do it, but it took me quite a while. Even so, I'm still not intuitively sure why it works.
Mr. Feynman thought that math began with him. 🤣
Doesn't Feynman's trick need some sort of uniform convergence?
I believe using the residue theorem is the fastest way to calculate that integral
Looks like the total area of this function equals pi, in other words, the area of a unit circle. Contrast with the Gaussian bell-shaped curve, where the square of the total area is that of a unit circle. Very interesting; no wonder people argue that god doesn't play dice.
thank you
I thought the Laplace transform required a complex 's'. Does that affect this method?
It can be complex, but we ultimately are only using it to get this differential equation which we only evaluate at infinity and zero so it doesn’t matter if other values could be complex
Remember real number set is a subset of complex number set.
If an equation is valid in the whole complex plane, the same equation is valid in the whole real number line, all we need is to limit the imaginary part to 0.
What is the limit of F(s) as s goes to - infinity? Doesn't the relation we found fail since the integral is not bounded, while arctan is? How do I define the domain of validity of the identity?
It may be that the step at 1:47 only works for positive s. What we are doing is interchanging 2 limits (pulling the derivative inside the integral), which we can't *freely* do
I don't know why this is called Feynman's trick - differentiation under the integral sign was known before Feynman. It's just the application of the Leibniz theorem for integrals depending on a parameter!
Also, the types of integrals Feynman was considering were nothing like these, but nevertheless the attribution seems to stick
@@DrTrefor So why perpetuate the ridiculous notion that this technique is due to Feynman in these videos?
Can we apply cauchy integral test here??
arctan(∞) is not only equal to π/2; it can also be 5π/2. How do you prove that π/2 is the unique answer?
Dr., for which class of functions can one use the Feynman trick?
It was satisfying to watch this video 👌
Can we use (sin(x)/x) / (pi/2) in probability?
I'm sure this is beneath your notice, but I have to ask and hope for the best. I need to know how to find a point in 3-space that's equidistant from 3 other points. I've been looking online and for the life of me I can't find how to do it. It's been a very long time since I took any math classes.
All the physicists think that this method is the Feynman trick, but it existed way, way before. It is a little abusive to attribute the credit to Feynman when this is actually extremely classical.
Every physicist knows that the integration technique existed before Feynman, it’s just kind of that Feynman was famous for using it.
Can we use the Laplace transform?
Brother didn't mention why we can switch the order of differentiation and integration; the Leibniz rule applies because F(s) is bounded and continuous.
Oh thanks, I actually had it in my notes to point this out verbally but somehow forgot to say it out loud during recording time:)
@@DrTrefor can you elaborate it to me please sir
Will you give a proof of the existence of a local maximum between two consecutive local minima? 😢
You need your function to be continuous, of course. Then the Weierstrass Extreme value theorem does it for you. See in Wikipedia.
Sir, I have a question for you. I am an engineering student from Bangladesh. I derived an alternative procedure or technique to solve a math problem which is being solved in complex ways nowadays. Now I want to publish it. Can I use LaTeX to write my paper? Also, how and where can I publish it?
If you give me some advice it will be very helpful for me.
Sorry!
Look for a scientific journal in your area, and once you have tested the waters you can move on to an English-language publication.
Is this the only way to solve the original integral?
❤❤❤❤❤❤❤❤❤thanks much
This is Leibniz's integral rule. More context here: en.wikipedia.org/wiki/Leibniz_integral_rule
cannot believe you forgot the + c
Where can I get that t-shirt?
Merch link in description!
It is just the Laplace transform of sin(x)/x.
You should at least mention that certain conditions need to be met in order to be able to use this method
Well, it is beautiful, but the real question is whether we can show that the function is continuous and differentiable everywhere, which is what makes the trick less evident
I don't think the conditions needed to say that F is differentiable and to take the partial derivative inside the integral are there.