The sequel is here:
ua-cam.com/video/2dwQUUDt5Is/v-deo.html
Your integration is limited by one variable, x.
In control theory we have a more generalized way to do any integration in one step.
The trick is to integrate a vector.
For example, ∫∫ acceleration dx² can be seen as
x1 = x (position), x2 = x1-dot (velocity),
so ∫ [x1-dot, x2-dot]
is one integral, but internally you are doing a second one over x.
If you want an n-fold integral, you would need to integrate an n-dimensional vector.
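A minimal numerical sketch of that state-vector idea (the constant acceleration, time span, step size, and function name are just illustrative choices, not from any particular control library):

```python
import numpy as np

def integrate_state(accel, t_end=1.0, dt=1e-4):
    """Integrate the state vector [position, velocity] in one pass (Euler).

    One loop advances both components at once: velocity accumulates the
    acceleration and position accumulates the velocity, so the double
    integral of accel falls out of a single sweep over the state vector.
    """
    x = np.zeros(2)                        # x[0] = position, x[1] = velocity
    t = 0.0
    for _ in range(int(t_end / dt)):
        xdot = np.array([x[1], accel(t)])  # [x1-dot, x2-dot] from the comment
        x += xdot * dt
        t += dt
    return x

pos, vel = integrate_state(lambda t: 2.0)  # constant acceleration a = 2
print(pos, vel)                            # ≈ 1.0 and 2.0, i.e. a·t²/2 and a·t at t = 1
```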
Yes
Incredible! Calculus classes often teach a limited view of integrals, only thinking of them as antiderivatives. But now I realize the implications of just how specifically the fundamental theorem of calculus is stated: only integrals that fit that pattern are antiderivatives, and there are way more integrals out there than just those!
Yes. An integral is a pretty general tool, representing much more than areas under a graph. And that's cool!
@@dexter2392 wait so what the hell is an integral
@@Xnoob545 idk if it's right to say much more than areas under the curve, but integrals are used for more things than just that. Integrals can act as sums, integrals can be used to represent functions, there are integral transforms, and there are line integrals and flux integrals, neither of which represents an area under a curve. But an integral, more generally, is just a sum, a continuous sum.
@@randomguy2169 yeah, area under curves is just a visual representation of integrals, albeit a very intuitive one for beginners at that
@@randomguy2169 To get technical, the integral is a linear-transformation of a continuous function. Even more is true: if you allow integrals with respect to measures, you get all possible linear-transformations of continuous functions.
The animation quality is incredible. For example the transition from a 2D to 3D view at 6:34. So smooth!
Love the snake integral beasts vs derivative peons. Classic visuals, not blatantly copying 3Blue1Brown, and staying original.
And at
7:40 - 7:57
8:25
9:45
Pretty nice visuals
There is nothing incredible about any of this.
This is just made in 3D from the start, she's just changing the camera orientation.
@@xl000 he made this animation library by himself.
So there's the impressive bit
@@GetPsyched6 what programming languages do you know ?
This was absolutely incredible. The intriguing and seemingly nonsensical question at the beginning (especially from the perspective of someone who only knows basic calculus), the pacing and 3d animation to visualize the intuition behind taking a double integral, and the teaser for the idea of in-between integrals/derivatives. This was mind-blowing to watch and appreciate, both from the perspective of a learner and an aspiring teacher.
1:40 Numerically, derivatives are notoriously difficult because computing them involves subtractive cancellation. Integration is much more well-behaved from a numerical standpoint precisely because it only requires summation.
Very funny.
@@zanekaminski this is actually true, as you can't truly animate the 8th derivative of y = (1/x)(sin(x+p)) where p is a value/slider. I tried it on my machine (16 GB RAM, R3050, RY7) and it only animates 2-3 frames per second; I tried this in Desmos. My machine can otherwise run plots with up to 10,000 elements quite easily (at about 40 FPS) and 1,000 elements effortlessly (at 144 FPS!!).
@@creepervideo1106 Desmos is not a very well optimized application for that purpose, and cannot take full advantage of your hardware. You absolutely can animate that derivative with software more capable using the hardware available to it and more able to analyze the problem and find shortcuts. For example, Desmos actually calculates every single intermediate derivative numerically for every x value it draws every time the animation parameter changes, but software designed for that purpose would likely be able to symbolically solve for the 8th derivative of y = (1/x)(sin(x+p)), giving it a much faster function to recompute when p changes.
@@Howtheheckarehandleswit Thanks for the tip!
@Drew Duncan is still basically correct: it's a problem of relative precision when the curve values are much larger than the difference between them, and accuracy is lost due to computational rounding - though I would think the solution to this is simply to translate the entire curve to put the point being calculated at the origin so that maximum accuracy can be maintained...
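A minimal sketch of the cancellation issue being discussed (the test function, evaluation point, and step sizes are my own illustrative choices): shrinking the step h eventually makes the forward-difference error grow again because f(x+h) and f(x) agree in most of their digits, while a summation-based quadrature of the same function keeps improving.

```python
import numpy as np

f, df_exact = np.sin, np.cos(1.0)        # differentiate sin at x = 1

# Forward differences: error shrinks, then grows as subtractive cancellation dominates.
for h in [1e-2, 1e-5, 1e-8, 1e-11, 1e-14]:
    approx = (f(1.0 + h) - f(1.0)) / h
    print(f"h={h:.0e}  derivative error = {abs(approx - df_exact):.1e}")

# Quadrature is just a sum (trapezoid rule here) and stays well-behaved.
exact_integral = 1.0 - np.cos(1.0)       # integral of sin over [0, 1]
for n in [10, 100, 1000]:
    x = np.linspace(0.0, 1.0, n + 1)
    integral = np.sum((f(x[:-1]) + f(x[1:])) / 2 * np.diff(x))
    print(f"n={n:5d}  integral error   = {abs(integral - exact_integral):.1e}")
```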
Awesome!
This formula felt like black magic when I first saw it (and it still does), but it feels a lot less mysterious now that I can see such a straightforward derivation of the n=2 case.
Its him
In practice, we use this technique in reverse to replace an operator norm with a double integral. Also, (16:19) if you're content with working with complex analytic functions, the Bergman kernel allows you to write a derivative as an integral. You can write any power of a derivative as an integral also. I agree though that integrals are very powerful.
Derivative compression, i.e. the nth derivative of a function, can be expressed for some nice functions as an extension of Cauchy's integral formula in complex analysis.
This result is closely related to the residue theorem and, consequently, yields no (simple) results in the field of fractional calculus, as plugging in a fractional n changes the pole in the denominator of the integral to a branch cut, which is not easy (or often even possible) to evaluate. This formula also only returns the value at a point, not a function over the entire complex plane.
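For reference, this is Cauchy's differentiation formula in its standard statement (quoted here for context, not taken from the video), for f analytic inside and on a simple closed contour γ enclosing a:

```latex
f^{(n)}(a) = \frac{n!}{2\pi i} \oint_{\gamma} \frac{f(z)}{(z-a)^{n+1}} \, dz
```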
i understood like 3 of those words
Do you know of a source that explains this in more detail?
@@dj_laundry_list Complex analysis, complex calculus (calculus IV also works), and any type of advanced calculus course. I just took one last month on mathematical methods for geophysicists, so it was sparse, but it showed how the Cauchy integration formula works and how it's used (the course was more centered on series, since computing them is fundamental in the field).
Well done! The animation at 6:30 that changes the 2D representation to 3D is very smooth!
I just cannot put into words how much this helped me understand why the limits for double integrals are the way they are. Just brilliant stuff.
This video is amazingly painful.
As a high school student bored with the basic calculus I had been doing in class (volumes of revolution and arc length etc.), I began to play around with trying to integrate the volumes of other solids.
I eventually figured out how to tie two integrals together and find the volumes of lots more shapes - like ellipsoids.
It’s really painful to see just how close I was to this amazing formula! I wish I had obtained it myself…
Great video and great animations. Well done!
you were "close" to a Cauchy formula as a high school student! That's insane. You should be proud of yourself
@@user-zp3xc4to1t thanks. In the little experience I have, deriving or proving a result is often the easy part. Asking the right question and seeing something mathematically that you don’t already know exists is the much harder part.
If you're doing anything even close to this level in HS, there is absolutely NOTHING to be ashamed of & even some pride is in order. In fact, I'd say we can already expect great things from you.
I found it interesting that the resulting formula is similar to a convolution of x^n with f(x). After thinking about it, it makes sense and gives another way of deriving the formula:
Using Laplace transforms we can transform a function from the time domain into the frequency domain, using s as the new variable. An integration in the time domain shows up as a multiplication with 1/s in the frequency domain, so double integration becomes 1/s^2 etc.
Transforming it back we can however use the fact, that multiplication in the frequency domain becomes a convolution in the time domain.
And what does 1/s^2 correspond to? It corresponds to x. 1/s^n corresponds to 1/(n-1)! x^(n-1).
Plugging this into the formula gives the above result. It's quite nice how in mathematics all roads lead to the same spot.
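A quick sympy spot check of that correspondence (the choice of n is arbitrary; this is only a sketch of the claim that 1/s^n pulls back to x^(n-1)/(n-1)!):

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
n = 4  # any small positive integer works as a spot check

# Inverse Laplace transform of 1/s^n should be t^(n-1)/(n-1)!.
kernel = sp.inverse_laplace_transform(1 / s**n, s, t)
print(sp.simplify(kernel - t**(n - 1) / sp.factorial(n - 1)))  # -> 0
```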
I wonder if this works for compressing multi derivatives. You'd just need values for s^n in terms of x. Ah dang, that part seems to just cycle back to derivatives of the delta distribution, which means that EVALUATING the multi-derivative in practice means doing all the derivatives individually, just as for the compressed multi-integral.
And my intuition tells me that numerical evaluation isn't simplified either.
Congratulations! You have just reinvented Sobolev's spaces of fractional derivatives!.. :)
This is really interesting. Thanks for sharing. I think this makes the entire concept fit in my tiny brain better
frequency domain is fourier
s is complex domain bruv
The resulting formula isn't like the convolution with the function t, it is exactly that. The video is just visualizing the Laplace transform. This property is why the control theorists love using Laplace transforms.
Wow, the visuals are amazing. I literally said "slice it the other way" out loud at 6:55, just as you intended!
So underrated! You explain things so clearly and present a very intuitive visual interpretation. You deserve so many more subs.
Just took calculus 3 last semester, this blew my mind. Well done, perfect video
A lot of this stuff appears in physics, especially when we're dealing with phase spaces where you have to expand out momenta and positions (could be in terms of hyperspheres, hypercubes, ..., etc). They also appear when you do higher-dimensional Fourier transforms, can occur in special relativity too if you're taking account of the fourth dimension (which is usually time), as well as when dealing with characteristic functions. However, I have to say that in physics we write our integrals as ∫d⁴x (which is a neat way of condensing it), and then it comes out as ∫∫∫∫dxdx'dx''dx'''. In terms of phase space, where p is the momentum and r is the position, we have ∫d³p∫d³r = ∫∫∫dpdp'dp''∫∫∫drdr'dr'' = ∫∫∫∫∫∫dpdp'dp''drdr'dr'' (which is a six-fold integral). To take it further you can also write ∫∫∫∫∫∫dpdp'dp''drdr'dr'' as one integral ∫dpdp'dp''drdr'dr'', but having many integral signs appear looks nice and aesthetic. Sometimes the integration variables are written with subscripts x_1 and so on. These types of integrals appear in quantum optics (where you deal with a lot of things in phase space), quantum field theory, quantum mechanics, and statistical mechanics.
you could do five or six integrals, or... just one.
i dont need more, they disappoint me
Some say the integrals went to Canada. Others say Toronto.
What is this referencing hmm
Outstanding video. Really opens up creative thought at a high-school calculus level. Wish I had this sort of direction when I was learning. Instant subscription and can't wait to see what's next.
16:20 you may want to have a look into Leibniz's theorem. It's a method for taking the nth derivative of a product of functions.
It splits the product into its two factors, takes the derivatives of each factor up to n times, and combines them with binomial coefficients.
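For reference, the general Leibniz rule being described (standard statement):

```latex
(fg)^{(n)} = \sum_{k=0}^{n} \binom{n}{k}\, f^{(k)}\, g^{(n-k)}
```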
Wow! Just great educational content! Keep making math learning more interesting & engaging!
Most excellent! I've wanted to see this for so long and was frustrated about it, ever since seeing the formula in the first pages of a text on integral equations. Thank You so much!
That was really cool, it was like seeing a u-substitution but in a perfect 3D form. Or like he turned the u-sub into an operator function, again, crazy cool.
It's like this guy understands integration at a level higher than Riemann or Leibniz, well played. Like a version of super calculus?
Omg, half integrals? Fractional integrals? Can't wait to get the π's ∫ of a function
Oooh, another really fun derivation from this is the subfactorial complex extension. Making use of the subfactorial property that is most similar to normal factorials (!n = n·!(n-1) + (-1)^n), you just have to modify the formula slightly to change the non-derivative term of the integration by parts to match this property instead of the factorial property n! = n·(n-1)!, and then it turns out you get complex numbers in return (which makes sense, since the formula contains (-1)^z).
Legendary explanation; I've waited for one that explains double integrals in visual terms. New insights that bring back the nostalgia of first learning integral calculus and the FTC itself. I am giving a subscribe, and thank you.
Wooow!! This channel is so underrated. I really enjoy your content, Thanks a lot.
In terms of Laplace transforms:
Let the * operator denote convolution. Recall the definition f(t) * g(t) = int f(t-x) g(x) dx. Then,
int (t-x) f(x) dx = f(t) * t.
Also, t = u(t) * u(t),
so int (t-x) f(x) dx = f(t) * u(t) * u(t).
Since convolving with the unit step function u(t) is the same as integrating, int (t-x) f(x) dx = int^2 f(x) dx. This video actually explains the relationship between the functions and their Laplace transforms.
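A small numerical sketch of that chain of identities (the grid, the test function f = cos, and the step size are my own arbitrary choices): convolving f with the ramp t reproduces the double integral of f.

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 2.0, dt)
f = np.cos(t)                              # test function

# Single integral of (t - x) f(x), i.e. convolution of f with the ramp t.
conv = np.convolve(f, t)[:len(t)] * dt

# The double integral of cos from 0 is 1 - cos(t), worked out by hand.
exact = 1.0 - np.cos(t)
print(np.max(np.abs(conv - exact)))        # small (order dt), shrinking with dt
```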
3:40 Well, I suppose using the laplace transform would be a way of integrating without integrating: ℒ{ ∫∫ 6x dx dx } = 6 ℒ{x} / s² = 3! / s⁴ = ℒ{ x³ }
a laplace transform is an integral..
L[take](s)
How on Earth were you able to type in such a beautifully-fonted reply?.. I envy you... :)
I can see this makes sense but I don’t understand this completely. Can you write out the general case as laplace transforms? I’m unfortunately not familiar enough with them
@@gaHuJIa_Macmep using fancy unicode characters. Specifically: U+2112 "Script Capital L", U+222B "Integral", U+00B2 "Superscript Two", U+00B3 "Superscript Three", and U+00B4 "Superscript Four".
How do you not have 100 times as many subscribers?
Your content is of the best math on UA-cam!
At 12:44 you mentioned that the integrand is not the inner integral. This is because we are switching the limits of integration, i.e. the inner integral goes out and the outer one comes inside. This is also what you did geometrically by slicing the integral a different way.
When I thought about that 2:30 problem, I realized we can make a general form:
∬6x dx dx can be expressed as k·∬x^n dx dx where n = 1, k = 6.
If we take one integral in our head, we see that the power n grows by +1 and we also divide by (n+1): (k/(n+1))·∫x^(n+1) dx.
Seeing how this continues, we can write the basic multi-integral as a formula:
(k / ∏(n+i, i, 1, t))·x^(n+t), where k = the constant, n = the starting power of x, t = the number of integrals, and the product runs over i = 1 to t.
For example, if we have ∭∭14√x dx^6, we can write this as
14/((n+1)(n+2)(n+3)(n+4)(n+5)(n+6))·x^(n+6) where n = 1/2,
which for t = 6 evaluates to (128·x^(13/2))/19305.
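A quick sympy spot check of that six-fold example (purely illustrative; sympy's integrate drops the constants of integration, which is what the formula above assumes too):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
expr = 14 * sp.sqrt(x)
for _ in range(6):                 # antidifferentiate six times
    expr = sp.integrate(expr, x)
print(sp.simplify(expr))           # -> 128*x**(13/2)/19305
```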
👏🏼👏🏼👏🏼 Very well done.
You've made a pretty abstract concept very accessible and easy to understand.
This was amazing. It opened up a new horizon for me. Thank You.
I’m curious, can you prove Cauchy’s formula for repeated integration by using a 2-to-1 integral formula repeatedly?
So, something like proof by induction. Or does it give a different uglier result?
actually the 'classical' proof is done by induction :) and yes, imo it's a messy and ugly task.
Also, if you want to take this whole concept of repeated integration to a continuous case with, say, r (real) iterations of integration instead of n (natural) iterations, you'll end up with the Gel'fand-Shilov function convolved with the f(t) function.
Edit: references
ua-cam.com/video/Cu53TKl8atg/v-deo.html
Solved exercises in fractional calculus - Edmundo Capelas de Oliveira. p.175
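For reference, the formula under discussion (Cauchy's formula for repeated integration) and the fractional extension described above, in their standard forms:

```latex
(I_a^{\,n} f)(x) = \frac{1}{(n-1)!} \int_a^x (x-t)^{\,n-1} f(t)\, dt,
\qquad
(I_a^{\,\alpha} f)(x) = \frac{1}{\Gamma(\alpha)} \int_a^x (x-t)^{\,\alpha-1} f(t)\, dt
```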
Try reading my comment above where I described how to get the general formula without induction, just from first principles...
Thank you for making calculus seems so easy and appealing
This was beautiful stuff - blew me away on a Saturday night!
Very interesting!
I like how you used 2*Pi, Pi, sqrt(2), Pi/2, e, and so on at 13:11.
Yay a new video from this math youtuber!
Keep up the good work
I was going to ask about that last part just before it appeared; great introduction to fractional derivatives and integrals
Looks like this is really just replacing the "nested" integrals with other integrals that just happen to have a well known form (areas of rectangles, rectangular prisms, etc.). Reminds me of a video about the circumference of an ellipse having no closed-form constant w.r.t. axis, but a circle does - pi, but pi is just a special constant we've come to recognize and could easily define another constant for other ellipses
You know, I just HATED it when he closed with a teaser about the Gamma function and its role in computing a multiple integral. But that's the way to get more subscribers, I suppose. And in my case: it WORKED. So I just subscribed and will DEFINITELY be back and recommend this channel to any math geek I talk to.
Keep up the good work: "Au revoir"...
This formula for repeated integration also appears when expressing the remainder of a Taylor polynomial approximation in the "integral form" - because, unlike the Peano, Lagrange, or Cauchy forms, it uses an integral
It can also be thought of as taking the integral against the Green's function for the (d/dx)^n operator
Exactly!
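For reference, the integral form of the Taylor remainder mentioned above (standard statement, for the expansion of f about a):

```latex
R_n(x) = \frac{1}{n!} \int_a^x (x-t)^{\,n}\, f^{(n+1)}(t)\, dt
```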
Really outstanding video, every question I had as I was watching you answered two seconds later
Mark my words it's only a matter of time before this channel blows up
you forgot the constant c
I don't give a shi..
How about you take this D and shut up
Nobody asked
You hear that ?? I didn't ask
Your dad left to get c and never came back
Damn you really ended it with a plot twist. Fractional integrals and thereby opening doors to Fractional derivatives.
Fun fact: to integrate (t-x)(f(x)), unless f(x) is a polynomial or some easy function, we would need to integrate x*f(x) using integration by parts, which requires us to find the double integral of f(x)
Nice cliffhanger at the end ;) waiting for a sequel!
Beautifully explained! Thank you!
Also, another interesting thing is contour integrals.
A contour integral is an integral that integrates over a boundary in any number of dimensions, unlike normal integrals, which only integrate in their respective dimensions. To distinguish the differentials of contour and normal integrals, we use s's rather than the conventional letters, so dx -> ds, dA/dxdy/rdrdθ -> dS, and dV/dxdydz/rdrdθdz/ρ²sin(φ)dρdφdθ -> dC (for cellular volume). Also, ds, dS, and dC are the magnitudes of the corresponding vector differentials dr, dS, and dC. These differentials are crucial in finding arc length, surface area, and cellular volume. For example, the arc length of a given function in 2-space is given by ds, which is √(1+y'²)dx, √(x'²+y'²)dt, or √(r²+r'²)dθ. Surface area follows a similar pattern: the most common formulas for surface area are dS = √(1+(∂z/∂x)²+(∂z/∂y)²)dA = |∂s x ∂t| ds dt (where ∂s and ∂t are the partial-derivative vectors of the parametrization), and 2πr ds from solids of revolution. By analogy, we can write formulas for cellular volume such as √(1+(∂w/∂x)²+(∂w/∂y)²+(∂w/∂z)²)dV = |∂r x ∂s x ∂t| dr ds dt, and 2πr dS.
To evaluate contour integrals, you need to make sure your differentials are in magnitude form. Sometimes they will already be in such form, like when you are trying to find the mass of a spring or a solid lamina, but most of the time you are given a vector field and are trying to compute the circulation or the flux across a given path or surface; in that case, you must dot the vector field with the normal to turn it into a magnitude. You also need to parametrize the surfaces. However, there are some useful theorems:
Fundamental Theorem: If a vector field is conservative, meaning it is the gradient of a function, a line integral is just evaluated like an ordinary antiderivative, since the differential of a multivariable function is the gradient dotted with the dr vector.
Divergence Theorem: The flux of a vector field across a closed surface equals the integral of its divergence over the enclosed region: ∯ F ∙ dS = ∭ div F dV, and analogously ∰ F ∙ dC = ⨌ div F dH.
Stokes' Theorem: The circulation of a vector field around a closed curve equals the surface integral of its curl over a surface bounded by that curve: ∮ F ∙ dr = ∯ curl F ∙ dS.
There is a way to compute iterated derivatives using matrix diagonalization. Since differentiation is a linear operator, it can be encoded as a matrix acting on coefficient vectors in a function space; when that matrix is diagonalizable, an eigenvalue decomposition makes computing its powers cheap, and powers of the matrix correspond exactly to iterated derivatives.
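A tiny sketch of that idea on a basis where the derivative really is diagonal (the basis {e^x, e^(2x), e^(3x)} and the test function are my own illustrative choices, not anything from the comment):

```python
import numpy as np

# Work in the basis {e^x, e^(2x), e^(3x)}, where d/dx acts diagonally.
rates = np.array([1.0, 2.0, 3.0])
D = np.diag(rates)                        # matrix of d/dx in this basis

# f(x) = 5 e^x - 2 e^(2x) + 4 e^(3x) as a coefficient vector.
f = np.array([5.0, -2.0, 4.0])

n = 4                                     # take the 4th derivative
fn = np.linalg.matrix_power(D, n) @ f     # matrix powers = iterated d/dx
print(fn)                                 # [5*1**4, -2*2**4, 4*3**4] = [5, -32, 324]
```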
“How can I turn this double integral into a single integral?”
“you integrate it, dummy.”
“But I don’t wannnaaaaa 😭😭😭.”
Magnificent Meticulously Innovative Method ascertain the truly ideal Teacher
This was fantastic! Keep it up!
ive never seen a person do a math cliffhanger lol
me neither lol
I've*
1:43 John Gabriel's New Calculus mentioned!!?
16:10 we can prove this by induction, right? (Show it's true for n=1, and when it's true for n=k, show it must be true for n=k+1.)
Oh boy! Fractional calculus here we come!
Awesome video! Thank you!
The music and visuals are giving me nostalgia
Video starts around 3:40
Edit: turns out it's actually an amazing video ♡
Actually there are formulas for n-th derivatives of pretty much every function, including chains; it's just that they're usually a sum or product up to n, which isn't really extendable to non-integer n. They are pretty beautiful if you ask me, though!
Before watching the video: Cauchy's integral formula for the nth derivative... and you can use negative numbers to mean integrals, but you've got to use the gamma function and complex numbers. But it works.
After watching the video: no, Cauchy's integral formula is equivalent to the one you showed.
Clicked for the impossible task. Stayed for the amazing explanation
Holy... Your math is super fun!!!!
@Morphocular here's one thing about differentiating: it can go below 0, and integration can go above 0. Why don't we regularly perform these operations? Why do we in fact simplify to cancellation as a rule? x^0 goes to 1, but x^-1 is right there as a possibility for the next derivative; similarly the reverse for integration. Why don't we go past these arbitrary lines? Or is there something I'm missing?
I wonder too, hope someone explains
A quick tip I learned in calculus classes that I think is good for anyone taking multivariable calculus:
For an integrand of 1, 1 integral means 1 dimension of measurement. So a triple integral with 1 as the integrand measures volume.
For any other integrand, n integrals means n+1 dimensions of measurement. So a double integral with a non-1 integrand measures volume.
Sometimes it's easier to define the shape in the integrals' limits, and sometimes it's easier to define the shape with fewer integrals and an integrand.
Though not a rigorous statement in the least, it should still help those who just "need to pass". (Helpful for tutoring more than anything)
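A small worked illustration of that tip (the hemisphere of radius R is my own example): the same volume computed as a triple integral with integrand 1 and as a double integral with a non-1 integrand.

```latex
V = \int_{0}^{2\pi}\!\int_{0}^{R}\!\int_{0}^{\sqrt{R^2-r^2}} 1 \; dz\, r\, dr\, d\theta
  = \int_{0}^{2\pi}\!\int_{0}^{R} \sqrt{R^2-r^2}\; r\, dr\, d\theta
  = \frac{2\pi R^3}{3}
```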
Noice broooo keep making videos with moree quality over quantity
Just beautiful and poetic
Interpreting things with geometrifying trigonometry gives a meaningful visualization for Cauchy's repeated integrals
Thanks for this knowledge :> Hopefully I can use it when I encounter multiple integrals in my self study journey
I just realized the equation at 10:55 is essentially just a convolution where f(x) is any function and g(x)=x
the audacity you have to end this video on a cliffhanger!! Anyways, it was a really interesting video, thank you very much!!
Nested Integral : You can't beat me, I'm compressible.
Derivative: You're right I can't. But he can. (d2y/dx2)
15:35 D.I method: am I a joke to you?
Very fascinating!
Is there a way this could be expanded for multivariable functions? Like the double integral of f(x,y) dx dy
17:45 This is a good lead in to your next video, fractional derivatives / integrals
Is this alternate way to think of nested integration related to how Lebesgue integration is considered superior for higher dimensions (input or output), as it tends to generalize more cleanly than Riemann integration?
Amazing video and visualizations
If you extend this to fractional integrals as you implied at the end, couldn't you just go negative and find derivatives as well, as silly as that might seem?
I mean I know the gamma function spells trouble for negative integers, but you can totally do fractional values, right?
Could be really useful for numeric *differentiation* though, as that tends to be much less stable than numeric *integration,* right?
('course a lot of the time the real way to go there is to just do algorithmic *symbolic* differentiation)
Oh you literally answer all that in the next video. I got this one recommended first oops
Super explanation 👌
the cliffhanger at the end was better than most netflix shows ngl
Great video, keep going!
Amazing quality content
6:34! Brilliant!
Thankyou! I learned a lot! Very good!
I was thinking that IS Lebesgue integration applied to dimensional reduction of definite multidimensional integrals
Nice video, thank you !
O(1) formulas are incredibly powerful.
You made my day
The 2-fold case looks remarkably identical to a convolution of f(x) with the function g(x)=x, is there possibly a visual connection that can be made here?!
Is this related to stoke's theorem? (not the surface integral to integral of divergence of a function over a volume, but the more general one) From what I understand it's a generalization of this "integral reduction" we see with the fundamental theorem of calculus, divergence theorem and the "little" stoke's theorem taught in a first course on vector calc.
beautiful video
2:48 constant term using vanishing magic.
What will happen at the end when n =1/2 :o?
👍👍I think i remember learning this formula in 'calc 3', multivariable calc
A quick note about the alleged fact that numerical derivatives are easier to compute than quadratures. Although technically yes, they usually need fewer function evaluations, numerical derivatives are most often quite unstable and a royal pain in the ass even for seemingly "easy" functions, whereas numerical integration is much less delicate - save for some critical cases.
Brilliant!