Reference to a "last video": it's 'Local linearity for a multivariable function'. Many years ago, back when Fortran was the coolest thing you could find in a computer, I tried to understand what linear algebra was. Back then it was hoped that if you kept solving enough problems, sooner or later the light would go on and you would know it all. As it turned out, my light was a small flickering candle. I'm 70 this year and now I have the time to take a closer look at the beauty of math. Thanks for presenting concepts rather than processes. Khan and 3b1b rock.
Thanks for your comment, now I know which video the "last video" is. :D
Thank you. For me, the "last video" was "Jacobian prerequisite knowledge" in the playlist I'm watching ...
@@joluju2375 ua-cam.com/video/VmfTXVG9S0U/v-deo.html
you rock yourself.
You are of great help!
THX
I've never seen someone make so much sense in my life. Grant, you are the GOAT
this guy knows a thing or two about maths. he should start his own channel
I Knew it!
Dude!! that’s “Grant Sanderson” from 3blue1brown.
@@jjqerfcvddv bruh...thats the joke
@@sindhiyadevimaheshwaran3738 not everyone knows my friend - we must convert the vast unwashed into Grant's mathematical following
BRUH I LEGIT JUST THOUGHT ABOUT THAT
Doing physics, I have been using Jacobians for years. This video finally lifted this beyond a 'trick' and gave me insight into what it really means.
The best math teacher on YouTube. Even kids can understand the big subjects he's teaching.
Yes, I'm trying to get my 5 year old started. Hopefully he will be able to watch some of these videos soon.
This sounds and looks like 3blue1brown
Michael L He is...or so I hear
That's awesome, I was just looking on his channel for this very topic.
He IS 3B1B
Yeah, you are right. This does sound like 3Blue1brown
I think that's because it is ( ͡° ͜ʖ ͡°)
It is helpful for intuition to multiply df1/dx at 2:37 by 1. df1/dx is a rate and you need to multiply it by 1 to give you the x-output of the unit vector (1,0). Note that the first column of the Jacobian represents what the unit vector (1,0) becomes after the transformation.
That is only true if the local linear approximation is still valid at further distances. Let me explain some more:
The columns in the matrix track where the points (x_o+dx, y_o) and (x_o, y_o+dy) are mapped with respect to where (x_o, y_o) is mapped.
For example, f(x_o+dx, y_o) - f(x_o, y_o) is the displacement between where (x_o+dx, y_o) and (x_o, y_o) land after being mapped. This is simply our Jacobian times (dx, 0). Since the second entry is zero, we only recover the first column (df_1/dx, df_2/dx) scaled by dx, which is simply the rate of change of f with respect to x, times the small step.
Likewise, the Jacobian times (0, dy) is the rate of change of f with respect to y, scaled by dy.
So the Jacobian matrix is mapping *differential* vector quantities that are in the direction of our original basis vectors. We can think of these differential vectors dx and dy as our new basis!
But if we choose a basis that is very small, we better make sure our transformation returns a number that isn't very small too. This is why we can imagine "normalising" by the small quantities "dx" and "dy" in the bottom of our matrix. In a normal transformation matrix, we know the denominator is simply "1". But since our function isn't actually linear, we do not have the luxury of using such a simple basis. We can only act on small vectors accurately, so we re-scale :)
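A small numeric check of this point (plain Python; the transformation f(x, y) = (x + sin(y), y + sin(x)) is an assumed example, not necessarily the video's): the Jacobian applied to (dx, 0) closely matches where the mapped point actually moves, as long as dx stays small.

```python
import math

# Assumed example transformation (may differ from the video's):
# f(x, y) = (x + sin(y), y + sin(x))
def f(x, y):
    return (x + math.sin(y), y + math.sin(x))

def jacobian(x, y):
    # Partial derivatives worked out by hand for this particular f.
    return [[1.0, math.cos(y)],    # [df1/dx, df1/dy]
            [math.cos(x), 1.0]]    # [df2/dx, df2/dy]

x0, y0, dx = -2.0, 1.0, 1e-5
J = jacobian(x0, y0)

# Actual displacement of the mapped point when the input moves by (dx, 0)...
actual = tuple(b - a for a, b in zip(f(x0, y0), f(x0 + dx, y0)))
# ...versus the Jacobian acting on the small step (dx, 0): first column times dx.
predicted = (J[0][0] * dx, J[1][0] * dx)
```

The two vectors agree to many decimal places for tiny dx, and drift apart as dx grows, which is exactly the "only valid locally" caveat above.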
"Some years ago at Khan Academy, I made many videos and articles on multivariable calculus. " -Grant Sanderson (3blue, 1brown)
This closely relates to divergence and curl.
If, from the final matrix (the Jacobian), you add the top-left and bottom-right entries (the partial derivative of f1 with respect to x plus the partial derivative of f2 with respect to y),
you get the divergence.
If you subtract the top-right entry from the bottom-left one (the partial derivative of f2 with respect to x minus the partial derivative of f1 with respect to y),
you get the (2-D, scalar) curl.
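A quick numerical sanity check of that reading, using an arbitrary made-up field F(x, y) = (xy, x + y²) and central differences for the partials:

```python
# Assumed vector field F(x, y) = (x*y, x + y**2), chosen only for illustration.
def F(x, y):
    return (x * y, x + y ** 2)

def partials(F, x, y, h=1e-6):
    """Numerical Jacobian entries of F at (x, y) via central differences."""
    f1x = (F(x + h, y)[0] - F(x - h, y)[0]) / (2 * h)
    f1y = (F(x, y + h)[0] - F(x, y - h)[0]) / (2 * h)
    f2x = (F(x + h, y)[1] - F(x - h, y)[1]) / (2 * h)
    f2y = (F(x, y + h)[1] - F(x, y - h)[1]) / (2 * h)
    return f1x, f1y, f2x, f2y

f1x, f1y, f2x, f2y = partials(F, 2.0, 3.0)
div = f1x + f2y    # top-left + bottom-right
curl = f2x - f1y   # bottom-left - top-right
# Analytically at (2, 3): div = y + 2y = 9, curl = 1 - x = -1.
```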
Thanks Grant. You are awesome. Finally, I have understood what the Jacobian matrix really represents.
No, you haven't. The Jacobian matrix is much deeper than this; he is just touching the tip of the iceberg.
@@maxpercer7119 Then please do point in the direction where I can gain an even deeper understanding.
@@aryamanpatel8250 you could say... please point him in the locally linear direction so he can arrive at his deeper destiantion ;)
@@maxpercer7119 oh, get lost [translated from Italian]
This is the only video on Jacobians that I actually understood. Why do other YouTubers just start spitting out equations? If I were able to understand just from that, I'd read the book!
Simply awesome. I wish we could give him Nobel prize or something.
This is so incredibly well explained.
Wow, this guy is good at explaining math, maybe he should start an independent YouTube channel or something...
Thank you. That last paragraph was just SO well constructed. Rest of the video too.
Great video! I didn't get why the x component of the transformed dx must be df1/dx.
The best explanation I've ever seen!
Amazing video! Precisely what I was looking for. The physical intuition is so important for understanding a concept.
The green and red arrows are too small. The view should be zoomed in to give viewers a clear idea of what happens when both x and y change by a tiny amount.
Can someone please explain: Grant said at 2:10 that the x component of the 2-D movement in output space is seen as the partial change in f1. Why do we say this? Why does that x component equal the partial derivative of f1?
A change dx in the input results in a change df in the output, which has two components; those form the first column of the Jacobian matrix. Similarly for dy.
Thanks for this video, it finally clicked for me. You are great.
Hi, I have a question: how can I align a real surface to its CAD model by touching the real part? I have the 3D model, I take n points, then I go to the real part and start probing for the surface at the same coordinates (with a robot, for example). If the real piece has a rotation/translation relative to the model, I have to correct that error. I honestly don't know how to do it... could you recommend some techniques?
Grant has the most beautiful voice on YouTube.
Great video! However, I have a doubt. When you were tracking that yellow square, the grid lines transformed like a linear transformation. However, the grid itself translated to another coordinate [near (-1, 0)]. Since we know that translations are NOT linear transformations, then how can we say that the grid represents linear transformation?
He's not considering that translation; he's just considering the linear transformation around (-2, 1). It's like when we're on Earth: we don't account for the Earth's motion when doing most physics calculations.
At about 1:20, you can see he selects (-2, 1) on the original grid. That selected point moves to near (-1, 0) after the various partial transformations performed throughout the video.
which one is the next video? Is there a playlist for this series?
Did you ever find out?
Yes it indeed is. Here is the link to the entire playlist which is called "Multivariable Calculus":
ua-cam.com/play/PLSQl0a2vh4HC5feHa6Rc5c0wbRTx56nF7.html
Why did we divide delf1 and delf2 by delx and dely?
I understood that the x component would be delf1 and the y component would be delf2, but then we divide them by delx and dely... why?
The reason is that in the approximation the Jacobian is multiplied by the vector [delx,dely]. If the vector was [1,1] you'd be correct that it should be just delf1 and delf2.
Think of the approximation (taking some liberties with notation) as dF = J*dX, where F is the vector output of the function and X is the vector of input variables.
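Here is a numeric sketch of that idea in plain Python (the function f(x, y) = (x²y, x + eʸ) is just a made-up example): dividing the change in output by the step size turns raw changes into rates, which are the Jacobian entries, and multiplying those rates back by the step recovers dF ≈ J·dX.

```python
import math

# Hypothetical function for illustration: f(x, y) = (x**2 * y, x + exp(y))
def f(x, y):
    return (x ** 2 * y, x + math.exp(y))

x0, y0 = 1.0, 0.0
delx, dely = 1e-4, 2e-4

# Entries of the Jacobian are changes in f *divided by* the input step,
# so they are rates, independent of how big the step actually was.
df1_dx = (f(x0 + delx, y0)[0] - f(x0, y0)[0]) / delx
df1_dy = (f(x0, y0 + dely)[0] - f(x0, y0)[0]) / dely
df2_dx = (f(x0 + delx, y0)[1] - f(x0, y0)[1]) / delx
df2_dy = (f(x0, y0 + dely)[1] - f(x0, y0)[1]) / dely

# dF = J * dX: multiplying the rates back by the step recovers the change.
dF1 = df1_dx * delx + df1_dy * dely
dF2 = df2_dx * delx + df2_dy * dely
actual = (f(x0 + delx, y0 + dely)[0] - f(x0, y0)[0],
          f(x0 + delx, y0 + dely)[1] - f(x0, y0)[1])
```

If the entries were not divided by delx and dely, the matrix would only work for the one specific step [1, 1]; dividing makes it work for any small step.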
soothing voice
It is 3Blue1Brown's voice!
Great video! But one thing I don't quite get is why you divide by del x and del y to find the different components of the Jacobian. Could someone please explain?
Because we are looking for the ratio of how much the axis is stretched or squeezed
He is 3 blue 1 brown
Should the value of the Jacobian matrix (its determinant) be high or low? What is its ideal value?
I still don't understand why this relates to the Jacobian pointing in the direction of steepest ascent? So it's basically the gradient, but for functions that output vectors?
Sir, what is the difference between the Newton-Raphson and Gauss-Newton methods? Any video link regarding these methods?
what is the name of this playlist?
Where's the next vid? The videos should have previous/next links in the description.
It's a playlist dude
I have a question here, which kind of seems to be self-explanatory, but it would still be nice to get some confirmation. Is local linearity a property of every point in every transformation? The reason I ask this is that, due to non-differentiability at some points, we may not be able to calculate some of the partial derivatives for certain kinds of functions. How should this be interpreted?
Locally linear = differentiable. If it's not differentiable at a certain point, this means that it can't be locally approximated by a linear transformation, and vice versa.
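A quick numerical illustration of that equivalence (plain Python, with made-up example functions): for abs(x) at 0, one-sided difference quotients disagree, so no single linear map approximates the function there; a differentiable function like x² gives matching slopes from both sides.

```python
# Local linearity fails where one-sided slopes disagree. For abs(x) at 0,
# zooming in never produces a single line: the slope from the right is +1,
# the slope from the left is -1.
def slope(f, x, h):
    return (f(x + h) - f(x)) / h

f = abs
right = slope(f, 0.0, 1e-9)   # +1.0
left = slope(f, 0.0, -1e-9)   # -1.0

# A differentiable (locally linear) function like x**2 agrees from both sides:
g = lambda x: x ** 2
r2 = slope(g, 1.0, 1e-7)
l2 = slope(g, 1.0, -1e-7)
```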
@@Cessedilha Thanks a lot!
How does he make all those animations, such as bending the plane?
He has made a project named "manim" for these animations. Check it out!
Is this extracting an axis of rotation? It seems like an "eigenvector". Is there any relationship between the Jacobian matrix and eigenvectors?
What a nice lecture.
So, how do we know when it is a transformation and when it is a vector field?
It depends on how you see the input space, that is, whether it's filled with vectors (things that can be added to each other and scaled by numbers) or points (simple pairs of numbers that cannot be added or scaled). If you think in terms of vectors, then it's a transformation, like those you see in Linear Algebra, but this time not necessarily linear. If you think of the function as mapping points to vectors, then it's a vector field. But I think Grant's point in this course is that those are two complementary ways of seeing the same thing; it's just that a transformation has this agile nature of taking vectors from one place to another, while vector fields are more static.
Legend! you explained that so well
Hey thanks.
Change of variables in a double integral.
Are there functions that are not even locally linear? What would be an example of such a function?
Abs(x) at 0; 1/x at 0 etc.
@@PfropfNo1 Can you help a bit? I have a doubt: do all the entries in the Jacobian matrix represent a change in the output space divided by a change in the input space? Considering the Jacobian matrix, the images of the basis vectors of the input space must show up as its entries. But I did not get how delf1/delx, delf1/dely, ... are obtained.
Wow! He is a mathematical wizard...
What bothers me is that the nonlinear transformation translates a point somewhere else, but this is never captured by the Jacobian matrix. Why isn't the translation important?
Same problem here. I hoped the next video "Computing a Jacobian matrix" would clarify that and answer this question, but nope.
What is missing here is a real example of how the use of the jacobian matrix would give a satisfying solution to a problem otherwise too complicated.
So far, the best I can understand is that the Jacobian matrix can simplify determining what's happening to the *neighborhood* of the point by using only linear functions, but I can't imagine a situation where I would need that.
Finally, my best bet is that I missed something important.
Why do you divide partial f by dx? I don't understand.
Can you please add a link to the software you are using for this and perhaps the code.
I bet it's proprietary
github.com/3b1b
Does the local linearity have to be at (-2, 1)? Or is every point locally linear after the transformation, if you zoom in closely enough?
local linearity is true at every point
@@li_chengliang for any function?
What is a 1-dimensional Jacobian matrix? Just df/dx?
sounds right
Yup, a 1x1 Jacobian matrix is essentially the derivative of a univariate scalar function, and a 1 x m Jacobian matrix is the transposed gradient vector of a multivariate scalar function. Cool beans.
Shouldn't the origin also remain fixed? Won't we also need the information of where the origin moves? Just recording the information in a 2 x 2 matrix seems insufficient.
Harish D Keep in mind that when using this matrix, we’re only focusing on local points surrounding the point we originally focused on, not the grid as a whole. The fact that we’re taking partial derivatives automatically encapsulates this idea of locality. Also, the origin in this example moves because the matrix transformation isn’t linear.
Simple and to the point (Y)
Does the Jacobian matrix make sense for a linear transformation, given that there we don't need to zoom in?
Any possible linear (strictly speaking, affine) transformation of x and y can be conceptually represented, as shown in the video, by the pair of output functions (with a-f being constants):
[ ax+by+e ]
[ cx+dy+f ]
(As should be expected, these are just linear equations.)
What happens if you take the Jacobian of this transformation? It reduces to precisely the matrix that's normally used to transform (x,y) points:
[ a b ]
[ c d ]
Why is this so? Why is it just constants? ... Because the Jacobian expresses how much a transformation is "changing things locally", and a _linear_ transformation changes the entire space in exactly the same way everywhere (which is why lines stay parallel, and whatnot). In other words, it does not vary; it stays constant. It is comprised entirely of uniform scaling and shearing (and potentially translating).
In short, the reason the (general, i.e., unevaluated) Jacobian shown in the video varies from point to point is _because_ the functions selected for the transformation were *not* linear (they involve sine and cosine). If they were linear, the resulting matrix would simply have been full of constants.
The Jacobian matrix is a linear approximation. For a linear transformation (matrix multiplication), the Jacobian would be the linear transformation itself. Kind of what happens in 1-d derivation, when multiplying a constant by x the derivative is the constant itself.
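A numerical check of that claim, sketched in plain Python with an arbitrary example matrix [[2, 1], [0, 3]]: the Jacobian of a linear map, computed by finite differences, comes out as the map's own matrix at every point.

```python
# Linear map T(x, y) with matrix [[2, 1], [0, 3]] (arbitrary example).
def T(x, y):
    return (2 * x + 1 * y, 0 * x + 3 * y)

def num_jacobian(f, x, y, h=1e-6):
    """2x2 Jacobian of f at (x, y) via central differences."""
    return [[(f(x + h, y)[0] - f(x - h, y)[0]) / (2 * h),
             (f(x, y + h)[0] - f(x, y - h)[0]) / (2 * h)],
            [(f(x + h, y)[1] - f(x - h, y)[1]) / (2 * h),
             (f(x, y + h)[1] - f(x, y - h)[1]) / (2 * h)]]

J_at_origin = num_jacobian(T, 0.0, 0.0)
J_elsewhere = num_jacobian(T, 5.0, -7.0)
# Both equal [[2, 1], [0, 3]] up to floating-point noise: for a linear map,
# the local linear approximation is the map itself, everywhere.
```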
3Blue1Brown? You take my hand in the darkness and lead me through perdition.
Hold on, so gradient is a jacobian matrix of only 1 column because there is only f1(x)??
Only one row*
Confused after transformation of the graphics :(
Perhaps my explanation to Daneil C above may help!
We are mapping small changes (dx, 0) and (0, dy) to small changes of f using the chain rule!
J (dx, 0) = (df_1/dx · dx, df_2/dx · dx)
But this is just the derivative of f with respect to x, scaled by the small step dx.
Likewise, J (0, dy) gives the derivative of f with respect to y, scaled by dy.
So, locally, we know how far we would move from the point we are evaluating if we took small steps.
Send me a message if anyone doesn't understand the concept.
thank you so much
Grant is on Khan??
Is this from precalculus?
what is the name of this series!?!?
Thank you!
I love you, Grant
Partial derivatives represent rate , but i dont really get it. The values in the matrix should represent coordinates of where basis vectors land. Can someone make this clear
Well, each of the partial derivatives will give you a function that tells you the rate of change of one function with respect to another, and when we evaluate it at a specific point, its going to tell us what that change was. That's the important part, and he said it, that we have to evaluate it and it will just turn into a matrix with numbers in it instead of functions.
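That evaluation step can be sketched in plain Python (a minimal illustration; the transformation f1 = x + sin(y), f2 = y + sin(x) and the point (-2, 1) are assumptions here and may differ from the video's exact example): the Jacobian starts as a matrix of functions, and plugging in a point turns it into a matrix of plain numbers.

```python
import math

# Assumed transformation: f1 = x + sin(y), f2 = y + sin(x).
# The Jacobian as a matrix of *functions* (the partial derivatives):
jacobian_fns = [[lambda x, y: 1.0,         lambda x, y: math.cos(y)],
                [lambda x, y: math.cos(x), lambda x, y: 1.0]]

def evaluate(J_fns, x, y):
    """Plug in a point: the matrix of functions becomes a matrix of numbers."""
    return [[entry(x, y) for entry in row] for row in J_fns]

J = evaluate(jacobian_fns, -2.0, 1.0)
# J is now just numbers, e.g. J[0][1] is cos(1) and J[1][0] is cos(-2);
# those numbers are the coordinates where the (tiny) basis steps land.
```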
You're awesome❤
Wait is this The Talking Pi
I am totally confused......
The channel name is Khan something, but he sounds suspiciously like 3b1b.
Same guy. Khan academy has different teachers, one of whom is the same guy as from 3b1b.
it’s grant !
This is nice.
excellent
This definitely sounds like 3blue1brown.
it is 3b1b! :)
Howto & Style? How is this not education?
What do you mean howto ? This isn't some simple cooking tutorial or something this is academic teaching
@@That_One_Guy... I didn't mean anything; that is how YouTube categorized the video.
Brilliant
notif squad wer u at? no one? mkay....
paricin469 Nerd
am here
Justin Ward nonintellectuals…
what
Super, more than super.
No disrespect, but your symbol for the partial derivative needs work. It looks like a g
I don't understand nothing 🤦♀️
You don't understand anything? Or, if you really "don't understand nothing", that means you do understand :P
I don't want to do any job other than actor or mathematician. [translated from Japanese]
Wow
* watches video and recognizes the narrator's voice *
* looks down at comment *
Grant Sanderson! I knew it! @3blue1brown