The Jacobian matrix

  • Published Nov 28, 2024

COMMENTS • 153

  • @raybroomall8383
    @raybroomall8383 6 years ago +421

    The referenced "last video" is 'Local linearity for a multivariable function'. Many years ago, back when Fortran was the coolest thing you could find in a computer, I tried to understand what linear algebra was. Back then it was hoped that if you kept solving enough problems, sooner or later the light would go on and you would know it all. As it turned out, my light was a small flickering candle. I'm 70 this year and now I have the time to take a closer look at the beauty of math. Thanks for presenting concepts rather than processes. Khan and 3b1b rock.

    • @ggsgetafaf1167
      @ggsgetafaf1167 5 years ago +2

      Thanks for your comment; now I know which video the "last video" is. :D

    • @joluju2375
      @joluju2375 5 years ago

      Thank you. For me, the "last video" was "Jacobian prerequisite knowledge" in the playlist I'm watching ...

    • @suratvita
      @suratvita 5 years ago +1

      @@joluju2375 ua-cam.com/video/VmfTXVG9S0U/v-deo.html

    • @lakshya6235
      @lakshya6235 4 years ago +6

      you rock yourself.

    • @leiberlyu1493
      @leiberlyu1493 3 years ago

      You are of great help!
      THX

  • @deepakmecheri4668
    @deepakmecheri4668 5 years ago +54

    I've never seen someone make so much sense in my life. Grant, you are the GOAT

  • @ericobukhanich5575
    @ericobukhanich5575 2 years ago +243

    This guy knows a thing or two about maths. He should start his own channel.

    • @marcopel83
      @marcopel83 2 years ago +5

      I Knew it!

    • @jjqerfcvddv
      @jjqerfcvddv 2 years ago +20

      Dude!! That's "Grant Sanderson" from 3blue1brown.

    • @sindhiyadevimaheshwaran3738
      @sindhiyadevimaheshwaran3738 1 year ago +15

      @@jjqerfcvddv bruh...thats the joke

    • @PurasamaMan
      @PurasamaMan 1 year ago +9

      @@sindhiyadevimaheshwaran3738 Not everyone knows, my friend; we must convert the vast unwashed into Grant's mathematical following

    • @bossdelta501
      @bossdelta501 1 year ago +1

      BRUH I LEGIT JUST THOUGHT ABOUT THAT

  • @Lutterot
    @Lutterot 3 years ago +16

    Doing physics, I have been using Jacobians for years. This video finally lifted them beyond a 'trick' and gave me insight into what they really mean.

  • @smallvilleclark3990
    @smallvilleclark3990 1 year ago +4

    The best math teacher on UA-cam. Even kids can understand any big subject he's teaching.

    • @uvicjames
      @uvicjames 19 days ago +2

      Yes, I'm trying to get my 5 year old started. Hopefully he will be able to watch some of these videos soon.

  • @mrarkus7431
    @mrarkus7431 7 years ago +594

    This sounds and looks like 3blue1brown

    • @aseempatwardhan6778
      @aseempatwardhan6778 7 years ago +47

      Michael L He is...or so I hear

    • @davepogue543
      @davepogue543 7 years ago +18

      That's awesome, I was just looking on his channel for this very topic

    • @luffyorama
      @luffyorama 7 years ago +71

      He IS 3B1B

    • @muzammil360
      @muzammil360 7 years ago +15

      Yeah, you are right. This does sound like 3Blue1brown

    • @Fasteroid
      @Fasteroid 6 years ago +16

      I think that's because it is ( ͡° ͜ʖ ͡°)

  • @danielc4267
    @danielc4267 7 years ago +19

    It is helpful for intuition to multiply df1/dx at 2:37 by 1. df1/dx is a rate and you need to multiply it by 1 to give you the x-output of the unit vector (1,0). Note that the first column of the Jacobian represents what the unit vector (1,0) becomes after the transformation.

    • @aSeaofTroubles
      @aSeaofTroubles 7 years ago +18

      That is only true if the local linear approximation is still valid at further distances. Let me explain some more:
      The columns in the matrix track where the points (x_o+dx, y_o) and (x_o, y_o+dy) are mapped with respect to where (x_o, y_o) is mapped.
      For example, f(x_o+dx, y_o) - f(x_o, y_o) is the displacement between the images of (x_o+dx, y_o) and (x_o, y_o) after the mapping. To first order, this is simply our Jacobian times (dx, 0). Since the second entry is zero, we recover the column (df_1/dx, df_2/dx) scaled by dx: the partial derivative of f with respect to x.
      Likewise, the Jacobian times (0, dy) gives the partial derivative of f with respect to y, scaled by dy.
      So the Jacobian matrix is mapping *differential* vector quantities that are in the direction of our original basis vectors. We can think of these differential vectors dx and dy as our new basis!
      But if we choose a basis that is very small, we better make sure our transformation returns a number that isn't very small too. This is why we can imagine "normalising" by the small quantities "dx" and "dy" in the bottom of our matrix. In a normal transformation matrix, we know the denominator is simply "1". But since our function isn't actually linear, we do not have the luxury of using such a simple basis. We can only act on small vectors accurately, so we re-scale :)
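[A numerical aside on this thread: the claim that the first column of the Jacobian is where a tiny x-step lands, divided by the step size, can be checked directly. A minimal sketch, assuming the sine-based map f(x, y) = (x + sin y, y + sin x) and the point (-2, 1); both are illustrative assumptions, not quoted from the transcript.]

```python
import math

# Assumed example map in the spirit of the video: f(x, y) = (x + sin y, y + sin x).
def f(x, y):
    return (x + math.sin(y), y + math.sin(x))

# Its analytic Jacobian: [[df1/dx, df1/dy], [df2/dx, df2/dy]].
def jacobian(x, y):
    return [[1.0, math.cos(y)],
            [math.cos(x), 1.0]]

x0, y0 = -2.0, 1.0   # the point tracked in the video
dx = 1e-6

# Map a tiny step (dx, 0) and compare with the first column of the Jacobian.
f0 = f(x0, y0)
f1 = f(x0 + dx, y0)
actual = [(f1[0] - f0[0]) / dx, (f1[1] - f0[1]) / dx]
J = jacobian(x0, y0)

print(actual)              # close to the first column of J
print([J[0][0], J[1][0]])
```

The same check with a step (0, dy) recovers the second column, which is the point of the reply above.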

  • @hektor6766
    @hektor6766 3 years ago +12

    "Some years ago at Khan Academy, I made many videos and articles on multivariable calculus. " -Grant Sanderson (3blue, 1brown)

  • @fritzzz1372
    @fritzzz1372 3 years ago +17

    This closely relates to divergence and curl.
    If, from the final matrix (the Jacobian), you add the top-left and bottom-right entries (the partial derivative of f1 with respect to x plus the partial derivative of f2 with respect to y),
    you get the divergence.
    If you subtract the top-right entry from the bottom-left one (the partial derivative of f2 with respect to x minus the partial derivative of f1 with respect to y),
    you get the (2D) curl.
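[A sketch of the identities in this comment: divergence is the trace of the Jacobian, and the 2-D scalar curl is the bottom-left entry minus the top-right one. The example field f(x, y) = (x + sin y, y + sin x) is an assumption for illustration.]

```python
import math

# Jacobian of the assumed field f(x, y) = (x + sin y, y + sin x):
# [[df1/dx, df1/dy], [df2/dx, df2/dy]]
def jacobian(x, y):
    return [[1.0, math.cos(y)],
            [math.cos(x), 1.0]]

def divergence(x, y):
    J = jacobian(x, y)
    return J[0][0] + J[1][1]      # top-left + bottom-right (the trace)

def curl2d(x, y):
    J = jacobian(x, y)
    return J[1][0] - J[0][1]      # bottom-left - top-right

print(divergence(0.0, 0.0))   # 1 + 1 = 2.0
print(curl2d(0.0, 0.0))       # cos(0) - cos(0) = 0.0
```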

  • @nishparadox
    @nishparadox 6 years ago +32

    Thanks Grant. You are awesome. Finally, I understand what the Jacobian matrix really represents.

    • @maxpercer7119
      @maxpercer7119 4 years ago +1

      No you haven't. The Jacobian matrix goes much deeper than this; he is just touching the tip of the iceberg.

    • @aryamanpatel8250
      @aryamanpatel8250 4 years ago +6

      @@maxpercer7119 Then please do point in the direction where I can gain an even deeper understanding.

    • @ryan_chew97
      @ryan_chew97 3 years ago +1

      @@aryamanpatel8250 You could say... please point him in the locally linear direction so he can arrive at his deeper destination ;)

    • @lampa298
      @lampa298 3 years ago +1

      @@maxpercer7119 oh, go take a hike

  • @dan_pal
    @dan_pal 3 months ago +1

    This is the only video on Jacobians that I actually understood. Why do other UA-camrs just start spitting equations? If I could understand just from that, I'd read the book!

  • @SB-zv9ls
    @SB-zv9ls 1 year ago +2

    Simply awesome. I wish we could give him a Nobel prize or something.

  • @IJKersten
    @IJKersten 9 months ago

    This is so incredibly well explained.

  • @cringotopia8850
    @cringotopia8850 1 year ago +2

    Wow, this guy is good at explaining math, maybe he should start an independent UA-cam channel or something...

  • @NovaWarrior77
    @NovaWarrior77 4 years ago +1

    Thank you. That last paragraph was just SO well constructed. Rest of the video too.

  • @MayankGoel447
    @MayankGoel447 2 years ago +4

    Great video! But I didn't get why the x component of the transformed dx must be df1/dx.

  • @ClosiusBeg
    @ClosiusBeg 3 years ago

    The best explanation I've ever seen!

  • @divyamgarg9078
    @divyamgarg9078 1 year ago

    Amazing video! Precisely what I was looking for. The physical intuition is so important to understanding a concept.

  • @xruan6582
    @xruan6582 4 years ago +1

    The green and red arrows are too small. They should be zoomed in on to give viewers a clear idea of what happens when both x and y change by a tiny amount.

  • @MohitYadav09
    @MohitYadav09 1 year ago +1

    Can someone please explain: Grant said at 2:10 that the x component of the 2-D movement in output space is the partial change in f1. Why do we say this? Why does that x component equal the partial of f1?

    • @gate2024-c5b
      @gate2024-c5b 1 year ago

      A change dx results in a change df, which has two components; these form the first column of the Jacobian matrix. Similarly for dy.

  • @FugieGamers
    @FugieGamers 6 years ago +4

    Thanks for this video, it finally clicked for me. You are great.

  • @giuseppealmontelalli840
    @giuseppealmontelalli840 4 years ago +3

    Hi, I have a question: how can I align a real surface to the CAD model by touching the real part? I have the 3D model, I take n points, then I go to the real part and start probing for the surface at the same coordinates (with a robot, for example). If the real piece has a rotation/translation, I have to correct the error. I actually don't know how to do it ... could you recommend some techniques?

  • @Juniper-111
    @Juniper-111 2 years ago +1

    grant has the most beautiful voice on UA-cam

  • @avneeshkhanna
    @avneeshkhanna 4 years ago +2

    Great video! However, I have a doubt. When you were tracking that yellow square, the grid lines transformed like a linear transformation. However, the grid itself translated to another coordinate [near (-1, 0)]. Since we know that translations are NOT linear transformations, then how can we say that the grid represents linear transformation?

    • @palantea1367
      @palantea1367 3 years ago

      He's not considering that translation; he's just considering the linear transformation around (-2,1). It's like when we are on Earth: we don't account for the Earth's motion when doing some physics calculations.

    • @hektor6766
      @hektor6766 3 years ago

      At about 1:20, you can see he selects -2,1 on the original matrix. That selected point moves to near -1,0 after the various partial transformations performed throughout the video.

  • @PunitSoni00
    @PunitSoni00 6 years ago +2

    which one is the next video? Is there a playlist for this series?

    • @slashholidae
      @slashholidae 5 years ago

      Did you ever find out?

    • @bharasiva96
      @bharasiva96 5 years ago +2

      Yes it indeed is. Here is the link to the entire playlist which is called "Multivariable Calculus":
      ua-cam.com/play/PLSQl0a2vh4HC5feHa6Rc5c0wbRTx56nF7.html

  • @RCPN
    @RCPN 5 years ago +4

    Why did we divide delf1 and delf2 by delx and dely?
    I understood that the x component would be delf1 and the y component delf2, but then we divide them by delx and dely... why?

    • @Cessedilha
      @Cessedilha 4 years ago +1

      The reason is that in the approximation the Jacobian is multiplied by the vector [delx,dely]. If the vector was [1,1] you'd be correct that it should be just delf1 and delf2.
      Think that the approximation (taking some liberties with notation) is dF = J*dX, where F is the vector of the function and X is the vector with the variables.
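[The reply above writes the approximation as dF = J*dX. A numerical sketch, again assuming the illustrative map f(x, y) = (x + sin y, y + sin x): the error of the linear prediction is second-order small in the step size.]

```python
import math

def f(x, y):
    return (x + math.sin(y), y + math.sin(x))

def jacobian(x, y):
    return [[1.0, math.cos(y)],
            [math.cos(x), 1.0]]

x0, y0 = -2.0, 1.0
dx, dy = 1e-4, 2e-4

# Actual change dF = f(x0+dx, y0+dy) - f(x0, y0)
f0 = f(x0, y0)
f1 = f(x0 + dx, y0 + dy)
dF = (f1[0] - f0[0], f1[1] - f0[1])

# Linear prediction J @ (dx, dy)
J = jacobian(x0, y0)
pred = (J[0][0] * dx + J[0][1] * dy,
        J[1][0] * dx + J[1][1] * dy)

err = max(abs(dF[0] - pred[0]), abs(dF[1] - pred[1]))
print(err)   # on the order of the step size squared
```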

  • @1.4142
    @1.4142 2 years ago +1

    soothing voice

  • @Dhruvbala
    @Dhruvbala 4 years ago +1

    Great video! But one thing I don't quite get is why you divide by del x and del y to find the different components of the Jacobian. Could someone please explain?

    • @kartikvarshney9257
      @kartikvarshney9257 4 years ago +2

      Because we are looking for the ratio of how much the axis is stretched or squeezed.

  • @sivasudharshan5444
    @sivasudharshan5444 5 years ago +3

    He is 3 blue 1 brown

  • @puneetsharma2135
    @puneetsharma2135 3 years ago

    Should the value of the Jacobian matrix (its determinant) be high or low? What is its ideal value?

  • @Tntpker
    @Tntpker 6 years ago

    I still don't understand how this relates to the Jacobian pointing in the direction of steepest ascent. So it's basically the gradient, but for functions that output vectors?

  • @geophysicsadvancedseismice7542
    @geophysicsadvancedseismice7542 4 years ago

    Sir, what is the difference between the Newton-Raphson and Gauss-Newton methods? Any video link regarding these methods?

  • @venkatachaitanyayadla1794
    @venkatachaitanyayadla1794 4 years ago +1

    what is the name of this playlist?

  • @yksnimus
    @yksnimus 4 years ago +2

    Where's the next vid? The videos should have previous/next links in the description.

  • @SohamChakraborty42069
    @SohamChakraborty42069 4 years ago

    I have a question here, which kind of seems self-explanatory, but it would still be nice to get some confirmation. Is local linearity a property of every point in every transformation? The reason I ask is that, due to non-differentiability at some points, we may not be able to compute some of the partial derivatives for certain kinds of functions. How should this be interpreted?

    • @Cessedilha
      @Cessedilha 4 years ago +1

      Locally linear = differentiable. If it's not differentiable at a certain point, this means that it can't be locally approximated by a linear transformation, and vice versa.

    • @SohamChakraborty42069
      @SohamChakraborty42069 4 years ago

      @@Cessedilha Thanks a lot!
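[To make the "locally linear = differentiable" point in this thread concrete: |x| at 0 is the standard non-example. The two one-sided difference quotients disagree, so no single linear map approximates it there. A minimal sketch:]

```python
# One-sided difference quotients of |x| at 0 never agree, no matter how far we zoom:
def slope(func, a, h):
    return (func(a + h) - func(a)) / h

right = slope(abs, 0.0, 1e-8)    # slope approaching from the right
left = slope(abs, 0.0, -1e-8)    # slope approaching from the left

print(right, left)   # 1.0 -1.0: zooming in never makes |x| look like one line at 0
```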

  • @donlansdonlans3363
    @donlansdonlans3363 4 years ago +2

    How does he make all those animations, such as bending the plane?

    • @vishank7
      @vishank7 4 years ago +1

      He has made a project called "manim" for these animations. Check it out!

  • @kadirbasol82
    @kadirbasol82 5 years ago +1

    Is this extracting the axis of rotation? It seems like an "eigenvector". Is there any relationship between the Jacobian matrix and eigenvectors?

  • @상추-o1f
    @상추-o1f 2 years ago

    What a nice lecture

  • @solsticetwo3476
    @solsticetwo3476 5 years ago +1

    So, how do you know when it's a transformation and when it's a vector field?

    • @douglasmangini8744
      @douglasmangini8744 5 years ago +2

      It depends on how you see the input space, that is, whether it's filled with vectors (things that can be added to each other and scaled by numbers) or points (simple pairs of numbers that cannot be added or scaled). If you think of vectors, then it's a transformation, like those you see in linear algebra, but this time not necessarily linear. If you think the function is mapping points to vectors, then it's a vector field. But I think Grant's point in this course is that these are two complementary ways of seeing the same thing; it's just that the transformation has this agile nature of taking vectors from one place to another, while vector fields are more static.

  • @joshace5988
    @joshace5988 3 years ago

    Legend! You explained that so well.

  • @samsmusichub
    @samsmusichub 7 months ago

    Hey thanks.

  • @rajibsarmah6744
    @rajibsarmah6744 4 years ago +1

    Change of variables in a double integral

  • @Krishnajha20101
    @Krishnajha20101 6 years ago +1

    Are there functions that are not even locally linear? What would be an example of such a function?

    • @PfropfNo1
      @PfropfNo1 5 years ago +2

      Abs(x) at 0; 1/x at 0 etc.

    • @chandankar5032
      @chandankar5032 5 years ago +1

      @@PfropfNo1 Can you help a bit? I have a doubt: do all the entries in the Jacobian matrix represent change in output space divided by change in input space? Considering the Jacobian matrix, the images of the basis vectors of the input space must become the entries of the Jacobian matrix, but I did not get how delf1/delx, delf1/dely, ... are obtained.

  • @randomstuff9960
    @randomstuff9960 11 months ago

    Wow! He is a mathematical wizard...

  • @robmarks6800
    @robmarks6800 4 years ago +1

    What bothers me is that the nonlinear transformation translates a point somewhere else, but this is never captured by the Jacobian matrix. Why isn't the translation important?

    • @joluju2375
      @joluju2375 3 years ago +1

      Same problem here. I hoped the next video "Computing a Jacobian matrix" would clarify that and answer this question, but nope.
      What is missing here is a real example of how the use of the jacobian matrix would give a satisfying solution to a problem otherwise too complicated.
      So far, the best I can understand is that the Jacobian matrix can simplify determining what's happening to the *neighborhood* of the point by using only linear functions, but I can't imagine a situation where I would need that.
      Finally, my best bet is that I missed something important.

  • @antonienewman9379
    @antonienewman9379 5 years ago

    Why do you divide partial f by dx? I don't understand.

  • @thefacelessmen2101
    @thefacelessmen2101 7 years ago +1

    Can you please add a link to the software you are using for this, and perhaps the code?

  • @dr_ingenium
    @dr_ingenium 7 years ago

    Does the local linearity have to be at (-2,1)? Or is every point locally linear after the transformation if you zoom in closely enough?

  • @eStalker42
    @eStalker42 7 years ago +3

    What is a 1-dimensional Jacobian matrix? Just df/dx?

    • @nathanielsaxe3049
      @nathanielsaxe3049 7 years ago +4

      sounds right

    • @ozzyfromspace
      @ozzyfromspace 7 years ago

      Yup, 1x1 Jacobian Matrix is essentially a derivative of a univariate scalar function, 1 x m Jacobian Matrix is the transposed gradient vector of a multivariate scalar function. Cool beans.
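[A small sketch of the reply above, with hypothetical example functions: a 1x1 Jacobian is just the ordinary derivative, and the 1 x m Jacobian of a multivariate scalar function is the gradient written as a row.]

```python
import math

def g(x):                 # univariate scalar function
    return x ** 2

def jac_g(x):             # 1x1 Jacobian: the ordinary derivative g'(x) = 2x
    return [[2 * x]]

def h(x, y):              # multivariate scalar function
    return x * y + math.sin(x)

def jac_h(x, y):          # 1x2 Jacobian: the transposed gradient [dh/dx, dh/dy]
    return [[y + math.cos(x), x]]

print(jac_g(3.0))         # [[6.0]]
print(jac_h(0.0, 2.0))    # [[3.0, 0.0]]
```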

  • @harishd37
    @harishd37 5 years ago

    Shouldn't the origin also remain fixed? Won't we also need the information about where the origin moves? Just recording the information in a 2 x 2 matrix seems insufficient.

    • @doaby3979
      @doaby3979 5 years ago

      Harish D Keep in mind that when using this matrix, we’re only focusing on local points surrounding the point we originally focused on, not the grid as a whole. The fact that we’re taking partial derivatives automatically encapsulates this idea of locality. Also, the origin in this example moves because the matrix transformation isn’t linear.

  • @inaamilahi5094
    @inaamilahi5094 5 years ago

    Simple and to the point (Y)

  • @snehasishchowdhury6900
    @snehasishchowdhury6900 6 years ago +1

    Does the Jacobian matrix make sense for a linear transformation, given that there we don't need to zoom?

    • @jamesedwards6173
      @jamesedwards6173 6 years ago +1

      Any possible linear transformation of x and y can be conceptually represented as shown in the video by the matrix (with a-f being constants):
      [ ax+by+e]
      [ cx+dy+f ]
      (As should be expected, these are just equations for lines.)
      What happens if you apply the Jacobian to this matrix? It reduces to precisely the linear transformation matrix that's normally used to transform (x,y) points:
      [ a b ]
      [ c d ]
      Why is this so? Why is it just constants? ... Because the Jacobian expresses how much a transformation is "changing things locally", and a _linear_ transformation changes the entire transformation space in exactly the same way (which is why lines stay parallel, and whatnot). In other words, it does not vary; it stays constant. It is comprised entirely of uniform scaling and shearing (and potentially translating).
      In short, the reason the (general, i.e., unevaluated) Jacobian shown in the video varies from point to point is _because_ the functions selected for the transformation were *not* linear (sine and cosine). If they were linear, the resulting matrix would have simply been full of constants.

    • @Cessedilha
      @Cessedilha 4 years ago

      The Jacobian matrix is a linear approximation. For a linear transformation (a matrix multiplication), the Jacobian is the linear transformation itself. It's like what happens with 1-D differentiation: the derivative of a constant times x is the constant itself.
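[The two replies above can be verified numerically: for an affine map F(x, y) = (a*x + b*y + e, c*x + d*y + f), the Jacobian is the constant matrix [[a, b], [c, d]] at every point. A sketch with arbitrary illustrative constants:]

```python
# Affine map with arbitrary constants (chosen only for illustration):
a, b, c, d, e, f = 2.0, -1.0, 0.5, 3.0, 7.0, -4.0

def F(x, y):
    return (a * x + b * y + e, c * x + d * y + f)

def num_jacobian(x, y, h=1e-6):
    # Finite-difference Jacobian: [[dF1/dx, dF1/dy], [dF2/dx, dF2/dy]]
    fx = F(x, y)
    fxh = F(x + h, y)
    fyh = F(x, y + h)
    return [[(fxh[0] - fx[0]) / h, (fyh[0] - fx[0]) / h],
            [(fxh[1] - fx[1]) / h, (fyh[1] - fx[1]) / h]]

J1 = num_jacobian(0.0, 0.0)
J2 = num_jacobian(-2.0, 1.0)    # the same matrix at a different point

print(J1)   # close to [[2.0, -1.0], [0.5, 3.0]], i.e. [[a, b], [c, d]]
print(J2)
```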

  • @emfournet
    @emfournet 2 years ago

    3Blue1Brown? You take my hand in the darkness and lead me through perdition.

  • @marcovillalobos5177
    @marcovillalobos5177 5 years ago +1

    Hold on, so the gradient is a Jacobian matrix with only one row, because there is only f1(x)?

  • @TBadalov
    @TBadalov 7 years ago +4

    Confused after the transformation of the graphics :(

    • @aSeaofTroubles
      @aSeaofTroubles 7 years ago +7

      Perhaps my explanation to Daniel C above may help!
      We are mapping small changes (dx, 0) and (0, dy) to small changes of f!
      J (dx, 0) gives the column (df_1/dx, df_2/dx) scaled by dx: the partial derivative of f with respect to x.
      Likewise, J (0, dy) gives the partial derivative of f with respect to y, scaled by dy.
      So, locally, we know how far we would move from the point we are evaluating if we took small steps.

  • @MrBemnet1
    @MrBemnet1 4 years ago

    Send me a message if anyone doesn't understand the concept.

  • @GOODBOY-vt1cf
    @GOODBOY-vt1cf 4 years ago

    thank you so much

  • @bevel1702
    @bevel1702 2 years ago

    Grant is on Khan??

  • @matejamartin2199
    @matejamartin2199 2 years ago

    Is this from precalculus?

  • @martovify
    @martovify 4 years ago

    what is the name of this series!?!?

  • @콘충이
    @콘충이 4 years ago

    Thank you!

  • @charmatataa5858
    @charmatataa5858 2 years ago

    I love you, Grant

  • @antonienewman9379
    @antonienewman9379 5 years ago

    Partial derivatives represent rates, but I don't really get it. The values in the matrix should represent the coordinates of where the basis vectors land. Can someone make this clear?

    • @jordanjacobson6046
      @jordanjacobson6046 4 years ago

      Well, each of the partial derivatives gives you a function that tells you the rate of change of one function with respect to one variable, and when we evaluate it at a specific point, it tells us what that change is. That's the important part, and he said it: we have to evaluate it, and then it turns into a matrix of numbers instead of functions.

  • @Tara-li4hl
    @Tara-li4hl 1 year ago

    You're awesome❤

  • @ameyislive2843
    @ameyislive2843 4 years ago +4

    Wait is this The Talking Pi

  • @hr2441
    @hr2441 6 years ago +9

    I am totally confused......

  • @Melkboer38
    @Melkboer38 6 years ago +11

    The channel name is Khan something, but he sounds suspiciously like 3b1b.

    • @RandyFortier
      @RandyFortier 6 years ago +4

      Same guy. Khan Academy has different teachers, one of whom is the guy from 3b1b.

  • @crane8035
    @crane8035 2 years ago

    It's Grant!

  • @ruralmetropolitan
    @ruralmetropolitan 7 years ago

    This is nice.

  • @exarkk
    @exarkk 5 years ago

    excellent

  • @youzhisun3651
    @youzhisun3651 5 years ago +2

    This definitely sounds like 3blue1brown.

    • @x0cx102
      @x0cx102 5 years ago +2

      it is 3b1b! :)

  • @nicholasandrzejkiewicz
    @nicholasandrzejkiewicz 7 years ago +4

    Howto & Style? How is this not education?

    • @That_One_Guy...
      @That_One_Guy... 4 years ago

      What do you mean, how-to? This isn't some simple cooking tutorial or something; this is academic teaching.

    • @nicholasandrzejkiewicz
      @nicholasandrzejkiewicz 4 years ago +1

      @@That_One_Guy... I didn't mean anything, that is how UA-cam categorized the video.

  • @Joshua-c
    @Joshua-c 5 years ago

    Brilliant

  • @mrscherryhuggypoinks9438
    @mrscherryhuggypoinks9438 7 years ago +3

    notif squad wer u at? no one? mkay....

  • @zeyad544
    @zeyad544 6 years ago +1

    what

  • @mohamedusaid456
    @mohamedusaid456 4 years ago

    Super maa more than super.

  • @justin.t.mcclung
    @justin.t.mcclung 2 years ago

    No disrespect, but your symbol for the partial derivative needs work. It looks like a g.

  • @lullubi5957
    @lullubi5957 4 years ago

    I don't understand nothing 🤦‍♀️

    • @aashsyed1277
      @aashsyed1277 3 years ago

      You don't understand anything? Or, if you say "nothing", it means you do understand :P

  • @尾崎元恒-j2u
    @尾崎元恒-j2u 3 years ago

    I don't want to do any job other than actor or mathematician.

  • @AntonyDesigns
    @AntonyDesigns 3 hours ago

    * watches video and recognizes the narrator's voice *
    * looks down at comment *
    Grant Sanderson! I knew it! @3blue1brown