e to a matrix

  • Published 15 Sep 2024
  • Calculating e^A for a matrix A, explaining what this has to do with diagonalization, and solving systems of differential equations
    Check out my Eigenvalues playlist: • Diagonalize 2x2 matrix
    Subscribe to my channel: / @drpeyam

COMMENTS • 170

  • @gast128
    @gast128 4 years ago +141

    I stumbled upon this 25 years ago in a dissertation and was puzzled by what it meant to have a matrix as an exponent. Unfortunately there was no YouTube back then to give me the answer. Thanks for clearing another impediment before I can die.

    • @Titurel
      @Titurel 3 years ago +1

      Thank the math gods for YouTube!

    • @xenumi
      @xenumi 2 years ago

      Shorter of breath and one proof closer to death.

    • @johnsmithsu310
      @johnsmithsu310 8 months ago +1

      now you can die in peace brother 😸 /j

  • @dougr.2398
    @dougr.2398 5 years ago +87

    Exponential matrices are useful in solving relativistic wave equations. (Dirac equation for Hydrogen atom).

    • @TheSandkastenverbot
      @TheSandkastenverbot 2 years ago +1

      They actually occur in any textbook on ordinary differential equations that deals with systems of linear ODEs

  • @sebastiancuentasjimenez3231
    @sebastiancuentasjimenez3231 4 years ago +7

    I speak very little English, but I understood most of the explanation, seriously thank you very much from Bolivia

  • @daddymuggle
    @daddymuggle 5 years ago +60

    Very nice presentation. You might want to do something about a minor error at 8:07, where you said (correctly) "e^A = P e^D P^-1" but wrote "e^A = P e^D e^-1".

    • @ldx748
      @ldx748 4 years ago +4

      pee xd
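
A quick numerical check of the corrected formula e^A = P e^D P^(-1) from the comment above — a minimal sketch assuming NumPy and SciPy are available, with an arbitrary diagonalizable matrix rather than the one from the video:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # any diagonalizable matrix will do

eigvals, P = np.linalg.eig(A)            # columns of P are eigenvectors, so A = P D P^(-1)
e_D = np.diag(np.exp(eigvals))           # e^D just exponentiates the diagonal entries
e_A = P @ e_D @ np.linalg.inv(P)         # e^A = P e^D P^(-1)

print(np.allclose(e_A, expm(A)))         # expect True: agrees with SciPy's matrix exponential
```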

  • @rand2person320
    @rand2person320 5 years ago +39

    Love the linear algebra. Keep it up Dr. P.

    • @rand2person320
      @rand2person320 5 years ago +1

      Also Dr. P, you should definitely do a video on general matrix exponentiation. I would find such a video very satisfying.

  • @aaronsmith6632
    @aaronsmith6632 4 years ago +6

    This is so cool, thanks! This is an idea I stumbled upon in college, and it's neat to see that it has real application in mathematics.

  • @TheKiltman31
    @TheKiltman31 5 years ago +2

    In general, you can always find a polynomial cancelled out by your matrix A (the characteristic polynomial, or the minimal polynomial if you're lucky) and then you can do 2 things to make the computation of the matrix powers easier:
    1) naive: use that to get a recursion relation between A^n and smaller powers of A, which you have already computed
    2) smart: for any integer p, compute a Euclidean division of X^p by said polynomial: when replacing X by A, the term which is a multiple of said polynomial just cancels out and you are left with a simple polynomial in A of degree < N, where N is the size of your matrix; this works nicely if you have an explicit expression for your matrix A and the polynomial cancelled out by A
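
A sketch of option 2 above, assuming NumPy; the matrix and the exponent p are just placeholders. X^p is divided by the characteristic polynomial, and by Cayley-Hamilton only the remainder survives when X is replaced by A:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])                       # placeholder 2x2 matrix

# Characteristic polynomial of a 2x2 matrix: x^2 - tr(A) x + det(A), highest degree first.
char_poly = np.array([1.0, -np.trace(A), np.linalg.det(A)])

p = 10
x_to_p = np.zeros(p + 1)
x_to_p[0] = 1.0                                  # coefficients of x^p, highest degree first
_, rem = np.polydiv(x_to_p, char_poly)           # x^p = q * char_poly + rem, deg(rem) < 2

# By Cayley-Hamilton, char_poly(A) = 0, so A^p = rem(A).
A_to_p = sum(c * np.linalg.matrix_power(A, len(rem) - 1 - k) for k, c in enumerate(rem))

print(np.allclose(A_to_p, np.linalg.matrix_power(A, p)))   # expect True
```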

  • @lukehatcher98
    @lukehatcher98 4 years ago +2

    This is an absolute GOD TIER explanation. Thank you

  • @benjaminbrady2385
    @benjaminbrady2385 5 years ago +30

    You don't need it to be diagonalizable though since you can always raise to a power by hand, it's just really hard to calculate without eigenvectors

    • @sugarfrosted2005
      @sugarfrosted2005 5 years ago +5

      Using Jordan blocks can make it easier too.

    • @nathanisbored
      @nathanisbored 5 years ago +1

      I don't understand how you would find A^n directly, since the direct method depends on knowing what n is

    • @shayanmoosavi9139
      @shayanmoosavi9139 4 years ago +2

      You can't do it because it's an infinite series. For example how would you find exp([1 2; 3 4])? There's no way that you can multiply this matrix infinitely many times by itself. The series would be divergent. Unless you want your matrix to become [inf inf; inf inf]).

    • @josephtraverso2700
      @josephtraverso2700 2 years ago

      @@nathanisbored I’m replying even though it’s two years too late. In the formula A^N = P D^N P^-1, the D matrix is diagonal, so raising it to N merely means raising its elements to N. So D^N is simple to calculate and you just stick it between P and P^-1 to get A^N
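
A small check of the A^N = P D^N P^(-1) recipe from the reply above — a sketch assuming NumPy and an arbitrary diagonalizable matrix:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 2.0]])               # arbitrary diagonalizable example (eigenvalues 1 and 4)

eigvals, P = np.linalg.eig(A)
N = 7
D_to_N = np.diag(eigvals ** N)           # raising the diagonal matrix D just raises its entries
A_to_N = P @ D_to_N @ np.linalg.inv(P)

print(np.allclose(A_to_N, np.linalg.matrix_power(A, N)))   # expect True
```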

  • @LeandroDelgadoVentrue_O_Lobo
    @LeandroDelgadoVentrue_O_Lobo 2 years ago

    Hi Dr. Peyam! I started my Master's degree last Tuesday, and because of it I have had my eyes on papers and books for the last four days trying to understand this kind of exercise. I found a way to solve it, but I was not completely sure about my solution until I found your video! Your video confirms all my conclusions about this topic, thank you very much!

    • @drpeyam
      @drpeyam  2 years ago +1

      Thank you!!!!

  • @varunalur3988
    @varunalur3988 4 years ago +4

    Thank you so much professor, this helped me a lot.

  • @RaniLink
    @RaniLink 5 years ago +4

    Thank you so much! from a Tel-Aviv University mechanical engineering student!

    • @tamarpeer261
      @tamarpeer261 3 years ago +2

      Hi! Open University Computer Science student

  • @Eric-xh9ee
    @Eric-xh9ee 2 years ago

    I had forgotten about this, then I found your video after getting confused about how to solve for the matrix power of an exponential in a quantum computing function. I love your channel, keep up the great work!

  • @rishabhpandey12aroll7.5
    @rishabhpandey12aroll7.5 1 year ago

    I love this enthusiasm for maths, you earned a sub sir. Keep up the good work.

  • @omegadelta493
    @omegadelta493 5 years ago

    This was a great video to help me in times of need, mainly right before the exam. Thank you for breaking down everything so adequately.

  • @antoniocampos9721
    @antoniocampos9721 3 years ago

    Man, you answer all the questions that I've been looking for in books and, as always, I never find them.

  • @PuzzleQodec
    @PuzzleQodec 5 years ago +7

    I remember that these matrix exponentials were very useful in computer science for modeling dynamic systems with one or more feedback loops. I wonder where I can find that mysterious Feigenbaum constant in here.

  • @hkhj139
    @hkhj139 6 months ago

    Thank you so much
    Internet is a blessing

  • @rifatsiddique9643
    @rifatsiddique9643 4 years ago +1

    You did an outstanding job, sir; I liked it a lot

  • @thelife8836
    @thelife8836 1 month ago +1

    sir you are a gem❤

  • @animeshsarkar7717
    @animeshsarkar7717 2 years ago +1

    Your videos are really great ! Thank you sir ❤️

  • @kamilbizon8317
    @kamilbizon8317 5 years ago +10

    Soon we will have matrix derivatives on this channel

    • @drpeyam
      @drpeyam  5 years ago +11

      Hahaha, great idea!!!

  • @Fightclub1995
    @Fightclub1995 5 years ago +4

    In general you can use the Jordan normal form.

  • @siddharthjoshi6811
    @siddharthjoshi6811 4 years ago

    Dr Peyam, we love your videos; they are quite informative and a good refresher. I have one suggestion: when you refer to a previous video for some concept, you could add the link via the info button or in the description.
    Thank you for the lovely content.

  • @슈슈-b3z
    @슈슈-b3z 3 years ago +1

    What a great video! Thank you for the wonderful explanation :)

  • @basilpp3810
    @basilpp3810 3 years ago

    That was a fantastic explanation... Dr. P!!

  • @gunjansharma4789
    @gunjansharma4789 1 year ago

    Amazing video.. Thank you so much Dr. Peyam

  • @vigneshsaravanan3305
    @vigneshsaravanan3305 3 years ago

    Thanks for clearing this up. Keep up the great work, mate.

  • @xsli2876
    @xsli2876 4 years ago

    Thank you very much! I am always a big fan of your teaching YouTube videos. Here, at 10:50, I am puzzled. You wrote: x(t)=e^(At) * c , however, my textbook says: x(t)=e^(At)*x(0) while x(0)=P*c
    The final answer should be x=c1*e^t*(5,1) + c2*e^(5t)*(1,1)
    Again, thank you so much for teaching us! I greatly appreciate it.

    • @drpeyam
      @drpeyam  4 years ago

      I mean c is an arbitrary constant, so if c is arbitrary, so is P*c. The second formula is a bit more specific
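
To make the x(t) = e^(At) x(0) form from this thread concrete, here is a minimal sketch assuming NumPy/SciPy; the system matrix and initial condition are made up, not the ones from the video:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])             # some system x'(t) = A x(t)
x0 = np.array([1.0, 0.0])                # x(0); this plays the role of the constant vector c

t = 0.5
x_t = expm(A * t) @ x0                   # x(t) = e^(At) x(0)

# Sanity check against a numerical ODE solver.
sol = solve_ivp(lambda s, x: A @ x, (0.0, t), x0, rtol=1e-10, atol=1e-12)
print(np.allclose(x_t, sol.y[:, -1]))    # expect True
```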

  • @MiffyNichen
    @MiffyNichen 5 months ago

    Straight to the point. Thanks.

  • @PowerandControlUFU
    @PowerandControlUFU 5 years ago +1

    I just use Laplace transform. We use this all the time in Control Systems.

  • @smoua4588
    @smoua4588 1 year ago

    Thank you so much. I understand everything you explained.

  • @Handelsbilanzdefizit
    @Handelsbilanzdefizit 5 years ago +1

    How do you define the factorial of a matrix: A! ?
    I would say A_5! = A_5 * A_4 * A_3 * A_2 * I, where I is the identity matrix, and A_n has a determinant 1 lower than A_(n-1).
    Or use Gamma(A_5). Something like that. But what interests me more: is there a quick rule for how the result changes if you permute the factors of a matrix product?
    Result1 = A*B*C*D*E*F*G --> Result2 = F*A*G*D*B*E*C without recalculating the entire thing. The determinants of the results have to be the same, I think.

    • @drpeyam
      @drpeyam  5 years ago

      Not sure :) I mean what is even f(x) !

    • @Handelsbilanzdefizit
      @Handelsbilanzdefizit 5 years ago

      @@drpeyam
      Greetings from Germany, by the way.
      My english is not the best.

    • @Royvan7
      @Royvan7 4 years ago

      @@Handelsbilanzdefizit Pretty sure you could use the gamma function, though I'm not certain how you would go about calculating the integral.
      As for the rearranging thing: yep, the determinants of results 1 and 2 have to be identical. The only rearranging rule I know is A*B = C => B^t*A^t = C^t (^t is transpose)
      let's see...
      A*B*C*D*E*F*G = R
      G^t*[A*B*C*D*E*F]^t = R^t
      G^t*[D*E*F]^t*[A*B*C]^t = R^t
      G^t*F^t*[D*E]^t*C^t*[A*B]^t = R^t
      G^t*F^t*E^t*D^t*C^t*B^t*A^t = R^t
      huh... so you can just reverse the order no matter how many. I don't think this will be useful for any other permutation than just the reverse transpose.

  • @Tomaplen
    @Tomaplen 5 years ago +3

    I didn't get why e^D is what you showed :( I feel like you used what we are trying to do; the idea was to learn how to solve e^matrix, but you just showed the answer for e^D

    • @Poultrypoundcake
      @Poultrypoundcake 3 years ago +1

      Yeah he said he wasn't gonna explain it I thought that was the point

    • @Titurel
      @Titurel 3 years ago

      I know. I feel I need to know why. 😩

  • @dhrubajyotipaul8204
    @dhrubajyotipaul8204 3 years ago

    Your sweet video made my day ♥

  • @michalbotor
    @michalbotor 5 years ago

    Here's a question that came to my mind:
    the set β := { [1, 0; 0, 0], [0, 1; 0, 0], [0, 0; 1, 0], [0, 0; 0, 1] } (*) of 2x2 matrices is a basis for the linear space of 2x2 matrices.
    I've checked that the set
    exp(β) := { exp(B) | B in β } = { [e, 0; 0, 1], [1, 1; 0, 1], [1, 0; 1, 1], [1, 0; 0, e] } is a basis for this space as well.
    Is that true for any basis of the linear space of NxN matrices?
    (*) here: comma means 'in the same row', and semicolon means 'in the next row'.
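
One way to poke at the question above numerically: exponentiate each basis matrix, flatten, and check the rank (a sketch assuming NumPy/SciPy; it only verifies this particular basis, not the general claim):

```python
import numpy as np
from scipy.linalg import expm

basis = [np.array([[1.0, 0.0], [0.0, 0.0]]),
         np.array([[0.0, 1.0], [0.0, 0.0]]),
         np.array([[0.0, 0.0], [1.0, 0.0]]),
         np.array([[0.0, 0.0], [0.0, 1.0]])]

exp_basis = [expm(B) for B in basis]
# The exponentials form a basis of 2x2 matrices iff their flattened versions span R^4.
rank = np.linalg.matrix_rank(np.vstack([M.ravel() for M in exp_basis]))
print(rank == 4)                          # expect True for this particular basis
```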

  • @astha192
    @astha192 3 years ago +1

    Please tell me how P and P^(-1) act as scalars with respect to the sum

    • @joshuakaufman6787
      @joshuakaufman6787 3 years ago

      I believe it’s that they aren’t affected by N, so by the properties of a series, you can take them out. Constant factor and whatnot.

  • @kyleeagar2423
    @kyleeagar2423 1 year ago

    Great Explanation!

  • @sandygham1032
    @sandygham1032 2 years ago

    thx for the video 😀really simple and exact examples !!

  • @MathMisty
    @MathMisty 3 years ago

    I have an exam tomorrow and stumbled upon an example: calculate e^(At) if A=[[3,0],[2,4]]. I've read that if the matrix is upper triangular or already diagonal, we only raise e to the diagonal entries and differentiate (d/dt) in the non-diagonal places. That would make this e^(At)=[[e^3t,0],[2te^2t,e^4t]]. However, I haven't found this information for a lower triangular matrix such as the one in the example. Is it the same as for upper triangular matrices?
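
For a concrete case like the one above, a quick way to check a hand-derived formula is to evaluate the exponential numerically at a fixed t (a sketch assuming SciPy):

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, 0.0],
              [2.0, 4.0]])   # the lower-triangular matrix from the comment

t = 1.0                      # pick any t; plug the same value into the closed form you derived by hand
print(expm(A * t))           # compare entry by entry with your formula
```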

  • @khanster
    @khanster 5 years ago

    Nice. Gonna catch up to this whole linear alg series soon.

  • @mecha_engineer
    @mecha_engineer 2 years ago

    I like ur passion

  • @cbbuntz
    @cbbuntz 2 years ago

    I'm still trying to wrap my head around the concept. I know this is used for solving differential equations and I'm trying to figure out what it actually means in an intuitive way, but it's not intuitive. If you plug in a sequence of integers on a superdiagonal or subdiagonal and the rest is zero, you get a triangular matrix of binomial coefficients. How? I'm guessing it's probably something about the factorials in the power series reducing to n choose k, but I don't get how it works.
    Another interesting property is that if your matrix is skew-symmetric, its exponential is orthogonal. I tried sticking the same sequence on the superdiagonal and a negative copy on the subdiagonal and I got something that looked pretty random. Then I tried A'*A and got back the identity. (That's MATLAB for A transpose dot A.) I tried a bunch of different sequences and it seemed like as long as there was nothing on the main diagonal and everything else was anti-symmetric, the resulting matrix was even orthonormalized.
    I'm having a hard time seeing a pattern in what was happening.
    I'm probably gonna try some even weirder stuff. Like what does the gamma function do to a matrix? Hell, maybe I'll even try some Bessel functions just because I can.
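
The skew-symmetric observation above is easy to reproduce: since A^T = -A, we get exp(A)^T exp(A) = exp(-A) exp(A) = I. A minimal sketch assuming NumPy/SciPy, with an arbitrary random matrix:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M - M.T                               # skew-symmetric: A^T = -A, zero diagonal

Q = expm(A)
print(np.allclose(Q.T @ Q, np.eye(4)))    # expect True: exp of a skew-symmetric matrix is orthogonal
```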

  • @ApplepieFTW
    @ApplepieFTW 4 years ago +3

    7:42 ... but why?? why does this make sense to do, why is it "allowed"

    • @MuffinsAPlenty
      @MuffinsAPlenty 4 years ago +3

      Use the formula for exponentials.
      Suppose D is a diagonal matrix. Then e^D = ∑[n=0,∞] D^n/n!.
      Remember that sums of matrices are computed component-wise, so when doing an infinite series of matrices, we are really doing an infinite series of each component of these matrices.
      Let's take the sum of the (i,j)-entries of all of the matrices where i≠j. But for any n, the (i,j)-entry of D^n is 0. So we have that the sum of the (i,j)-entries is ∑[n=0,∞] 0/n! = 0.
      Now, for i=j, suppose d is the (i,i)-entry of D. Then the (i,i)-entry of D^n is d^n. So the sum of the (i,i)-entries of these matrices are ∑[n=0,∞] d^n/n! = e^d.
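
A one-line check of that entrywise claim, assuming NumPy/SciPy and using a diagonal matrix with entries 1 and 5 as in the video:

```python
import numpy as np
from scipy.linalg import expm

D = np.diag([1.0, 5.0])
print(expm(D))                                              # [[e, 0], [0, e^5]]
print(np.allclose(expm(D), np.diag(np.exp([1.0, 5.0]))))    # expect True: just exponentiate the diagonal
```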

  • @elixpo
    @elixpo 1 year ago

    I'm here from BPRP (BlackPen_RedPen)... I'm a programmer and this explanation was stunning

  • @muluegebreslasie5954
    @muluegebreslasie5954 1 year ago

    Hi Dr Peyam: Thank you for the helpful video on how to compute matrix exponentials. I need your help on how to find a matrix D such that e^D = C for a given square matrix C.

    • @drpeyam
      @drpeyam  1 year ago

      D = ln(C) so take ln of all the eigenvalues

    • @muluegebreslasie5954
      @muluegebreslasie5954 1 year ago

      @@drpeyam : Thank you for the prompt reply, dear Dr. I tried it for D=(a,b;c,d), i.e. d11=a, d12=b, d21=c and d22=d, with C=(-1,0;0,-4), i.e. c11=-1, c12=0, c21=0 and c22=-4, which is a diagonal matrix. If I use your method I can't find a matrix, since ln(-1) is not defined, but I have seen the question posed as "find a matrix that satisfies this". Thanks.
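
A sketch of the D = ln(C) idea with the C from this thread, assuming NumPy/SciPy. Because C has negative eigenvalues there is no real logarithm (ln(-1) is not a real number), but scipy.linalg.logm returns a complex D with e^D = C:

```python
import numpy as np
from scipy.linalg import expm, logm

C = np.array([[-1.0, 0.0],
              [0.0, -4.0]])

D = logm(C)                               # complex matrix logarithm, since ln(-1) = i*pi
print(np.allclose(expm(D), C))            # expect True: e^D recovers C
```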

  • @afrolichesmain777
    @afrolichesmain777 5 years ago

    Thank you so much! Great video as always!!

  • @user-fy5tn7sy3t
    @user-fy5tn7sy3t 3 years ago

    Thanks Doctor ... it’s perfect 👌

  • @lydiaromero466
    @lydiaromero466 3 years ago

    thank you for the video! It really helped me :)

  • @sagargour2024
    @sagargour2024 2 years ago

    omfg this is insane, tyssm sir

  • @tauhid9983
    @tauhid9983 5 years ago +1

    Why not use the Cayley-Hamilton theorem to do it instead???

  • @stuffing1747
    @stuffing1747 1 year ago +1

  • @WerIstWieJesus
    @WerIstWieJesus 4 years ago +1

    With your presentations everything is so easy! Is there any video on the Jordan form?

    • @drpeyam
      @drpeyam  4 years ago +1

      Yeah

    • @drpeyam
      @drpeyam  4 years ago +1

      Check out my video on the Jordan Canonical Form

  • @shreyashchacharkar8323
    @shreyashchacharkar8323 2 years ago

    Hi, can we replace the matrix A by its trace to solve e to the power A when A is a Pauli matrix?

  • @jayjayf9699
    @jayjayf9699 3 years ago

    I don't get why the diagonal matrix can be written as an exponential of the entries of that diagonal matrix, yet an exponential of an ordinary matrix cannot be written as the exponential of the entries; in other words, why do we have to diagonalise it?

    • @drpeyam
      @drpeyam  3 years ago +1

      Basically because in order to take A^2 you don’t just take the squares of its components, but this is true for diagonal matrices. Matrix multiplication is weird

  • @umbraemilitos
    @umbraemilitos 5 years ago +2

    It is time to introduce Dirac notation, and the Einstein-Riemann notation.

  • @volkankacagan2585
    @volkankacagan2585 4 years ago +1

    thanks mate u helped a lot

  • @axenicyt8996
    @axenicyt8996 5 years ago +1

    I need your help! Please integrate e^x/x with no limits or domains like 0,inf,etc.

    • @drpeyam
      @drpeyam  5 years ago +1

      Not possible, I think

    • @axenicyt8996
      @axenicyt8996 5 years ago

      That's why I need your help because impossible itself says I'm possible

    • @kanurisrilakshmi
      @kanurisrilakshmi 5 years ago

      If limits are there we can use Simpson's rule etc., otherwise it is not possible

  • @alexlin6088
    @alexlin6088 3 years ago +1

    thank you doctor Pe^De^-1

  • @tracyh5751
    @tracyh5751 4 years ago

    I have heard that diagonalizable matrices are dense in the matrices. Could you use continuity of the matrix exponential and this fact to calculate the matrix exponential of a non-diagonalizable matrix?

    • @drpeyam
      @drpeyam  4 years ago

      Well, except the P in PDP^-1 might change as well! The Jordan form is much better in those situations

  • @hadisehfallah1125
    @hadisehfallah1125 2 years ago

    Thanks. How can we compute A^(1.5)?

  • @RandomDays906
    @RandomDays906 5 years ago

    If all you need to use PDP^-1 is a power series of a matrix (and the fact that it's diagonalizable), would it be possible to find 1/(1-A) for a matrix, since f(x) = 1/(1-x) is represented by the sum from 0 to inf of x^n for |x| < 1?

    • @drpeyam
      @drpeyam  5 years ago +1

      Yes, of course!

    • @drpeyam
      @drpeyam  5 years ago +1

      Imagine, in particular, how easy it would be to find that inverse if A^k = 0 for some k
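
A tiny illustration of the replies above, assuming NumPy: for a nilpotent A the geometric series for (I - A)^(-1) terminates, so the inverse is just a finite sum:

```python
import numpy as np

A = np.array([[0.0, 3.0],
              [0.0, 0.0]])                # nilpotent: A @ A = 0
I = np.eye(2)

series_sum = I + A                        # I + A + A^2 + ... = I + A, since A^2 = 0
print(np.allclose(series_sum, np.linalg.inv(I - A)))   # expect True
```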

  • @xzy7196
    @xzy7196 4 years ago +1

    Now do a triple integral of a matrix.

  • @blueblackpenn6368
    @blueblackpenn6368 5 years ago +1

    Make videos on matrix derivatives

  • @danielwigh1
    @danielwigh1 2 years ago

    Thank you for your video! I was wondering if you could explain why the answer you get in your video doesn't seem to match the one calculated by WolframAlpha?

    • @drpeyam
      @drpeyam  2 years ago +1

      Believe in the math, not wolframalpha 😁 I’m definitely right here

    • @danielwigh1
      @danielwigh1 2 years ago

      @@drpeyam So WolframAlpha is wrong about something!? :o I'll submit a ticket and lyk if they say something interesting!

    • @drpeyam
      @drpeyam  2 years ago

      Hahaha

  • @universum-theuniverseexpla6565
    @universum-theuniverseexpla6565 3 years ago

    Taylor series are wonderful!

  • @ShubhayanKabir
    @ShubhayanKabir 4 years ago

    This possibly means you can find sines and cosines of matrices? With presumably a similar argument.

    • @drpeyam
      @drpeyam  4 years ago +1

      Yes and there’s a video on that actually

  • @rolfdoets
    @rolfdoets 4 years ago

    Nice result!

  • @algonte
    @algonte 4 years ago

    Thank you, great explanation, but I don't understand why you multiply at the end by the C vector. I guess the solution of x'(t) = A x(t) is just x(t) = x0 exp(At). Hmm... ok, x(t) is not a scalar function but a vector function

    • @drpeyam
      @drpeyam  4 years ago +1

      x0 is your C, except x0 exp(At) doesn't make sense matrix-multiplication-wise, so you do the opposite

    • @algonte
      @algonte 4 years ago

      @@drpeyam Thanks, yes x0 goes on the right. By the way there is a very important instance of this problem: the Schrödinger equation :-)

  • @sslelgamal5206
    @sslelgamal5206 2 years ago

    Thanks for this nice explanation, but considering the general case, not all matrices are diagonalizable. What should we do then?! Use the Jordan form? But calculating the powers of J is not that easy. Linear control systems, and dynamic systems in general, which can be described using state-space equations as you also mentioned in the video, can use this trick once the non-diagonalizable system matrix case is handled.

    • @drpeyam
      @drpeyam  2 years ago

      Yep Jordan form

    • @sslelgamal5206
      @sslelgamal5206 2 years ago

      @@drpeyam Thank you! And nice reply speed 😅👌👍
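
For the non-diagonalizable case discussed above, a computer algebra system can do the Jordan-form bookkeeping. A minimal sketch assuming SymPy, with a single 2x2 Jordan block as the example (not a matrix from the video):

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [0, 2]])                   # defective: eigenvalue 2 with a single 2x2 Jordan block

P, J = A.jordan_form()                    # A = P J P^(-1)
print(J)                                  # Matrix([[2, 1], [0, 2]])
print(A.exp())                            # Matrix([[exp(2), exp(2)], [0, exp(2)]])
```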

  • @pilover314159
    @pilover314159 3 years ago

    god bless you so much

  • @sebastianreichling1286
    @sebastianreichling1286 2 years ago

    Amazing!

  • @douglasstrother6584
    @douglasstrother6584 4 years ago +1

    I first saw this in Quantum Mechanics in the context of the time-evolution operator:
    en.wikipedia.org/wiki/Hamiltonian_(quantum_mechanics)

  • @smita3255
    @smita3255 1 year ago

    Thanks

  • @shayanmoosavi9139
    @shayanmoosavi9139 4 years ago

    Amazing video. One question though: is X'=AX the matrix form of a differential equation? Because it looks identical to dx/dt=ax.

    • @drpeyam
      @drpeyam  4 years ago

      Yeah

    • @shayanmoosavi9139
      @shayanmoosavi9139 4 years ago

      @@drpeyam cool, so the entries of X are functions and the entries of X' are their respective derivatives, correct?

    • @drpeyam
      @drpeyam  4 years ago

      Yeah

  • @antonyeyidi3965
    @antonyeyidi3965 2 years ago +1

    thank you, thank you

  • @chonchjohnch
    @chonchjohnch 5 years ago +1

    Could you do the logarithm of a matrix? Or the log of a matrix with a matrix base?

    • @drpeyam
      @drpeyam  5 years ago +2

      Yeah, probably, at least log(1+A) using power series, and you can get log_A (B) using ln(A) (ln(B))^-1

    • @hOREP245
      @hOREP245 5 years ago

      @@drpeyam Matrix factorial? Maybe we could approximate it using sterlings formula anyway.

    • @dougr.2398
      @dougr.2398 5 years ago

      hOREP Stirling’s

    • @hOREP245
      @hOREP245 5 years ago

      @@dougr.2398 If you want to go grammar Nazi on me, please ensure that you use a full stop at the end of your sentence. Thanks.

  • @kanurisrilakshmi
    @kanurisrilakshmi 5 years ago

    Can you explain gamma matrix functions

  • @wooyoungkim2925
    @wooyoungkim2925 4 years ago +1

    oh my god ~!! this is.... mathematics. I got it !!!!!

  • @tomatrix7525
    @tomatrix7525 3 years ago

    So nice

  • @yoavwilliamson3378
    @yoavwilliamson3378 4 years ago

    thank you!

  • @sexyyoda7567
    @sexyyoda7567 4 years ago +1

    Is it just me?
    5:40 "PooP"
    8:09 "Pee"

  • @user-tg6ot1mb2i
    @user-tg6ot1mb2i 4 years ago

    So clearly!!!

  • @warispark7733
    @warispark7733 2 years ago

    Why is e^D at 7:45 equal to [e 0; 0 e^5]? I don't know why, but in my mind it should be [e 1; 1 e^5]

  • @mathadventuress
    @mathadventuress 2 years ago

    What about A^A dr peyam?

    • @drpeyam
      @drpeyam  2 years ago +1

      I did a video on A^B

    • @mathadventuress
      @mathadventuress 2 years ago

      @@drpeyam you gave me a heart at 4:20 my life is complete

  • @TheAcujlGamer
    @TheAcujlGamer 4 years ago

    Thank you!!!!!!!!!

  • @mariomuysensual
    @mariomuysensual 4 years ago

    A year ago I didn't think I would use this...

  • @TectonicMovie17
    @TectonicMovie17 3 months ago

    5:37 It reads POOP lmao 😂
    (I'm going to fail exams)

  • @jabkowy6107
    @jabkowy6107 5 years ago

    What about B^A?

    • @drpeyam
      @drpeyam  5 years ago +2

      e^(ln(B) A)

  • @abdullahalmasri612
    @abdullahalmasri612 5 years ago +1

    Wait that's -not- illegal

  • @profefernandoo4400
    @profefernandoo4400 1 year ago

    Ummm ... the e^x series that he wrote is the Maclaurin series.... It is only true in the vicinity of zero, not for all values... It is a common mistake

    • @fhdxbdh1272
      @fhdxbdh1272 1 month ago

      @profefernandoo4400 That's wrong; don't take my word for it, you can use any convergence test or ask the internet.

  • @pearl1353
    @pearl1353 4 years ago

    golden

  • @ssdd9911
    @ssdd9911 5 years ago

    8:07 e^-1?

    • @MichaGasecki
      @MichaGasecki 5 years ago +1

      Ofc P^(-1) :)

    • @kinshukdua
      @kinshukdua 5 years ago +3

      How did you post a month before?

  • @tom13king
    @tom13king 5 years ago

    60 fps is orgasmic.

  • @mohammadghanatian114
    @mohammadghanatian114 4 years ago

    Shoutout to the Tabriz crew, much love

  • @minhhainguyen2671
    @minhhainguyen2671 7 months ago

    ❤❤❤❤❤.

  • @sebmata135
    @sebmata135 5 years ago

    Haha 5:35 looks like POOP^-1

    • @moaydsparklug8311
      @moaydsparklug8311 5 years ago

      sebmata did you know the transpose of POOP is POOP? Cool!

    • @sebmata135
      @sebmata135 5 years ago

      Moayd Sparklug POOP^T is
      People
      Order
      Our
      Patties

  • @HAL-oj4jb
    @HAL-oj4jb 5 years ago +2

    A = PDP = PewDiePie?

    • @drpeyam
      @drpeyam  5 years ago +1

      Hahahahaha

    • @PowerandControlUFU
      @PowerandControlUFU 5 years ago

      Are you what? 9 years old?

    • @Royvan7
      @Royvan7 4 years ago +1

      so does that make pew the inverse of pie

  • @TheCasperpoot
    @TheCasperpoot 4 years ago

    greetings from karel

  • @to-grt
    @to-grt 3 years ago

    wtf