A unique approach to the half-derivative.

  • Published 20 Jan 2025

COMMENTS • 118

  • @MichaelPennMath
    @MichaelPennMath  1 year ago +10

    Head to squarespace.com/michaelpenn to save 10% off your first purchase of a website or domain using code michaelpenn

    • @35571113
      @35571113 1 year ago

      That SquareSpace advertisement is simply genius. Did you and your team make it yourselves???

  • @plutothetutor1660
    @plutothetutor1660 1 year ago +120

    I always use Laplace transforms to calculate half derivatives of polynomials with positive degree; it's the quickest way I know of. (A worked instance follows this thread.)

    • @milkwater1204
      @milkwater1204 1 year ago +15

      I've been using Laplace transforms to compute complex-order derivatives for fun; it's super interesting and a really easy way of conceptualizing what's actually going on.

    • @matematicacommarcospaulo
      @matematicacommarcospaulo 1 year ago +1

      Any suggestions for a bibliography on it?

    • @plutothetutor1660
      @plutothetutor1660 1 year ago +3

      @@matematicacommarcospaulo There are many books available about Laplace transforms, but not many about their applications to half derivatives.

    • @ingolifs
      @ingolifs 1 year ago +3

      I convert the function into its representation as a hypergeometric function, add alpha to all the coefficients, then go to lunch.

    • @matematicacommarcospaulo
      @matematicacommarcospaulo 1 year ago

      @@plutothetutor1660 I'm looking for those with half derivatives
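
    For concreteness, here is a minimal worked instance of that Laplace route for a single monomial, as a sketch assuming t > 0 and the standard transform pairs L{t^n} = Γ(n+1)/s^(n+1) and L^{-1}{1/s^p} = t^(p-1)/Γ(p) for p > 0:

```latex
% Half-derivative of t^n via the Laplace transform (sketch, t > 0 assumed).
D^{1/2} t^{n}
  = \mathcal{L}^{-1}\!\left\{ s^{1/2}\,\frac{\Gamma(n+1)}{s^{\,n+1}} \right\}
  = \Gamma(n+1)\,\mathcal{L}^{-1}\!\left\{ \frac{1}{s^{\,n+1/2}} \right\}
  = \frac{\Gamma(n+1)}{\Gamma\!\left(n+\tfrac{1}{2}\right)}\, t^{\,n-1/2},
\qquad \text{e.g. } D^{1/2} t = \frac{2\sqrt{t}}{\sqrt{\pi}} .
```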

  • @kkanden
    @kkanden 1 year ago +56

    i've been watching your (nearly) daily videos for like 3+ years now and it amazes me so much that you still manage to come up with new, interesting and refreshing math content! i really can't express how much i value your videos in my day-to-day life, they always make a kind of fixed point of my daily routine and i just love and appreciate it a lot! thanks a lot michael!!!

    • @MyOneFiftiethOfADollar
      @MyOneFiftiethOfADollar 1 year ago +3

      Agreed! He is a devoted father, engages in dangerous hobbies, teaches full time face to face AND produces at least one exemplary video a day!!
      Conclusion: He doesn’t get much sleep.

  • @nguyenquangkiet2103
    @nguyenquangkiet2103 1 year ago +27

    This coincides with the Riemann–Liouville definition of the fractional derivative.

  • @MrAkindos
    @MrAkindos 1 year ago +9

    Shouldn't the integral at 17:56 actually be equal to 1/2? If we extend the integral from -\infty to +\infty and take \int_{-\infty}^{+\infty} exp(-|st|)\delta(t) dt, we get 1, and the needed result is half of that because the function is even. Moreover, if we approximate the delta function using a normal distribution with sigma approaching 0, we get the same result (the issue is that we are integrating only over the right side of the "neighbourhood" of zero). Or is the Dirac delta function defined differently for the Laplace transform (something like the positive normal distribution, i.e. the density function of x^2 where x is N(0,\sigma^2))? Thanks for your answer.
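
    For reference, one common way this is resolved (an aside, and an assumption about conventions rather than something stated in the video): the one-sided Laplace transform is usually taken with lower limit 0⁻, so the whole spike of δ(t) lies inside the range of integration and the integral is 1; the value 1/2 only appears under the symmetric "half-spike" convention.

```latex
% One-sided Laplace transform with the 0^- convention (assumed):
\int_{0^{-}}^{\infty} \delta(t)\, e^{-st}\, dt \;=\; e^{-s\cdot 0} \;=\; 1 .
```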

  • @scottmiller2591
    @scottmiller2591 1 year ago +3

    The same process also works with the Fourier transform - essentially, the alpha-th derivative is F^{-1}((i omega)^alpha F(f(t))), with a multiplier depending on how you define the Fourier transform (math, physics, or computer science normalization of the Fourier formula).
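
    A minimal numerical sketch of that Fourier route (the function name and the test signal are illustrative choices; this is the periodic, Fourier-multiplier flavor, so it need not coincide with the Laplace-based definition from the video for non-periodic functions):

```python
import numpy as np

def fourier_frac_deriv(samples, dx, alpha):
    """Spectral fractional derivative of a periodic, sampled signal:
    multiply the FFT by (i*omega)**alpha (principal branch) and invert."""
    n = samples.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dx)   # angular frequencies
    multiplier = (1j * omega) ** alpha            # equals 0 at omega = 0 for alpha > 0
    return np.fft.ifft(multiplier * np.fft.fft(samples)).real

# Sanity check: two half-derivatives of sin(t) should give cos(t).
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
half = fourier_frac_deriv(np.sin(t), t[1] - t[0], 0.5)
full = fourier_frac_deriv(half, t[1] - t[0], 0.5)
print(np.max(np.abs(full - np.cos(t))))           # ~1e-13, machine precision
```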

  • @laurensiusfabianussteven6518
    @laurensiusfabianussteven6518 1 year ago +35

    This looks similar to diagonalization in linear algebra (see the numerical sketch at the end of this thread).

    • @aidanhennessey5586
      @aidanhennessey5586 1 year ago +10

      The f^-1(g(f(x))) pattern is super common throughout math and is generally referred to as conjugation. It plays a very central role in group theory.

    • @jasperday9020
      @jasperday9020 1 year ago +3

      Great connection! The laplace and fourier transforms are in fact very similar to diagonalizations. Basically, what you're doing is transforming from one domain (set of basis vectors) into a more convenient set of basis vectors. The choice of basis vectors is informed by the *choice of linear operators you wish to apply to it*.
      Let's start discrete. Say you're doing linear algebra and you have some symmetric matrix S. Any symmetric matrix can be decomposed into orthogonal eigenvectors: S = Q Λ QT. Now say you have some vector x, and you want to repeatedly apply S to it. S^n(x) = Q Λ^n QT x, since diagonalization provides a convenient way to take powers of a matrix. Look at the structure:
      (QT x) = Q transpose times x, this line transforms x from its original basis to the basis of eigenvectors of S.
      Λ^n is some simple operation applied to x in the eigenvector basis
      Q transforms back into the original basis.
      So Q Λ^n QT x transforms into eigenvector basis, does some simple operation in eigenspace, and then transforms back into the original basis.
      Now let's see the parallel with the Fourier transform of a function. S, the symmetric linear operator, will be the *second derivative of x*. The eigenfunctions of the second derivative are sin(kt) and cos(kt) or generally e^(ikt). As you may recall from math, sines and cosines are orthogonal functions. Inner products in function space (what we wrote as QT x for the discrete analogy) are the same as integrating the product of functions across their range:
      prod(f, g) = ∫ f ḡ dt (with a complex conjugate on the second function for complex-valued functions)
      (1/2π) ∫₀^{2π} e^(ikt) e^(-ipt) dt = δ_k,p (= 1 if k==p, 0 otherwise; you can prove this yourself)
      The Fourier transform then transforms some function f(t) from its original, cartesian basis into a basis of orthogonal eigenfunctions, just like QT x in the discrete case. You take the inner product of f(t) with every different frequency e^(ikt), and you get the frequency content F(k).
      The power of the Fourier and Laplace transforms is that when you transform into the eigenbasis of some operator, it becomes *very easy* to compose that operator with itself on the original function. For our discrete case, it makes it possible to take some arbitrary power of the matrix S. In the continuous case, the transformation makes it possible to take some arbitrary power of the derivative operator. And that's what this video is showing.
      For more information, check out Gil Strang's Computational Science and Engineering, section about Fourier.

    • @rmayer4086
      @rmayer4086 1 year ago +2

      @aidanhennessey5586 , and it's also an idea accessible to kids, who use it all the time rotating objects in Zelda TotK (where you can rotate a 3D shape about 2 axes, and need this trick to effectively do it about the third one).
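
    A tiny numerical illustration of the Q Λ^a Qᵀ pattern described in this thread (the matrix S is an arbitrary choice, just for the sketch; it is the finite-dimensional analogue of L⁻¹ s^(1/2) L):

```python
import numpy as np

# "Half power" of a symmetric positive-definite matrix via diagonalization.
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, Q = np.linalg.eigh(S)                 # S = Q diag(w) Q^T
S_half = Q @ np.diag(np.sqrt(w)) @ Q.T   # transform, take the half power, transform back

print(np.allclose(S_half @ S_half, S))   # True: applying the "half" twice recovers S
```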

  • @cosmic_sky_mountain
    @cosmic_sky_mountain 3 months ago

    love the colours.. makes it so much easier to digest

  • @ChefSalad
    @ChefSalad 1 year ago +6

    Doing this method for general non-negative powers of t, by linearity, gives you the fractional derivatives of any function expressible as a Maclaurin series for free, and as a Taylor series with a bit of work, which is about as general as you can get using these methods. There are a number of functions that are really difficult to find fractional derivatives of, for example f(t)=1/t. (A small term-by-term sketch follows this thread.)

    • @MagicGonads
      @MagicGonads 1 year ago

      can you use analytic continuation on annuli to express a Laurent series as a Taylor series on a punctured domain to obtain the fractional derivative there?

    • @ZieWeaver
      @ZieWeaver 1 year ago +1

      Now I'm curious what happens when you use this approach for the fractional derivatives of the exponential. We'd expect it to still be an identity, but it might break down in an interesting way, since one of the hidden assumptions is that f(t) is sub-exponential. The offending expression is still valid and equals f(0) = e^0 = 1 as long as s > 1, so who knows 🤷‍♀️

    • @ChefSalad
      @ChefSalad 1 year ago

      @@ZieWeaver I'm pretty sure that Laplace transforms only require that the function to be transformed be of exponential order or less, not of sub-exponential order or less, so this method should still work for exponentials. Also, the exponential function's series expansion converges absolutely, so doing the fractional derivative term-by-term ought to work just fine. I'm not 100% sure on that though.

    • @ZieWeaver
      @ZieWeaver 1 year ago

      @@ChefSalad I'm referring to 16:05, when we make the sub-exponential assumption to evaluate e^{-st}f(t) on (0, inf). We can substitute e^{-st}e^t = e^{(1-s)t}, which is fine at t=0 but diverges at t=inf unless s>1, as discussed earlier.
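
    A small sympy sketch of the term-by-term idea from the parent comment (the helper name frac_diff is made up here; it hard-codes the monomial rule D^α t^n = Γ(n+1)/Γ(n+1-α) t^(n-α), the Riemann–Liouville-style convention under which the half-derivative of a constant is nonzero):

```python
import sympy as sp

t = sp.symbols('t', positive=True)

def frac_diff(expr, alpha):
    """Apply D^alpha term by term to a sum of powers of t, using
    D^alpha t^n = Gamma(n+1)/Gamma(n+1-alpha) * t^(n-alpha)."""
    total = sp.Integer(0)
    for term in sp.Add.make_args(sp.expand(expr)):
        c, n = term.as_coeff_exponent(t)   # term == c * t**n
        total += c * sp.gamma(n + 1) / sp.gamma(n + 1 - alpha) * t**(n - alpha)
    return sp.simplify(total)

f = 1 + 3*t + t**2
half = sp.Rational(1, 2)
print(frac_diff(f, half))                   # half-derivative of f
print(frac_diff(frac_diff(f, half), half))  # applying it twice gives f' = 2*t + 3
```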

  • @a52productions
    @a52productions 1 year ago +2

    Using the Fourier transform instead of the Laplace transform makes the definition even nicer and faster to derive. Because derivatives in frequency space are just multiplying by the frequency and a constant, you don't have to worry about getting rid of the delta terms first.
    That being said, the Laplace transform is generally easier to work with, so this approach is valuable! It's nice that they have an almost identical form too.

  • @Anonymous-zp4hb
    @Anonymous-zp4hb 1 year ago

    Great video on an abstract idea, taking the square-root of an operator.
    Tricky questions need powerful tools to solve, so this Laplace transform thing is probably pretty important.
    Gonna check your channel for more vids on it.

  • @javierabati9124
    @javierabati9124 1 year ago +4

    How can you define the value of the inverse Laplace transform of a constant to be the constant times the delta function (for which you need 0 to be in the domain) and then suppose that t>0? Isn't there an inconsistency there?

  • @UltraMaXAtAXX
    @UltraMaXAtAXX 1 year ago +9

    Learned about this in a mathematical modeling class. It has applications to engineering.

  • @Waffle_6
    @Waffle_6 9 months ago

    A professor at my school, whom my current professor (Prof. Waterpeace) was best friends with, is basically the grandfather of fractional calculus. He, Thomas Osler, was the one who really made fractional calculus usable (it already existed, but he made it practical).

  • @robertknetsch2715
    @robertknetsch2715 1 year ago +1

    I like how you set the limits of the integral to 0 and t-->infinity rather than just infinity. So many people think infinity is a number, whereas it is actually just a concept that means many things.

    • @victor1978100
      @victor1978100 1 year ago

      t-->infinity is a concept too, not a number.

  • @_DD_15
    @_DD_15 7 months ago

    Awesome video! Took me a while to find it!

  • @frfr1022
    @frfr1022 1 year ago +7

    One of my favorite topics!!!
    One interesting question: can we apply this technique to functions other than polynomials?
    And if yes, does the result always match the one obtained from Cauchy's formula?

    • @TheEternalVortex42
      @TheEternalVortex42 1 year ago +2

      It's hard because Laplace transforms are only 'nice' for certain specific functions: en.wikipedia.org/wiki/List_of_Laplace_transforms

  • @depressedguy9467
    @depressedguy9467 1 year ago

    I recently did a course on this, where I found the Grünwald–Letnikov approach more relatable; your Laplace transform approach is useful if your lower terminal is 0.
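
    For reference, the Grünwald–Letnikov form with lower terminal a (a standard statement, included here as a sketch):

```latex
% Gruenwald-Letnikov fractional derivative with lower terminal a:
D^{\alpha}_{a} f(t) \;=\; \lim_{h \to 0^{+}} \frac{1}{h^{\alpha}}
\sum_{k=0}^{\lfloor (t-a)/h \rfloor} (-1)^{k} \binom{\alpha}{k}\, f(t - kh),
\qquad \binom{\alpha}{k} = \frac{\Gamma(\alpha+1)}{\Gamma(k+1)\,\Gamma(\alpha-k+1)} .
```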

  • @firemaniac100
    @firemaniac100 1 year ago

    I think the substitution x = st can be used with complex s with positive real part. We just need to use complex contour integration. First we substitute t by the complex variable z, then due to the exponential decay we have that the integrals over z_1(x) = x and z_2(x) = x/s give the same result.

    • @firemaniac100
      @firemaniac100 1 year ago

      the integral over the arc goes to zero in the limit of large radius

  • @ЛевЯрков-е1ж
    @ЛевЯрков-е1ж 1 year ago +1

    Thanks for the video! At 17:25 I think it should be 1/2 instead of 1.

  • @qschroed
    @qschroed 1 year ago

    This somehow feels like how you exponentiate matrices with arbitrary exponents by first diagonalizing them, in that L^-1 s^a L is kind of like P^-1 D^a P. I am almost certain, while writing this, that we can formalize this notion; since the Laplace transform is an infinite-dimensional linear transformation, you would expect these kinds of properties to pop up.

  • @robblerouser5657
    @robblerouser5657 1 year ago +9

    I am simply as amazed and confused as I was in high school.

  • @artsmith1347
    @artsmith1347 1 year ago

    12:28 This is the first time I noticed hearing this construction: "so in particular what I mean is that this *_person_* is equal to ..."
    I recall someone in the comments being hot about the east-coast-ism "this guy" when referring to an object. This seems to be a decent workaround.

    • @JCCyC
      @JCCyC 1 year ago

      I can see the incels shrieking "WOKE MATH!" already.

  • @Calcprof
    @Calcprof 1 year ago

    Heaviside used the 1/2 partial derivative to solve a PDE. (Involving cooling of an infinite half space)

  • @gene51231356
    @gene51231356 1 year ago +18

    Can there be a complex derivative? And is a negative derivative the same thing as an antiderivative, or something else?

    • @frfr1022
      @frfr1022 1 year ago +10

      The idea of the "integrodifferential" operator combines integration and differentiation into one operation, where antiderivatives are treated as derivatives applied a negative number of times (see the worked note at the end of this thread).

    • @joansgf7515
      @joansgf7515 1 year ago

      Dr Peyam made a video about complex derivatives: ua-cam.com/video/tMalym_n8zM/v-deo.html

    • @Gears_AndGrinds
      @Gears_AndGrinds 1 year ago +3

      I've seen a bunch of videos where they plug i into the fractional derivative, and I guess it works.

    • @oddlyspecificmath
      @oddlyspecificmath 1 year ago +1

      ​@@frfr1022First I've seen that term; it piqued my interest because I have an operation (in my research) that combines integration and differentiation (but after looking at your term, in another way). Do you know of any other terms that describe combining these two operations?

    • @the_cheese_cultist
      @the_cheese_cultist 1 year ago +1

      Sure! You can even use a matrix as the order of the derivative, since you can define algebraic functions of matrices.
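
    A short worked note on both of the original questions, using the same monomial rule the Laplace route produces, D^α t^n = Γ(n+1)/Γ(n+1-α) t^(n-α). Nothing in it requires α to be real, and negative α reproduces antiderivatives (a sketch, not from the video):

```latex
D^{\alpha} t^{n} = \frac{\Gamma(n+1)}{\Gamma(n+1-\alpha)}\, t^{\,n-\alpha}, \qquad
D^{i} t = \frac{\Gamma(2)}{\Gamma(2-i)}\, t^{\,1-i} = \frac{t^{\,1-i}}{\Gamma(2-i)}, \qquad
D^{-1} t^{n} = \frac{\Gamma(n+1)}{\Gamma(n+2)}\, t^{\,n+1} = \frac{t^{\,n+1}}{n+1} .
```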

  • @shacharh5470
    @shacharh5470 1 year ago

    The fact that the same operators can be defined equivalently by the Cauchy integral formula and the Laplace transform - does that suggest some deeper link between the Cauchy formula and the Laplace transform? This could be an interesting follow up video

  • @pahom2
    @pahom2 4 months ago

    It seems that this fractional derivative is also non-local, i.e. it depends on all values of f, even those far away from the point at which we calculate the derivative. This lack of locality is what bothers me the most about fractional derivatives.

  • @DavidRybka
    @DavidRybka 1 year ago

    Isn't the square root of a linear transformation technically a class of linear transformations? And is there a way to determine whether we have found all of them? The differintegrals do qualify as a set of transformations that act as half derivatives. Is there a way to determine whether we know all of them, or is that an open problem?

  • @danielmilyutin9914
    @danielmilyutin9914 1 year ago +3

    Also, there's the convolution theorem, so all of these derivatives can be expressed as convolutions with kernels of the form L_inv(s^alpha) (see the note at the end of this thread).

    • @pahom2
      @pahom2 4 months ago

      Where can I read about it? I can't find a general name for the fractional derivative method described in the video, and I have yet to see how it can be expressed as a convolution.

    • @danielmilyutin9914
      @danielmilyutin9914 4 months ago

      @@pahom2 Caputo fractional derivative.
      It's on Wikipedia.
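
    A sketch of what such a kernel looks like for the integral part (assuming α > 0 and lower terminal 0): since L⁻¹{s^(-α)} = t^(α-1)/Γ(α), the convolution theorem gives the Riemann–Liouville fractional integral, and the fractional derivative is then an ordinary derivative of that convolution.

```latex
\mathcal{L}^{-1}\!\left\{ s^{-\alpha} \right\} = \frac{t^{\alpha-1}}{\Gamma(\alpha)}
\quad\Longrightarrow\quad
\left(D^{-\alpha} f\right)(t) = \frac{1}{\Gamma(\alpha)} \int_{0}^{t} (t-\tau)^{\alpha-1} f(\tau)\, d\tau,
\qquad
\left(D^{1/2} f\right)(t) = \frac{d}{dt}\left(D^{-1/2} f\right)(t) .
```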

  • @wolfmanjacksaid
    @wolfmanjacksaid 1 year ago +1

    Question: could you take a derivative with a transcendental power on the derivative operator, like the pi-th derivative or e-th derivative? And would you get anything interesting?

    • @schweinmachtbree1013
      @schweinmachtbree1013 1 year ago +4

      Yes the definition of the "fractional" derivative given in this video works for α=pi and α=e - "fractional" here is a bit of a misnomer like in "fractional part"; it doesn't actually mean a fraction, just a non-integer.

  • @rumbleinthejungle3358
    @rumbleinthejungle3358 1 year ago

    good stuff. thanks

  • @rafaelgcpp
    @rafaelgcpp 1 year ago +1

    Strictly speaking, those Laplace definitions are for functions defined on the domain x >= 0.
    Nonetheless, very interesting.

  • @zakiabg845
    @zakiabg845 1 year ago

    What's the generalisation of that formula?

  • @KarthikMRKarthi
    @KarthikMRKarthi 4 months ago

    Does this work in general, for every example, or only for the half derivative of the half derivative of t?

  • @aymathconcoursprep
    @aymathconcoursprep 1 year ago

    3:46 Lovely bprp part, AKA the lovely bprp "DI" method.

  • @JCCyC
    @JCCyC 1 year ago

    I find it amusingly intriguing that all the youtubers I like are buddies with each other.

  • @ranshen1486
    @ranshen1486 1 year ago

    Is there anything special when f(t) is itself the Laplace transform of another function?

  • @marylinebentzinger7378
    @marylinebentzinger7378 1 year ago

    The concept of a derivative is very intuitive for me; I'm in my Calc 2 year. Yet I can't get my intuition working on why, and in which domains, a half derivative would be useful, and what it actually represents. Can somebody help me out or point me to another video or article about it?

  • @moshecallen
    @moshecallen 1 year ago

    I'm not sure about non-integer derivatives, but couldn't one reasonably define a derivative of negative integer order n as a (-n)-th order antiderivative, at least in cases where an antiderivative (as opposed to a definite integral) remains well defined?

  • @mathhack8647
    @mathhack8647 1 year ago +1

    Interesting.
    Question: what about a negative-order derivative, d^(-1/2)f/dt^(-1/2)? Does it have any meaning?

    • @Gears_AndGrinds
      @Gears_AndGrinds 1 year ago +2

      By definition it turns into antiderivatives since D^a D^b is the same as D^(a+b)

    • @kappasphere
      @kappasphere 1 year ago +1

      I think I can prove that this would work, although I'm not sure if I'm doing anything wrong:
      Given the regular D^a, show that D^-a D^a f(t) = f(t), i.e. if D^a forms the a-th derivative of f, then D^-a forms the a-th antiderivative.
      D^-a D^a f(t) = L^-1(s^-a L(L^-1(s^a L(f(t))))) = L^-1(s^-a s^a L(f(t))) = L^-1(L(f(t))) = f(t)
      If I'm going wrong with this anywhere, then I'd assume that this is because L probably isn't injective (and therefore L^-1 would actually be multi-valued or lose some possible results) but I don't really know anything about that because Michael Penn kind of glossed over how L^-1 may or may not be well-defined in this video.

  • @jimallysonnevado3973
    @jimallysonnevado3973 1 year ago

    So, this works only for functions defined for t>0 and sub-exponential?

  • @fakecreeper9645
    @fakecreeper9645 1 year ago

    cool video :) making math fun and teaching us laplace transform

  • @Handelsbilanzdefizit
    @Handelsbilanzdefizit 1 year ago +1

    Ok, that's the half derivative. But what's the complex derivative, and complex integral?
    Maybe in physics, it would be more accurate to write dx^(3+i) = dx³dx^i instead of: dx⁴
    Perhaps the results of relativity are approximately very close to reality, but not 100% perfect.

  • @dfcastro
    @dfcastro 1 year ago

    What is the geometric representation of a half derivative, and what are its applications?

  • @stevengigrich
    @stevengigrich 1 year ago +4

    michael glitches at 22:22 lol

    • @DrR0BERT
      @DrR0BERT 1 year ago +1

      The Matrix is resetting itself.

  • @Nikolas_Davis
    @Nikolas_Davis 1 year ago

    At 21:13, your calculation of D^2 f(t) doesn't check out:
    D^2 f(t) = D (D f(t)) = D f'(t) = L^(-1) {s L { f'(t) } },
    and if we replace L { f'(t) } with s L{ f(t) } - f(0), we get
    D^2 f(t) = L^(-1) { s^2 L { f(t) } } - L^(-1) { s f(0) }
    It is not clear how to get rid of L^(-1) { s f(0) }. Or am I missing something?

    • @r.maelstrom4810
      @r.maelstrom4810 1 year ago

      You can make the same assumption Michael does for the case Df(t), that is L^(-1)(sf(0)) = sf(0) δ(t) which is 0 as δ(t)=0 for t>0.

    • @Nikolas_Davis
      @Nikolas_Davis 1 year ago

      ​@@r.maelstrom4810
      Thanks; I hadn't realized that the inverse Laplace transform of 's' is also a δ(t). I think this should have been explicitly pointed out, though, because following the previously shown transforms, s = s^1 = 1 / s^(-2+1), so the inverse should be t^(-2) / Γ(-1), which is ill-defined due to Γ(-1). Apparently, one needs to do a complex contour integration to obtain δ(t).
      *Update:*
      I mistakenly stated that L^(-1)(s) = δ(t), when in fact it should be δ'(t) (the derivative of δ(t), which can be defined by an integration by parts). In that case, it is not at all clear to me how we end up with a zero term, and in any case the issue is involved enough that it should merit a brief explanation in the video.
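
      A short aside on why the term still drops out for t > 0 (assuming the 0⁻ convention for the one-sided transform): L{δ'(t)} = s, so the extra piece is f(0) δ'(t), which, like δ(t) itself, vanishes as a distribution on any interval away from 0:

```latex
\mathcal{L}\{\delta'(t)\} = s
\quad\Longrightarrow\quad
\mathcal{L}^{-1}\{ s\, f(0) \} = f(0)\,\delta'(t) = 0 \quad \text{for } t > 0 .
```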

  • @okieinexile
    @okieinexile 1 year ago

    Okay. Derivatives are linear operators on function spaces and we can map the real numbers into the space of linear operators in a way that preserves addition. Cool.

  • @kappasphere
    @kappasphere 1 year ago +1

    Can't you do D^a D^b f(t) = L^-1(s^a L(L^-1(s^b L(f(t))))) = L^-1(s^a s^b L(f(t))) = L^-1(s^(a+b) L(f(t))) = D^(a+b) f(t)?
    This proves D^1/2 D^1/2 = D^1 which was asked for in the end.
    In fact, I believe that this same approach can be used to prove that D^n f(t) = L^-1(s^n L(f(t))) for all non-negative integers n, which was only stated as a conjecture at 20:26, and all of this just by using the D f(t) = L^-1(s L(f(t))) identity that was proved just before that:
    The base case D^0 f(t) = L^-1(s^0 L(f(t))) is given by the definition of L^-1 being the inverse of L, since s^0=1.
    Now for the induction step, assume that for a non-negative integer n, D^n f(t) = L^-1(s^n L(f(t))).
    Therefore, D^(n+1) f(t) = D (D^n f(t)) = L^-1(s L(D^n f(t))) = L^-1(s L(L^-1(s^n L(f(t))))) = L^-1(s s^n L(f(t))) = L^-1(s^(n+1) L(f(t)))
    This proves that if n is non-negative, the definition that is given afterwards is identical to the usual derivative, assuming that the function really is sub-exponential and t>0, which just were the assumptions for the original proof of D^1 f(t) = L^-1(s L(f(t))).

  • @GreenMeansGOF
    @GreenMeansGOF 1 year ago

    13:40 what if f(0) is not defined?

    • @kju-uu8me
      @kju-uu8me 1 year ago

      L(f(t)) will not be defined then, because the integral has its bounds at 0 and infinity. That being said, he later assumes that t>0, which is probably the only case where f(0) not being defined works, because you can do workarounds with limits.

    • @GreenMeansGOF
      @GreenMeansGOF 1 year ago

      @@kju-uu8me is it possible that the limit as t approaches 0 does not exist?
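
      One concrete aside: the limit at 0 can fail to exist (even diverge) and the transform can still be fine, as long as f is integrable near 0. For example, f(t) = t^(-1/2) blows up at 0, yet

```latex
\mathcal{L}\{ t^{-1/2} \} = \int_{0}^{\infty} t^{-1/2} e^{-st}\, dt
  = \Gamma\!\left(\tfrac{1}{2}\right) s^{-1/2} = \sqrt{\frac{\pi}{s}} .
```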

  • @manucitomx
    @manucitomx 1 year ago +1

    Thank you professor.
    The pesky root of π did not go unnoticed.

  • @nbooth
    @nbooth 1 year ago +1

    Wouldn't this let you define a fractional n-th derivative where n isn't constant? Then you could have (integro-)differential equations where even the order of the derivatives is unknown. Lovely.

  • @drew96
    @drew96 1 year ago

    I guess the mainstream approach to fractional differentiation would be Cauchy's formula? Proof of Cauchy's fractional integration approach might be interesting.

  • @cienciayeducacionparatodos9958

    A similar video with the Abelian groups that can be generated with some fractional operators would be interesting (DOI: 10.1016/j.amc.2022.127231)

  • @psychoh13
    @psychoh13 1 year ago

    So what about the pi derivative or the e derivative? What about… the i derivative?

  • @TheDannyAwesome
    @TheDannyAwesome 1 year ago

    "You've probably heard of the derivative before, well we're going to take a LaPlace transform approach."

  • @pahom2
    @pahom2 4 months ago

    The results are identical to the Riemann–Liouville formula.

  • @anestismoutafidis4575
    @anestismoutafidis4575 1 year ago

    d/dx f(x) Intg dx^-1= Intg d Intg dx^0 = Intg dx Intg [dx -1] f(x)

  • @SorryWhatChannel
    @SorryWhatChannel 1 year ago

    Hey, hey, hey! And what about half-function application?

  • @nz_gamer
    @nz_gamer 1 year ago +5

    What about a disproportionate fractional derivative, d^a/dx^b?
    Is it even valid?

    • @proxagonal5954
      @proxagonal5954 1 year ago +1

      I don't think so. The d/dx serves purely as notation in this case (but of course there are times where it's useful to think of it as an actual fraction). Most of the time, half derivatives are written as f^(1/2) or D^(1/2) f.

    • @deinauge7894
      @deinauge7894 1 year ago +1

      You can write derivatives in an implicit way, eg:
      y=x^2
      dy = 2 x dx
      (d^2)y = 2 dx^2
      and then conclude
      (d^2/dx) y = 2 dx
      So yes, it does make sense. But there is no use for it - the only part that matters is the exponent in the numerator.

    • @pyropulseIXXI
      @pyropulseIXXI 1 year ago

      @@proxagonal5954 dy/(dx dx) = N, where N is greater than all natural numbers

    • @proxagonal5954
      @proxagonal5954 1 year ago

      @@pyropulseIXXI I don't think that's a well defined proposition

    • @bjornfeuerbacher5514
      @bjornfeuerbacher5514 1 year ago

      As long as a is a natural number, one could define this quite easily by generalizing the usual limit definition of the derivative. But I think in most cases, that limit would not exist.

  • @APaleDot
    @APaleDot 1 year ago

    I must have missed it: what is s?

  • @victor1978100
    @victor1978100 1 year ago

    What is a half-derivative? What is it useful for?

  • @maxvangulik1988
    @maxvangulik1988 1 year ago

    22:22 👋

  • @Bjowolf2
    @Bjowolf2 1 year ago

    I have heard that this half derivative actually does have practical applications when solving certain mechanical problems (differential equations) - does anyone here know anything about this? 🙂

  • @gp-ht7ug
    @gp-ht7ug 1 year ago +1

    Michael, I think you never explained the factorial of complex numbers on your second channel.
    Would you please make a video on the gamma function where x is a complex number? Thanks

  • @slowfreq
    @slowfreq 1 year ago

    The most complicated way to differentiate f(t)=t

  • @augustodutra3839
    @augustodutra3839 1 year ago

    That is absurd. The next step is complex and matrix derivatives.

    • @frfr1022
      @frfr1022 1 year ago +1

      I think I have seen complex derivatives, but not on YouTube. Matrix or quaternion derivatives I haven't heard of, but maybe the Cauchy formula wouldn't fail in this context.

    • @Gears_AndGrinds
      @Gears_AndGrinds 1 year ago +4

      Complex derivatives are already a thing; look up "i-th derivative" and you'll probably find a few videos about it.
      Matrix derivatives might work, since we did find a way to evaluate e to the power of a matrix, but I'm not too sure about that yet.

    • @TheEternalVortex42
      @TheEternalVortex42 1 year ago +2

      It probably has no practical value but you could define matrix derivatives as D^M = exp(M log D) and then use the series definition to get "something". At this point we are just making stuff up though so we can do whatever we want ;)

  • @n8cantor
    @n8cantor 1 year ago

    alphath

  • @marc-andredesrosiers523
    @marc-andredesrosiers523 1 year ago

    Interesting