The craziest definition of the derivative you have ever seen!

  • Published 18 Aug 2024
  • 🌟🌟To try everything Brilliant has to offer-free-for a full 30 days, visit brilliant.org/.... The first 200 of you will get 20% off Brilliant's annual premium subscription.🌟🌟
    🌟Support the channel🌟
    Patreon: / michaelpennmath
    Channel Membership: / @michaelpennmath
    Merch: teespring.com/...
    My amazon shop: www.amazon.com...
    🟢 Discord: / discord
    🌟my other channels🌟
    mathmajor: / @mathmajor
    pennpav podcast: / @thepennpavpodcast7878
    🌟My Links🌟
    Personal Website: www.michael-pen...
    Instagram: / melp2718
    Twitter: / michaelpennmath
    Randolph College Math: www.randolphcol...
    Research Gate profile: www.researchga...
    Google Scholar profile: scholar.google...
    🌟How I make Thumbnails🌟
    Canva: partner.canva....
    Color Palette: coolors.co/?re...
    🌟Suggest a problem🌟
    forms.gle/ea7P...

COMMENTS • 155

  • @MichaelPennMath
    @MichaelPennMath  Рік тому

    🌟🌟To try everything Brilliant has to offer-free-for a full 30 days, visit brilliant.org/michaelpenn. The first 200 of you will get 20% off Brilliant's annual premium subscription.🌟🌟

  • @Nikolas_Davis
    @Nikolas_Davis Рік тому +82

    This reminds me of Charles Babbage famously explaining the operation of his Difference Engine as
    "Δ^7 U_z = 0". His Engine worked by calculating up to the 7th-order finite difference (the discrete analogue of the derivative) of an initial "seed" of tabulated values of a given function, then working backwards to produce more tabulated values at fixed intervals - essentially a finite-difference mechanical integrator.

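    A minimal Python sketch of the tabulation scheme described above (my own illustration, not from the video): take a short seed of tabulated polynomial values, form the difference rows once, then produce further entries purely by repeated addition, which is essentially what the Difference Engine mechanized. The quadratic x^2 + x + 41 is just an illustrative choice.

        # Sketch: tabulating a polynomial from a seed of values using only
        # repeated addition of finite differences (the Difference Engine idea).
        def forward_differences(values):
            """Successive rows of forward differences of an initial table."""
            rows = [list(values)]
            while len(rows[-1]) > 1:
                prev = rows[-1]
                rows.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
            return rows

        def extend_table(values, n_new):
            """Extend the table n_new steps by adding the differences back up."""
            rows = forward_differences(values)
            state = [row[-1] for row in rows]       # last entry of each difference row
            out = list(values)
            for _ in range(n_new):
                # top difference is treated as constant (true for a polynomial seed)
                for k in range(len(state) - 2, -1, -1):
                    state[k] += state[k + 1]
                out.append(state[0])
            return out

        f = lambda x: x**2 + x + 41                 # illustrative quadratic
        seed = [f(x) for x in range(4)]             # seed of tabulated values
        print(extend_table(seed, 6))                # matches [f(x) for x in range(10)]
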
    • @rururu5877
      @rururu5877 Рік тому +1

      ​@pyropulse7932can you explain pls!?

  • @carstenmeyer7786
    @carstenmeyer7786 Рік тому +26

    10:49 There is a big assumption hidden -- for that expression to make sense, *"f(x)"* must be represented by a power series around "x = 0" that still converges at "x = 1".
    While that is quite a large function set in practice, it does not include all infinitely smooth functions, e.g. bump functions.

    • @user-oe5eg5qx4c
      @user-oe5eg5qx4c Рік тому +3

      I think it requires the radius of convergence to be ∞ instead of 1 or ½. Did I miss something?
      edit: I realised you were talking about e^(d/dx)f(x)=f(x+1) not d/dx=ln(1+Δ) and you're right.

  • @CraigNull
    @CraigNull Рік тому +13

    Sampling f(x) at only integer differences from x simply cannot work as a universal definition of f '(x) due to examples like f(x) = cos(x*pi). An analytic function no less! So in this derivation where did assumptions about f(x) slip by unnoticed, and what are those assumptions?

    • @NagyTeglaPeter
      @NagyTeglaPeter Рік тому +3

      I assume it breaks down at the step where you take the log of both sides at "Δ = e^(d/dx) - 1". I'm not sure that can be done if the right side is 0. And then you get the exception "f(x) can't have a periodicity of 1"

    • @CraigNull
      @CraigNull Рік тому

      @@NagyTeglaPeter I see. In other words, the edges of the radius of convergence of the log series come into play when the Δ^n f(x) are all of similar magnitude and alternating signs

    • @francoisleyvraz3920
      @francoisleyvraz3920 Рік тому +2

      It can be shown, though it is reasonably advanced, that sampling a function at the integers will determine the function uniquely if it is analytic, *and* if it grows less fast than exp(pi*|z|) as z->infinity throughout the complex plane. That takes care of the cos(pi*x) counterexample.
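
      A quick numerical illustration of this thread's point, assuming one naively truncates the series ln(1+Δ) = Σ_{n≥1} (-1)^(n+1) Δ^n/n (a sketch of mine, not anything from the video): for f(x) = 2^x the partial sums creep toward the true derivative ln(2)·2^x, but for f(x) = cos(πx), where Δf = -2f, they blow up.

          import math

          def delta_pow(f, x, n):
              """n-th forward difference of f at x, via the binomial formula."""
              return sum((-1)**k * math.comb(n, k) * f(x + n - k) for k in range(n + 1))

          def ln1p_delta(f, x, terms):
              """Partial sum of ln(1+Delta) f(x) = sum_{n>=1} (-1)^(n+1) Delta^n f(x) / n."""
              return sum((-1)**(n + 1) * delta_pow(f, x, n) / n for n in range(1, terms + 1))

          for N in (5, 10, 20):
              good = ln1p_delta(lambda t: 2.0**t, 0.0, N)                 # true value: ln 2 ~ 0.693
              bad = ln1p_delta(lambda t: math.cos(math.pi * t), 0.0, N)   # true value: 0
              print(N, round(good, 4), bad)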

  • @Maths_3.1415
    @Maths_3.1415 Рік тому +126

    Finally a good place to Stop

    • @roberttelarket4934
      @roberttelarket4934 Рік тому +4

      Srinivasa Ramanujan: Are you near a determination of the Riemann hypothesis?! We are anxiously and breathlessly waiting!!!!!

    • @HoSza1
      @HoSza1 Рік тому +2

      Why are you impersonating other people?

    • @jongyon7192p
      @jongyon7192p Рік тому +2

      yes but where are the crazy video descriptions?

    • @GrifGrey
      @GrifGrey Рік тому +1

      ​@@HoSza1 comedic effect probably

    • @deananderson7714
      @deananderson7714 Рік тому +4

      @@HoSza1 it's not really impersonation if the person is dead. If someone names an account George Washington, no one actually thinks they are the first president of the USA.

  • @the-avid-engineer
    @the-avid-engineer Рік тому +37

    12:18 There’s a mistake.. you start with (-1)^n and switch to (-1)^(n+1) with no explanation

    • @glumbortango7182
      @glumbortango7182 Рік тому +7

      everything that follows is accidentally negative, but besides that error everything checks out.

    • @krisbrandenberger544
      @krisbrandenberger544 Рік тому +1

      That is correct.

    • @ScienceTalkwithJimMassa
      @ScienceTalkwithJimMassa Рік тому +2

      Working that out, I get -ln(2), not ln 2. I am not seeing how this arrives at ln 2.

    • @glumbortango7182
      @glumbortango7182 Рік тому +1

      @@ScienceTalkwithJimMassa He wrote (-1)^(n+1) by accident; switch it to (-1)^n as the formula would imply, and the result is ln(2) as you'd expect
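
      For anyone wanting to check the sign sensitivity numerically: with the standard convention ln(1+x) = Σ_{n≥1} (-1)^(n+1) x^n/n, the partial sums at x = 1 go to +ln 2, and the opposite overall sign lands on -ln 2, which is consistent with the thread above. A tiny Python check:

          import math

          N = 100_000
          s = sum((-1)**(n + 1) / n for n in range(1, N + 1))   # alternating harmonic series
          print(s, math.log(2))    # ~0.69314 vs 0.69315
          print(-s)                # the flipped sign convention gives -ln 2 instead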

  • @atreidesson
    @atreidesson Рік тому +8

    "If you go high, you always get zero" © Michael Penn

  • @joshuagrumski7459
    @joshuagrumski7459 Рік тому +21

    15:03 Isn't this a very, very roundabout way of doing this proof? Like, you already had to know the ln(x+1) series expansion centered at 0 to do the proof that this works, but that means you have already proven this result. The only way that I can imagine you would get something new out of this would maybe be to prove the one result that he challenged the rest of us to do because he couldn't find a nice form for the proof, but even then, I still don't know if that proof proves anything new or is circular

    • @cantcommute
      @cantcommute Рік тому +3

      Technically ln 2 requires x=1, so it's at the radius of convergence of the Taylor series, so you'd need to know that the alternating harmonic series converges. His method uses a formal power series of operators, and I think that bypasses the radius-of-convergence problem.

  • @ianrobinson8518
    @ianrobinson8518 Рік тому +6

    This type of operator relationship is actually the starting point for the traditional subject of finite calculus. Elements of it can be found in the modern and broader subject of numerical analysis, most commonly forming the basis of computational approximation methods. It was well developed by the mid-1850s in texts of that time (e.g. Boole), but is all but "forgotten" these days.

    • @mathephilia
      @mathephilia Рік тому +1

      Do you have resources that talk about this? I'm interested in the links between the continuous and the discrete in various higher branches of math

  • @dyld921
    @dyld921 Рік тому +2

    Here's the outline of my proof for the challenge (writing D here for the forward difference Δ):
    1. Show by induction that
    D^n(x f(x)) = x D^n f(x) + n (D^n f(x) + D^(n-1) f(x))
    2. Using the above result (truncating the series at order m), we can show that
    ln(1 + D) (x f(x)) = x ln(1+D) f(x) + f(x) - (-1)^m D^m f(x)
    3. Let f(x) = x^(m-1). Then D^m x^(m-1) = 0. Finish by induction over m.
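
    A symbolic spot-check of the challenge identity ln(1+Δ) x^m = m x^(m-1) (a sketch assuming sympy is available, not part of the comment above). Since Δ^n x^m = 0 for n > m, the operator series is a finite sum on polynomials, so it can be applied term by term.

        import sympy as sp

        x = sp.symbols('x')

        def delta(expr):
            """Forward difference: (Delta f)(x) = f(x+1) - f(x)."""
            return sp.expand(expr.subs(x, x + 1) - expr)

        def ln1p_delta(expr, order):
            """Sum of (-1)^(n+1) Delta^n expr / n for n = 1..order (exact on degree-`order` polynomials)."""
            total, d = sp.Integer(0), expr
            for n in range(1, order + 1):
                d = delta(d)                              # d = Delta^n applied to the input
                total += sp.Rational((-1)**(n + 1), n) * d
            return sp.expand(total)

        for m in range(1, 7):
            print(m, ln1p_delta(x**m, m) == sp.expand(m * x**(m - 1)))   # True for each m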

  • @diribigal
    @diribigal Рік тому +3

    The ln(2) expansion is kind of circular. You use the general ln(1+x) series to set up these operator calculations, but that series includes the x=1 case that would have immediately given the ln(2) expansion.

  • @dalek1099
    @dalek1099 Рік тому +4

    I'm confused: shouldn't the (-1)^(n+k+1) be (-1)^(k+m), and the (x+k)^m be (x+n+1)^m? That's what you get when applying the formula.

    • @yoav613
      @yoav613 Рік тому

      The formula should be (-1)^(k+r) and not (-1)^(k+m), and (x+k)^m and not (x+r)^m.
      Try the formula for r=1: the result should be (x+1)^m - x^m, so the formula works with (-1)^(k+r) and (x+k)^m.

  • @Hhsksjbdl
    @Hhsksjbdl Рік тому +14

    Here is a formal proof that ln(1+Δ) is the derivative for polynomials:
    Denote for short L = ln(1+Δ) and D = d/dx, the derivative operator (both are linear).
    We'll use induction on the degree of the polynomial.
    For constant polynomials the result is trivial.
    Assume it is true for polynomials of degree at most n-1.
    Note that DΔ = ΔD, and so (on the space of polynomials) DL = LD.
    The space of polynomials p of degree at most n for
    which Dp = Lp is a vector space, and we assumed it contains
    all polynomials of degree < n.

    • @ianrobinson8518
      @ianrobinson8518 Рік тому +1

      It thereby shows it applies to any function which can be expressed as an (infinite-series) polynomial, including trigonometric, hyperbolic, exponential, etc.

    • @Hhsksjbdl
      @Hhsksjbdl Рік тому

      @@ianrobinson8518 I don't think it's so simple; even for analytic functions with an infinite radius of convergence there can be problems. Take for example f(x) = cos(pi*x): it satisfies Δf = -2f, and so the series defining ln(1+Δ)f doesn't converge

    • @ianrobinson8518
      @ianrobinson8518 Рік тому

      @@Hhsksjbdl Sorry, my maths is not sophisticated enough to understand your point or example. None of the texts on the subject that I'm familiar with raise this issue. The function set is generally limited to the common well-behaved ones we're all familiar with.
      In any case, as I recall, the usual proof in texts makes use of the Taylor series, which is infinite. I'd have to check.

    • @Hhsksjbdl
      @Hhsksjbdl Рік тому

      The point is that it doesn't work for the function cos(pi * x), which is usually considered a well behaved function. 'analytic function with an infinite radius of convergence' just means a function that can be expressed as a power series

    • @ianrobinson8518
      @ianrobinson8518 Рік тому

      Why doesn’t it work? Perhaps you’re assuming that x must be a positive integer, (inc 0)? x can be any real number.

  • @agytjax
    @agytjax Рік тому +2

    At 1:54, how did you transform to the geometric series sum? The geometric series for 1/(1+t) is applicable only when |t| < 1, and there is no assumption that |t| < 1

    • @JamesLewis2
      @JamesLewis2 Рік тому

      He was playing fast and loose with operator theory here, and the expansion only works out for a space of functions on which Δ is a bounded operator with operator norm less than 1, and this may well be the space of differentiable functions.

  • @slowfreq
    @slowfreq Рік тому +7

    Do you have a video on operators like delta and how you manipulate them? I've never really understood that at all, and manipulation of operators like that has always bewildered me.

    • @abebuckingham8198
      @abebuckingham8198 Рік тому +3

      Most of these operators are linear so they basically work like matrices. Given an abelian group the endomorphisms on that group will form a ring. If the group is nice enough you essentially get a vector space. So for example if the group is R^2 then the endomorphism ring is represented by the set of 2x2 matrices. These operators are often acting on infinite dimensional vector spaces but that doesn't cause too many complications. Sometimes it's useful since it means each iteration of the operation will be linearly independent of the others, unlike in the finite dimensional case. That makes Taylor series and other such analytic techniques available.
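
      To make the matrix picture concrete, here is a small numpy sketch (my own illustration, not from the comment above): on the basis 1, x, ..., x^N both d/dx and Δ are nilpotent upper-triangular matrices, and summing the finite log series of I + Δ reproduces the derivative matrix exactly.

          import numpy as np
          from math import comb

          N = 6                                     # work in the basis 1, x, ..., x^N
          D = np.zeros((N + 1, N + 1))              # derivative: x^k -> k x^(k-1)
          Dlt = np.zeros((N + 1, N + 1))            # forward difference: x^k -> (x+1)^k - x^k
          for k in range(N + 1):
              if k >= 1:
                  D[k - 1, k] = k
              for j in range(k):
                  Dlt[j, k] = comb(k, j)

          # ln(I + Delta) as a finite matrix series (Delta is nilpotent, so it terminates)
          L = np.zeros_like(D)
          P = np.eye(N + 1)
          for n in range(1, N + 1):
              P = P @ Dlt                           # P = Delta^n
              L += (-1)**(n + 1) * P / n

          print(np.allclose(L, D))                  # True: ln(1 + Delta) = d/dx on polynomials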

    • @marcomoreno6748
      @marcomoreno6748 Рік тому +1

      ​​​@@abebuckingham8198when you say "represented by the set of 2x2 matrices" do you mean its basis is the set of all 2x2 real-element matrices?

  • @yoav613
    @yoav613 Рік тому +2

    In the exercise it should be (-1)^(k+r), not (-1)^(k+m), and also it should be (x+k)^m, not (x+r)^m.

  • @carloseliasmartinez6221
    @carloseliasmartinez6221 Рік тому +3

    I love Michael's videos. I usually watch them with a notebook and pen in hand, since they always inspire me to explore the concepts he's presenting. I've applied this definition of the derivative to the exponential function and found a quite... "weird" result: a series that should diverge gives a finite result. I've analysed the same series by a couple of different methods and got the same result, but by employing "forbidden" methods (like applying the geometric series formula to a ratio that's larger than one in absolute value), and found the definition of the polylogarithm functions. I still cannot fully believe Li1(1-e) = -1, but I guess it's some sort of analytic continuation.
    Is there any way I could contact Michael about this result, in case he might consider it interesting enough to make a video on this topic?

  • @yoavboaz1078
    @yoavboaz1078 Рік тому +6

    I never formally learned pretty much any of the material in this video, yet you managed to make me understand it. Thank you.

  • @clearnightsky
    @clearnightsky Рік тому +2

    It's a dream! A neat way to compute the derivative of functions that are not continuous but for which you know the values at an infinite number of points. :D

  • @Calcprof
    @Calcprof 9 місяців тому

    I like this better as e^D = 1 + Δ, which gives translation in terms of the derivative, which is how you prove this anyway.

  • @mMaximus56789
    @mMaximus56789 Рік тому +27

    It would be nice to know how convergence works with operators; the expansion of the logarithm has a limited radius of convergence

    • @hadrienlondon4990
      @hadrienlondon4990 Рік тому +3

      It depends on the norm you use on the operators. Except if your operators operate on a finite-dimensional vector space, then the choice of the norm doesn't matter.

    • @MasterHigure
      @MasterHigure Рік тому +16

      Or you could just throw convergence out the window and consider it purely algebraically. What is often referred to as a "formal power series".

    • @mMaximus56789
      @mMaximus56789 Рік тому +1

      @pyropulse7932 But to replace a formal power series with its closed form(?) (in this case the geometric series), don't we require a notion of convergence? If not, it seems like any representation could be valid, so why choose the 1/(1-x) one? Pretty interesting; I would love to be pointed to some resources, as it seems pretty useful

    • @divisix024
      @divisix024 Рік тому +2

      ln(1+Δ) is undefined for Δ an operator, so we just define it by replacing the variable in the power series for ln(1+x) by Δ.

    • @MasterHigure
      @MasterHigure Рік тому +1

      If we define the exponential of an operator through the standard series expansion, and we define the logarithm through this series expansion, and we insert one into the other, and expand all the resulting terms, we do find that they are indeed inverses of one another (as in exp(ln(1+∆))=1+∆). There is no real need to consider convergence at all until after you've applied these "functionalized" operators.

  • @BenfanichAbderrahmane
    @BenfanichAbderrahmane Рік тому +2

    But for which norm? Because for the 1-norm, the norm of delta is 1, so is ln(1+delta) always defined?

  • @il_caos_deterministico
    @il_caos_deterministico Рік тому

    It can be shown that
    Δ^n f(x) = sum_{k=0}^n (-1)^k (n over k) f(x+n-k).
    So if r = n-k then it is equal to
    sum_{r=0}^n (-1)^(n+r) (n over r) f(x+r)
    (we used (n over k) = (n over n-k) and (-1)^(n-r) = (-1)^(n+r)).
    Relabelling n as r and r as k, this is
    sum_{k=0}^r (-1)^(k+r) (r over k) f(x+k).
    The last formula of the exercise probably has a minor error in the indices.
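
    A quick numerical sanity check of the binomial formula quoted above, on an arbitrary smooth test function (nothing in the identity is specific to polynomials; the test function below is just an illustrative choice):

        import math

        def delta_iter(f, x, n):
            """Delta^n f(x) by iterating Delta f(x) = f(x+1) - f(x)."""
            vals = [f(x + k) for k in range(n + 1)]
            for _ in range(n):
                vals = [vals[i + 1] - vals[i] for i in range(len(vals) - 1)]
            return vals[0]

        def delta_binom(f, x, n):
            """Delta^n f(x) = sum_{k=0}^{n} (-1)^k C(n,k) f(x + n - k)."""
            return sum((-1)**k * math.comb(n, k) * f(x + n - k) for k in range(n + 1))

        f = lambda t: math.exp(0.3 * t) + math.sin(t)
        for n in range(1, 6):
            print(n, abs(delta_iter(f, 0.7, n) - delta_binom(f, 0.7, n)) < 1e-9)   # True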

  • @yigalzemach9083
    @yigalzemach9083 Рік тому

    I must be missing something very basic about this definition. Here are 2 apparent contradictions, extremely simple ones:
    1. Take f(x) = (x-0.5)^2. Then clearly f(1) = f(0) ==> Δf(0) = 0 ==> ln(1+Δ) = ln 1 = 0 ==> f'(0) = 0; but this contradicts the regular calculus result f'(0) = -1.
    2. More generally, f'(x) depends according to this definition only on the values f(x+1) and f(x); e.g. f'(0) depends only on f(1) and f(0). But clearly you can define an infinite number of functions that have the same f(0) and f(1) but different slopes at zero.
    So what did I miss?

    • @ianrobinson8518
      @ianrobinson8518 Рік тому

      The expansion of ln(1+Δ)f(x) introduces powers of Δ and, when fully expanded, an infinite expression involving f(x), f(x+1), f(x+2), f(x+3), …
      The power of operator algebra is that we don't need to write out that series. We can manipulate the operators, albeit within some constraints (it's not always commutative). The fundamental identity is e^D = 1+Δ = E, where D is d/dx, Δf(x) = f(x+1) - f(x), and Ef(x) = f(x+1). This connects infinitesimal with finite calculus. There are also the operator identities ΣΔ = 1 and ID = 1 (where Σ is the indefinite sum and I is integration).

  • @user-en5vj6vr2u
    @user-en5vj6vr2u Рік тому +2

    Isn't that ln 2 identity derivation circular, since it relies on using the Taylor expansion of ln(1+x), from which you could directly write down the ln 2 identity?

    • @abebuckingham8198
      @abebuckingham8198 Рік тому

      You can't use the ln2 identity to prove that ln(1+x) converges so it's not circular. It's just a special case with a non-standard derivation.

    • @cantcommute
      @cantcommute Рік тому

      Technically ln 2 requires x=1, so it's at the radius of convergence of the Taylor series, so you'd need to know that the alternating harmonic series converges. His method uses a formal power series of operators, so it's fine.

  • @19divide53
    @19divide53 Рік тому

    15:44 exercise solution attempt- I was actually trying to do the challenge but ended up accidentally solving this one instead.
    First, we claim that Δ^n(x^m)=Σ(-1)^k*C(n,k)*(x+n-k)^m, the sum being taken over k=0,...,n, as long as n is a positive integer not exceeding m.
    We may prove this by induction. The base case n=1 is trivial. Expanding Δ^(n+1)(x^m)=Δ^n((x+1)^m)-Δ^n(x^m) via the above equation gives
    (x+1+n)^m+Σ[(-1)^(k+1)*C(n+1,k+1)(x+n-k)^m]-(-1)^n*x^m, where the middle sum is taken over k=0,..., n-1 and has been simplified via Pascal's identity and re-grouping. We may notice that (-1)^0*C(n+1,0)*(x+n+1-0)^m=(x+1+n)^m, and (-1)^(n+1)*C(n+1,n+1)*(x+n+1-(n+1))^m=-(-1)^n*x^m, so reindexing the above sum gives us the desired result. By induction, the claim is proven.
    Reindexing n-k to k, Δ^n(x^m)=Σ(-1)^(n-k)*C(n,k)*(x+k)^m=Σ(-1)^(n+k)*C(n,k)*(x+k)^m, the sum being taken over k=0,...,n.
    Since Δ(x^m) = (x+1)^m - x^m = Σ_{k=0}^{m-1} C(m,k) x^k is a degree m-1 polynomial, by linearity along with induction, Δ^n(x^m) has degree m-n for n ≤ m, and Δ^n(x^m) = 0 for n > m.
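
    A sympy spot-check of the closed form and the degree claims above (a sketch, assuming sympy is available):

        import sympy as sp

        x = sp.symbols('x')

        def delta_n(expr, n):
            """Iterate the forward difference n times."""
            for _ in range(n):
                expr = sp.expand(expr.subs(x, x + 1) - expr)
            return expr

        for m in range(1, 5):
            for n in range(1, m + 3):
                lhs = delta_n(x**m, n)
                closed = sp.expand(sum((-1)**(n + k) * sp.binomial(n, k) * (x + k)**m
                                       for k in range(n + 1)))
                match = sp.expand(lhs - closed) == 0
                degree_ok = (lhs == 0) if n > m else (sp.degree(lhs, x) == m - n)
                print(m, n, match, degree_ok)        # expect True, True throughout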

  • @ScienceTalkwithJimMassa
    @ScienceTalkwithJimMassa Рік тому +1

    It would seem that applying the delta operator to any general (ln a) a^x will yield a^x(a-1)

  • @polyhistorphilomath
    @polyhistorphilomath Рік тому

    Here's a direct approach to the problem posed at 9:24 .
    Δ (f)(x) = f(x+1) - f(x) by definition.
    By definition the shift operator is e^tD (f)(x) = f(x+t).
    Substitute a shift by t=1 into the definition of the forward difference.
    e^D-1= Δ. (add 1 to both sides)
    e^D = Δ+1. (take logarithms)
    ln(e^D) = ln(Δ+1). Now we have the LHS of the equation provided (on the RHS here).
    Take down the power on the left, then use that the logarithm of the natural base is 1 (ln e = 1).
    D ln(e) = RHS. D = RHS.
    The polynomial rule is Dx^m = mx^(m-1).
    ln(Δ+1)x^m = mx^(m-1).
    [Edit (added f where applying the t-shift)]
    I just watched the rest of the video. Looks like I wasn't the only one with this idea.

  • @matthewpublikum3114
    @matthewpublikum3114 Рік тому

    At 5:30, is there a textbook that explains the Δ^2, Δ^3, ... expansion?

  • @PiXeLSn1p3r
    @PiXeLSn1p3r Рік тому +2

    Great video!
    Another cool way to define the derivative is by using the symmetric difference:
    Δs f(x) = ( f(x+1) - f(x-1) ) / 2 = ( ( exp(d/dx) - exp(- d/dx) ) / 2 ) f(x) = Sinh(d/dx) f(x)
    d/dx = ArcSinh(Δs)
    These formal expansions are intuitive and systematic ways of finding finite difference coefficients at k-th order.
    en.wikipedia.org/wiki/Finite_difference_coefficient
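
    A small numerical check of the arcsinh version (a sketch with unit step; f(x) = 2^x is just an illustrative choice): here Δs f = (3/4) f, so the arcsinh series applied at x = 0 should converge to arcsinh(3/4) = ln 2 = f'(0).

        import math

        def sym_diff_pow(f, x, n):
            """n-th power of the symmetric difference ((f(x+1) - f(x-1))/2) applied to f at x."""
            return sum((-1)**k * math.comb(n, k) * f(x + n - 2 * k) for k in range(n + 1)) / 2**n

        def asinh_series(f, x, terms):
            """Partial sum of arcsinh(Delta_s) f(x), using the arcsinh Maclaurin coefficients."""
            total = 0.0
            for n in range(terms):
                coeff = (-1)**n * math.factorial(2 * n) / (4**n * math.factorial(n)**2 * (2 * n + 1))
                total += coeff * sym_diff_pow(f, x, 2 * n + 1)
            return total

        f = lambda t: 2.0**t
        for N in (1, 2, 4, 8):
            print(N, asinh_series(f, 0.0, N), math.log(2))   # converges to f'(0) = ln 2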

  • @elormavevor6504
    @elormavevor6504 Рік тому +1

    If I understand this correctly, that derivative operator calculates f'(x) by sampling the function f(x) with integer spacing.
    Surely there are functions where this doesn't work. For example, f(x) = sin(2nπx). f'(x) = 2nπ cos(2nπx) but ln(1+∆) f(x) = 0 since ∆ⁿf(x) is always 0 (this follows from ∆sin(2nπx) = 0).

    • @elormavevor6504
      @elormavevor6504 Рік тому

      After a little more thought, this operator can never be guaranteed to find the derivative, because I could take any function and add to it a bunch of sinusoids with an integer frequency without changing the output of the ∆ operator: ∆ⁿ{f(x) + sin(2πnx)} = ∆ⁿf(x).
      Maybe it only works for signals which are band-limited to 0.5 Hz (the Nyquist frequency for unit sampling) in the Fourier domain. This would include polynomials.

    • @glumbortango7182
      @glumbortango7182 Рік тому

      Any function of period 1 plays the role a constant does in usual calculus. Note then that if you only focus on the integer values of the functions, 2nπ·cos(2nπx) and 0 are only off by a constant.

  • @Noam_.Menashe
    @Noam_.Menashe Рік тому +4

    Does this have to do with e^d/dx(f(x))=f(x+1)?

    • @Nolord_
      @Nolord_ Рік тому +2

      yes

    • @glumbortango7182
      @glumbortango7182 Рік тому

      You can use that to write (e^D - 1) = Δ, which you can then reverse to get the operator form shown in the video, D = ln(1 + Δ)

  • @txikitofandango
    @txikitofandango Рік тому +2

    For the "direct" way, why not use induction? You already proved it for the x^3 case, so it shouldn't be too tricky to show it works for higher powers. Especially because there's a Pascal relation going on. I'll try it.

    • @txikitofandango
      @txikitofandango Рік тому +2

      This is way above my skill but to find a closed form for Delta^n(x^m) you need something like an m x m Pascal's matrix or maybe an inverted Pascal's matrix raised to the nth power.

    • @txikitofandango
      @txikitofandango Рік тому

      Did you try Pascal's identity? So Delta(x^n) = sum from k=0 to n-1 of (n choose k)*x^k = sum of (n-1 choose k-1)x^k + sum of (n-1 choose k)x^k, then work in some reindexing, then apply that to the ln(1+Delta) operator...

  • @Xeroxias
    @Xeroxias Рік тому

    9:06 I took you up on it. It isn't terrible, but it isn't easy, and I needed to use induction more than once. To give an idea, first I expressed the nth forward difference on x^(m+1) in terms of the nth forward difference on x^m (which introduced a factor of (x + n) and an additional term), and then I had to show that an alternating sum of powers of forward difference operators all acting on (x + 1)^m is precisely x^m, which took the bulk of the effort.
    I thought that was interesting, since you've got this relatively simple operator composed of a finite sum of powers of the forward difference, and its operation on a polynomial shifts its argument by -1, so the operator is in fact e^{-d/dx}. It turns out that the nth derivative of x^m introduces the coefficient n! times m choose n, whereas the coefficients of the nth power of the forward difference operator applied to x^m are also m choose n.

  • @denki2558
    @denki2558 Рік тому +4

    You forgot the most important application, which is that it makes deriving the formulas for numerical calculation of derivatives much easier.
    You can easily derive Newton's forward difference formula from ln(1+delta).
    For the backward and central difference formulas, you can try to derive one by defining d/dx in terms of either delta f(x) = f(x) - f(x-1), or delta f(x) = (f(x+1)-f(x-1))/2.
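
    A sketch of the first point (one reading of it, at least; everything below is my own illustration): with step h we have e^(hD) = 1 + Δ_h, so truncating D = (1/h) ln(1+Δ_h) after p terms gives a one-sided (Newton forward-difference) stencil whose error shrinks roughly like h^p.

        import math

        def fwd_diff_pow(f, x, n, h):
            """(Delta_h)^n f(x), where Delta_h f(x) = f(x+h) - f(x)."""
            return sum((-1)**(n - k) * math.comb(n, k) * f(x + k * h) for k in range(n + 1))

        def forward_derivative(f, x, h, order):
            """Truncate D = (1/h) ln(1 + Delta_h) after `order` terms."""
            return sum((-1)**(n + 1) * fwd_diff_pow(f, x, n, h) / n
                       for n in range(1, order + 1)) / h

        f, x0, h = math.sin, 0.5, 0.1
        for p in (1, 2, 4):
            print(p, abs(forward_derivative(f, x0, h, p) - math.cos(x0)))   # error ~ h**p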

  • @leokoko77
    @leokoko77 Рік тому +8

    Can delta be used to define a fractional derivative? Will it give different results from the previous video's technique with the Laplace transform (ua-cam.com/video/LfcVxbJzpRw/v-deo.html)?

    • @schlega2
      @schlega2 Рік тому +4

      You could use e^(½ln(ln(1+Δ))) and see if you can manipulate that into a nice form.

  • @markusklyver6277
    @markusklyver6277 3 місяці тому

    How do you take ln of e^(d/dx)? I am rusty on my operator theory, but ln does not have a valid expansion around x=0.

  • @krisbrandenberger544
    @krisbrandenberger544 Рік тому +1

    The double sum should have (n+1)^m not (k+1)^m.

  • @driesvanheeswijk1633
    @driesvanheeswijk1633 Рік тому +2

    What area of Mathematics does this have to do with? I'm a 2nd year bachelor student and I have never seen this come up, I'm wondering if I'll get to see more of this type of thing!

  • @user-gy7hc6pi7h
    @user-gy7hc6pi7h Рік тому +5

    Greetings, Michael! Did you think about finding the inverse of it? Look, if ln(1+delta) defines the derivative, then (ln(1+delta))^(-1) defines the antiderivative, right? If the inverse operator is found, will integration be easier? How many new integrals could be found in closed form? I think it's a cool concept; it could simplify computations a lot. What do you think?

    • @masonskiekonto590
      @masonskiekonto590 Рік тому

      Try and prove it! Even if it amounts to nothing useful, you've still done something great.

    • @farfa2937
      @farfa2937 Рік тому

      Well, e^x - 1 leaves you only with delta; then you'd need a discrete integral or something like that, I guess.

    • @glumbortango7182
      @glumbortango7182 Рік тому

      There actually does exist a discrete integral formula; it's fairly similar-looking to the Euler-Maclaurin sum from what I can see.
      It involves the antidifference (the sum from i=0 up to x-1), and then adds a series of nth-order differences adjusted by some fractional coefficients. (The first few are 1/2 for the first difference, -1/12 for the second, 1/24, -19/720, 3/160, -863/60480, 275/24192. These are specifically the nth Cauchy number of the first kind divided by n!; if you want more terms, the numerators of the Cauchy numbers are listed under OEIS A006232 and the denominators under OEIS A006233.) Adjust by a constant, and the result seems to approach the actual value fairly quickly for integer inputs.
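
      The coefficients listed above are the Maclaurin coefficients of t/ln(1+t), which is what formally inverting D = ln(1+Δ) produces: D^(-1) = Δ^(-1) · (Δ/ln(1+Δ)), with Δ^(-1) the indefinite sum. A sympy check of the first few (assuming sympy is available):

          import sympy as sp

          t = sp.symbols('t')
          # Maclaurin coefficients of t/ln(1+t): the Cauchy numbers of the first kind over n!
          series = sp.series(t / sp.log(1 + t), t, 0, 8).removeO()
          print([series.coeff(t, n) for n in range(8)])
          # expected: [1, 1/2, -1/12, 1/24, -19/720, 3/160, -863/60480, 275/24192]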

  • @CM63_France
    @CM63_France Рік тому

    Hi,
    On shorts (where there is no comment box), when I click on "Playing two gammas", I get "the how and why of set exponents" 😁

  • @nizogos
    @nizogos 11 місяців тому

    The application of this result to calculate the series representation of ln(2) uses circular logic, since we already used the Taylor expansion of ln(1+x) as a given. We could just plug in x=1 from the beginning. Other than that, great insight on a topic I've never heard of before!

  • @shruggzdastr8-facedclown
    @shruggzdastr8-facedclown Рік тому +3

    Q&A @Michael:
    Is this limited to whole integers only?
    If not, then why is delta^m(x^3)=0 for m ≥ 4 rather than for m > 3 (e.g. any rational or irrational number greater than, but not equal to, 3)?

    • @ere4t4t4rrrrr4
      @ere4t4t4rrrrr4 Рік тому

      n (or m in your example) is always an integer, because this is a power series expansion

  • @hyperplastic
    @hyperplastic Рік тому +10

    Your editor keeps missing more and more times when you start a new take

  • @mcumer
    @mcumer Рік тому +1

    It's a definition without limits... incredible... the derivative becomes a purely algebraic operator

    • @yoavboaz1078
      @yoavboaz1078 Рік тому +1

      What about the sum's upper bound?

    • @viliml2763
      @viliml2763 Рік тому +4

      An infinite sum is the limit of partial sums.
      Also, there's already a much simpler limit-free algebraic definition of the derivative: d/dx x^n = n x^(n-1). It extends to all analytic functions by linearity. It doesn't work for non-analytic functions, but neither does this logarithm definition. They're really the same thing written in different ways.

  • @skylardeslypere9909
    @skylardeslypere9909 Рік тому +1

    19:15 this should be (k+2) since x is approaching 1

    • @khoozu7802
      @khoozu7802 Рік тому

      The correct formula is (x+k)^m, not (x+k+1)^m.
      So when you put x=1, you will get (k+1)^m.

  • @dotcom1235
    @dotcom1235 Рік тому

    Are there any types of functions where it is easier to solve/compute the derivative using the ln(1+delta) definition than using the conventional d/dx operator?

  • @DmitriStarostin
    @DmitriStarostin Рік тому

    One needs to credit Leonhard Euler for contributing to developing this formula.

  • @patrickhickey7673
    @patrickhickey7673 Рік тому

    This is fantastic

  • @mathboy8188
    @mathboy8188 Рік тому

    I don't think that "Formal Taylor Theorem" link appeared at the end of the video as was intended.

  • @HagenvonEitzen
    @HagenvonEitzen Рік тому

    Now consider f(x) = \begin{cases} \exp(-1/x^2) & \text{if } x > 0 \\ 0 & \text{if } x \le 0 \end{cases}

  • @MrWorshipMe
    @MrWorshipMe 28 днів тому

    I didn't understand the equality e^d/dx f(x) = f(x + 1)

  • @ikarienator
    @ikarienator Рік тому +3

    Delta = exp(D)-1; ln(1+Delta)=ln(exp(D)) =D

    • @adiaphoros6842
      @adiaphoros6842 Рік тому

      Then you need to prove Δ = exp(D)-1

    • @ikarienator
      @ikarienator Рік тому

      @@adiaphoros6842 exp(D)f(x) = sum(D^nf(x)/n!) = f(x+1) from Taylor expansion. The hard part I think is to prove ln(exp(M))=M works in operator space.
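
      A quick numerical illustration of the first claim (my own check, using the fact that the n-th derivative of sin t is sin(t + n*pi/2)):

          import math

          x = 0.3
          # exp(d/dx) sin(x) = sum_n sin(x + n*pi/2) / n!  should equal sin(x + 1)
          taylor_shift = sum(math.sin(x + n * math.pi / 2) / math.factorial(n) for n in range(30))
          print(taylor_shift, math.sin(x + 1))   # both ~ 0.96356: exp(d/dx) acts as a shift by 1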

  • @tonyhaddad1394
    @tonyhaddad1394 Рік тому +5

    Wow, I don't think anyone dealing with math gets bored

  • @digxx
    @digxx Рік тому

    Just wondering: this is only true if f(x) is infinitely differentiable, right?

  • @clearnightsky
    @clearnightsky Рік тому +2

    That's crazy amazing! I didn't think you could compute the derivative at one point by combining values from all over the function domain!

    • @TJStellmach
      @TJStellmach Рік тому +5

      In a way it's analogous to how a series expansion expresses values all over the function domain based only on the derivatives at a single point. Only, the reverse of that.

  • @soyoltoi
    @soyoltoi Рік тому +1

    Is there an area of operator equations where instead of solving for a function like in differential or functional equation you solve for an operator like the derivative operator?

    • @jellymath
      @jellymath Рік тому

      Imagine
      2 op 4 = 8,
      with one solution for op being scalar multiplication.
      That sounds very cool, but it'd probably be so absurd and impractical that it wouldn't become an actual branch of mathematics... that's just a guess though!

    • @tayranates3611
      @tayranates3611 Рік тому

      Interesting thought. Maybe group theory is doing that, since it relates elements inside of a group to each other, and an operator is actually a function that relates the elements. (I don't know much about group theory.)

  • @jamesfortune243
    @jamesfortune243 Рік тому

    That was fresh.

  • @insouciantFox
    @insouciantFox Рік тому +4

    How does the spectral theory of functions account for power series with finite radii of convergence?

    • @abebuckingham8198
      @abebuckingham8198 Рік тому +1

      Typically when you move to functional spaces you'll need to impose topological conditions, commonly compactness. For the spectral theorem the function must map bounded sets to relatively compact sets meaning they have compact closure.

  • @glacagnina
    @glacagnina Рік тому

    what if f(x)=f(x+1) for some x?

  • @lucasdepetris5896
    @lucasdepetris5896 Рік тому +1

    Hi! Could you recommend literature about this topic of finite calculus? I'm really interested. In a previous video you mentioned there is a Stokes' theorem for finite calculus; I'm struggling to find that online. Also, if you know of some book that covers the relation between finite and infinitesimal calculus, as this video does to some extent, I would really appreciate it. Greetings from Argentina!

    • @knivesoutcatchdamouse2137
      @knivesoutcatchdamouse2137 Рік тому +2

      @pyropulse7932 Sure, entire branches of mathematics are simple for anyone to just figure out for themselves. Personally, I discovered a new proof for the Prime Number Theorem when I was 10 by just messing around during recess. what's that? You want to learn general relativity? You don't need a book, it's easy: just mess around with some equations!

    • @glumbortango7182
      @glumbortango7182 Рік тому

      @pyropulse7932 I got most of my formulas for finite calculus by asking the right questions on my own, but I'd still want to know about any resources to get a more refined approach to the topic. You can learn a lot on your own, and it does help to understand the more obtuse formulas, but it's not exactly a straightforward process.

  • @vicentepoblete6627
    @vicentepoblete6627 Рік тому

    10:00 cursed notation warning haha

  • @redwan7323
    @redwan7323 Рік тому +1

    That's cool

  • @MrWorshipMe
    @MrWorshipMe 28 днів тому

    t must be smaller than 1, right?

  • @user-nl7mn4rr6s
    @user-nl7mn4rr6s Рік тому +1

    I want to know the integral of floor x.

    • @yoavboaz1078
      @yoavboaz1078 Рік тому +2

      floor(x)·(x-(floor(x)+1)/2)+c

    • @user-nl7mn4rr6s
      @user-nl7mn4rr6s Рік тому

      @@yoavboaz1078 Since the derivative of floor(x) is 0, why not x*floor(x)+c?

    • @yoavboaz1078
      @yoavboaz1078 Рік тому

      @@user-nl7mn4rr6s It would work, but it won't be continuous. My answer is equivalent; I just shifted all the segments to be continuous.

    • @mathboy8188
      @mathboy8188 Рік тому +1

      Break the definite integral down into a sum of several definite integrals, each over an interval like [n, n+1) (n an integer)... except maybe for the last one, which is like [n, x), where n ≤ x < n+1.
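
      A small numerical check of yoavboaz1078's antiderivative formula (illustrative sample points, my own sketch): its central difference recovers floor(x) away from the integers, and it is continuous across them.

          import math

          def F(x):
              """Candidate antiderivative of floor(x): floor(x)*(x - (floor(x)+1)/2)."""
              n = math.floor(x)
              return n * (x - (n + 1) / 2)

          h = 1e-6
          for x in (0.3, 1.7, 2.5, 4.9):
              print(x, (F(x + h) - F(x - h)) / (2 * h), math.floor(x))   # slope matches floor(x)
          print(abs(F(3 + 1e-9) - F(3 - 1e-9)) < 1e-6)                   # continuous at an integer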

  • @Geenimetsuri
    @Geenimetsuri Рік тому

    Whoa! BUT if that is the derivative, what is the anti-derivative?

  • @zetadroid
    @zetadroid Рік тому

    The power of -1 should be n instead of n+1 in the first application I think

  • @trendakim
    @trendakim Рік тому

    Surprising! Could it be in any way useful to compute the derivative of discrete data? If so, would it be relevant to use it instead of existing discrete derivative algorithms?

  • @camilocagliolo
    @camilocagliolo Рік тому

    13:18 and from inspection, 2 = 1 + Δ
    application concluded

  • @JamesLewis2
    @JamesLewis2 Рік тому

    Your first application is not interesting, because the expansion of ln(2) as a rational series follows directly from the same expansion necessary to calculate ln(1+Δ) to begin with.

  • @__hannibaalbarca__
    @__hannibaalbarca__ Рік тому

    Yeah, I have made very beautiful formulas; I still haven't published them, and 23 years have already passed.

  • @elgefe5442
    @elgefe5442 Рік тому

    log(1+Δ) exp( ik x ) = ik exp( ik x ).

  • @user-oi3iv7oo4z
    @user-oi3iv7oo4z Рік тому +1

    Then the derivative is no longer a local characteristic of the function, and this is a problem in many cases. It uses the nonlocal operator Δ.

  • @ahakimbello9936
    @ahakimbello9936 Рік тому

    I think you don't have to put 1, but the identity operator, in the derivative formula.

  • @levicaptain321
    @levicaptain321 Рік тому

    what the heck is that

  • @fangjiunnewe3634
    @fangjiunnewe3634 Рік тому +2

    If for some reason we don't know the value of m but do know the value of m-1, this formula will not be helpful to find it, because it requires the binomial expansion of (k+1)^m; so the definition is circular and non-constructive.

    • @user-oe5eg5qx4c
      @user-oe5eg5qx4c Рік тому +7

      He didn't say "definition" but "identity".

    • @xizar0rg
      @xizar0rg Рік тому +1

      If you allow Peano, knowing m-1 forces m. Also, this is an identity, not a method of calculation.
      Afterthought: cos^2(θ) + sin^2(θ) = 1 is another example of an identity that is useless for calculation. (You cannot extract a value for theta out of it, even if the identity is useful in other calculations.)

  • @clearnightsky
    @clearnightsky Рік тому +1

    This settles it then. A particle's momentum really depends on its future behavior! Therefore everything in the universe is predetermined. Unless maybe the uncertainty principle?

  • @RSLT
    @RSLT Рік тому

    Crazy!!!!!!!!

  • @user-lu8vz9du3q
    @user-lu8vz9du3q Рік тому +1

    Nice identity!

  • @2kchallengewith4video
    @2kchallengewith4video Рік тому

    Never been this early

    • @xl000
      @xl000 Рік тому

      how long do you usually last?

  • @fabiopilnik827
    @fabiopilnik827 Рік тому +2

    You gotta love any connection between discreteness and continuity - particularly in view of modern science which is begging the scale.