The Subtle Reason Taylor Series Work | Smooth vs. Analytic Functions

  • Published Apr 26, 2024
  • Get Surfshark VPN at surfshark.deals/MORPHOCULAR and enter promo code
    MORPHOCULAR for a Holiday Special offer of 5 extra months for free with the Surfshark One
    package.
    Taylor series are an incredibly powerful tool for representing, analyzing, and computing many important mathematical functions like sine, cosine, exponentials, and so on, but in many ways, Taylor series really shouldn't work as well as they do, and there are functions out there that can't be represented with them. What are these functions? And what's so special about so many of our familiar functions that we can compute them with Taylor series?
    =Chapters=
    0:00 - How to calculate e^x
    4:16 - Surfshark ad
    5:15 - Why Taylor series shouldn't work
    6:54 - A pathological function
    8:25 - Taylor's Theorem
    10:48 - Analytic functions vs. smooth functions
    12:53 - The simplicity of complex functions
    14:10 - The uses of non-analytic smooth functions
    14:53 - See you next time!
    ===============================
    This video was generously supported in part by these patrons on Patreon:
    Marshall Harrison, Michael OConnor, Mfriend.
    ===============================
    CREDITS
    The music tracks used in this video are (in order of first appearance): Icelandic Arpeggios, Checkmate, Ascending, Rubix Cube, Orient
    The track "Rubix Cube" comes courtesy of Audionautix.com
    ===============================
    The animations in this video were mostly made with a homemade Python library called "Morpho". It's mostly a personal project, but if you want to play with it, you can find it here:
    github.com/morpho-matters/mor...

COMMENTS • 432

  • @morphocular
    @morphocular  4 months ago +198

    Hey all. Just a few clarifications I'd like to make in response to comments I've seen. It seems I've had to do this a lot lately, huh? Nothing gets past you guys :)
    7:04 - Many have taken issue with the piecewise function g(x) being a satisfying example of a function that fails to equal its Taylor series because it's a "gluing" of two completely different functions, so it's natural to expect its Taylor series to behave incorrectly at the join. But the critical trait with this particular piecewise function that makes it different from most others is that the join is truly "seamless": despite being a "gluing" of two "different" functions, it's perfectly smooth (i.e. has derivatives of all orders) at the join point, which is not usually true of most piecewise functions you could construct. Because certainly if a function fails to be smooth at a point, its Taylor series will break there.
    0:05 - I was aware that many calculators do not actually employ Taylor series directly to compute sin, cos, and e^x. My intent there was just to make it as clear as possible that I'm talking about computing these functions at a truly arbitrary input (like a calculator can). In my defense, that's why I used the word "might" in that line, but I was probably asking that word to do too much work, so if I could go back, I'd rewrite that line. Apologies for any confusion!
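The "seamless join" above can be checked numerically with the classic smooth-but-not-analytic example f(x) = e^(−1/x) for x > 0 and 0 otherwise (a minimal sketch added for illustration, not code from the video): f(h) shrinks faster than any power of h, so every derivative at 0 is 0, and the Taylor series at 0 is identically zero even though f is positive for x > 0.

```python
import math

def f(x):
    # smooth but not analytic: every derivative at x = 0 equals 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

# f(h)/h**n -> 0 as h -> 0 for every n, so all derivatives at 0 vanish
h = 0.01
for n in (1, 5, 20):
    print(n, f(h) / h**n)   # tiny even for large n

# yet f is not the zero function, so it disagrees with its Taylor series at 0
print(f(1.0))  # e**-1, about 0.3679
```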

    • @Kapomafioso
      @Kapomafioso 4 months ago +6

      On a related note, a video on how these functions are actually implemented would be awesome. But that's more like computer engineering, so maybe not quite suitable for this channel. Anyway, I love your videos!

    • @patrolin
      @patrolin 4 months ago +1

      @@Kapomafioso they are implemented by polynomials - see Handmade Hero Day 440

    • @Sean-of9rs
      @Sean-of9rs 4 months ago

      Well the first point isn't really an issue, as there is the famous Fabius function, which is smooth everywhere and analytic nowhere. I don't know how, but I trust the math that it is.

    • @gigantopithecus8254
      @gigantopithecus8254 3 months ago

      @@Kapomafioso sometimes they use the AGM (arithmetic-geometric mean)

    • @typo691
      @typo691 3 months ago

      Did you mean to write unsatisfying?

  • @maxwellhunt3732
    @maxwellhunt3732 4 months ago +758

    I love Taylor's Theorem. It's one of those results that is so incredibly important, but is not at all obvious at first sight.

    • @anggalol
      @anggalol 4 months ago

      ​@@deltapi8859 Engineers use it a lot for approximation. Maybe you've already heard of sin(x) ≈ x. That is based on the Taylor series.

    • @leif1075
      @leif1075 4 months ago +6

      Isn't what he said around 6:40 not right? If you "contort" the function, it no longer is the same function anymore; it's no longer e^x except maybe in some small subsection. So isn't that wrong?

    • @joeyshi2114
      @joeyshi2114 4 months ago

      @@leif1075 what do you mean? He wanted to look at a different function with the property that a neighbourhood of points around x = 0 is the same as e^x. It illustrates that not all functions are analytic

    • @ExplosiveBrohoof
      @ExplosiveBrohoof 4 months ago +3

      Yeah, I was never taught it when I took calc in high school. I wonder if there's a nice visual proof of it somewhere on YouTube.

    • @sakshamsingh1778
      @sakshamsingh1778 4 months ago

      ​@@ExplosiveBrohoof there is a YouTube video titled "geometric interpretation of sinx= ...
      ..." from the Mathemaniac YouTube channel; you should check it out

  • @angelofdeth94
    @angelofdeth94 4 months ago +254

    One interesting thing about analytic functions is they behave more like "infinite degree polynomials" than a general smooth function. Polynomials are very rigid. If you know the value at n+1 points of a degree-n polynomial, then you know the whole polynomial. So even though it might seem like you could express a lot of different shapes with a degree-4 polynomial, it only takes 5 points to completely pin it down. There's a theorem in complex analysis that says if you know the value of an analytic function at a sequence of points and at a limit point of the sequence, then you know the analytic function everywhere. For example, if you know the values at 1/n for every natural number n, and at 0, then you uniquely determine the analytic function. In retrospect, it's kind of "obvious" that analytic functions would act like infinite-degree polynomials, because that's basically what a power series is.
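The rigidity claim above is easy to verify directly: sample a degree-4 polynomial at 5 points, and the unique interpolant through those points reproduces the polynomial everywhere (a quick sketch added for illustration; the function names are made up):

```python
def lagrange_eval(pts, x):
    # evaluate the unique interpolating polynomial through pts at x
    total = 0.0
    for i, (xi, yi) in enumerate(pts):
        term = yi
        for j, (xj, _) in enumerate(pts):
            if i != j:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

quartic = lambda x: 2*x**4 - 3*x**2 + x + 7

# 5 samples pin the degree-4 polynomial down at every other point too
pts = [(x, quartic(x)) for x in (-2, -1, 0, 1, 2)]
print(lagrange_eval(pts, 3.5), quartic(3.5))  # the two values agree
```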

    • @pyropulseIXXI
      @pyropulseIXXI 4 months ago +3

      f(x)= ax^2+bx+c
      Pretty obvious that for n = 2 you need n+1 = 3 points; I fail to see the 'insight' here. And these two statements you made are not related at all:
      (1) One might think you could express a lot of different shapes with a degree-4 polynomial;
      (2) but it only takes 5 points to completely pin it down.
      You can express lots of shapes, and having 5 points give a unique curve for those 5 points does not limit the number of shapes one can generate with a degree-4 polynomial; the limit in the shapes one can generate is due to the nature of the polynomial itself, not the fact that having n+1 points gives a unique curve for those given points.
      That is, you could have a unique curve for any set of given points, yet still be able to draw literally any shape, which is why the two statements are literally irrelevant when taken together.

    • @cparks1000000
      @cparks1000000 4 months ago +18

      ​@@pyropulseIXXI Not sure what you're trying to say.

    • @nielskorpel8860
      @nielskorpel8860 3 months ago +3

      Nice that the theorem exists in the complex plane, but does it also exist on the real line?
      Complex derivatives are much more demanding objects, making complex definitions and theorems much stronger and narrower than their real analogs.

    • @scalesconfrey5739
      @scalesconfrey5739 3 months ago +9

      @@pyropulseIXXI
      "That is, you could have a unique curve for any set of given points, yet still be able to draw literally any shape"
      Your statement is patently false. A unique curve is a unique set of points, and any shape is defined by its points.
      Even if what you meant was that you can define any region with said curve as a boundary, that still means that you can't draw those regions that have a different boundary using that curve.

    • @simenjorissen5357
      @simenjorissen5357 3 months ago

      Wow that's actually a really cool result, what's the name of this theorem?
      However, aren't there some conditions that need to be placed on the sequence of points you're evaluating in? Because the way you phrased it I could pick the sequence (aₙ)ₙ with aₙ=0 for all n, obviously the limit is also 0. So that would mean that knowing a function in 0 is enough to know the whole function

  • @user-ln2ri9nx8u
    @user-ln2ri9nx8u 4 months ago +224

    It's honestly wild how well he can explain these things using the visuals.

  • @billcook4768
    @billcook4768 4 months ago +140

    The crazy thing about analytic functions is that if you know everything that is going on in a “small” region around a point, you understand the entire function.

    • @pyropulseIXXI
      @pyropulseIXXI 4 months ago +2

      Knowing infinite derivatives is not a 'small' region; derivatives tell you how the function changes, so knowing infinite derivatives will obviously give you the exact function
      This is obvious and not crazy at all
      In fact, the moment I learned about derivatives and linear approximations, I instantly knew, via pure intuition, that if I took an 'infinite' amount of derivatives to 'approximate' the function, I would get the exact function but in polynomial infinite series form.

    • @marekkryspin8712
      @marekkryspin8712 4 months ago +28

      ​@@pyropulseIXXI Imho @billcook4768 discusses something different. What is surprising is the amount of information needed for a complete description of an analytic function. Indeed, it is sufficient to know "only" all the derivatives at a point to determine the entire function potentially over the entire real line. This means that a countable amount of local information (focused at a single point) provides a complete description of a function defined on a potentially big domain.

    • @freyc1
      @freyc1 4 months ago +33

      @@pyropulseIXXI It's so obvious that it's only true for a very particular kind of function... as the video explains perfectly. "Pure intuition" is just hasty reasoning in that case, I'm afraid.

    • @scalesconfrey5739
      @scalesconfrey5739 3 months ago +24

      @@pyropulseIXXI
      "In fact, the moment I learned about derivatives and linear approximations, I instantly knew, via pure intuition, that if I took an 'infinite' amount of derivatives to 'approximate' the function, I would get the exact function but in polynomial infinite series form."
      In that case, how do you explain bump functions? The existence of functions which have derivatives of all orders at the origin and yet fail to be analytic shoots your "intuition" out of the water. That's why mathematics relies on proof to determine truth, rather than insight and assumption.

    • @Czeckie
      @Czeckie 3 months ago +1

      it's not crazy. Analytic function is given as the taylor series, which is described precisely by the countably many numbers. Analytic functions are simple, that's why it works. What's crazy is that holomorphic = analytic. Usual proofs don't offer an intuitive reason for it.

  • @jacob_90s
    @jacob_90s 4 months ago +92

    I know this wasn't the primary point of the video, but I just wanted to note this because it's something I was very interested in when I first started programming, and I had a hard time learning it because every first-year calc student would just copy and paste the same damn explanation about Taylor series in every online forum. Most programming math libraries DO NOT use infinite series or continued fractions to calculate elementary functions (the exceptions generally being arbitrary-precision libraries). The issue with them is that they are in general too slow, and oftentimes require the intermediate calculations to be computed at a greater precision than the final result needs to be.
    Instead, when writing the math library, the developers will curve-fit either a polynomial or a rational function, which can compute the function within a certain range to the required level of precision. Additionally, identities are often used to reduce the input to a smaller range so that you don't have to try to compute the values for all possible floating-point values; the trig functions are probably the best example of this. sin and cos are defined for all x from -infinity to +infinity, but since they just repeat, you can reduce the input value into the range -2pi to +2pi (depending upon the library, sometimes it will be reduced even further). Similar tricks can be used for exponential and logarithmic functions using the layout of floating-point numbers.
    For anyone who wants to read up more on this, I would suggest
    * Approximations for Digital Computers by Hastings (1955)
    * Computer Approximations by Hart (1968)
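The curve-fitting approach described above ultimately reduces to evaluating one polynomial on the reduced range, typically with Horner's scheme. A minimal sketch (using truncated Taylor coefficients of sin as stand-ins for the minimax-fitted, library-specific coefficients a real implementation would use):

```python
import math

def horner(coeffs, x):
    # evaluate coeffs[0] + coeffs[1]*x + ... with one multiply-add per term
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# stand-in coefficients: truncated Taylor series of sin; a real library
# would use coefficients fitted to minimize error over the reduced range
SIN_COEFFS = [0.0, 1.0, 0.0, -1/6, 0.0, 1/120, 0.0, -1/5040]

print(horner(SIN_COEFFS, 0.5), math.sin(0.5))  # close on the small range
```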

    • @ryanpitasky487
      @ryanpitasky487 4 months ago +6

      CORDIC is another commonly used algorithm.

    • @Kapomafioso
      @Kapomafioso 4 months ago +6

      Wouldn't it be enough to only consider the interval [0, pi/2] for trig functions? For sin, for example, if the value is between pi/2 to pi, the values are reflected. From pi to 2pi, the values are negative of those between 0 and pi. So the whole curve can be reconstructed from just the interval 0 to pi/2.
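Those symmetries can be sketched directly (a toy illustration of the reduction idea, not a library implementation; the core approximation only needs to be accurate on [0, pi/2]):

```python
import math

def sin_reduced(x):
    # fold any input into [0, pi/2] using periodicity and symmetry
    x = math.fmod(x, 2 * math.pi)
    if x < 0:
        x += 2 * math.pi
    sign = 1.0
    if x > math.pi:            # sin(x) = -sin(x - pi)
        sign, x = -1.0, x - math.pi
    if x > math.pi / 2:        # sin(x) = sin(pi - x)
        x = math.pi - x
    # core approximation on [0, pi/2]: a short Taylor polynomial
    return sign * (x - x**3/6 + x**5/120 - x**7/5040 + x**9/362880)

print(sin_reduced(5.0), math.sin(5.0))  # agree despite 5.0 being "far" from 0
```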

    • @johannbauer2863
      @johannbauer2863 3 months ago +3

      If the square root operation has its own instruction, you can extend this further: you only need [0, pi/4] and use sin(x)^2 + cos(x)^2 = 1 to fill in the rest.
      This was used, for example, by Mario 64 modders, IIRC.

    • @TheFrewah
      @TheFrewah 2 months ago

      Fast inverse square root is pretty clever.

    • @samsamson6070
      @samsamson6070 2 months ago

      @@Kapomafioso just [0, pi/4] is enough! (Through reflection of that region of the circle over x = y)

  • @B_u_L_i
    @B_u_L_i 4 months ago +32

    THANK YOU. When I first heard about the Taylor expansions of e, sin and cos, the fact that they can be described by a polynomial exactly was so confusing to me. Like it's so random. But it makes a lot more sense now.

    • @ciceron-6366
      @ciceron-6366 4 months ago +5

      In fact it's not really a polynomial, because it has an infinite number of non-zero coefficients.
      But the infinite sum is equal.

    • @B_u_L_i
      @B_u_L_i 4 months ago +3

      @@ciceron-6366 I really, really don't give a damn.

    • @prod_EYES
      @prod_EYES 25 days ago

      @@B_u_L_i😭

  • @tylershepard4269
    @tylershepard4269 4 months ago +35

    This is a great video. Sadly we don’t use this method anymore. Bit-shifting and a very accurate representation of log(2) is the efficient route. Extend this concept and add an extra register for complex numbers. It’s been a while (5 years) since I’ve done any assembly in an x86, but if I recall these functions are built right into the hardware essentially.
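A toy version of that log(2)-based route (a sketch of the reduction idea only, under the assumption that hardware details don't matter here): write e^x = 2^k · e^r with k = round(x / ln 2), so the remaining r is small, then apply a short polynomial and put the power of two back via the float's exponent bits.

```python
import math

def exp_reduced(x):
    # reduce via e**x = 2**k * e**r, with r = x - k*ln2 and |r| <= ln2/2
    k = round(x / math.log(2))
    r = x - k * math.log(2)
    # short Taylor polynomial for e**r on the small reduced range
    er = 1 + r + r*r/2 + r**3/6 + r**4/24 + r**5/120 + r**6/720
    return math.ldexp(er, k)   # multiply by 2**k via the exponent field

print(exp_reduced(10.0), math.exp(10.0))  # agree to high relative accuracy
```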

    • @trogdorbu
      @trogdorbu 4 months ago +3

      I'm not making the connection between this and bit-shifting, although I am familiar with the latter. Can you expand on this?

    • @nuke_clear
      @nuke_clear 3 months ago +5

      ​​@@trogdorbu I am guessing it's about how calculators calculate values of e^x and other such functions at any x

    • @dkosolobov
      @dkosolobov 3 months ago +3

      This method is more relevant than it seems: Intel made a mistake in their fsin function and programmers had to implement the sine by hand usually using the Taylor expansion and a few tricks. The backward compatibility prevents a simple patch to the issue. See the article "Intel Underestimates Error Bounds by 1.3 quintillion" that explains the problem.

  • @kevj8708
    @kevj8708 4 months ago +19

    Just another casual banger video. Very quickly becoming one of my favorite YouTube channels (not just math). Keep it up, you're killing it unbelievably hard.

  • @MisterTutor2010
    @MisterTutor2010 3 months ago +17

    If anyone tells you that math is boring, just shake it off.

  • @kingbeauregard
    @kingbeauregard 4 months ago +8

    Taylor Expansions are great. For concept, I recommend this: a given function f(x) is actually built out of a bunch of polynomial terms (ax, bx^2, cx^3, etc) but it does not readily admit to what the coefficients a, b, c, etc are for the various terms. So we need to torture the function into confessing each coefficient. The method of torture that works is taking the derivative the appropriate number of times for a given polynomial term, and then setting x equal to zero. It's brutal and harrowing work, but it's also brutally efficient.
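"Torturing" a polynomial this way can be made concrete: the k-th derivative of a_k·x^k at x = 0 is k!·a_k, so dividing the derivative by k! confesses the coefficient (a toy sketch added for illustration, with a known polynomial):

```python
import math

# f(x) = 7 - 3x + 2x**3, stored by its coefficients [a0, a1, a2, a3]
coeffs = [7, -3, 0, 2]

def deriv_at_zero(coeffs, k):
    # k-th derivative at x = 0: only the x**k term survives, giving k! * a_k
    return math.factorial(k) * coeffs[k] if k < len(coeffs) else 0

# divide each confession by k! to recover the coefficient, Taylor-style
recovered = [deriv_at_zero(coeffs, k) // math.factorial(k) for k in range(4)]
print(recovered)  # [7, -3, 0, 2]
```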

  • @exotic_sphere
    @exotic_sphere 4 months ago +30

    What I personally find surprising is the effectiveness of compactly supported smooth (or continuous) functions, which are not "good" functions if you have the naive idea that the "best" possible functions are the real-analytic ones. They range from being used to prove all sorts of approximation results in function spaces, to carrying a topological vector space structure that is extremely non-trivial, to having a dual that in the end is big enough to contain all kinds of weird "functions" people got as solutions of linear PDEs via heuristic methods. Such a beautiful theory. Also a good show of how weird and different the world of complex calculus is.

  • @calmkat9032
    @calmkat9032 4 months ago +37

    This is my #1 favorite subject! All of calculus feels like a narrative, but none more so than Taylor series. The way it starts with something plain like approximating functions, then turns irrational, even transcendental, functions into these weird work-around ratios... it's just such a cool story!
    It even ends what I consider a years-long story arc in math. Since algebra 1, we learned about functions. Then we steered into the seemingly unrelated geometry. Then we alternated between algebra 2 and trigonometry. And it all comes together here at the end of calculus 2, when you turn sin(x) and cos(x) into plain algebra, and vice versa.
    And as a bonus, you learn that the taylor series of cos(x) + i*sin(x) is the same as e^x. Meaning trigonometry, algebra, and calculus all meet here. Add the cornerstone of geometry, pi, by making x=(i*pi), and boom. The one and only e^(i*pi)=-1
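For what it's worth, that last step checks out numerically via Euler's formula e^(ix) = cos x + i·sin x: summing the exponential series at z = iπ converges to −1 (a quick sketch added for illustration):

```python
import cmath
import math

z = complex(0.0, math.pi)   # z = i*pi

# partial sums of the series sum z**n / n!
term = 1 + 0j
total = 0 + 0j
for n in range(30):
    total += term
    term *= z / (n + 1)     # next term: z**(n+1) / (n+1)!

print(total)                # converges to -1 (up to rounding)
print(cmath.exp(z))         # library value of e**(i*pi), also -1
```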

    • @stephenbeck7222
      @stephenbeck7222 4 months ago +2

      In a sense, Calc 1 and 2 is an adventure in approximating functions. Tangent line approximations are learned early in the course, which is a first order technique. Euler’s method is typically introduced with basic differential equations (which may be reserved for the separate course of differential equations, but the AP Calc BC curriculum does cover it), which is iterating on the first order approach. Then Taylor series come along and extend the first order tangent lines into polynomials of however many degrees you’d care to find. Then you can take more advanced courses and blow it all up with Fourier transforms.

    • @pyropulseIXXI
      @pyropulseIXXI 4 months ago +2

      This is an insane comment, which means I love it
      For one, pi is not the cornerstone of geometry; and Euler's identity/formula is not the meeting of trig, calculus and algebra.
      You are not turning sin(x) or cos(x) into "plain algebra," and the fact you think an infinite polynomial is "plain algebra" shows that you do not understand what algebra is
      Algebra, in its most basic sense, is balance and restoration; just having a variable being _x_ does not make something algebra, or an algebra. Just having variables does not make something an algebra.
      Lastly, cos(x) + i sin(x) is NOT equal to e^x; it is equal to e^(ix)
      This should be utterly obvious, as the graph of e^x is in no way the same as the graph of cos(x) + i sin(x); one isn't even in the same realm as the other, being in the complex plane whilst e^x is obviously in the 'real' plane.

    • @pyropulseIXXI
      @pyropulseIXXI 4 months ago +1

      @@stephenbeck7222 Fourier transforms and Fourier series are not 'more advanced.' It is literally just calculus I material
      I went to college thinking I would be around smart people; turns out, I was the only one who had taught himself calculus before taking the course, and everyone else was an absolute oaf who struggled
      I was literally coming up with Taylor series on my own in Calc I, and it is utterly obvious and 100% intuitive, so I'm sick and tired of people saying this super obvious stuff is not intuitive.
      I also came up with Fourier series, as it is also utterly obvious.
      In fact, you can literally create any function with any arbitrary functions, provided you can choose the coefficients of those arbitrary functions and have an infinite amount of them.

    • @jasoncampbell1464
      @jasoncampbell1464 4 months ago +1

      @@pyropulseIXXI Yeah, you absolutely have no idea what you're talking about. You're probably good at crunching and memorising rules, and you can get somewhere with that, but that's not what you need to discover knowledge, except by accident. If you've been exposed to even 32 people in your age group and have any intuition at all for basic statistics, you'll understand how to set proper expectations of people your age. And yet you came to college expecting that everyone self-studied calculus, as if it's the only intelligent endeavour. You'll get to funny places for sure, but I'll never be impressed.

    • @pyropulseIXXI
      @pyropulseIXXI 4 months ago

      @@jasoncampbell1464
      People like you are so insecure; I went to UC Berkeley and double majored in physics and mathematics.
      I tell a story of how I derived Taylor series on my own and your reply is "I bet you just memorize rules."

  • @user-yb9ol8sz7o
    @user-yb9ol8sz7o 4 months ago +5

    Brilliant video on the error term in Taylor's theorem.
    It's NEVER assumed that a Taylor series will converge for all x, though.
    IF it converges to F(x) at a point, the next question asked is: on what interval (what neighborhood of x) does it converge to F(x)? That is, as you point out, IS there an open interval around x such that the series converges to F(x)?
    As you point out in the video, derivatives provide LOCAL information. Now, Pólya's theorem is a physical way of looking at complex functions using the DIV and CURL operators, and it can let you decide whether a function has a valid Taylor series (that means analytic) in some region/neighborhood using NON-LOCAL information. It's a physical (physics) way of doing Cauchy's theorem. Amazing really.
    Brilliant video, thank you.

  • @NoNTr1v1aL
    @NoNTr1v1aL 4 months ago +7

    Thought you were gonna dive into Schwartz's Theory of Distributions at the end there after you mentioned the bump function and its uses, then I remembered the video title and duration. Maybe it could be the topic of another video. Absolutely brilliant video! Can't wait for the next one.

  • @latarte3931
    @latarte3931 4 months ago +1

    A gem amongst all mathematical channels, thank you for the insights

  • @Audio_noodle
    @Audio_noodle 4 months ago +3

    Fantastic video, filled the gaps in understanding I had with Taylor expansion, and kinda explained why Taylor series are such a powerful tool in physics :D

  • @francescololiva5826
    @francescololiva5826 4 months ago +2

    Yes you're back with a new video! I'm going to watch it now, I know it will be great❤

  • @jojo_125
    @jojo_125 3 months ago +2

    My jaw just dropped when I saw the g(x) function! I didn't know there were smooth but not analytic functions and I'm glad to hear that this isn't possible in the complex plane. This shows once again that the complex plane is much nicer and it's just incomplete to work only within real numbers.

  • @mrtthepianoman
    @mrtthepianoman 4 months ago

    Thank you for making this video! I learned the concepts of smooth and analytic in the context of complex analysis where they are equivalent. As a result, I have always had a hard time remembering what the distinction is. This makes it clear by outlining where they diverge in the real numbers. Well done!

  • @nikkatalnikov
    @nikkatalnikov 4 months ago +2

    Great video, thank you!
    Bump functions are really important in studies of weak solutions / weak derivatives as support functions for distributions.

  • @billgatesharmikropenls
    @billgatesharmikropenls 4 months ago

    This is quickly becoming my favorite channel

  • @Kram1032
    @Kram1032 4 months ago +7

    another neat one (similar to the bump function) is the fabius function which has the property that its derivative is two rescaled copies of itself. Normally defined on the unit interval, it's also possible to extend it into a pseudoperiodic form that is positive or negative according to the Thue-Morse sequence. If you don't do so though, it's constantly zero for negative values and constantly one for any value beyond 1, and in between it takes rational values for any dyadic rational input.

  • @ciCCapROSTi
    @ciCCapROSTi 3 months ago +2

    Thanks mate, I was fascinated by Taylor series since the first semester of calculus, but forgot a lot since then. Good, concise, informative video.

  • @ericdculver
    @ericdculver 3 months ago

    Great video! I have known about examples of smooth but not analytic functions for a long time, but I did not know why they failed to be analytic. This was very illuminating.

  • @Steindium
    @Steindium 4 months ago +3

    Awesome video. I always had my doubts with the Taylor series, so it's nice to see a video addressing them. In fact, coincidentally, I was just watching a video on Euler's identity and grunted when it was another proof using the Taylor expansion.

    • @wumbo_dot_net
      @wumbo_dot_net 4 months ago

      I also struggled with them in school, something about the *why* was always missing. I also made a video about Taylor series recently if you’re interested!

  • @TheJara123
    @TheJara123 4 months ago

    Again wonderful, man!! Helps me out of my personal health pain!! Reminding me of the wonderful world of math!! Please keep posting often....you have a real unique gift for explaining complex math concepts!!

  • @erikb.celsing4496
    @erikb.celsing4496 17 days ago

    This video is completely AMAZING I am so thankful you made it!!

  • @mauisstepsis5524
    @mauisstepsis5524 1 month ago

    This is the most insightful discussion of Taylor series I have seen. Thanks a lot!

  • @jonathanbeeson8614
    @jonathanbeeson8614 3 months ago

    Just wanted to add my thanks and appreciation. My level of mathematical sophistication was well matched by your level of explanation !

  • @General12th
    @General12th 4 months ago

    Hi Morph!
    Nice writing!

  • @blitzkringe
    @blitzkringe 4 months ago

    Thanks, my struggle with the concept of complex analytic functions seemed almost hopeless until YouTube recommended me this video

  • @ashie.official
    @ashie.official 4 months ago

    that last line made me chuckle :) good video!!

  • @v_i_e_w_e_r_405
    @v_i_e_w_e_r_405 4 months ago

    great Xmas gift! thank you!

  • @GhostyOcean
    @GhostyOcean 4 months ago +86

    Complex analysis is probably my favorite subject to study. All the nasty things from real analysis get smoothed away.

    • @budderman3rd
      @budderman3rd 4 months ago +5

      Well complex is more complete than just reals.

    • @Tutor-i
      @Tutor-i 4 months ago +1

      What did you take first, complex analysis or real? I can choose to take complex or real next year but don't know which one to choose.

    • @ryanh7167
      @ryanh7167 4 months ago

      ​@@Tutor-i if you are in undergrad, your real analysis course and introductory complex analysis course (probably called something like "functions of complex variables") will be very different courses with a different focus.
      Introductory real analysis courses tend to focus on the basics of set theory, topology of metric spaces, and sequences/series of real numbers/vectors in metric spaces. Sometimes you'll get to derivatives and the beginnings of Riemann integration.
      Introductory complex functions courses tend to focus on the parts of complex analysis which can be handled with standard multivariable calculus. They'll walk you through the standard exponentials of complex functions, the basics of complex polynomials/the fundamental theorem of algebra, and then usually go towards talking about how to handle derivatives and integrals of well-behaved complex functions (functions who are equivalent to rotation and scaling in R2).
      Sorry for the novel, but I don't think you should really think of them as being in sequence for each other, because they tend to have a different purpose and focus.

    • @GhostyOcean
      @GhostyOcean 4 months ago +2

      @@Tutor-i I took complex first, but actually I took it concurrently with my intro to proofs class. Guess you could say I was smart enough to still be in the top of the class while learning proofs

    • @6funnys
      @6funnys 4 months ago +3

      Real is definitely more fundamental and will change the way you think about math, but it still kind of depends on your institution. Where I go to school, they offered a functions of a complex variable course that was in between the levels of a calculus course and an analysis course - we did a fair mix of proofs and computations. I absolutely loved that class, and it definitely came before real in the difficulty progression. But if you’ve got a lot of room in your schedule next semester, real is pretty awesome.

  • @JourneyThroughMath
    @JourneyThroughMath 2 months ago

    As a teacher, I'm familiar with Taylor series, but I have never stopped to consider how they form. Thank you for this video!

  • @_unkown8652
    @_unkown8652 4 months ago +2

    Hey morphocular! Huge fan here! I would recommend using a darker colour palette for your vids, because the pink here is a little aggressive 😅

  • @giacomocasartelli5503
    @giacomocasartelli5503 3 months ago

    Great video, complete and sound explanation of a profound concept

  • @AllemandInstable
    @AllemandInstable 4 months ago

    great video, would have loved to watch these when I was a student taking my first steps in analysis

  • @MagicGonads
    @MagicGonads 4 months ago +4

    The way it is shown to construct analytic functions from other analytic functions is a bit too vague.
    It is true that they form a ring, so addition, subtraction, and multiplication work.
    It's also true that composition works, however you have to carefully consider what happens to the domain where you can easily puncture it making it only piecewise-analytic on the original domain, and it's not as obvious as for the ring where we have the open intersection of the domains as the resulting domain.
    And specifically for division and inversion there are special conditions that need to be met and of course for inversion the domain totally changes.

  • @mr_underscore7681
    @mr_underscore7681 4 months ago +1

    This is great, love this stuff. Keep it up!

  • @joaopedrodiniz7067
    @joaopedrodiniz7067 4 months ago +1

    Wow, and once again you succeed to amaze me. Congratulations on the amazing video!

  • @coaster1235
    @coaster1235 4 months ago +3

    would also be fun to learn about pade approximants, and compare the priorities of the two approximations (at an arbitrary close neighborhood of a point vs over an interval), and why pade does better at the latter

  • @05degrees
    @05degrees 4 months ago +8

    Also, analytic functions are quite rigid: usually you can't arbitrarily define them on several intervals at once (as with exp, sin, and cos, which automatically define themselves on all of ℝ!), but non-analytic functions allow more freedom.
    Though analytic functions are still not the worst when trying to have one that’s _very much like_ zero even if not exactly zero outside a region: take the ever-present gaussian exp(−x²/2), for many purposes it’s very much zero outside, say, x ∈ [−10; +10]. Taking a larger power of x will make it even better at this, though then we’ll get a function that is less useful in many fields.
    Analytic functions are like “infinite-order polynomials” in a sense. Plain finite-order polynomials, on the other hand, are the top candidate for being the worst rigid class of functions that seems like a nice and large class at first. Has its upsides because of that, though.

  • @Procyon50
    @Procyon50 4 місяці тому +16

    The fact that combinations of analytic functions are also analytic is so cool. This reminds me of how elements of groups stay in the group, when you multiply them together. Are these concepts related?

    • @epicwalrus1262
      @epicwalrus1262 4 місяці тому +17

      Yes, analytic functions on a given domain form a ring, which is an additive group equipped with a multiplication (but not necessarily division)

    • @schweinmachtbree1013
      @schweinmachtbree1013 4 місяці тому +4

      Things staying in a given set when applying an operation (your 2 examples - 1: the set of analytic functions mapping into itself when applying addition, subtraction, multiplication, composition, etc., and 2: the set of elements of a group mapping into itself when applying the group multiplication) is called "the set being _closed_ under the operation". With this terminology your 2 examples are "(the set of) analytic functions being closed under +, -, *, ∘, etc." and "(the underlying set of) a group being closed under the group multiplication".

    • @schweinmachtbree1013
      @schweinmachtbree1013 4 місяці тому

      ​@@epicwalrus1262 To clarify a little, a ring _R_ is an abelian group _A_ = ( _A_ ; +, 0, -) (0 and - being the identity element and inverse operation for +) together with an associative multiplication × distributing over it (that is, (a×b)×c = a×(b×c) for all a,b,c in _A_ and also a×(b+c) = a×b + a×c and (a+b)×c = a×c + b×c for all a,b,c in _A_ ). Depending on where rings are being used, × is sometimes required to have an identity element, denoted 1, such rings being called "rings with 1", "rings with identity", or "unital rings".
      An example of a unital ring is yours, analytic functions on a domain _D_ (or any subset of *C* ), for which the standard notation is _R_ = C^ω( _D_ ), with multiplicative identity being the constant function _f_ ( _x_ ) = 1, and an example of a non-unital ring being the analytic functions on a domain _D_ with compact support (using the precise definition of a "domain" in complex analysis: a non-empty connected open subset of *C* ), denoted _R_ = C^ω_K( _D_ ): now the constant function _f_ ( _x_ ) = 1 is excluded by definition (since the support of _f_ is all of _D_ , but _D_ is open so not compact) and _R_ has no multiplicative identity.
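The closure under multiplication discussed in this thread has a very concrete face: the Taylor coefficients of a product of two analytic functions come from the Cauchy product of the factors' coefficient sequences. A minimal sketch in Python (an illustration with made-up names, not from the thread):

```python
import math

# Truncated Taylor coefficients (around 0) of exp(x) and sin(x).
N = 12
exp_c = [1.0 / math.factorial(n) for n in range(N)]
sin_c = [0.0 if n % 2 == 0 else (-1.0) ** ((n - 1) // 2) / math.factorial(n)
         for n in range(N)]

def cauchy_product(a, b):
    """Taylor coefficients of f*g via the Cauchy product: the concrete
    form of 'analytic functions are closed under multiplication'."""
    n = min(len(a), len(b))
    return [sum(a[k] * b[m - k] for k in range(m + 1)) for m in range(n)]

def eval_poly(c, x):
    """Evaluate a truncated power series at x."""
    return sum(ck * x ** k for k, ck in enumerate(c))

# The truncated product series approximates exp(x)*sin(x) near 0.
prod_c = cauchy_product(exp_c, sin_c)
approx = eval_poly(prod_c, 0.3)
exact = math.exp(0.3) * math.sin(0.3)
```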

    • @drdca8263
      @drdca8263 4 місяці тому

      @@epicwalrus1262a commutative group under addition, and closed under a multiplication operation, where that multiplication distributes over the addition, and is associative.
      Typically one also requires that there be a multiplicative identity, but I think some people don’t require that? But most people give a different name to the version of the idea without that requirement.

    • @jorgenharmse4752
      @jorgenharmse4752 2 місяці тому

      @@epicwalrus1262: Extend to meromorphic functions, and then you can do division. (Weierstrass factorisation implies that the field of meromorphic functions on a connected open set 'is' the field of fractions of the ring of holomorphic functions.)

  • @yds6268
    @yds6268 4 місяці тому +25

    Most calculators or computers don't use Taylor series for trig functions and the exponential. It's very inefficient.

    • @ryanpitasky487
      @ryanpitasky487 4 місяці тому +3

      CORDIC!

    • @justafanoftheguywithamoust5594
      @justafanoftheguywithamoust5594 4 місяці тому +1

      Then what do they use?

    • @fullfungo4476
      @fullfungo4476 4 місяці тому +3

      @@justafanoftheguywithamoust5594 Some use lookup tables with further approximation techniques like Newton’s method.

    • @yds6268
      @yds6268 4 місяці тому

      @@ryanpitasky487 exactly, the CORDIC algorithm, amazing invention
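For the curious, the CORDIC idea mentioned above can be sketched in a few lines of Python. This uses floating point for readability; real implementations work in fixed point, where each micro-rotation is just a shift and an add, and this is only an illustration:

```python
import math

# Minimal CORDIC (rotation mode) for sin/cos of an angle in [-pi/2, pi/2].
N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
# The total CORDIC "gain" is prod(sqrt(1 + 2^-2i)); starting the vector
# at length 1/gain makes the final vector land on the unit circle.
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sincos(theta):
    """Rotate (K, 0) toward angle theta by successive micro-rotations."""
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0.0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x  # (sin(theta), cos(theta))
```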

  • @that_guy4690
    @that_guy4690 4 місяці тому

    Thank you for your video. It made me view the Taylor series from a new perspective

  • @yanceyward3689
    @yanceyward3689 8 днів тому

    An absolutely wonderful video of a concept I was wrestling with just a few weeks ago.

  • @cdenn016
    @cdenn016 4 місяці тому +6

    As a PhD physicist, I greatly appreciate this point

  • @Waffle_6
    @Waffle_6 4 місяці тому

    please never stop making videos

  • @mustafizurrahman5699
    @mustafizurrahman5699 3 місяці тому

    Superb mesmerising....cannot thank you more for such lucid explanation

  • @dariomartinezmartinez5422
    @dariomartinezmartinez5422 2 місяці тому

    I absolutely love your videos. I'm studying maths and I find your videos crystal clear. Thank you very much for making such good content!

  • @remekstepaniuk7820
    @remekstepaniuk7820 2 місяці тому

    I could listen to this guy explain all of mathematics to me. From axioms to complex functions, to topology and geometry ❤

  • @tangentfox4677
    @tangentfox4677 3 місяці тому

    I love that you describe the bump function as useful because it's not too bumpy.

  • @jackpisso1761
    @jackpisso1761 4 місяці тому

    Very nice presentation. Thank you!

  • @1timoasif
    @1timoasif Місяць тому

    Need a Complex Analysis video after this one 🙏

  • @danielbautista7
    @danielbautista7 4 місяці тому

    Thanks for refining my intuition

  • @Jaylooker
    @Jaylooker 4 місяці тому +3

    Holomorphic functions under more conditions are automorphic forms like modular forms.

    • @MagicGonads
      @MagicGonads 4 місяці тому +1

      they don't call them conformal maps for nothing

  • @jimi02468
    @jimi02468 4 місяці тому +1

    This channel is like 3b1b with a different voice and I love it

  • @Calcprof
    @Calcprof 3 місяці тому

    The numerical evaluation of transcendental functions is a fascinating field, and there are many subtleties. I particularly like the use of asymptotic but divergent series. Also, rational function approximations can be used and can converge "past" (on the other side of) singularities (poles).

  • @ElchiKing
    @ElchiKing 3 місяці тому

    An addition to 12:00: this holds not only for composition (concatenation), inversion, division, multiplication, addition, and subtraction, but I think also for "most" solutions of equations:
    Suppose we have some equation of the form f(x,y)=0 and let f be analytic if we fix either x or y. Suppose further, that we have some solution (x0,y0) such that df/dy(x0,y0) is nonzero. Then the implicit function theorem tells us, that we can locally describe the set of solutions around (x0,y0) as the graph of a function g, i.e. near (x0,y0), all points satisfying the equation f(x,y)=0 have the form (x,g(x)). Furthermore, the same theorem also tells us that g is differentiable near x0 and if f is analytic, then so is g (at least in a small neighborhood of x0).

  • @tryingintrovert1239
    @tryingintrovert1239 4 місяці тому

    We learnt to approximate the value of the sine function in our engineering programming class using Taylor series. It was a very interesting experience

  • @matteovassallo568
    @matteovassallo568 Місяць тому

    Great work!

  • @anmoldesai6022
    @anmoldesai6022 2 місяці тому

    Hey! Love the video. I was also studying Laurent series and wanted to know if it solves exactly the problem you explained. Thanks!

  • @michaelthompson5396
    @michaelthompson5396 3 місяці тому

    great explanation. very helpful.

  • @fireballman31
    @fireballman31 4 місяці тому

    Great video: presented at the perfect level and reminded me of some of your best videos.

  • @ffdm
    @ffdm 4 місяці тому

    Wow. Your channel is FANTASTIC

  • @anketmohadikar8767
    @anketmohadikar8767 22 дні тому

    Very well explained,great video;)

  • @deanrumsby
    @deanrumsby 4 місяці тому

    Great video and such great animations! :)

  • @anvayjain4100
    @anvayjain4100 11 днів тому

    Gonna watch every second of the sponsored segment cuz the rest of the video is worth it.

  • @blueheartorangeheart3768
    @blueheartorangeheart3768 3 місяці тому +1

    I was just explaining to a student how the Taylor series worked, and I realized I had no idea WHY it worked. Then I remembered this video showing up on my timeline

  • @haidarhaidar9092
    @haidarhaidar9092 3 місяці тому

    Thanks man that's a lot of real wisdom ❤

  • @droidcelestial
    @droidcelestial 2 місяці тому

    Morphocular... more like, spectacular!

  • @frankreashore
    @frankreashore 3 місяці тому

    Fantastic video. Loved it!

  • @bradzoltick6465
    @bradzoltick6465 4 місяці тому

    Wonderful presentation.

  • @TheFrewah
    @TheFrewah 2 місяці тому

    The Mathologer channel has a good video showing how these series work. It's beautiful! Maybe 3Blue1Brown as well. Animations are beautiful and open source

  • @FredericoKlein
    @FredericoKlein 4 місяці тому +1

    I remember getting really spooked about this in college, thinking that if you could know every derivative of a path, you could calculate it into the future, and that future information would somehow be hidden in higher-order derivatives. I kinda forgot about this, but I think it has to do with my incomplete understanding of Taylor's theorem and the limitations of Taylor expansions

  • @cloud_222
    @cloud_222 4 місяці тому +1

    Amazing video!

  • @GrandMoffTarkinsTeaDispenser
    @GrandMoffTarkinsTeaDispenser 4 місяці тому +1

    Great video thank you.

  • @tomkerruish2982
    @tomkerruish2982 4 місяці тому +2

    I'd speculate that the reason analytic functions show up so much is because we're using them to solve (relatively simple) differential equations.

  • @xTriplexS
    @xTriplexS 4 місяці тому

    Needed to write the algorithm for this today. It's nice that google is listening to everything 🙃

  • @whatitmeans
    @whatitmeans 3 місяці тому

    Nice video. Recently I found out about the existence of smooth bump functions and how they aren't analytic. But there is much more that could be said about the failure of Taylor expansions:
    I learned that, due to the Identity Theorem, no non-piecewise-defined power series can match a constant value on an interval of non-zero measure, which means that an analytic function simply cannot represent a phenomenon with a finite duration (i.e. a finite extinction time).
    As an example, the differential equation
    x' = -sgn(x) sqrt(|x|), x(0) = 1
    has the unique solution
    x(t) = 1/4 (1 - t/2 + |1 - t/2|)^2
    which becomes exactly zero for t >= 2.
    No power series can approximate this simple solution for all t.
    This means that no 1st- or 2nd-order linear ODE, nor non-linear ODEs like Bessel's and others with power series solutions, can represent a finite extinction time: to do so, the ODE must have a non-Lipschitz point in time where uniqueness can be broken (so it must admit singular solutions).
    This could have deep meaning in physics: how can you accurately describe what "time" means if your models don't even know when the clock stopped ticking?
    Think about it. More on MathStackExchange under the tag [finite-duration]
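The closed form in that comment is easy to check numerically: differentiate it with a finite difference away from the kink at t = 2 and compare against the right-hand side. A quick sketch in Python (the function names are mine):

```python
import math

def x(t):
    """Proposed solution x(t) = 1/4 * (1 - t/2 + |1 - t/2|)^2."""
    u = 1.0 - t / 2.0
    return 0.25 * (u + abs(u)) ** 2

def rhs(xv):
    """Right-hand side of the ODE: x' = -sgn(x) * sqrt(|x|)."""
    s = (xv > 0) - (xv < 0)
    return -s * math.sqrt(abs(xv))

# Central-difference check of x' == rhs(x) at points before the kink,
# plus the initial condition and the exact-zero tail.
h = 1e-6
for t in [0.0, 0.5, 1.0, 1.9]:
    deriv = (x(t + h) - x(t - h)) / (2 * h)
    assert abs(deriv - rhs(x(t))) < 1e-5
assert x(0.0) == 1.0
assert x(2.0) == 0.0 and x(3.0) == 0.0  # finite extinction time at t = 2
```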

  • @thejoojoo9999
    @thejoojoo9999 Місяць тому

    Great Video !

  • @patturnweaver
    @patturnweaver 4 місяці тому

    wonderful.
    i now see why analytic functions are such a big deal.
    makes it much easier to find an approximating function for one thing.
    i also see the advantages of working in the complex domain.
    if a function has derivatives for all n on an open domain, then the function is analytic.
    if a function is smooth, then it's analytic.
    if a function has a first derivative, then it has a derivative for all n.

  • @hqTheToaster
    @hqTheToaster 3 місяці тому

    I have the same notion. I used Taylor Series to approximate how I figure characters should be scaled to each other in Dreams (a game) to mimic how characters are aligned in Smash Bros. Series from 'one canon height structure' to another, but eventually, I had to settle for making a table of possible canon non-Smash Bros. heights to convert to canon Smash Bros. heights simply because the measurements like to fudge each other. Great video!

  • @undisclosedmusic4969
    @undisclosedmusic4969 4 місяці тому

    Padé approximants next please! ❤

  • @LegendLength
    @LegendLength 4 місяці тому

    Glad to finally know what holomorphic means after seeing it so much on wiki!

  • @hitarthk
    @hitarthk 2 місяці тому

    It's so cute that you make sure people don't hate non analytic functions by showing their utility ❤

  • @Sofialovesmath
    @Sofialovesmath 4 місяці тому +1

    Amazing!

  • @arielwen8040
    @arielwen8040 3 місяці тому

    explain a lot!!!! thanks!

  • @tap9095
    @tap9095 4 місяці тому +2

    One smooth but not analytic function that I like is the Fabius function. It has the properties that f(0)=0, f(1)=1, and all derivatives at 0 and 1 = 0. So it works like the smoothest possible step function. But it's not analytic so you can't really compute values for arbitrary points. It also has a fun functional differential equation, f'(x)=2f(2x).

  • @JeanYvesBouguet
    @JeanYvesBouguet 4 місяці тому +1

    This is truly one awesome video. I have always loved complex analysis and this might be the best introduction video to analytic functions and its relationship with smooth functions (or C-infinity). I have always been fascinated by this concept of smooth but not analytic. Thank you!

  • @TheIllerX
    @TheIllerX 3 дні тому

    This information issue is part of a much larger general question about how much local information can be used to predict the behavior at other points.
    All information about an analytic function is contained at each point of the function.
    A function on the real line constructed where you just pick a random function value for each argument value would be the opposite. The information at each point says absolutely nothing about the value at other points. Most functions are in between those extreme cases.
    It would be interesting to quantify this information dependence between points in some general way.

  • @user-qz6zu6ir4e
    @user-qz6zu6ir4e 4 місяці тому

    thank you so much

  • @MattMcIrvin
    @MattMcIrvin 4 місяці тому

    Bump functions are important in differential geometry--they're what allow us to define maps between any smooth manifold and a set of coordinate charts that are like Euclidean space (like a set of flat maps covering the geography of the round Earth), which don't disturb any of the derivatives of functions on the manifold. The bump functions define the cross-fade from one coordinate chart to another. That's useful for proving things.
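The standard bump, and the smooth-step cross-fade built from the same trick, are short enough to write down. A sketch in Python (the names are illustrative, not standard API):

```python
import math

def bump(t):
    """Standard smooth bump: exp(-1/(1 - t^2)) inside (-1, 1), zero outside.
    Infinitely differentiable everywhere, but not analytic at t = +/-1."""
    if abs(t) >= 1.0:
        return 0.0
    return math.exp(-1.0 / (1.0 - t * t))

def crossfade(t, a, b):
    """Smooth step from 0 to 1 over [a, b], using g(s) = f(s)/(f(s)+f(1-s))
    with f(s) = exp(-1/s) for s > 0 and f(s) = 0 otherwise. All derivatives
    vanish at both ends, so glued functions stay smooth."""
    f = lambda s: math.exp(-1.0 / s) if s > 0 else 0.0
    s = (t - a) / (b - a)
    s = min(max(s, 0.0), 1.0)
    return f(s) / (f(s) + f(1.0 - s))
```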

  • @kavinbala8885
    @kavinbala8885 4 місяці тому +1

    great video on Taylor series
    though this isn't the method computers use to calculate sin x
    they use an approximation for sin x from 0 to pi/2, then just flip it into position for any other value of x
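That fold-into-a-quarter-period trick looks roughly like this. A sketch in Python, with a Taylor polynomial standing in for the minimax polynomial a real math library would use (real libraries also do the reduction far more carefully for huge arguments):

```python
import math

def sin_reduced(x):
    """Range reduction for sin: fold x into [0, pi/2] using symmetry,
    then evaluate a short polynomial there."""
    sign = 1.0
    r = math.fmod(x, 2 * math.pi)   # reduce to (-2*pi, 2*pi)
    if r < 0:
        r += 2 * math.pi            # now in [0, 2*pi)
    if r > math.pi:                 # sin(r) = -sin(r - pi)
        r -= math.pi
        sign = -sign
    if r > math.pi / 2:             # sin(r) = sin(pi - r)
        r = math.pi - r
    # Degree-11 Taylor polynomial, nested Horner form; plenty accurate
    # on [0, pi/2] for illustration purposes.
    r2 = r * r
    p = r * (1 - r2/6 * (1 - r2/20 * (1 - r2/42 * (1 - r2/72 * (1 - r2/110)))))
    return sign * p
```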

  • @user-ng3ps6vd6u
    @user-ng3ps6vd6u 2 місяці тому

    Analytic functions being a rarity reminds me of how among the ocean of all real numbers we can do pretty much fine with just the whole or rational ones even though calling them a rarity would be an understatement.

  • @KipIngram
    @KipIngram 19 днів тому

    Calculators and computers generally do trig functions using the Cordic algorithms, which are somewhat more specialized than plain Taylor series, and involve some tabulated "special values."

  • @AlessandroZir
    @AlessandroZir Місяць тому

    look, this is one of the best videos I've found about this very fundamental topic, which it seems most mathematicians and engineers are just incapable of explaining conceptually; but it gets confusing again at some point (4:13)! the procedure shouldn't work, ok! but the point is that it actually works;

    • @gperm4941
      @gperm4941 Місяць тому +1

      The point is that it doesn't work for most functions, only a subset of functions

    • @AlessandroZir
      @AlessandroZir Місяць тому

      @@gperm4941 yes, but for this small subset it works quite well, and it is important to state this clearly enough;

  • @pfeilspitze
    @pfeilspitze 4 місяці тому +2

    TBH, I was expecting something about how we *compute* them -- like whether using the taylor series in the obvious way with IEEE floating-point numbers actually converges to the best representable answer (with ±½ULP error).
    This video didn't really have anything about computing them at all.

    • @lifthras11r
      @lifthras11r 4 місяці тому +1

      That is however much harder to explain and arguably too computer-centric. I pretty much enjoyed the entire video even though I realized it's nothing to do with computation (and I forgot the Taylor expansion had the remainder term for a very long time).
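One can poke at the parent comment's question directly. A rough Python sketch (not a careful ULP analysis): naive Taylor summation of e^x in doubles is excellent for moderate x, but catastrophic cancellation between huge alternating terms ruins it for large negative x, which is one reason libraries don't compute it this way:

```python
import math

def exp_taylor(x, tol=1e-18):
    """Naively sum the Taylor series of e^x in double precision until
    the next term is negligible relative to the running sum."""
    term, total, n = 1.0, 1.0, 0
    while abs(term) > tol * abs(total):
        n += 1
        term *= x / n       # term = x^n / n!
        total += term
    return total

# Relative errors: tiny for x = 1, huge for x = -30 because the partial
# sums swing through magnitudes ~1e11 while the answer is ~1e-13.
good = abs(exp_taylor(1.0) - math.e) / math.e
bad = abs(exp_taylor(-30.0) - math.exp(-30.0)) / math.exp(-30.0)
```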