Newton's Method: How to Compute Pretty much Anything

  • Published 30 Aug 2019
  • In this video we'll explore Newton's Method. It is a technique for solving equations of a single variable. It is very widely used, and extremely powerful.
    Support What's a Creel? on Patreon:
    / whatsacreel
    FaceBook:
    / whatsacreel
    Music Channel:
    / @creelsmusic5814
    Another channel with random things:
    / @ftlol6091
    Software used to make this vid:
    Visual Studio 2019 Community:
    www.visualstudio.com/downloads/
    Blender:
    www.blender.org/
    OBS:
    obsproject.com/
    Davinci Resolve 16:
    www.blackmagicdesign.com/prod...
    Meshroom:
    alicevision.github.io/

COMMENTS • 145

  • @mike200017 · 3 years ago (+119)

    Speaking from about a decade of experience writing numerical code professionally, I can say that Newton's method is too unreliable and complex to be useful in practice.
    1) This method can very often diverge entirely (i.e., "x" bounces off to infinity) for a bad function or initial guess. So, that needs to be guarded against (generally, by falling back to a Secant method).
    2) The division comes with numerical stability issues, as any floating point division does, which has to be dealt with, leading to branches and alternatives to be used (generally, by falling back to a Secant method).
    3) In most practical use-cases, the precision of the solution that could be achieved is limited by the precision of the input data (the terms in the equation being solved), and that is rarely more than a few digits, making the quadratic convergence rate not that beneficial, really.
    4) For complex functions, the derivative can be much more expensive to compute than the function itself, meaning that you can afford more iterations and a slower convergence (linear vs quadratic) if it avoids computing the derivative.
    Long story short, a Secant method ended up being used in nearly every real-world application I've encountered, because it's numerically stable, guaranteed to converge, simple, efficient and has a completely predictable level of precision, which is incredibly important in downstream calculations.
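
    A minimal sketch of the guard-and-fall-back pattern this comment describes (C++; all names here are made up for illustration): take the Newton step while the derivative is usable, otherwise substitute a secant step built from the last two iterates.

    #include <cmath>

    // Illustrative only: Newton iteration that falls back to a secant step
    // when the derivative is too small to divide by safely.
    double guarded_newton(double (*f)(double), double (*df)(double),
                          double x0, int maxIter = 50, double tol = 1e-12) {
        double xPrev = x0, fPrev = f(xPrev);
        double x = x0 + 1e-4;                 // second point for the secant fallback
        for (int i = 0; i < maxIter; ++i) {
            double fx = f(x);
            if (std::fabs(fx) < tol) break;   // residual is small enough
            double d = df(x), step;
            if (std::fabs(d) > 1e-12)
                step = fx / d;                // ordinary Newton step
            else if (fx != fPrev)
                step = fx * (x - xPrev) / (fx - fPrev);   // secant fallback
            else
                break;                        // no usable slope at all
            xPrev = x; fPrev = fx;
            x -= step;
        }
        return x;
    }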

    • @julelemaitre · 3 years ago (+21)

      Very interesting, but all I know is that the fast inverse square root function in Quake uses Newton's method, so I'll be keeping that one è_é
      Jokes aside, thanks for your comment. I didn't know about the Secant Method; now I do.

    • @SerBallister · 3 years ago (+14)

      @@julelemaitre Maths used for computer graphics can afford to be inaccurate, and it often is (e.g. 16-bit floating point is perfectly fine for storing colour).

    • @mrsudarshan6249 · 3 years ago (+5)

      A decade of experience writing numerical code professionally - and no alternatives to Newton's method?

    • @Salien1999 · 2 years ago (+1)

      Isn't the secant method just a finite-difference form of Newton's method? Why wouldn't it inherit all of the problems Newton's method has?

    • @wile9763 · 2 years ago (+9)

      Actually, Newton's method finds wide use in solving large systems of nonlinear equations which arise, e.g., in solving a semi-discretized PDE system for physics simulation. I'll address each concern individually:
      1. It is true that choosing a good (read: convergent) initial guess is in many cases non-trivial. However, it is sometimes possible to lean on intuition to achieve good results in practice. For example, when using Newton's method in an implicit time integrator to solve for the solution at the current time step, it may be a good idea to choose the initial guess to be the solution at the previous time step.
      2. It is true that for 1-D equations the division can cause problems, especially since in some applications the derivative tends to zero as the method converges. However, many applications of Newton's method consider finding roots of many equations simultaneously. In this scenario, there is no division as the derivative instead becomes a Jacobian matrix and the application of Newton's method produces a linear system which must be solved by some external algorithm.
      3. This problem is not unique to Newton's method and can't truly be considered a weakness of it. Any method which depends on the precision of its inputs can be expected to perform poorly when sufficient precision can't be supplied.
      4. This is actually often true of many functions - the gradient is often more computationally expensive to obtain. There is some intuition as to why, especially for multi-dimensional functions: more outputs need to be computed for the gradient than for the function. I might argue, however, that computing the gradient of a black-box function using complex arithmetic can in general be more numerically stable than using real arithmetic alone. For a real black-box function, a finite-difference approximation of the gradient is usually required, but that algorithm is subject to "subtractive cancellation", which makes choosing a delta in your inputs non-trivial (the optimal delta turns out to depend on the second derivative). The cousin of finite differences in complex arithmetic, the complex-step algorithm, can compute real gradients more efficiently and accurately than finite differences: there is no subtractive cancellation (see the sketch after this comment).
      Still, Newton's method is not perfect. It is based on a linearization of the function via a truncation of the Taylor series. It may very well be possible that the truncated terms are significant in the neighborhood of the current iterate, but we simply discard them. Another issue is the assumption that a (real-valued) solution to the equation exists. Many nonlinear equations don't admit such solutions. Consider, e.g., finding the roots of f(x) = x^2 + 1; only complex roots exist.
      Finally, nonlinear equations are incredibly difficult to solve in general, especially when presented as a black box. Nearly every solution method will apply a similar approach as Newton's method: 1.) Assume the function is sufficiently flat (linear) in the neighborhood of an iterate, 2.) solve the linear equation and 3.) repeat until convergence.
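
      For reference, a tiny sketch of the complex-step trick mentioned in point 4 (C++; the function is the video's example, everything else here is an assumption):

      #include <cmath>
      #include <complex>
      #include <cstdio>

      // f evaluated on complex inputs; since it is real-analytic,
      // f'(x) ~= Im(f(x + ih)) / h with no subtractive cancellation.
      std::complex<double> f(std::complex<double> x) {
          return std::cos(7.0 * x * x * x - 2.0 * x);
      }

      int main() {
          double x = 1.0, h = 1e-30;   // h can be tiny, unlike finite differences
          double deriv = std::imag(f({x, h})) / h;
          // analytic check: d/dx cos(7x^3 - 2x) = -(21x^2 - 2) sin(7x^3 - 2x)
          double exact = -(21.0 * x * x - 2.0) * std::sin(7.0 * x * x * x - 2.0 * x);
          std::printf("complex-step %.15f vs exact %.15f\n", deriv, exact);
      }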

  • @mike-barber · 4 years ago (+13)

    Worth giving a shout out to the good ol' Secant Method too -- pretty useful when you don't have a derivative handy.
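
    A minimal secant sketch for comparison (C++, illustrative names): the same idea as Newton's method, but the derivative is replaced by the slope through the last two guesses, so only f itself is needed.

    #include <cmath>

    double secant(double (*f)(double), double x0, double x1,
                  int maxIter = 100, double tol = 1e-12) {
        double f0 = f(x0), f1 = f(x1);
        for (int i = 0; i < maxIter && std::fabs(f1) > tol; ++i) {
            if (f1 == f0) break;                          // flat: no slope to divide by
            double x2 = x1 - f1 * (x1 - x0) / (f1 - f0);  // slope replaces f'(x)
            x0 = x1; f0 = f1;                             // shift the window of guesses
            x1 = x2; f1 = f(x1);
        }
        return x1;
    }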

  • @kpjVideo · 4 years ago (+27)

    Fantastic video, thank you for the explanation! Totally forgot about how powerful this method was

    • @WhatsACreel · 4 years ago (+1)

      Cheers brus! Certainly is powerful, thanks for watching mate :)

  • @Videofiziert · 4 years ago (+12)

    I learned about it in great detail in school but it faded a bit; this was a great refresher. Appreciate the info about other methods and convergence performance at the end, very interesting.

  • @clintgalendez · 4 years ago (+50)

    "If we STD this bad boy..." *Sweats

  • @leonhrad · 4 years ago (+21)

    16:34 makes sense that it only works for continuous functions. The method requires a derivative of the function, so the function has to be differentiable which implies that it has to be continuous.

    • @WhatsACreel · 4 years ago (+6)

      Great mate! Cheers for watching :)

    • @zanzi8597 · 2 years ago (+1)

      It only works for differentiable functions, since almost all continuous functions are not differentiable at any point.

  • @ivi9502 · 4 years ago (+5)

    I took Real Analysis this past spring semester, actually studied Newton's Method in depth, and had to do a small presentation on it. I ended up writing a program to go along with the presentation. While writing the program I made a few preset functions which changed depending on the option chosen by the user (option 1 is preset 1, etc). In doing so I had to find a way to compute the derivative for each function, and ended up defining a variable h = 10^-5 and using [f(x+h) - f(x)]/h as the derivative. This actually made it a lot faster as I didn't have to calculate the derivative by hand, but instead had the computer do it. It also let me have 10 presets without much work other than setting the preset functions.
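
    A sketch of that setup (C++; the preset and all names are made up), with the derivative approximated numerically so each preset only needs f itself:

    #include <cmath>
    #include <cstdio>

    double newton_numeric(double (*f)(double), double g,
                          int iterations = 20, double h = 1e-5) {
        for (int i = 0; i < iterations; ++i) {
            double slope = (f(g + h) - f(g)) / h;   // forward-difference derivative
            if (slope == 0.0) break;                // avoid dividing by zero
            g -= f(g) / slope;                      // ordinary Newton update
        }
        return g;
    }

    int main() {
        // example preset: x^2 - 10 = 0, the video's sqrt(10) demo
        double root = newton_numeric([](double x) { return x * x - 10.0; }, 3.0);
        std::printf("%f\n", root);                  // ~3.162278
    }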

    • @WhiterockFTP · 4 years ago (+1)

      Have a look at automatic differentiation (AD) by use of, e.g., dual numbers.
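
      A toy version of that idea (C++, illustrative): a dual number carries a value and a derivative through every operation, so evaluating f on {x, 1} yields f(x) and f'(x) exactly, with no step size to tune.

      #include <cmath>
      #include <cstdio>

      struct Dual { double v, d; };   // value and derivative travel together
      Dual operator+(Dual a, Dual b) { return {a.v + b.v, a.d + b.d}; }
      Dual operator-(Dual a, Dual b) { return {a.v - b.v, a.d - b.d}; }
      Dual operator*(Dual a, Dual b) { return {a.v * b.v, a.d * b.v + a.v * b.d}; }  // product rule
      Dual cos(Dual a) { return {std::cos(a.v), -std::sin(a.v) * a.d}; }             // chain rule

      int main() {
          Dual x{1.0, 1.0};                // seed: dx/dx = 1
          Dual y = cos(x * x * x);         // f(x) = cos(x^3)
          std::printf("f = %f, f' = %f\n", y.v, y.d);   // f' = -3x^2 sin(x^3)
      }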

  • @Hjominbonrun · 4 years ago (+17)

    cos(phi) = 0 solves to phi = Pi/2 + 2nPi or 3Pi/2 + 2nPi (together: Pi/2 + nPi), which is a sequence.
    Newton's method is great for finding solutions where there is one intersection with the x axis.

  • 4 years ago (+22)

    Instead of doing a fixed number of iterations, you should simply check if "FF" is below some given threshold and then just break out of the loop. No point in calculating the 15th decimal place, if you're displaying only 6 or so.

    • @INT41O · 4 years ago (+9)

      Better: Check the amount that g changes in each iteration instead of FF, because you are usually interested in the accuracy of the solution, not the residuum.

    • 4 years ago (+4)

      Yeah, that's exactly what I said. FF is the change in g after each iteration.

    • @INT41O · 4 years ago (+1)

      @ No it's not lol. Not according to 13:25

    • @WhatsACreel · 4 years ago (+9)

      Yes, that's very true! The only advantage to small fixed iterations I can think of is SIMD. We would almost always employ epsilon in practice :)
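
      Putting this thread's suggestions together, a sketch (C++; names follow the video's g and FF, the rest is assumed): stop when either the residual or the change in the guess drops below a tolerance, with a fixed cap as a safety net.

      #include <cmath>

      double newton_eps(double (*f)(double), double (*df)(double),
                        double g, double eps = 1e-10, int cap = 100) {
          for (int i = 0; i < cap; ++i) {
              double FF = f(g);              // residual, as in the video
              double step = FF / df(g);      // change in g this iteration
              g -= step;
              if (std::fabs(FF) < eps || std::fabs(step) < eps)
                  break;                     // either test can end the loop
          }
          return g;
      }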

  • @ButtKickington · 4 years ago (+12)

    I discovered this when I was playing with the gmp library and calculated the square root of two to a million places. When you're calculating the square root, the g variable you use should be an approximation or guess. A decent approximation for square roots is to bit shift half the bits off the number. It can be very useful for large numbers.
    13 = 0b1101.
    0b1101 >> 2 = 0b0011 = 3.
    3*3 = 0b1001 = 9.
    3*3 < 13 < 4*4, so 3 < sqrt(13) < 4
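
    A 64-bit sketch of that trick (C++; gmp versions work the same way, just with big integers): a power-of-two variant of the comment's shift, chosen so the first guess is never an underestimate, followed by the integer Newton step for x^2 - n = 0.

    #include <cstdint>

    uint64_t isqrt(uint64_t n) {
        if (n < 2) return n;
        int bits = 0;
        for (uint64_t t = n; t != 0; t >>= 1) ++bits;   // count significant bits
        uint64_t x = uint64_t(1) << ((bits + 1) / 2);   // half the bits, rounded up,
                                                        // so x >= sqrt(n) to start
        uint64_t y = (x + n / x) / 2;                   // Newton step for x^2 - n = 0
        while (y < x) {                                 // steps decrease monotonically
            x = y;
            y = (x + n / x) / 2;
        }
        return x;                                       // floor(sqrt(n))
    }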

    • @joaq1999 · 3 years ago

      I'm thinking: the other day I wrote a program to get the full list of prime factors of a number. If the algorithm is optimized it can actually be done very efficiently (you never need to test factors above the sqrt of the number, etc.). So I'm just putting the idea out there, not sure if it works, but for big numbers maybe getting prime factors could be a good way to get the approximation number... I think it makes sense.

    • @ButtKickington · 3 years ago (+1)

      @@joaq1999 I actually used this idea of getting the square root of large numbers so that I could generate large prime numbers by only checking previously found prime numbers below the sqrt as factors.

    • @arnoldvanhofwegen2255 · 3 years ago

      @@ButtKickington You have to make sure that you found all primes below that sqrt. For example, if you generate a bigger prime by multiplying all known primes and adding 1, you can easily skip a prime; this already happens with the first 2 primes: 2 * 3 + 1 = 7, 7 is prime but you missed 5. Your big prime may not be prime after all if you did not test against all primes below the sqrt.

  • @manuelnovella39 · 3 years ago (+1)

    Man, your enthusiasm has caught me! Now I wanna watch many more videos of yours. Congrats for the great work! Keep it up, dude, and have a good day too!

  • @jamisonr · 3 years ago (+2)

    Love Newton's method. I wrote a version in C# using expression trees to solve for complex rates of return a few years ago. The equations can sometimes be thousands of terms long, and it can find "x" in milliseconds on something like that.

  • @jonathanengwall2777 · 3 years ago (+1)

    Very helpful. I am at the end of an excruciating video lecture series asking myself how to do anything at all. It is a terrible feeling! You have improved my entire afternoon. Thank you.

  • @handle1928 · 3 years ago (+4)

    Don't know why this was in my recommended but I feel smarter now👍

  • @higiniofuentes2551 · 2 years ago

    Thank you for this very interesting video!

  • @migzleon4047 · 2 years ago

    Excellent content... Thank you for sharing...

  • @zlClutchy · 2 years ago (+4)

    What a lot of people don’t know about Newton is that about 95% of his work was actually metaphysics, the other 5% being mathematics, physics, etc. He was more philosopher than mathematician.

    • @stroyed4229 · 2 years ago (+1)

      Fun fact, he was an alchemist as well!

    • @BreytnerNascimento · 2 years ago

      Also, his work on physics was more natural philosophy than actual physics.

  • @IronTeddyBear · 3 years ago (+1)

    Cos(7x^3 - 2*x) = 0 is very easy to solve. It means 7x^3 - 2*x = Math.PI/2 + n*Math.PI for some integer n, the only arguments that give a cosine of zero. Also, cubic equations can have one, two or three real solutions, but Newton's method will only find one of the answers, typically the one closest to your initial guess.

  • @dnoordink · 4 years ago (+1)

    Great video, thanks for the refresher. Though with the trig functions there is usually an infinite number of answers. E.g. for the cos(7x^3 - 2x) formula, any value that brings the argument to +/-PI*0.5 (plus any multiple of PI) will give 0. If you change your guess to any random number it should come out with a different result.

  • @chswin · 2 years ago (+1)

    Could you imagine showing Newton how this is being used today? It would blow his mind…

  • @baconsledge · 3 years ago

    You should have way more subs....great content!

  • @kilroy8357 · 4 years ago (+2)

    You're the man!

  • @kngemeral · 4 years ago

    Great video

  • @tonyennis1787 · 3 years ago

    I didn't know this existed. Thank you!

  • @nolanfaught6974 · 3 years ago

    I'm enrolled in an Applied Math course at my Uni and the number of times I've had to code up Newton's method to solve a problem is enormous

  • @NeilRoy · 4 years ago (+1)

    Fascinating method.

    • @WhatsACreel · 4 years ago (+1)

      Certainly is! Cheers for watching brus :)

  • @monad_tcp · 4 years ago (+3)

    It becomes more fun when you add more variables and now you have a PnP solver !

  • @ProjectPhysX · 4 years ago (+2)

    Amazing video as always, very fascinating, thanks! Personally I prefer nested intervals, since Newton's method can end up in nirvana when the derivative goes close to 0. There can even be the case that the iteration jumps between two points indefinitely for some 3rd order polynomials. Nested intervals do not need a derivative and always work as long as you know in advance that there is exactly one root in the specified interval, although they converge more slowly. Also, if it is possible to find the inverse function analytically, it's usually faster to calculate it directly (even if the expression contains trigonometric functions) than with a root-finding algorithm.
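
    A nested-intervals sketch for reference (C++, illustrative names): no derivative needed, and convergence is guaranteed once f changes sign on [a, b], at roughly one bit of accuracy per iteration.

    double bisect(double (*f)(double), double a, double b, double tol = 1e-12) {
        double fa = f(a);                      // assumes f(a) and f(b) have opposite signs
        while (b - a > tol) {
            double m = 0.5 * (a + b);
            double fm = f(m);
            if ((fm < 0.0) == (fa < 0.0)) { a = m; fa = fm; }  // root lies in [m, b]
            else                          { b = m; }           // root lies in [a, m]
        }
        return 0.5 * (a + b);
    }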

    • @INT41O · 4 years ago (+3)

      You can also combine the bisection method with the Newton iteration to get fast and guaranteed convergence. For the derivative you can use the complex-step derivative approximation, which is accurate up to machine precision (as opposed to finite differences), with low computational cost.

    • @WhatsACreel · 4 years ago (+2)

      Oh, yeah, Newton's method does some funny things! There's some really amazing algorithms out there! My title was a little clickbaitey :b. Cheers for watching mate :)

    • @WhatsACreel · 4 years ago (+2)

      This is awesome! Cheers for sharing mate :)

  • @elHosed · 3 years ago (+1)

    I remember needing Sin/Cos/Tan in a scripting language that only had simple arithmetic way-back-when and using a couple of iterative series to create functions that approximated the answers. Ended up doing exactly what a calculator does in hardware (though I didn't even realize that's what I was doing at the time).

  • @marasmusine · 4 years ago (+1)

    I will actively try and shoehorn this method into my next project whether it needs it or not.

    • @marasmusine · 4 years ago

      If a function has more than one solution, can you get to those different solutions by varying your initial guess?

    • @aednil · 4 years ago (+1)

      @@marasmusine If you vary it in the complex number plane, sure. If you want all the solutions it makes sense to implement g as a complex number right away.
      Any expression like "x^n - a = 0" has n roots, but it can have at most two roots on the real number line.
      In the case of "x^4 - 78 = 0" he got 2.97183; the other solutions are -2.97183, 2.97183i and -2.97183i. If you had varied g only on the real number line, you would never find the two imaginary solutions. You could take an initial guess and then vary the angle of that number to get your other initial guesses.
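
      A sketch of that (C++; the starting radius and angles are arbitrary choices): Newton's method on x^4 - 78 = 0 with std::complex, seeded once per axis direction so each run lands on a different root.

      #include <complex>
      #include <cstdio>

      int main() {
          const double pi = 3.141592653589793;
          for (int k = 0; k < 4; ++k) {
              // one guess near each root direction: +1, +i, -1, -i
              std::complex<double> x = std::polar(3.0, k * pi / 2.0);
              for (int i = 0; i < 50; ++i)
                  x -= (x * x * x * x - 78.0) / (4.0 * x * x * x);   // Newton step
              std::printf("root %d: %+.5f %+.5fi\n", k, x.real(), x.imag());
          }
      }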

  • @piotrlenarczyk5803 · 4 years ago (+2)

    Thank you for the video, it is very informative.
    Real stuff (e.g. any road curvature for finding an optimal single-driver path) can be expressed as a set of overlapping part-polynomials of the road. Each part can be computed in parallel with 33% overlap, giving some easy processing with an easy-to-implement method. Overlapping is essential because each part is not fully independent of its neighbours; otherwise it is not necessary. To sum up: consider breaking big problems into smaller ones. Polynomial sets are great - they could simplify a lot of things in the default way of thinking (spatial information / sequence must be retained).
    For small problems the Newton method will be fastest due to the small cost of a procedure call. For big problems only the parallel, high-order approach mentioned above will be efficient. Calculating a whole road's optimal path at once exceeds simple math solving (it would not even be a function in the case of, e.g., a mountain serpentine).
    Post Scriptum: your calculator's static-core ASIC is probably a few orders of magnitude more efficient than your laptop system :)

    • @WhatsACreel · 4 years ago (+1)

      Hola! Title was a little clickbaitey :) Newton's method is certainly not always the most efficient. This is very interesting about roads! You are talking about a real-time path-finding algorithm, no? They use A*, or something similar, for maps. That's easy enough. But you're talking about AI-driven cars?
      My calculator is a CASIO fx-100. Best calculator ever!!!
      Cheers for watching mate :)

    • @tolkienfan1972 · 4 years ago

      There is no way any calculator ASIC is faster for any given single-valued function than a modern superscalar FP CPU.

    • @piotrlenarczyk5803 · 4 years ago

      ​@@tolkienfan1972 Ordinary calculator ASIC is quite efficient. Theirs architecture (calc ASIC and CPU) efficiency differs from some particular silicon organisation efficiency:)
      Making MROM ASICs, OTP PALs and many other femtojoules-per-job designs is very modern. Closed groups of experts, educated for dozens of years, make such things. Designing highly efficient, narrow-specialisation electronics is not an official market employment; direct access to silicon manufacturing infrastructure is not a problem within that daily job.
      Some examples from popular mass-produced stuff:
      - there are only four places in the world where optical fibres are designed and made; in electronics this applies to many other areas (motherboards, WiFi, network controllers and so on),
      - there are only three laboratories for the whole world night-vision market where very up-to-date image intensifiers are made; the same goes for thermal imaging - there are no modern devices on the public market (in comparison to military devices),
      - highly efficient electronics are introduced to the world market with significant delay; there could be some quite funny numbers here :D
      - officially, some specialised stuff will probably never be introduced to the market (radars, radiolocation),
      - it is not easy to find a technology's hidden drawbacks; for example, automated voice verification for mobile eavesdropping in cellular telephony was introduced in the 80s (!), whereas it is still considered secret... Nowadays any clever student can do it on a laptop with a USB SDR,
      - theoretical, scientific discussion at universities is light years behind already-working practical implementations - in my personal opinion, there are areas where this is true,
      - general-purpose market electronics (phone hardware) is generally well developed; these sentences apply only to some narrow specialisations of electronics.
      Post Scriptum: I guess the Intel 4004 was a planned counterintelligence operation to protect the F-14 CADC's details.

    • @tolkienfan1972 · 4 years ago

      @@piotrlenarczyk5803 I just noticed you said "efficient", which can mean a number of things. If you mean power efficiency, then I have no argument - I just misunderstood your point

  • @GoCreateSomething · 3 years ago

    Thanks for a terrific demonstration. It amazes me how long ago Newton came up with this stuff. So you could actually solve cos(7x^3 - 2x) = 0 with Newton's and a little trig knowledge without having to use the chain rule. When does the cosine of a number equal 0? At 90 degrees + y * 180 degrees where y is any integer - that's pi/2 + y*pi in radians. So you could use 7x^3 - 2x = pi/2 and run Newton's on that instead.

  • @joemoulton1823 · 4 years ago

    You should merch some Creel wrist rests. I would buy those.

  • @lilyyy411 · 3 years ago

    11:25: Fun fact, a general form of all the roots exists. I don't know why I did this.
    Where n is an integer,
    x=\sqrt[3]{\frac{\left(\frac{\pi}{14}+\frac{n\pi}{7}\right)}{2}+\sqrt{\frac{\left(\frac{\pi}{14}+\frac{n\pi}{7}\right)^{2}}{4}-\frac{8}{9261}}}+\sqrt[3]{\frac{\left(\frac{\pi}{14}+\frac{n\pi}{7}\right)}{2}-\sqrt{\frac{\left(\frac{\pi}{14}+\frac{n\pi}{7}\right)^{2}}{4}-\frac{8}{9261}}}

  • @bohdanlashko875 · 3 years ago

    Wow! 👍👍👍👍👍

  • @1Maklak · 3 years ago

    For an arbitrary function f(x) that's not too wild, the derivative can be approximated by (f(x + dx) - f(x))/dx, where dx is something small, like 10^-6.

  • @delta-game · 3 years ago (+1)

    I love how he just does this sitting on the lounge chair

  • @doltBmB · 3 years ago

    It occurs to me that you could use a while loop that checks whether the last guess is equal to the new guess, and stops then. That way it would iterate until it's accurate to within the limits of your chosen precision, or, if that's not needed, until the answer is completely accurate.

    • @GrayBlood1331 · 3 years ago

      It might cycle between two (or more) nearly identical values forever though.

  • @WhiterockFTP · 4 years ago (+3)

    cos(7x³ - 2x) = 0 is trivial algebraically. arccos ∃.

    • @dewsjievpdav6557 · 3 years ago (+1)

      WhiterockFTP I actually plugged in the number he got: 0.984602, and it gave me 0.9966196... so the answer he got was way off, plus it only gives 1 solution...

  • @ibrozdemir · 2 years ago

    I liked that method very much, but the part where you have to guess some initial estimated number isn't good for algorithms... is there a way of extracting this initial guess?

  • @zxuiji · 3 years ago

    I have a version that doesn't use guesswork. It runs around 30 to 40 percent faster than whatever is used in Linux's sqrt() function, and only works in floating-point mode. I'll be patenting it when I reveal it to the world (just gotta save up another £600 roughly).

  • @akosmohacsi8136 · 4 years ago (+2)

    You did not link the merchandise in your description.

    • @WhatsACreel · 4 years ago (+1)

      Cheers mate, added it now :) It's meant to show up as a shelf under the vid? Well, anywho, thanks for letting me know, and watching. Have a good one my friend :)

    • @bjordsvennson2726 · 4 years ago (+1)

      @@WhatsACreel I see it under the video on mobile

  • @hurktang · 4 years ago

    I just want to point out that the derivative is defined as (f(g) - f(g-k))/k where k is infinitesimal, but a very small k is probably sufficient. So it turns out you don't even need to know how to differentiate, or even have a formula for the derivative itself.

  • @xponen · 4 years ago

    I don't remember how, but I think we can find the value of the derivative without actually hardcoding its formula. The idea is that the derivative is the slope of the curve we are solving, so we evaluate at x (the guess) and at x (the guess) + dx (an infinitesimal step) to get y and y + dy, from which we can calculate the slope, which is the derivative... But I can't remember where the "y" comes from...

    • @aaarrrggg3683 · 4 years ago (+2)

      If you have a function f(x), the derivative is lim h→0 (f(x+h) - f(x))/h.
      So you basically take a smaller and smaller interval (h getting closer to 0) between 2 points on the function, f(x) and f(x+h), and find the slope of the line that goes through those 2 points.

  • @user-ug4ow1qq2h · 4 years ago (+1)

    Are more iterations necessarily more accurate? There is always a floating point error. Is there a way to determine an optimal number of iterations?

    • @WhatsACreel · 4 years ago

      Really great question!! I don't know of one, no. You are right, there is a point where the accuracy of floating point plays a larger role in error than the iteration count. I would venture to say that there exists a function f(G), which for any G, if we Newton, it results in any other number being the next G. But I've never thought deeply about it. I think the limitlessness of functions and numbers would make that inevitable. And then, if that's true, perhaps there exists a function f(G) which returns any sequence of G's...?
      Seems plausible. Would be a great paper to read, I reckon! Using an epsilon value is maybe our best bet? Thank you for watching, have a good one :)

    • @user-ug4ow1qq2h · 4 years ago

      @@WhatsACreel I think if the floating point error is a big enough concern, the program can keep track of the moment when each new result becomes insignificantly different from the last one. Or we can intentionally keep iterating after that moment, then discard the values before the stationary state and average the values in the stationary state. Either way, it will take paper, pencil and a lot of patience to see what's better. :)

    • @jamisonr · 3 years ago

      Depends on requirement. For my situation, I needed to come up with a value out to 4 decimal places. So I just check the current iteration value with last iteration value, if they match to 4 decimal places, exit. What is glossed over is that in practice, you will need to calculate the derivative and equation on the fly. That can be complex to say the least.

  • @openroomxyz · 4 years ago

    Is there any real-life practical need for faster convergence than that achieved by Newton's method?

    • @danieldorn2927 · 4 years ago

      It is faster to change to x = sqrt(5), because that uses a lookup table for the square root instead of calculating it with a for loop; also, you need to guess the nearest integer number at first.

  • @higiniofuentes2551 · 2 years ago

    Does anyone know a program to obtain the nth derivative of any expression?

  • @tolkienfan1972 · 4 years ago

    Can you compute exp(x) via Newton's method?

  • @quosswimblik4489 · 3 years ago

    b/root(n,b)=x
    b for base, n for the power of the root, x as the input. Given b and x, find n: it does not compute atm for some reason; at least I could not find a formula in Mathematica, it didn't seem to want to generate it.
    I was thinking of a skidded cycloid chord, where the amount you turn the chord in the first finite skid is in the infinitesimal range, and by the end of the skid you are rotating a final finite amount and skidding an infinitesimal step.
    How could you work with this type of super-geometry discretely and quickly?

  • @RealRobotZer0 · 3 years ago

    Take a look at automatic differentiation and dual numbers. This way you don't have to compute the derivative.

  • @MrGringo466 · 2 years ago

    Please lower your exposure just a bit for this type of lighting. The side of your face is blown out; slightly lowering it will fix that.

  • @eeetube1234 · 11 months ago

    cos(7x^3 - 2x) = 0 has an infinite number of solutions. Newton's method will give you the solution closest to your guess? What method should I use if I want to get all solutions, when the number of solutions is finite?

  • @cmdlp4178 · 3 years ago

    13:19 .'. therefore, unicode: ∴

  • @sevret313 · 4 years ago (+1)

    You forgot to mention that it only finds one solution. For your cos(7x^3 - 2x) = 0 there are potentially infinitely many solutions.

    • @krzysztofbandyk168 · 4 years ago (+1)

      10:00 not explicitly stated but mentioned.

    • @sevret313 · 4 years ago

      @@krzysztofbandyk168 Thanks, I must have missed it.

    • @dewsjievpdav6557 · 3 years ago

      The answer that was achieved for x was also really bad, if you plug it in it equals around 1... not close to 0 like is shown in FF...

    • @sevret313 · 3 years ago

      @@dewsjievpdav6557 He's using radians not degrees.

  • @victormendoza3295 · 4 years ago

    So is Comp Eng better or Comp Sci? Not for a job, but for fun and maybe your own business?

  • @trolledwoods377 · 3 years ago

    cos(7x^3-2x)=0 is really not that hard. You know that cos(x) = 0 is true when x = pi * n + pi*0.5 for any whole number n. Therefore, cos(7x^3-2x)=0 whenever 7x^3-2x=pi * n + pi*0.5 for any whole number n. That is still not that easy, but it's doable

  • @ozjuanpa · 4 years ago (+1)

    Now do a video for N variables uwu

  • @m.sierra5258 · 4 years ago

    Why did you rename the q to g?? :D why not just name it x, as you keep talking about x...

  • @durragas4671 · 2 years ago

    If Newton was alive today would he still be considered a genius or just a good programmer?

  • @BS-bv5sh · 2 years ago

    Newton's method... Good times...

  • @bruinjim1 · 3 years ago

    But g*g-5 has 2 roots. And sqrt(10) = x is not the same as 10 = x*x. The first equation has a single solution while the second has two. Ack, he's making my inner mathematician go crazy.

    • @WhatsACreel · 3 years ago

      The root that Newton’s method converges to changes depending on the initial guess. There's a fractal called the Newton fractal based on this.
      X=3.16 and X=-3.16 will do for either sqrt(10)=X or 10=X*X. There are 2 solutions for both equations. A computer or calculator will give the principal root with sqrt(x), but that's a limitation of the hardware, not the expression itself.

    • @bruinjim1 · 3 years ago

      @@WhatsACreel I didn't watch to the end of the video, so I don't know if you covered it. But sqrt(10) = x only has one solution, that is x = sqrt(10). It's already solved. Once you say x * x =10 then that's a different equation. At that point, +sqrt(10) or -sqrt(10) become solutions. I don't know, maybe I'm just crazy or something.

    • @WhatsACreel · 3 years ago

      @@bruinjim1 I’m not sure if I covered it either, ha! It was a while ago. It’s certainly fine to leave an expression as a sqrt if that’s what the context requires. You are not crazy. You’re talking about the implications of the dual roots of real numbers and the equality of algebraically equivalent expressions.
      There’s many useful interpretations for roots and algebra. Imho, they’re all good :)

    • @bruinjim1 · 3 years ago

      @@WhatsACreel :)

  • @worseize · 3 years ago

    Plug this into a Deep Neural Network

  • @ancientelevator9 · 4 years ago

    Just curious -- Anyone here used this in actual projects?

    • @blips7114 · 4 years ago

      3D modeling with a physics engine often requires deterministic iterative solves like this (Newton and secant are 2 good candidates). Reverse engineering is also a field where such a technique sometimes has its use, to test polynomial guesses against findings.
      On a more personal level, I used Newton as well a few weeks ago in the implementation of a scoring system from weighted data for a « smart city » project. Very handy to check the accuracy needs, etc...

    • @ianmarteens · 3 years ago

      Calculating par rates for complex swaps (fintech)

    • @jamisonr · 3 years ago (+1)

      Yes, in finance if you need to find a rate of return for many pieces of data. For instance your 401k, when you contribute it might go into N number of funds, those funds all have different rates of return, so what is the total return for the policy? Newton's method can figure it out very quickly. The trick is actually building the equation and finding the derivative programmatically.
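
      A sketch of that use case (C++; the cash-flow model and all names are assumptions): find the rate r at which the net present value of a series of cash flows is zero.

      #include <cmath>
      #include <cstdio>
      #include <vector>

      // cash[t] is the flow at the end of period t (cash[0] = initial investment).
      double irr(const std::vector<double>& cash, double r = 0.1) {
          for (int i = 0; i < 100; ++i) {
              double npv = 0.0, dnpv = 0.0;
              for (std::size_t t = 0; t < cash.size(); ++t) {
                  npv  += cash[t] / std::pow(1.0 + r, double(t));
                  dnpv -= double(t) * cash[t] / std::pow(1.0 + r, double(t) + 1.0);  // dNPV/dr
              }
              if (std::fabs(npv) < 1e-10 || dnpv == 0.0) break;
              r -= npv / dnpv;              // Newton step on the rate
          }
          return r;
      }

      int main() {
          // invest 1000, get 500 back each year for 3 years: r is about 23.4%
          std::printf("%.4f\n", irr({-1000.0, 500.0, 500.0, 500.0}));
      }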

  • @canyonblue7735 · 2 years ago

    I had no idea Nicholas Cage started doing drugs and teaching CS

  • @v1Broadcaster · 3 years ago

    Wouldn't the more logical way to rewrite sqrt(10) = x be sqrt(10) - x = 0, not x² - 10 = 0?

    • @WhatsACreel · 3 years ago

      Well, they're both equivalent, but we want the derivative, so x^2 - 10 is the easier way. The derivative of sqrt(X) is (1/2)X^(-1/2), so that's no good to us here. When we Newton like this, we often want to find a way to represent the function so that we can easily employ the mechanisms we have at our disposal. Sometimes that means writing the function in a way where the derivative uses only very basic instructions. That's why we went with X^2 - 10 = 0. Hope this helps, and cheers for watching :)
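
      For what it's worth, with f(X) = X^2 - 10 the Newton step also simplifies nicely: G - (G^2 - 10)/(2G) = (G + 10/G)/2, which is the ancient Babylonian square-root iteration (average the guess with 10 divided by the guess).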

  • @joaq1999 · 3 years ago

    Now, seriously, WTF are the 5 guys who disliked this video?

    • @notoriusmaximus783 · 3 years ago

      The explanation in this video is so oversimplified that some could consider it a bad video. He presents Newton's method as if it were a kind of silver bullet, but it's not! There are plenty of examples where it diverges, falls into an infinite loop, finds a false root, etc. He even ignores the fact that his examples have more roots; he just finds a random one with some unspecified accuracy and - voila, finished! (Not to mention that, unlike what he says, the last example is quite easy to solve algebraically.) He also says that the accuracy is more a problem of the computer than of Newton's method itself, but that's not true; there are methods which are more stable and more accurate... Ignoring all of this can lead to very dangerous errors, because algorithmically everything is OK, only the behaviour or results can sometimes be wrong.

  • @Cici1kus · 4 years ago (+3)

    At 10:42 13! not even close to 12.999999. Just saying.

    • @WhatsACreel · 4 years ago

      Hahaha, they're the same thing! Actually, calc output was only about 12.9999, since I only used 4 decimals. Cheers for watching, have a great day :)

    • @9069able · 4 years ago (+2)

      @@WhatsACreel 13 factorial lol

  • @eeetube1234 · 11 months ago

    12.(9) = 13, so the answer was correct. (You can't put any number between 12.(9) and 13.)

  • @GhostEmblem · 3 years ago

    1) You do realize that Newton's method is literally the old elementary-school idea of trying a higher number, then a lower number. You literally just subtract the difference and try again, and since the difference flips between negative and positive you are just adding a little then subtracting a little less; it's not all that clever.
    2) Why was your derivative of x^4 4x instead of 4x^3?

  • @timm1328 · 3 years ago

    Creel vastly overstates the utility of Newton’s method. just try to find the negative root of x*x - x - 1. you will only ever find the positive root no matter how good your initial guess. this is merely the simplest example of a whole class of “pathological” roots.

    • @WhatsACreel · 3 years ago (+1)

      I certainly did overstate it, yes! It's also not great at anything that's not scalar. I guess the dimensions of the objects increase the required iterations exponentially. It would be cool if that didn't happen, then we could Newton Raphson a Neural Net!!
      A Creel can dream...
      Anywho, fantastic technique, but you are right, there's no silver bullets. Sorry I was misleading, thanks for watching :)

  • @timonpasslick · 4 years ago (+2)

    Why not like this to get all the accuracy of float/double:
    while (true) {
        double next = g - f(g) / df(g);   // one Newton step (f and df as in the video)
        double difference = next - g;
        g = next;
        if (difference == 0.0)
            break;
    }

    • @ProjectPhysX · 4 years ago (+3)

      Sure, a variable number of iterations will give you the best possible precision every time. In most cases, however, the difference will never go to exactly 0.0, only to several times epsilon instead, epsilon being about 1E-7 for float and 2E-16 for double, so better to check for the difference dropping below a small multiple of epsilon.

    • @timonpasslick · 4 years ago (+1)

      @@ProjectPhysX Thanks for the correction

    • @timonpasslick · 4 years ago

      @@ProjectPhysX Is epsilon just the smallest number that's not 0?

    • @WhatsACreel · 4 years ago

      Yes, that's a very good idea :)

    • @WhatsACreel · 4 years ago

      Nice one brus! Epsilon is the way to go :)

  • @fiziwig · 4 years ago (+2)

    I tried this to compute the meaning of life and it came out to be 42.

  • @kissarmin7130 · 2 years ago

    Who else got this recommended after watching 3blue1brown's vid