When you were going through how to do a square root by hand, I realized that I was never taught how. I guess I never realized that until now. My whole life, calculators were available, and a scientific calculator, if not a graphing calculator, was required for any level of class that I would need to do roots for! I'm sure more advanced math classes would probably have shown me how, but never at the levels I needed.
You actually got your calculator in a similar way to how I got mine. I lost my scientific calculator at some point in middle school and in my chemistry class, the one class I had at the time that needed a scientific calculator, a Ti-84 Plus was sitting in the lost and found. I used it for a week or two, always putting it back on the shelf just in case there was someone who lost it and was just sick for a bit. Soon enough I just figured that whoever lost it didn't care about it so decided to adopt it. It's still my main calculator that I'm using for my engineering degree.
Every “how does a calculator find XXX” video I’ve found is misinformation so far, and calculus teachers in high school telling kids “Yeah look, Taylor series are how trig values are calculated” plays a huge part in it
yeah, it's deeply frustrating. I know these algorithms exist, and they're neat, but I'm also pretty fuckin sure my pocket calculator doesn't have an infinite series kicking around back there. I want the real deal.
@@teddy4271 infinite series aren’t genuinely used for approximations nearly as much as people think, I notice. They’re mostly a really nice tool for things like proving Euler’s identity or deriving what it means to take “e to the power of a matrix” and stuff like that. They’re more useful when they’re seen as what they actually converge to in my opinion, not to approximate what they converge to
@@teddy4271 It's oversimplified, not a million miles off. :) The power of infinite series expansions is that they let you approximate solutions, by solving the first N terms until the error's acceptably small. It's an interesting topic actually, well worth reading into! Different root-finding algorithms prioritise accuracy vs speed- but rather than solving to infinity, they all just iterate until the error's "good enough". Embedded systems like calculators use lookup tables and interpolate between known values. It's lightweight and fast, but not _as_ accurate. The lookup tables were generated using slower, more accurate algorithms on more powerful computers.
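To make that lookup-table idea concrete, here's a rough Python sketch (the 256-entry table and the name sin_interp are made up for illustration, not any particular calculator's ROM):

import math

N = 256
STEP = (math.pi / 2) / N
TABLE = [math.sin(i * STEP) for i in range(N + 1)]   # precomputed "ROM" table

def sin_interp(x):
    # approximate sin(x) for 0 <= x <= pi/2 by linear interpolation between table entries
    pos = x / STEP
    i = min(int(pos), N - 1)
    frac = pos - i
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

print(sin_interp(0.7), math.sin(0.7))   # both about 0.644218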
Taylor series aren't very computationally efficient, so it is uncommon that they are used in practice for a calculator. There are more efficient methods that ultimately are infinite series behind the scenes, evaluated to a level that is "good enough for government work". The reason calculus teachers in high school say it's Taylor series is that Taylor series are one possible way of doing it, even if it isn't computationally efficient. Taylor series are easier to explain at a high school / first year of college level, and are a stepping stone to getting you to an understanding of the actual algorithms a calculator uses.
I used the "antique" Casio's rival (from Sharp) during my school days in the 1990's. The modern cr*p removes part of logical thinking on how to use the calculator. You just blindly enter the expression and let the machine do it for you. My day-to-day calculator is an App on my phone (a scientific calculator). To make it more fun, I set it to use Polish notation.
Meanwhile, as someone studying and designing computer architectures as a hobby, I would just do successive approximation. (But I come from a completely different field with different priorities.) If we search for the square root of X, we take a "random" Y: is it larger or smaller than X when squared? Initially we start out with a guess from a table, picking one larger value and one smaller value (unless X is a value in the table, then we are lucky! This table greatly reduces the total number of cycles needed). Then we pick the value between our currently closest pair of guesses as our new guess for Y. Rinse and repeat and soon your number is finished, at 1 bit per cycle. (That can be improved, and our table has also devoured a good few bits at the start, so we have that too.)
Advantages of this:
A. You only need multiplication and simple comparisons. This is quite fast in hardware, usually 1 cycle per multiplication in our loop, and comparisons are effectively free. (We usually implement this as a dedicated instruction, so far from the whole core is partaking in this process, and that helps it achieve 1 cycle per multiplication/iteration.)
B. The approach can also be used for floating point numbers. (It is a bit more tricky, but technically just a floating point multiplication, so nothing special; it is still fast.)
C. It can be used for cube roots too, but it is slower, at half the speed.
D. It can be scaled to any arbitrary number of bits. But as numbers get larger (100+ bits), other approaches start to outpace it again.
Downsides of this method:
A. It is mainly fast for squares; cubes less so, since they need 2 multiplications (cycles!) per iteration... And as n in Y^n gets larger, the process gets linearly slower, i.e. abhorrent... (and we likely don't want to make a massive table to cover increasingly large values of n).
B. Forget any non-integer "n" for Y^n.
So the hardware architect's approach isn't the best for roots in general, but that wasn't the point. It is fast for what it can do, and most applications using roots mainly need square roots; doing cube roots is also rather common for 3D applications. For anything more complex, other approaches to the problem quickly outpace this in speed, and anything beyond a cube root or a non-integer root is rather niche in computing in general. But some architectures/processors can and do have dedicated accelerators for this task.
In computer architecture design one often makes a compromise between functionality, performance and resource use. It is okay for a feature to have very limited functionality, as long as the functionality it gives is done efficiently and is used often enough to warrant its inclusion. Efficiency is not just power, but also execution time and the die space occupied. (For this approach to computing roots, we can make a massive table for our initial guess at the expense of large amounts of die space. Another method is to do multiple guesses in parallel, which generally speeds up the process at the cost of more die space and decreased power efficiency, since one makes more guesses than one otherwise would have; but the shorter execution time often repays that debt in overall better performance, so overall power efficiency can actually improve in the early stages of parallelization, until diminishing returns start kicking in.)
Now yes, a calculator wouldn't use this approach: here the downsides far outweigh the advantages, since calculators often don't need tons of performance and most often work with more niche calculations.
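A rough Python sketch of that successive-approximation loop (plain binary search on the square, leaving out the initial guess table described above; the name sqrt_bisect is just for illustration):

def sqrt_bisect(x, bits=32):
    # each pass needs one multiply and one compare, and decides one more bit of the answer
    lo, hi = 0.0, max(x, 1.0)          # sqrt(x) always lies in [0, max(x, 1)]
    for _ in range(bits):
        mid = (lo + hi) / 2
        if mid * mid <= x:
            lo = mid
        else:
            hi = mid
    return lo

print(sqrt_bisect(869.2))              # about 29.482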
Back Around 1979 my parents gave me a Casio FX 501p as a Christmas present. It was a 12 digit programmable calculator with a 128 word memory and was pretty amazing for its day. Last year I stumbled on it in my parents’ attic. Replaced the batteries and it actually still works. Bit slower than I remember, but a very intuitive machine. You could actually load programs onto cassette tapes using an adapter. Whilst I also found the adapter, even my parents don’t have cassette tapes any more.
Take an initial guess of the square root of the number n. Let it be x. Now follow this iterative process: x -> (1/2)*[x + (n / x)]. Try this code, which implements it:

def sq_root(num, guess, tries):
    if num == 0:
        return 0.0
    for i in range(tries):
        guess = (1/2) * (guess + num / guess)   # average the guess with num/guess
    return guess
You couldn't use a Casio fx82, but you can explain all these mind-blowing equations.... I feel like a freaking genius, having used the Casio fx81 in high school.
I am from India and we learnt this in class 8, and we had no calculators in our class. I am honestly surprised that when you showed the method to calculate by hand, you gave us the division method.
I was never taught any algorithm for calculating a square root, but the naive "guess and check" method seems to work almost as fast if you're good at multiplying and dividing on paper. It's just as accurate, but it gets computationally slow as you add more digits.
Those older Casio calculators were fantastic, so easy to use. The new ones are awful things, cannot get my head round them at all. This from someone who started using calculators in 1973 when they first started to appear.
As a life-long Texas Instruments user, it always fascinates me how different brands, and even different eras within a brand, set their calculators up differently and how some run their calculations differently.
I have a degree in electrical engineering, emphasizing CPU design. I'm a statistician for a living. I've never seen the method you explained to get square roots by hand. I expected it to be more of the first several terms of an infinite Taylor series approximation.
My mum did the exact same thing, she gave me her old TI-30 Galaxy from the mid 1980s when I started high school in 2008. I was gutted but got used to it and it got me through high school and I actually ended up buying a second hand replacement off ebay when it broke during my engineering degree. Wouldn't use any other
For the correct answer, look up the CORDIC algorithm. With one set of logic gates you can use it to find trig, hyperbolic, square roots, multiplication, division, exponentials, and logarithms. To quote wiki: "Decimal CORDIC became widely used in pocket calculators, most of which operate in binary-coded decimal (BCD) rather than binary. This change in the input and output format did not alter CORDIC's core calculation algorithms. CORDIC is particularly well-suited for handheld calculators, in which low cost - and thus low chip gate count - is much more important than speed." Also, one of the reasons why a calculator used BCD is that a number like 0.1 or 0.01 and its multiples do not have an exact binary representation, so before 32 and 64 bit floating point representation, floating point error would creep into multiple operations.
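For anyone curious, a rough floating-point Python sketch of the circular CORDIC rotation (real hardware replaces the multiplies by 2^-i with shifts and keeps the arctan table in ROM; this only shows the shape of the loop):

import math

def cordic_sincos(theta, n=32):
    atans = [math.atan(2.0 ** -i) for i in range(n)]
    gain = 1.0
    for i in range(n):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))   # accumulated length gain of the rotations
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0                # rotate toward the remaining angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return y / gain, x / gain                      # (sin(theta), cos(theta)) for |theta| up to ~1.74

print(cordic_sincos(0.5))                          # about (0.4794, 0.8776)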
I was pretty sure Newton's approximation to the square root is used most of the time. Logarithms are indeed calculated by polynomial approximations. Trigonometric functions can either be approximated by polynomials or use the CORDIC rotations, the latter being favored as it only involves sums and shifts.
I took a C++ class during undergrad and my professor asked us to write an algorithm to find the square root of *n* as long as it was a double data type (e.g. 128.4, 1.1, 9.23423) and we had to write the algorithm to print out the correct answer to at most 1 decimal place. This was also in the 2nd week of class. Only one person in the class knew how to do this method by hand, and nobody knew how to write it in code. We all got 0's and many of the students dropped the class after that day.
I prefer the BogoRoot algorithm. Randomly generate a number, square it, is it equal to the number in question? If no, randomly generate another, repeat until successful. In the best case scenario it’s O(1) runtime
Note: I made this comment before watching the video. Here is a very fast method that is easy to program. Firstly, what is a square root? It's a number a >= 0 where a^2 = x, so a^2 - x = 0, and we can approximate the root of this equation using the Newton-Raphson method. We just need to compute the derivative of the function a^2, which is 2a. Now we just need to determine our first guess; I will take 2 because it is always closer to a positive root (like any positive real number, but I don't care). Now just make it into a Python program:

def square(a):
    return a * a

def dsquare(a):
    return 2 * a

def sqrt(x, iterations=100):
    y = 2.0                                   # first guess
    for _ in range(iterations):
        y = y - (square(y) - x) / dsquare(y)  # Newton-Raphson step for y^2 - x = 0
    return y
When I was in college the Intel 386 CPUs were still the workhorse of computing (yeah, I'm old), and we were taking an Assembly language course. Our project was to create a very simple calculator like the basic one that Windows has. Our professor wanted it to do the basic + - * / operations, I think some statistical stuff, and square roots. And I remember that, since we had to do everything using the CPU registers, this algorithm of separating everything into two digits was also what we used to compute the root. I didn't even know that roots could be computed by hand that way, similar to long division. It was very interesting.
In the 1980s and 1990s I had a CASIO FX 82 calculator with just one line. It was better than the TI computer the teacher had. The first FX 82 still had two AA cells. Later these became more energy efficient and had a solar cell. Mine also has a CR927 cell that lasts 20 years.
For your calculator case: guess > memory; input / memory; add to original guess; divide by 2; next guess > memory... If you guess close, this will converge in 2 iterations.
lol, your description of our beloved Casio calculators is quite funny - only seeing one number on the screen is never a problem, and the iPhone’s calculator TODAY is still EXACTLY like our 1980s Casios. I’m stunned your teachers cannot use them.
There is a trick that works if the number you take is a simple one that you can enter a few times. First make an estimate. This can be rather bad, it does not hurt. Put the estimate in the calculator memory, then repeat keystrokes: nnn / / 2 where nnn is the number you take the square root from. This also is self correcting, so if you make a mistake, it just slows you down. What this does is divide the number by your estimate and then average the result with your estimate to get a new estimate. But even without that it is simple to get an estimate with guess and verify. I could easily get 12.49.
The method you describe is Newton's method. The problems are knowing when to stop the iterations, & how accurate the answer is. If you are using a "4 banger" calculator, this is the best method. But if you are programming a computer, the binary scheme I describe in a recent comment is much better, as only bit shifts & integer subtractions are required. There is no question when to stop (when you have enough digits) & no question about the accuracy of the result (it is accurate to the number of digits you calculated).
@@okaro6595 No you don't, because the result never stops changing! You have not considered the consequences of real numerical calculations, which are done in limited precision. The result can "bobble" or "circle", becoming trapped in a routine that never exits. If you are doing it by hand on a calculator, then you can manually stop. But if you are writing a routine to perform this, determining the endpoint can be more complex than the calculation itself!
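For what it's worth, a small Python sketch of one way a routine can guard against that bobble: stop when the iterate stops changing, or when it starts bouncing between the same two values (the name sqrt_newton is just illustrative):

def sqrt_newton(n, guess=1.0, max_iter=100):
    prev, x = None, float(guess)
    for _ in range(max_iter):
        nxt = (x + n / x) / 2
        if nxt == x or nxt == prev:   # converged, or oscillating between two neighbouring values
            return nxt
        prev, x = x, nxt
    return x

print(sqrt_newton(869.2))             # about 29.4822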
@@okaro6595 I figured this method out at the tender age of 14, and I had no idea what Newton’s method was. I divided 9 by 2 and got 4.5. The average of 2 and 4.5 is 3.25, so I divide 9 by 3.25 and repeat. My key insight was that when you divide a number by its square root, you get the square root. Divide by something smaller and you get something larger, and vice versa. I still feel very proud of this achievement.
In very ancient times I used a slide rule. The fun part was you could get a pretty good estimate by just looking at the number on one scale, and moving your eyes up or down to the other scale and then moving the decimal.
Taking a square root in decimal is rather awkward but doing the same thing in binary is extremely easy. When a number in a register is shifted left one bit it is doubled. By iterating between shift-left and subtract, you can easily perform the equivalent of a manual square rooting algorithm. This is the first program I ever wrote. I used IBM 360 Assembler Language.
Brilliant. I used to work for the Canon calculator agents in New Zealand in the days when calculators were constructed from discrete RTL logic. Not only did they have circuit diagrams but they had a theory of operation and they explained in detail how these calculations and the rest of the processing was done. I am so annoyed with myself that I never kept one as I have never found any book that goes into the same detail. If anybody knows of one please say I would love to get one.
You can get better values from the Taylor approximation if you express the number as A*2^n where you make A be a number on the interval (1/2, 1] instead of (1,2). It just amounts to getting a power of 2 that's 1 larger than the (1,2)-interval approach. So in your last example, rather than get 563 = 1.0996 * 2^9, you get 563 = 0.5498*2^10, and the Taylor series converges much more smoothly and quickly around those values than near 1
Damn. We didn't get to use that in our classes in Asia. We just did it by hand, like real mathematicians. I don't even know how to use this scientific calculator.
Bro I have two Casio fx-300ES PLUS calculators and I take both of them literally everywhere. I got the habit from taking Honors Chemistry and it actually has helped me out way too many times to count. Shop teacher needs people to go calculate the sum of all the square footage of the CTE course rooms? I've got you. Need to calculate the gravitational attraction between you and the school principal during lunch? (he had a significant magnitude) Boom, been there done that. Accidentally forget your calculator in another class? No problem, you have a second calculator.
I used a Casio fx-85GT Plus through my GCSEs, A-levels, and uni. I wrote a few trigonometry formulas on its back and inside case with a permanent marker, and it helped me with exams in all three stages of education, lol. Exam restrictions were still somewhat lax, so I got away with it undetected. I still have it, it still works, and the formulas are still legible lol.
There's a faster approximation that can be made by hand. What's the biggest perfect square that falls below your number, and the lowest perfect square above it? That's 841 (29²) and 900 (30²). Now the answer is 29 + a factor. The factor is the difference between your number and the biggest square number smaller than yours: 869.2 - 841 = 28.2. Now divide it by the difference of both square numbers: 900 - 841 = 59. Then the approximation is 29 + (28.2/59) = 29.4780... The real value is 29.4821...
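The same estimate in a few lines of Python (the name sqrt_between_squares is made up; math.isqrt finds the lower perfect square's root):

import math

def sqrt_between_squares(n):
    lo = math.isqrt(int(n))                          # 29 for 869.2
    hi = lo + 1                                      # 30
    return lo + (n - lo * lo) / (hi * hi - lo * lo)  # interpolate between lo^2 and hi^2

print(sqrt_between_squares(869.2))                   # about 29.478, vs the true 29.482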
I had this question in my mind for years, until I learned about Taylor series; it was genuinely the most eye-opening thing I had ever seen in mathematics. A Taylor series pretty much approaches a function with an infinite sum, with each added term giving a more accurate value for the function.
@@shanggosteen9804 except it's not how computers usually do that in that case. I think Mathemaniac and Oscar Veilz made videos about it. I agree that Taylor series are awesome, though.
@@jurel-enlatado1 You *can* use the Taylor series expansion for exp(x) to compute that. You don't need to raise anything to non-integer powers, and raising to integer powers can be done by repeated multiplication. Factorials don't need anything beyond multiplication of whole numbers either. Following the Taylor series expansion can help make sense of more exotic things, like e^matrix , or operators. The fact that you *can* do it doesn't mean you should, though. Purely from a practical point of view, there are more efficient ways to compute exp(x)
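As a concrete example, exp(x) summed straight from its Taylor series, using nothing but multiplication, division and addition (a toy sketch, not how any particular library does it):

def exp_taylor(x, terms=30):
    total, term = 1.0, 1.0
    for n in range(1, terms):
        term *= x / n        # term is now x**n / n!
        total += term
    return total

print(exp_taylor(1.0))       # about 2.718281828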
You could do sqrt(3), for example. If the solution is not an integer you cannot technically calculate it, only estimate it!!
0) Squaring n is the same as n*n.
1) Go from 1 towards infinity, squaring each whole number, until you get a result bigger than or equal to n (3).
2) If equal, you have a solution.
3) If bigger, you calculate an arithmetic mean between that number and the biggest smaller number checked.
0) Arithmetic mean, for rationals at least, where p > o: (o+p)/2 = o/2 + p/2 = o/2 + ((p-o)+o)/2 = o/2 + o/2 + (p-o)/2 = o + (p-o)/2
0) Tip: if available, use calculator memory.
When I was in 7th grade I found a book that showed how to do square roots like this (your first example). The book was from the 60s - before pocket calculators were a thing.
Do you know for a fact that this is the way the calculator calculates square roots? (Did you dig into the code?) There are much simpler ways without using heinous mathematical approximations! There is Newton's method (using division & addition only, but it requires iterations until it converges), but there is one better! This one manipulates bits & uses subtraction only!
It is a derivative of the hand method you show at the start, but done in binary. First, the number is adjusted (left/right shift of the mantissa) to make the exponent term even. The exponent term of the result will be half this. The new mantissa is calculated by the hand method, except in binary. The simplification going to binary is unbelievable! Pair the digits. Put 1 over the first non-zero pair (it will be the leftmost pair if the number is normalized). Subtract the 1 to get the remainder. Bring down the next pair. Now in the hand method, you have 10 options for the next digit, but only 9 need to be tested. In binary, you have 2 options, but only 1 need be tested. In the hand method, you double the existing answer. You do the same in binary, but do that by simply appending 0 to the right. In the hand method, you carry out a multiplication, both appending the test digit & using it as the multiplier. In binary, you append a 1 to the right of the 0, then multiply that by 1. (But you don't actually need to multiply by 1, as that's just the same number.) If it is not larger than the remainder, subtract it from the remainder & write 1 for the next digit of the answer. If it is larger than the remainder, discard the subtraction & write 0 for the next digit of the answer. (This whole thing boils down to appending 01 to the right & testing against the remainder.) Even 64-bit math requires only 64 iterations, each little more complex than a divide iteration, being only bit shifts & subtraction.
Advantages: first, manipulating bits is much easier for computers than doing math. Second, there is no doubt when to stop the iterations (answer is accurate to the number of digits calculated).
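For integers, that binary digit-by-digit method fits in a few lines of Python; this sketch (the name isqrt_bits is made up) uses only shifts, compares and subtraction:

def isqrt_bits(n):
    root, bit = 0, 1
    while bit * 4 <= n:          # highest power of 4 not exceeding n
        bit <<= 2
    while bit:
        if n >= root + bit:      # the appended 1 fits: subtract and keep it
            n -= root + bit
            root = (root >> 1) + bit
        else:                    # it doesn't fit: the next answer bit is 0
            root >>= 1
        bit >>= 2
    return root

print(isqrt_bits(869))           # 29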
Exactly, I wrote a C program for calculating square roots (using the math library wasn't permitted), and the code was quick, accurate, and simple to read. No advanced mathematics is required, as those library functions also eat into the run-time.
Came across that method in the video "A better way to count." Binary truly is the best way to count
@@bryanbain3509 Do it in assembly. It's smaller & faster.
Thank you for your comment. To answer your question, I tried to research the code for my specific calculator but as far as I'm aware, CASIO doesn't release the code for specific calculators to members of the public. I wasn't aware of this method until you commented it so I'll pin it here.
Does anyone know where we can find the code for CASIO calculators? If so, it would be greatly appreciated if you linked the resource
- yo whats the square root of 144?
- holup let me draw the bus stop
144?! That's just gross
@@adub303 erm actually it's a dozen 🤓☝️
@@UpdateFreak33 😂
@@UpdateFreak33 gros is a word meaning 144
@@UpdateFreak33 thou hast failed to nerd, not pointing out the logical fallacy of the question.
🗣🗣🗣 "I walked up to it and then I stole it"
🔥🔥🔥
✍️🔥🔥🔥
Based
"Now you know how a calculator finds square roots."
Nope, I don't.
Old calculators are not slow. You just have to know how to use them. Even the slowest calculator makes a human look like he's standing still.
Yup, my good old HP-15C can be pretty "slow" at some operations, but still, for my applications, most of the time is taken by entering the data and reading it back, not computing the results. And it sure is a lot faster and more accurate than me trying to do the same calculations with pencil, paper and a slide rule. 😂
@@gcolombelli The -15C is awesome! If you're missing the nostalgia of working with it and you want something that belongs in the 21st century, I *LOVE* my HP-15CE!
@@HobbitJack1 Oh yeah, the 15C CE is amazing. About the only time I'd feel the original 15C could be faster was when running programs that did quite a bit of branching / lots of iterations, as branching seems to be quite an expensive operation when programming it, the CE solves that.
But of course, when running more complex stuff, I'll often pull a more modern calculator, like the 48 or 49, as User RPL is a lot easier to code in, or get a computer and quickly write something in Python, or a mix of Python and C if I need more performance.
Still, the size and form factor of the 15C makes it more convenient for daily / casual use. And even if its programming model isn't the most convenient out there, I still find it fun to write small programs for it every once in a while.
@@gcolombelli Absolutely! I think the biggest thing I've written for it is an eigenvalue/eigenvector program. Absolutely huge, but I'm pretty proud of the result! But yeah, User RPL is infinitely better.
I have a TI-85 from the early 90's, still going strong. :)
I think it is better to pronounce it as "cosine inverse" rather than "cos minus one"
Edit 1: Honourable mention from replies
1. decos
2. cos root
3. arcus cosinus
4. cos upper
Or arccos
i prefer “inverse cosine” or “arccosine”
I call it decos because its removing the cos
@@anghme28ang11 XD
"arccosine" is correct.
Back in my college days, over 50 years ago, we used mechanical calculators, Brunsviga & Muldivo. Watching them do a calculation was fun. Then we got an electronic desktop calculator made by Monroe.
My first was a Texas TI 50 (I think).
One night in a pub, we met a guy, with a super-duper Texas (his employer's), that could do all sorts of things. He asked it for the sqrt of a number, and we went for a drink, while it worked it out.
Even worse, there were some calculators that worked in Reverse Polish Notation.
Your college days were before I was born, but I refuse to buy a calculator that isn't RPN. When I was in high school in the 90s, the school loaned me an HP 32SII (which I forgot to return) and after learning how to use it, algebraic notation seemed so inefficient. I still have that calculator plus an HP 48GII and for my phone calculator I have an app that emulates an HP 42.
@chitlitlah As someone not into heavy math, but who nevertheless finds it interesting, may I ask why?
I learned standard algebraic/infix notation, and all my calculators and computers use it. It's all I've ever used. Because of that, I simply cannot see how or why anyone would want to use either prefix or postfix notations. So, I'm asking. Why do you like postfix notation in general and why reverse Polish notation specifically?
@@scaper8 Well for one, algebraic notation is similar to how we would write a problem, but I think prefix and postfix notations are more like how we would work it out in our heads. "Take the number 4. Multiply it by 3. Subtract 2 from that." Prefix notation most closely resembles this way of doing it in English, but postfix (putting the verb last) makes more sense to me technically since once you press the operator, the answer is there. (I've never seen a prefix calculator though. It might make more sense if I tried it.) If you're just reading a problem from a piece of paper, it might be easier to type it into a calculator with algebraic notation, but when I'm figuring out a problem in my head, as it usually goes outside of a math class, I think RPN matches what's going on in my head better.
Another thing is you can start with any number in the problem and solve it, albeit perhaps with extra key presses. On a few occasions, I've wanted to know something, such as the distance between two objects, so I calculate it on a calculator using Pythagorean's theorem, and then I want to calculate something from that result but it's in the denominator of a fraction or the exponent or something, so with infix I'd have to remember the number to two or three digits or write it down and type it into the new problem. With RPN, you just hit the xy button before hitting the divide or exponentiate button. Granted some algebraic mode calculators have an "Answer" button that allows you to insert the last answer into a new problem, but it's not on all of them and it feels like an afterthought whereas it's baked into the whole RPN system.
Some less significant reasons: knowing RPN has helped me with computer programming, particularly assembly, because it's almost exactly how an FPU works and somewhat like the general stack, so I understood them more quickly than if I'd never used RPN. Also, I like to be different, and using RPN is the nerd equivalent of driving a car with a manual transmission.
@@chitlitlah Interesting. I can see some of that. Especially working it as you would in one's head and when needing to take a previous calculation and do something else to it. I don't know if it would ever be enough to want to re-train myself, but it may certainly be enough for me to want to play around with a calculator, online or physical, that uses it.
Thanks!
@@scaper8 I played with it for a few days before it really clicked, and when it did, I knew I liked it. I don't think you need to dedicate a large amount of your life to it to know if you're going to like it, just an hour or two at a time for a few days. And I started with a scientific calculator that could only show one number at a time; if you get a graphing calculator that shows several items in the stack at the same time, you'll probably get used to it more quickly.
I took that punch personally. You will hear from my lawyer.
Me too !
he hearted this comment 💀💀💀💀💀💀💀
"Objection! I was a little autistic."
There’s a much easier way. 25 is close to start with. Divide 869 by 25 and you get ~34. Calculate the average, (25+34)/2, and you get 29.5. Divide 869 by 29.5 and you get 29.457. Now calculate the average, (29.5+29.457)/2, you get something, and you divide 869 by that number. Remember that when you divide a number by its square root, you get the square root. Divide by something else and the square root is between what you divide by and what you get
Not sure I’d say that’s easier in my opinion than the first method, but yours is a lot more intuitive
869 decimal is 11 0110 0101 (spaces every 4 bits, for clarity). That's a ten bit number. First approximation: truncate half of the bits, from the bottom. That leaves: 1 1011 or 27 decimal, as FIRST guess.
Square to get 729,
add 869 (1,598)
and take the average (divide by 2) : 799
Divide by 27 (First guess) to get: 29.59 (SECOND guess)
Square (875.568),
add 869 (1,744.568),
average: 872.284
Divide by 29.59 to get: 29.479
How close do you want?
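That recipe is easy to try in Python: take the top half of the bits as the first guess, then repeat the square/add/halve/divide rounds (sqrt_bit_guess is a made-up name; assumes n >= 1):

def sqrt_bit_guess(n, rounds=3):
    guess = n >> (n.bit_length() // 2)      # 869 is 10 bits, so keep the top 5: 1 1011 = 27
    x = float(guess)
    for _ in range(rounds):
        x = ((x * x + n) / 2) / x           # square, add n, average, divide by the guess
    return x

print(sqrt_bit_guess(869))                  # about 29.4822 after three rounds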
@@dmace14 It is, and the idea works for any integer root. Let's say you want the cube root of 56; what could it be? Well, let's say your initial guess is 4. Do this: 56/(4*4) = 3.5. Now you average 4 and 3.5, which gives you 3.75. Now do 56/(3.75 * 3.75). This gives you about 3.98
Average 3.98 and 3.75 which gives about 3.865.
So if you want the 32nd root of some value, you divide that value by your guess to the power of 31. Then you do like before. It even works for non-integer roots as long as the reciprocal of the decimal part is an integer. So if you want the 3.5th root, you divide by your guess to the power of 3. Now you need to square the result, add the initial guess and divide by two. Feel free to ask if you have questions. It's all based on a simple but fundamental observation. Greetings from Sweden
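A quick Python sketch of the cube-root version of this averaging trick (note that the plain average settles more slowly than Newton's proper k-th-root step, ((k-1)*x + n/x**(k-1))/k, which is the safer choice for higher roots):

def cbrt_avg(n, guess=4.0, rounds=25):
    x = guess
    for _ in range(rounds):
        x = (x + n / (x * x)) / 2    # average the guess with n / guess^2
    return x

print(cbrt_avg(56))                  # about 3.8259, and 3.8259**3 is about 56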
@@TheFrewah I like that your method is simpler, I just think the first method in the video is easier (at least for square roots) because it’s easier to do without a calculator, whereas with your method I’d have to do a lot of decimal division to get a precise answer.
Your method would be easier to do with a calculator because it’s less complex and more intuitive, but if I had a calculator I’d just use the regular square root button so it’s not practical to use either method unless you don’t have a calculator
@@dmace14 well, when I say ”by hand”, I mean you can use a calculator as long as you only use basic arithmetic functions, i.e. +, -, * and /. It doesn't take away anything; I think it actually encourages you to do this kind of calculation to see that it works. You can learn how to do logarithms ”by hand” if you start with a calculated value and then work from both ends, as it were, to see how you can get closer and closer with each pass. If you want the cube root and your calculator doesn't do this, you can use this method. The funny thing is that many who fancy themselves good at math often have no idea how to calculate roots. You can challenge someone; just make sure you have done it yourself and use a piece of paper to write down intermediate values, as if you had done the calculations on a different piece of paper. If they know square roots, ask them if they can do a 5th root.
"Hit it with this log"
💀
Well, I studied lots of math to get my EE degree (1971), but this is new and interesting to me. Considering how expensive the first HP scientific calculator was, today’s versions are wonderful and affordable. Your explanation of binary floating point was superb.
This video is partly incorrect. The vast majority of calculators from the 80s (and still some calculators today) store numbers in binary-coded decimal, not as binary ints or floats. So a number like 869.2 would be stored essentially as-is. Something like 8692000000000 with an additional byte to indicate the position of the decimal point and another for the sign. The Casio has the advantage of a higher clock speed, hardware multiplication and division, additional memory and ROM (for lookup tables and such), and some other features that make it faster and more accurate than the one from the 80s, but it still has no FPU. (The only types are signed and unsigned ints of 8 to 64 bits.) It's a misconception that BCD was only used for 7-segment LCD displays; many calculators like the TI-84, -89, and -92 with bitmapped displays still use it. Even if the Casio does use a binary format internally, it uses hardware "decimal correction" addition and multiplication for the same purpose. So although technically the calculations may be done in binary, in practice, they will produce the same results as pure BCD arithmetic.
The details of exactly how the calculator works are hard to find, so some of what I said might be wrong. But the lack of any FPU in the nX-U8/100 demonstrates conclusively that it only operates directly on integers, whether binary or BCD. At least one 2003 source ( _Decimal Floating-Point: Algorism for Computers_ by Michael F. Cowlishaw ) does claim that Casio calculators use a 15-digit decimal format. Also, since the nX-U8/100 has hardware multiplication, CORDIC is not strictly the fastest algorithm available (though it might be used anyway).
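As a toy illustration of the BCD idea (not the Casio's actual internal format), packing a number into decimal digits plus a decimal-point position and a sign looks roughly like this in Python:

def to_bcd(text):
    sign = 1 if text.startswith('-') else 0
    text = text.lstrip('-')
    frac_digits = len(text.split('.')[1]) if '.' in text else 0   # digits after the point
    digits = [int(d) for d in text if d != '.']
    return digits, frac_digits, sign

print(to_bcd('869.2'))    # ([8, 6, 9, 2], 1, 0)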
Holy crap, it's the Lord of Calculators.. may his dynasty reign for ages to come.
Although binary is technically better for computer arithmetic, there are a couple of reasons in favour of using decimal internally in calculators. One is that it eliminates the need to convert between binary and decimal, which would have been a big deal in older calculators with limited processing ability. Another is that the arithmetic works the way humans expect, with no surprises due to some decimal numbers not being exactly representable in binary.
Heard of two's complement? No sign needed.
@@ivar_oslo-hr3mc BCD could use 9's complement, but that has no performance advantage. It just uses a separate sign byte.
Most IBM mainframes which were targeted at the business market used BCD. At least in the 1960→2000 era. (They also had their own EBCDIC alternative to ASCII encoding. EBCDIC meshed well with BCD, ASCII better with plain binary.)
I know of several BCD/EBCDIC IBM machines that were still being used in 2012. (May still be for all I know.)
I started engineering without a calculator. We all had slide rules. I learned a lot of ways to calculate things in my head - including square and cube roots.
i did not expect the log.
Search about newton's method to calculate the sqrt
I didn't expect the fist.
@@MarcusAndersonsBlog TWSS
Nobody expects the log!
I swear the fact that you hit that calculator with a log and just a fraction of a second after I get an ad was the funniest moment I had today 😂
I had an old, incredibly old Soviet "Elektronika" calculator... which could use up batteries faster than you could end your maths lesson, and I needed to finish a test... so I tried to approximate... so I guesstimated, multiplied and corrected my guesses till I had a result precise enough.
If teachers thought this man was slow, then I'm cooked.
But fr, if they had taken a second to look at your work & realised you weren't using a calculator to find the roots, they should've understood you were intelligent enough to realise where the root of a certain number would approximately be.
3:43 I like the fact that he showed us the difference between truncated and rounded up
When I was in high school back in the late 90s-early 00s all the cool kids had graphing TI calculators, so I told my mom that I wanted one. She bought me an HP 48G which uses RPN. At first I thought that it sucked, but after a while I became addicted to it. 25 years later I am still an HP fanboy and I am absolutely incapable of using non-RPN/algebraic calculators. I love them HP’s. Thanks mom!
I looked square root up, in a book on C. Assuming the input was a float, it took the exponent and divided by 2.
The result was a first approximation, which fed into a Newton-Raphson algorithm for about five loops.
A decent first guess means the result converges quickly.
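That book's recipe is easy to mimic in Python: halve the float's exponent for the first guess, then run a handful of Newton-Raphson passes (the name sqrt_newton_frexp is just illustrative):

import math

def sqrt_newton_frexp(x, loops=5):
    m, e = math.frexp(x)             # x = m * 2**e with 0.5 <= m < 1
    y = math.ldexp(1.0, e // 2)      # first guess: roughly 2**(e/2)
    for _ in range(loops):
        y = (y + x / y) / 2          # one Newton-Raphson step for y*y - x = 0
    return y

print(sqrt_newton_frexp(869.2))      # about 29.4822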
I’m 37, back in school for engineering, graduate next year, and for the first time in my life I have seen a method to do roots by hand thanks to this video. Now I’m unstoppable.
Great video, you showed how to get the square root in a simple way and dived deep into it. I would strongly recommend this channel to anyone!
The way I sqrt numbers is
(b+a/b)/2 where a is the number you are taking the sqrt of and b is approximately sqrt a. This formula f(x)=½(x+a/x) is Newton's method used on x²-a=0
Also, fun fact: a lot of old programming code defined log2(x+1) as x+.043 for 0 ≤ x ≤ 1
I never cease to be amazed at what school teachers don't know.
I like to use Newton's method to calculate the square root.
It served me well one time when I didn't have the square root function on my simple calculator.
Basically, you can use "(Y/2)/X + X/2" where X is your guess for what you think the root should be and Y is the number you want the root of.
For example, if I want the square root of 100, and I guess it should be close to 12, the formula would be "50/12 + 12/2", which is equal to 10.16. Now it's closer; if I use that instead, "50/10.16 + 10.16/2" gives 10.00126! Each time you roughly double the number of correct digits with only 2 divisions and 1 sum, it's great!
The only bad part is having to write down the better X you find each time, but if your calculator has at least some memory, it's not that bad
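For anyone who wants to watch the digits double, here is a tiny Python version of that exact update (the function name is made up; the printed values are approximate):

def newton_step(y, x):
    # The commenter's update: (Y/2)/X + X/2
    return (y / 2) / x + x / 2

x = 12.0
for _ in range(4):
    x = newton_step(100.0, x)
    print(x)
# ~10.1667, ~10.00137, ~10.0000001, then 10.0 to machine precision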
I have tried a number of newer calculators over the years, but I keep coming back to the HP 48GX from the mid 90s. Nothing else has impressed me as much.
I use my HP 48G+ near daily. Fantastic calculator, my favorite of all time.
I love my hp-41 which is older
I love my mechanical calculator from the 1800s!
@@ramunasstulga8264 I'd love to have one of those. Many years ago, on my way home from work, I found a mechanical calculator in a dumpster container. It was only ten minutes from home and I was carrying a lot. So I thought I'd go home and come back to get it. Alas, it was gone when I came back.
My GX sits on my desk. Such a lovely machine, even when I tend to use the Emacs calc more 😭
They taught us a slightly different way of doing manual square roots in grade school that looks more like long division. I had mostly forgotten it by the time I was working as a computer lab attendant in college and set out to try to remember it one day. It turns out you can do any integer root by hand.
You need to break the number up into groups of digits matching the power you're working with, and the number you subtract each round is found by summing, for k = 1…root: Pascals_Number(root, k) * 10^(root−k) * Leading_Answer_So_Far^(root−k) * Guess_Digit^k
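Here is a rough Python sketch of that digit-by-digit scheme, assuming the input is scaled to an integer so the digit groups line up (the function name and the scaling shortcut are mine, and the float scaling is only good enough for a demo):

def nth_root_by_hand(number, n, frac_groups=6):
    # Work on an integer so we can peel off groups of n digits exactly.
    scaled = round(number * 10 ** (n * frac_groups))
    groups = []
    while scaled:
        groups.append(scaled % 10 ** n)   # n digits at a time, from the right
        scaled //= 10 ** n
    groups.reverse()
    answer, remainder = 0, 0
    for g in groups:
        remainder = remainder * 10 ** n + g   # "bring down" the next group
        # Largest digit d with (10*answer + d)**n - (10*answer)**n <= remainder,
        # which is exactly the binomial sum described above.
        for d in range(9, -1, -1):
            if (10 * answer + d) ** n - (10 * answer) ** n <= remainder:
                break
        remainder -= (10 * answer + d) ** n - (10 * answer) ** n
        answer = 10 * answer + d
    return answer / 10 ** frac_groups

print(nth_root_by_hand(2, 2))   # ~1.414213
print(nth_root_by_hand(2, 3))   # ~1.259921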
I'm autistic too.
When I did my O levels in the 1970s, we used log tables (base 10 logs). It was easy to find a square root: find the log, divide by 2, and then look up the answer in the antilog tables.
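That log-table trick is easy to mimic in Python (math.log10 and the power operator standing in for the printed log and antilog tables):

import math

x = 156.25
sqrt_x = 10 ** (math.log10(x) / 2)   # find the log, halve it, take the antilog
print(sqrt_x)                        # 12.5, since 12.5**2 = 156.25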
When Noah's ark landed, and Noah had to get the animals to start breeding, he noticed the two snakes weren't getting the job done. He asked them why, and they said, "we're adders, we need logs to multiply".
So Noah cut down a tree, and sawed it into logs, and built a table for the snakes. And sure enough, the snakes started breeding like rabbits. Because even adders can multiply with a log table.
Fantastic ! (I am still scratching my head...)
In the 80's, I was the proud owner of a TI59, which was stolen, and after that, the PERFECTION: an HP 15c.
I still have it. Works perfectly, 40 years after.
God it was good to program complex things. I was so proud :)
I got an ad right after the log hit the calculator, and it cut off the bang sound in the most beautiful way, thank you for making my day even more magical
Underrated channel I’m glad i found, looking forward to more content. Good luck buddy!
I'm confused, hitting it with a log should've worked?
Rookie mistake. It only works if you hit it with an ln.
Ye gotta go into the woods and get an all naturally grown log
Unfortunately this method is flawed, since by the time the tree is a log, its roots are never to be found
All of us could produce a natural log 12 hours after a hearty meal
Yeah, I tried this in math class and got sent to the principals office, in addition to my calculator being broken.
When you were going through how to do a square root by hand, I realized that I was never taught how. I guess I never realized that until now. My whole life, calculators were available, and a scientific calculator, if not a graphing calculator, was required for any class where I would need to do roots! I'm sure more advanced math classes would probably have shown me how, but never at the levels I needed.
You actually got your calculator in a similar way to how I got mine. I lost my scientific calculator at some point in middle school and in my chemistry class, the one class I had at the time that needed a scientific calculator, a Ti-84 Plus was sitting in the lost and found. I used it for a week or two, always putting it back on the shelf just in case there was someone who lost it and was just sick for a bit. Soon enough I just figured that whoever lost it didn't care about it so decided to adopt it. It's still my main calculator that I'm using for my engineering degree.
Every “how does a calculator find XXX” video I’ve found is misinformation so far, and calculus teachers in high school telling kids “Yeah look, Taylor series are how trig values are calculated” plays a huge part in it
yeah, it's deeply frustrating. I know these algorithms exist, and they're neat, but I'm also pretty fuckin sure my pocket calculator doesn't have an infinite series kicking around back there. I want the real deal.
@@teddy4271 infinite series aren’t genuinely used for approximations nearly as much as people think, I notice. They’re mostly a really nice tool for things like proving Euler’s identity or deriving what it means to take “e to the power of a matrix” and stuff like that. They’re more useful when they’re seen as what they actually converge to in my opinion, not to approximate what they converge to
@@teddy4271 It's oversimplified, not a million miles off. :) The power of infinite series expansions is that they let you approximate solutions, by solving the first N terms until the error's acceptably small. It's an interesting topic actually, well worth reading into!
Different root-finding algorithms prioritise accuracy vs speed- but rather than solving to infinity, they all just iterate until the error's "good enough".
Embedded systems like calculators use lookup tables and interpolate between known values. It's lightweight and fast, but not _as_ accurate. The lookup tables were generated using slower, more accurate algorithms on more powerful computers.
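A toy version of that table-and-interpolate idea, assuming the argument has already been reduced to the interval [1, 2] (math.sqrt stands in here for the slower offline computation that would fill the table):

import math

TABLE_STEP = 0.01
TABLE = [math.sqrt(1.0 + i * TABLE_STEP) for i in range(101)]   # covers [1, 2]

def sqrt_lookup(x):
    # Linear interpolation between the two nearest table entries.
    i = min(int((x - 1.0) / TABLE_STEP), len(TABLE) - 2)
    frac = (x - 1.0) / TABLE_STEP - i
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

print(sqrt_lookup(1.2345), math.sqrt(1.2345))   # close, but not identical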
Taylor series aren't very computationally efficient, so it is uncommon that they are used in practice for a calculator. There are more efficient methods that ultimately are infinite series behind the scenes, evaluated to a level that is "good enough, for government work". The reason calculus teachers in high school say it's Taylor series, is that Taylor series are one possible way of doing it, even if it isn't computationally efficient. Taylor series are easier to explain at a high school / first year of college level, and are a stepping stone to getting you to an understanding of the actual algorithms a calculator uses.
I love that you collected all your calculators :)
I used the "antique" Casio's rival (from Sharp) during my school days in the 1990s. The modern cr*p removes part of the logical thinking about how to use the calculator. You just blindly enter the expression and let the machine do it for you.
My day-to-day calculator is an App on my phone (a scientific calculator). To make it more fun, I set it to use Polish notation.
@@gregwochlik9233 I should get a hp-41 emulator for my iPad. A great calculator.
Meanwhile as someone studying and designing computer architectures as a hobby I would just do successive approximation. (But I come from a completely different field with different priorities.)
If we search for the square root of X, then we take a "random" Y: is it larger or smaller than X when squared?
Initially we start out with a guess from a table, pick one larger value and one smaller value. (unless X is the value on the table, then we are lucky! This table greatly reduces the total amount of cycles needed.)
Then we pick the value between our currently closest pair of guesses thus far as our new guess for Y. Rinse and repeat and soon your number is finished. At 1 bit per cycle. (that can be improved. And our table has also devoured a good few bits at the start, so we have that too.)
Advantage of this is:
A. You only need multiplication and simple comparisons. This is quite fast in hardware, usually 1 cycle per multiplication in our loop, and comparisons are effectively free. (We usually implement this as a dedicated instruction, so only a small part of the core partakes in this process, and that helps it achieve 1 cycle per multiplication/iteration.)
B. The approach can also be used for floating point numbers. (it is a bit more tricky, but technically just a floating point multiplication, so nothing special, it is still fast.)
C. It can be used for cubic roots too, but it is slower, at half the speed.
D. It can be scaled to any arbitrary amount of bits. But as numbers get larger (100+ bits), other approaches starts to outpace this again.
Downsides of this method:
A. It is mainly fast for squares; cubes less so, since it then needs 2 multiplications (cycles!) per iteration... And as n in Y^n gets larger, the process gets linearly slower, i.e. abhorrent... (and we likely don't want to make a massive table to cover increasingly large values of n)
B. Forget any non-integer "n" for Y^n.
So the hardware architect's approach isn't the best for roots in general, but that wasn't the point.
But it is fast for what it can do, and most applications using roots mainly need square roots; doing cubic roots is also rather common for 3D applications. For anything more complex, other approaches to the problem quickly outpace this in speed, and doing things beyond a cubic root or working out non-integer roots is rather niche in computing in general. But some architectures/processors can/do have dedicated accelerators for this task.
In computer architecture design one often makes the compromise between functionality, performance and resource use. It is okay for a feature to have very limited functionality for what it is, as long as the functionality it gives is done efficiently and is used adequately often to warrant its inclusion. Efficiency not just being in regards to power, but also execution time and the die space it occupies.
(for this approach to computing roots, we can make a massive table for our initial guess, at the expense of large amounts of die space. Another method is to do multiple guesses in parallel and speed up the process generally at the cost of also increased die space usage and decreased power efficiency (since one makes more guesses than one otherwise would have done) but decreased execution time often repays that debt in overall better performance, so overall power efficiency can actually improve in the early stages of parallelization until diminishing returns starts kicking in.)
Now yes, a calculator wouldn't use this approach, since the downsides far outweigh the advantages; calculators often don't need tons of performance and most often work with more niche calculations.
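A bare-bones Python sketch of that successive-approximation idea, skipping the seed table the comment describes and just halving a bracket once per loop, roughly one result bit per iteration:

def sqrt_bisect(x, bits=32):
    # Keep a bracket [lo, hi] around the root; one multiply and one compare per step.
    lo, hi = 0.0, max(x, 1.0)   # this bracket contains sqrt(x) for any x >= 0
    for _ in range(bits):
        mid = (lo + hi) / 2
        if mid * mid <= x:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(sqrt_bisect(2.0))   # ~1.4142135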
Back around 1979 my parents gave me a Casio FX 501p as a Christmas present. It was a 12 digit programmable calculator with a 128 word memory and was pretty amazing for its day. Last year I stumbled on it in my parents' attic. Replaced the batteries and it actually still works. Bit slower than I remember, but a very intuitive machine. You could actually load programs onto cassette tapes using an adapter. Whilst I also found the adapter, even my parents don't have cassette tapes any more.
Take an initial guess of the square root of the number n. Let it be x. Now follow this iterative process.
x -> (1/2)*[x + (n / x)]
Try this code which implements this:
def sq_root(num, guess, tries):
    if num == 0:
        return 0.0
    for i in range(tries):
        guess = (1 / 2) * (guess + num / guess)   # Newton step: average guess and num/guess
    return guess
You couldn't use a Casio fx82, but you can explain all these mind-blowing equations....
I feel like a freaking genius, having used the Casio fx81 in high school.
5:00 at least you're honest about it. Some people will never admit to doing anything wrong
I am from India and we learnt this in class 8, and we had no calculators in our class. I am honestly surprised that when you got to the method for calculating by hand, you gave us the division method.
RTL design engineer here, and this is gold. Thank you very much
At exactly 6:14, when my man hit it with the actual log, an ad came up.
That timing was perfect
I was never taught any algorithm for calculating a square root, but the naive "guess and check" method seems to work almost as fast if you're good at multiplying and dividing on paper. It's just as accurate, but it gets computationally slow as you add more digits.
How to find a square root of X
1:48 we gonna find the largest squared number and *Square root it*
Those older Casio calculators were fantastic, so easy to use. The new ones are awful things, cannot get my head round them at all. This from someone who started using calculators in 1973 when they first started to appear.
As a life-long Texas Instruments user, it always fascinates me how different brands, and even different era within a brand, set their calculators up differently and how some run their calculations differently.
I have a degree in electrical engineering, emphasizing CPU design. I'm a statistician for a living. I've never seen the method you explained to get square roots by hand. I expected it to be more of the first several terms of an infinite Taylor series approximation.
Loved the video. It was more than I expected.
My mum did the exact same thing, she gave me her old TI-30 Galaxy from the mid 1980s when I started high school in 2008. I was gutted but got used to it and it got me through high school and I actually ended up buying a second hand replacement off ebay when it broke during my engineering degree. Wouldn't use any other
"let me just hit it with this log"
congrats, you got my like.
Ive been waiting for this video for 10 years
For the correct answer, look up CORDIC algorithm. With one set of logic gates you can use it to find trig, hyperbolic, square roots, multiplication, division, exponentials, and logarithms. To quote wiki:
"Decimal CORDIC became widely used in pocket calculators, most of which operate in binary-coded decimal (BCD) rather than binary. This change in the input and output format did not alter CORDIC's core calculation algorithms. CORDIC is particularly well-suited for handheld calculators, in which low cost - and thus low chip gate count - is much more important than speed."
Also, one of the reasons why calculators used BCD is that a number like 0.1 or 0.01 and its multiples does not have an exact binary representation, so before 32- and 64-bit floating point representation, floating point error would creep in over multiple operations.
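A quick illustration of that point, using Python's decimal module as a stand-in for a BCD-style decimal format:

from decimal import Decimal

print(0.1 + 0.2)                        # 0.30000000000000004 in binary floating point
print(Decimal('0.1') + Decimal('0.2'))  # exactly 0.3 in a decimal representation
print(Decimal(0.1))                     # what the binary double closest to 0.1 actually stores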
6:12 yooooo the calculator just dodged a log 💀
6:02 I'm now always gonna remember it when I hear log, lol. Thank you for the great cosy video
I was pretty sure Newton's approximation to the square root is used most of the time. Logarithms are indeed calculated by polynomial approximations. Trigonometric functions can either be approximated by polynomials or use the CORDIC rotations, the latter being favoured as it only involves sums and shifts.
I thought this was how calculators/computers calculated square roots:
function sqrt(x,p) {
h=1;
a=x;
for (let i=0;i
I also used my parent's 80s Casio calculator lol
Gave my Chemistry teacher a dose of nostalgia once when he saw it
I took a c++ class during undergrad and my professor asked us to write an algorithm to find the square root of *n* as long as it was a double data type (e.g. 128.4, 1.1, 9.23423) and we had to write the algorithm to print out the correct answer to at most 1 decimal place. This was also in the 2nd week of class. Only one person in the class knew how to do this method by hand, and nobody knew how to write it in code. We all got 0's and many of the students dropped the class after that day.
This guy's humor is on another level that I had never seen
bro the calculator struggle is so real
I prefer the BogoRoot algorithm. Randomly generate a number, square it, is it equal to the number in question? If no, randomly generate another, repeat until successful. In the best case scenario it’s O(1) runtime
I was not expecting that outro; loved it nonetheless
note: I made this comment before watching the video
here is a very fast method that is easy to program
firstly, what is a square root? It's a number a>=0 where a^2=x, i.e. a^2-x=0, so we can approximate the root of this equation using the Newton-Raphson method. We just need to compute the derivative of the function x^2, which is
2x. Now we just need to determine our first guess; I will take 2 because it is always closer to a positive root (like any positive real number, but I don't care). Now just make it into a python program:
def square(x):
    return x * x

def dsquare(x):
    return 2 * x

def sqrt(x, iterations=100):
    y = 2   # initial guess
    for _ in range(iterations):
        y = y - (square(y) - x) / dsquare(y)   # Newton-Raphson step for y**2 - x = 0
    return y

print(sqrt(2))   # ~1.4142135623730951
The log one hit me so unexpectedly well 😂
When I was in college the Intel 386 CPUs were still the workhorse of computing (yeah, I'm old), and we were taking an Assembly language course. Our project was to create a very simple calculator like the basic one that Windows has. Our professor wanted it to do the basic + - * / operations, I think some statistical stuff, and square roots. And I remember that, since we had to do everything using the CPU registers, this algorithm of separating everything into two digits was also what we used to compute the root. I didn't even know that roots could be computed by hand that way, similar to long division. It was very interesting.
In the 1980s and 1990s I had a CASIO FX 82 calculator with just one line. It was better than the TI computer the teacher had.
The first FX 82 still had two AA cells. Later these became more energy efficient and had a solar cell. Mine also has a CR927 cell that lasts 20 years.
hey that's my calculator in the thumbnail, i love that thing, it's hella intuitive
A perfect ending to a messy video !
For your calculator case: guess → memory. input ÷ memory. Add to the original guess. Divide by 2. Next guess → memory.... If you guess close, this will converge in 2 iterations.
Why do these types of videos only get suggested to me after 1 am? It's far too late for me to be learning math, yet here I am once again 😂
i relate with how it felt being one of the only kids without a scientific calculator lmao
And here I was, just thinking it used the good old brute force method! Neat!
lol, your description of our beloved Casio calculators is quite funny - only seeing one number on the screen is never a problem and the iPhone's calculator TODAY is still EXACTLY like our 1980s Casios. I'm stunned your teachers cannot use them.
There is a trick that works if the number you take is a simple one that you can enter a few times. First make an estimate. This can be rather bad, it does not hurt. Put the estimate in the calculator memory, then repeat keystrokes: nnn / / 2 where nnn is the number you take the square root from. This also is self correcting, so if you make a mistake, it just slows you down. What this does is divide the number by your estimate and then average the result with your estimate to get a new estimate.
But even without that it is simple to get an estimate with guess and verify. I could easily get 12.49.
The method you describe is Newton's method. The problems are knowing when to stop the iterations & how accurate the answer is. If you are using a "4 banger" calculator, this is the best method. But if you are programming a computer, the binary scheme I describe in a recent comment is much better, as only bit shifts & integer subtractions are required. There is no question when to stop (when you have enough digits) & no question about the accuracy of the result (it is accurate to the number of digits you calculated).
@@bpark10001 You know when to stop when the result does not change anymore.
@@okaro6595 No you don't, because the result never stops changing! You have not considered the consequences of real numerical calculations, which are done in limited precision. The result can "bobble" or "circle", leaving you trapped in a routine that never exits. If you are doing it by hand on a calculator, then you can manually stop. But if you are writing a routine to perform this, it can be more complex to determine the endpoint than the calculation itself!
@@okaro6595 I figured this method out at the tender age of 14 and I had no idea what Newton's method was. I divided 9 by 2 and got 4.5. The average is 3.25, so I divide 9 by 3.25 and repeat. My key insight was that when you divide a number by its square root, you get the square root. Divide by something smaller and you get something larger, and vice versa. I still feel very proud of this achievement.
In very ancient times I used a slide rule. The fun part was you could get a pretty good estimate by just looking at the number on one scale, moving your eyes up or down to the other scale and then moving the decimal.
Taking a square root in decimal is rather awkward, but doing the same thing in binary is extremely easy. When a number in a register is shifted left one bit it is doubled. By iterating between shift-left and subtract, you can easily perform the equivalent of a manual square rooting algorithm. This is the first program I ever wrote. I used IBM 360 Assembler Language.
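For anyone curious, here is roughly what that shift-and-subtract square root looks like in Python (integer-only; the function name is mine, and scaling the input by an even power of ten is how you would get decimal places):

def isqrt_shift_subtract(n):
    # Digit-by-digit square root in base 2: only shifts, add/subtract and compares.
    root, bit = 0, 0
    if n > 0:
        bit = 1 << ((n.bit_length() - 1) & ~1)   # highest power of 4 <= n
    while bit:
        if n >= root + bit:
            n -= root + bit
            root = (root >> 1) + bit
        else:
            root >>= 1
        bit >>= 2
    return root

print(isqrt_shift_subtract(869_200_000))   # 29482, i.e. sqrt(869.2) ~ 29.482 after scaling by 10**6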
Liked and subscribed at the moment you committed to the bit and actually log smashed the calculator 6:15
Brilliant. I used to work for the Canon calculator agents in New Zealand in the days when calculators were constructed from discrete RTL logic. Not only did they have circuit diagrams, but they had a theory of operation and they explained in detail how these calculations and the rest of the processing were done. I am so annoyed with myself that I never kept one, as I have never found any book that goes into the same detail. If anybody knows of one, please say; I would love to get one.
You can get better values from the Taylor approximation if you express the number as A*2^n where you make A be a number on the interval (1/2, 1] instead of (1,2). It just amounts to getting a power of 2 that's 1 larger than the (1,2)-interval approach. So in your last example, rather than get 563 = 1.0996 * 2^9, you get 563 = 0.5498*2^10, and the Taylor series converges much more smoothly and quickly around those values than near 1
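For what it's worth, Python's math.frexp already hands back the mantissa on [0.5, 1), so it is easy to check that split:

import math

m, e = math.frexp(563.0)
print(m, e)   # 0.5498046875 10, i.e. 563 = 0.5498... * 2**10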
I love the Compucorp 142 Statistician, really easy to use for its power in my opinion
Damn. We didn't get to use that in our classes in Asia. We just did it by hand, like real mathematicians. I don't even know how to use this scientific calculator.
Bro I have two Casio fx-300ES PLUS calculators and I take both of them literally everywhere. I got the habit from taking Honors Chemistry and it actually has helped me out way too many times to count. Shop teacher needs people to go calculate the sum of all the square footage of the CTE course rooms? I've got you. Need to calculate the gravitational attraction between you and the school principal during lunch? (he had a significant magnitude) Boom, been there done that. Accidentally forget your calculator in another class? No problem, you have a second calculator.
This is extraordinarily well explained. Thank you
I used a Casio fx-85GT Plus through my GCSEs, A-levels, and uni. I wrote a few trigonometry formulas on its back and inside case with a permanent marker, and it helped me with exams in all three stages of education, lol. Exam restrictions were still somewhat lax, so I got away with it undetected. I still have it, it still works, and the formulas are still legible lol.
Oh wow! That's cool! I've actually always wondered that.
How cool! Thank you for the explanation.
RIP headphone users when he tries the log method.
"Maybe if I just hit it with this log (bonk)" I was not expecting that. xD
1) The integral used to find the square root has a square root.
2) Many calculators use BCD, not binary to represent numbers
There's a faster approximation that can be made by hand.
What's the biggest perfect square that falls below your number, and the smallest perfect square above it? That's 841 (29²) and 900 (30²).
Now the answer is 29 + a fraction.
The fraction's numerator is the difference between your number and the biggest square number smaller than yours. This is 869.2-841 = 28.2.
Now divide it by the difference between the two square numbers. This is 900-841 = 59.
Then the approximation is 29+(28.2/59) = 29.4780...
The real value is 29.4821...
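The same interpolation between neighbouring perfect squares is easy to write in Python (the helper name is made up):

import math

def sqrt_between_squares(x):
    lo = math.isqrt(int(x))                 # floor of the square root
    lo_sq, hi_sq = lo * lo, (lo + 1) ** 2   # the two bracketing perfect squares
    return lo + (x - lo_sq) / (hi_sq - lo_sq)

print(sqrt_between_squares(869.2))   # ~29.478, vs the true value ~29.482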
I wonder how they found out that fancy way of calculating the square root. Like what the hell-
You can calculate square roots... BY HAND?? Why is today the first time I'm hearing about this?
How the hell do they calculate exponents!!
Computers are more powerful than humans, so I guess they just do it the normal way but faster than us.
I had this question in my mind for years, until I learned about Taylor series; it was genuinely the most eye-opening thing I had ever seen in mathematics.
A Taylor series pretty much approaches a function with an infinite sum, with every added term giving a more accurate value for the function.
@@shanggosteen9804 except it's not how computers usually do that in that case. I think Mathemaniac and Oscar Veilz made videos about it. I agree that Taylor series are awesome, though.
@@jurel-enlatado1 You *can* use the Taylor series expansion for exp(x) to compute that. You don't need to raise anything to non-integer powers, and raising to integer powers can be done by repeated multiplication. Factorials don't need anything beyond multiplication of whole numbers either.
Following the Taylor series expansion can also help make sense of more exotic things, like e^matrix, or operators.
The fact that you *can* do it doesn't mean you should, though. Purely from a practical point of view, there are more efficient ways to compute exp(x)
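For completeness, a small sketch of that Taylor-series exp using only running multiplications (the function name and the term count of 30 are mine); combined with a log it gives a^x = exp(x*ln a), as a later reply notes:

def exp_taylor(x, terms=30):
    # exp(x) = sum of x**k / k!, built by multiplying the previous term by x/k.
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= x / k
        total += term
    return total

print(exp_taylor(1.0))   # ~2.718281828459045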
@@jurel-enlatado1 I think there's a typo
exp(x*ln(a)) = exp(ln(a^x)) = a^x. Not x^a
Going into so much detail of how to approximate ln but glossing over how it exponentiates?
You could sqrt(3), for example. / If the solution is not an integer you cannot technically calculate it but only estimate it!!
0) Squaring n is the same as n*n
1) Go from 1 towards infinity, squaring each whole number, until you get a result bigger than or equal to n (3).
2) If equal, you have a solution.
3) If bigger, you calculate an arithmetic mean between that number and the biggest smaller number checked.
0) Arithmetic mean, for rationals at least, where p>o: (o+p)/2=o/2+p/2=o/2+((p-o)+o)/2=o/2+o/2+(p-o)/2=o+(p-o)/2
0) Tip: If available, use calculator memory.
I used to do trigonometry all day, being an engineer; this was before calculators were invented. We used eight-figure logs.
When I was in 7th grade I found a book that showed how to do square roots like this (your first example). The book was from the 60s - before pocket calculators were a thing.
Ncert?
@@MahiPriyaPrasad NCERT wasn't from the 60s, and he is probably not from India..
Slide rules and logarithmic tables!
"What is the largest square number that is less than 8?"
Might want to explain what the definition of a square number is.