I had heard that phi (the golden ratio) is the "most irrational" number. But this exercise helps me see why: in approximating irrational numbers with continued fractions, larger numbers make a better approximation-so the irrational number hardest to approximate with rationals would be the one with all 1's in the continued fraction, which is phi.
And yet it's still an algebraic number, as a root of x^2-x-1. Whereas numbers like pi and e aren't roots of any polynomial with integer coefficients.
@@psymar yeah i would definitely consider those transcendental numbers to be "more irrational"
@@hyperpsych6483 I'd argue there's a pretty clear distinction to be made between algebraic and analytic properties of rational numbers. The algebraic closure of Q is just the algebraic numbers (over C, but it's an easy reduction to over R), while the metric completion (i.e., take every limit of a Cauchy sequence under the absolute value on Q) is R. It happens that Q is a strict subset of the algebraic numbers, which in turn are a strict subset of the reals (e.g. 1, sqrt 2, pi).
They're definitely related (they're formed from the same number system), but I'd say that it's possible for an algebraic number to be more irrational than a transcendental number.
@@psymar along that line of reasoning, I interpret rational numbers as solutions to some linear polynomial over the integers. That is, for any rational number x = p/q, it is a solution to the polynomial
qx - p = 0
The numbers that cannot be expressed in this form are called irrational, but there are degrees to this. For example, x = √2 is an irrational number, but it's a solution to a degree 2 polynomial:
x² - 2 = 0
So on and so forth for higher and higher degrees.
In this sense, transcendental numbers-which are never a solution to any polynomial of any degree-are "more irrational" than irrational numbers that are still solutions to degree n polynomials.
For this continued-fraction definition, it pretty much hinges on the statement "larger numbers make better approximations", but I fail to see on what grounds this is true or what the underlying logic for it is. Based on that, I don't really agree with the conclusion by OP. If someone could elaborate on why larger numbers are better or worse, and by what metric this is achieved or measured, that would be appreciated.
@@ffc1a28c7 I'd put it more simply thus: by "most irrational" we mean "hardest to approximate as a *ratio* of two integers". 22/7, for example, is a pretty decent approximation of π given its small denominator - no equally close n/m approximation exists for φ despite its algebraicity (if that's a word).
[Edit since that may be unclear - obviously there are closer rational approximations if you make m larger, the point is that you can't get the same accuracy with similar values for m. E.g., 34/21 is about as accurate for φ as 22/7 is for π, but we needed a denominator 3x as large to get there.]
Your point's well taken that it helps to make it clear what the basis for comparison is. I would define transcendentality to be a property additional to, but separate from, irrationality, not as "more irrational than the irrationals"
for a slightly more concrete sense of scale: this approximation is better than 3.14159 is an approximation for π
edit: my bad, that's just barely wrong. but it's right if i'd said 3.1416 ;)
That’s helpful!
This is a neat way to explain it!
It also uses more digits to express it than 3.14159 though, so how surprising is that...?
A better approximation for pi is 355/113 where the next term in the continued fraction is 1/292.
Whereas the next term in this continued fraction is 1/4813 so it’s an even better approximation.
Am I missing something? The relative error of pi and 3.14159 is 0.00008446%, which is less than the approximation in the video. Wouldn't it be more accurate?
Hi Dr. Barker!
This is really nice! I love being able to follow along every step.
I'm impressed! I worked this out, very carefully, 3 times on my Hemmi 260 slide rule; and the hairline lands exactly over 10 every time. That's insanely good as an approximation.
Now on to watch the video.
Nice, this must have been a very satisfying way of verifying the approximation!
This 80 year old is impressed that ANYONE can still use a slide rule. Kudos!
@@jamesorr6537 I not only can - I keep getting better. I'm still discovering tricks I wish I'd known 50 years ago.
For some bizarre unknown reason, I find this video rather humourous
Engineers: They're equal
Programmers: They're equal
Mate, don't get me wrong: I love math, yet your soothing voice and posh English accent are perfect for falling asleep to
Oh, the problems of native speakers xD
by far the most underrated creator on this platform
Great video! Note: this process for obtaining the integers 431 and 510 is essentially what is happening under the hood if one runs (in R): MASS::fractions(log(7, base = 10))
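For anyone without R to hand, here is a rough Python analogue - not the video's code, just a minimal sketch using the standard library - that does the same continued-fraction search via Fraction.limit_denominator:

```python
from fractions import Fraction
from math import log10

# Rough analogue of MASS::fractions(log(7, base = 10)) in R.
# limit_denominator() walks the continued-fraction convergents internally
# and returns the closest fraction with denominator at most the given bound.
x = log10(7)
best = Fraction(x).limit_denominator(1000)
print(best)               # 431/510
print(float(best) - x)    # tiny residual, about -8e-10
```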
Bros back with the bangers
I thought you were going to introduce Dirichlet's approximation theorem before you introduced continued fractions. We can always find n,m such that | \log_{10}(7) - n/m | < 1/m^2.
Thank you so much for this approximation! I just needed it for work today. /s
Thank you! I never really knew how these approximations were done ❤
With arbitrary bases x and y, y^m ~ x^n -> n/m = ln(y)/ln(x) -> finite continued fractions -> (a/b)/(c/d) -> ad/bc. Reversed: (a,b,c,d) -> ad/bc -> (a/b)/(c/d) -> ln(y)/ln(x) == n/m -> y ~ x^n with m = 1.
You are missing grouping symbols around the denominator: ad/(bc).
@@robertveith6383 Yep. Math speculating late at night can lead to that. 😉
@@robertveith6383 this is pedantic and literally *does not matter*. As in, this is standard notation in mathematical research papers. Every single person with half a brain knows that ad/bc means (ad)/(bc). If you want to say (ad/b)c, you write (adc/b).
Take a shot whenever you hear 'one over'.
The way mathematics is taught in school is nowhere close to how it should be taught. Schools neither help develop intuition for finding underlying patterns, nor teach analysis.
If someone (who graduated a long time ago) wants to learn mathematics all over again in the proper way, what should be the learning path? Which books might help in this journey?
Please share your view.
I'd love to know too, I guess videos like this and others are a good start, but I'd love some more you know?
If you've ever wondered why a kilobyte is 1024 bytes and not 1000, part of the explanation lies in the fact that log_10(2) is just a fraction of a hair over 3/10.
technically, a kilobyte is 1000 bytes, and a kibibyte is 1024 bytes
@@elliottsampson1454 I was going to bring that up, but I will just say this - I've been in IT for almost 30 years now and I have heard someone say "kibibyte" out loud exactly *once*.
@@elliottsampson1454 I'm an engineer, and I can assure you that in Italy they taught us the opposite. I was quite astonished once I realized it
A kilobyte (kibibyte) is 1024 bytes because 1024 is a power of 2 -> (2^10) which matters because computers operate in base 2 at the lowest level, and also it is close to 1000 (kilo = 1000)
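A one-line sanity check of that "hair over 3/10" claim, as a minimal Python sketch:

```python
from math import log10

# log10(2) is just barely above 3/10, which is why 2^10 = 1024 lands just above 10^3 = 1000.
print(log10(2))           # 0.30102999566398...
print(log10(2) - 0.3)     # ~0.00103, the "fraction of a hair"
print(2**10, 10**3)       # 1024 1000
```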
I LOVE CONTINUED FRACTIONS PLEASE PLEASE PLEASE MAKE A VIDEO EXPLAINING THEM. Subbed liked notif onned.
2*7^2~=10^2
sqrt(2)~=10/7
So it's just by chance that you happened to get 6.999, a number so close to a whole number, in the continued fraction? And that's why this particular approximation (for n and m) is so accurate and convenient, correct?
Is there a way to predict/calculate the error between the exponents (0.00009%) from how "deep" the continued fraction goes, or in other words, from how good of an approximation the continued fraction is? Or do you just have to brute-force calculate it, or maybe there's some other method?
You can stop at any point and you'll have a pretty good approximation, but generally you'll have a better approximation if you cut it off at a larger number, because a large number appearing in the continued fraction means the previous number was very close to being an integer.
According to Wolfram|Alpha, the continued fraction of log10(7) goes like [0; 1, 5, 2, 5, 6, 1, 4813, 1, 1, 2, 2, 2, 1, 1, 1, 6, 5, 1, 83, 7, 2, 1, 1, 1, 8, 5, 21, 1, ...]
That 4813 is a _very_ big number, so if we cut it off there we get [0; 1, 5, 2, 5, 6, 1], or [0; 1, 5, 2, 5, 7], which is the fraction shown in the video, and it's a very good approximation for its size.
You asked if you can predict how much error there will be, and this is a _very rough_ estimate, but... our fraction 431/510 uses 3-digit numbers, and we cut off the continued fraction at 4813 which is a 4-digit number, add those together and you get 7 digits, which is roughly how accurate our approximation is. This is a very loose way of measuring it, but it'll get you in the ballpark at least.
For another, perhaps more familiar example, the continued fraction of pi starts out like [3; 7, 15, 1, 292, 1, 1, 1, ...]
If we cut that off at the 15, making it just [3; 7], this gives us the classic approximation 22/7, which is pretty close
But if we cut off at the 292, being a much larger number, we get the famous 355/113, which is similarly a very good approximation for its size.
It's not a guarantee that if you pick any particular number, it will eventually have a nice large number to cut off the fraction... for example, the golden ratio is sometimes called the "least-rational number" because its continued fraction is just [1; 1, 1, 1, 1, ...], so there are no large numbers to be found.
I suspect that whoever came up with this example tried a few different bases, and picked 7 and 10 because log10(7) had this nice cutoff point.
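If anyone wants to reproduce those terms, here's a minimal sketch (my own, not from the video) that peels off continued-fraction terms of log10(7) and builds the convergents; double precision only carries it reliably this far, but that's enough to see 431/510 appear right before the 4813:

```python
from math import log10

def cf_terms(x, count):
    """First `count` continued-fraction terms of x (limited by float precision)."""
    terms = []
    for _ in range(count):
        a = int(x)              # next term is the integer part
        terms.append(a)
        frac = x - a
        if frac < 1e-12:        # numerically an integer: stop
            break
        x = 1 / frac            # continue with the reciprocal of the fractional part
    return terms

def convergents(terms):
    """Successive convergents p/q built from the continued-fraction terms."""
    p_prev, q_prev, p, q = 1, 0, terms[0], 1
    out = [(p, q)]
    for a in terms[1:]:
        p, p_prev = a * p + p_prev, p
        q, q_prev = a * q + q_prev, q
        out.append((p, q))
    return out

terms = cf_terms(log10(7), 8)
print(terms)                # [0, 1, 5, 2, 5, 6, 1, 4813]
print(convergents(terms))   # ..., (431, 510), then the huge next convergent
```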
The continued fraction gives the best approximation among the fractions with the same or smaller denominator. For an estimate on how the error evolves when increasing the number of terms, search for Lochs' Theorem. Basically, the error follows an exponential decay when increasing the number of terms of the continued fraction, and it gives roughly 1 more digit of precision for each term (just as the rational approximation produced by chopping the decimal expansion, but the continued fraction achieves much smaller denominators than that). It's curious that in a sense the worst case is achieved by the golden ratio, which requires around 2.39 terms per decimal place.
The error in the ratios a/b that emerge from continued fractions goes like 1/b², whereas for a/b with arbitrary b and optimal corresponding a it goes like 1/b. So you have twice the number of correct decimals compared to what you would expect for an arbitrary denominator.
It is by chance that it is so close to a whole number, but you also choose to stop when you find that whole number, since you know the next error term would be very small. Unless you're using this method to calculate phi, where you just keep choosing the largest possible error term each time.
2^19 ≈ 3^12, this near-equality is the Pythagorean comma.
Another one: 2^84 ≈ 3^53, the Mercator comma.
Slightly altering the “3” to make these into true equalities yields the musical tuning systems 12-TET and 53-TET.
Don't forget 2^24 being close to 4^12 as well, ha
Why do we need a 19 to get 12-TET? If we are using true equalities, we just calculate the 12th root of 2?
@@zzzaphod8507 What I mean is in these temperaments, the “3” is altered to the nearby 2^(19/12) or 2^(84/53).
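A quick numerical check of both commas and the altered "3"s - just a sketch, nothing from the video:

```python
# Pythagorean comma: how far 3^12 overshoots 2^19.
print(3**12 / 2**19)              # 1.01364... (about 23.5 cents)
# Mercator comma: 3^53 vs 2^84.
print(3**53 / 2**84)              # 1.002... (about 3.6 cents)
# In 12-TET and 53-TET the "3" is replaced by these nearby powers of 2.
print(2**(19/12), 2**(84/53))     # 2.9966... and 2.9998..., both close to 3
```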
No, their *ratios* can be good approximations to 1.
I still can't believe you only have 14 thousand subscribers
This reminds me of the classic 2^10 ≈ 10^3 which we see in data sizes
No way, the e05 TAS guy
Trackmania nations forever, and Dr. Barker's youtube channel. The collision of these worlds is something i was not prepared for
My calculator overflows with numbers that big. I had to use the properties of logs and exponents to figure out how good the approximation is.
To get a sense of how good this approximation is on your calculator you can take the base 10 logarithm of both sides of the equation and use the log rule that log(a^b)=b*log(a). So you do 510 * log (7) and you’ll get 431.0000004… so this is a very good approximation.
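In Python both routes are easy to try: the log check above, and, since Python integers never overflow, the exact relative error as well (a minimal sketch, not anyone's posted code):

```python
from math import log10

# Log route: 510 * log10(7) should land just a hair above 431.
print(510 * log10(7))             # 431.0000004...

# Exact route: Python's unbounded integers can hold 7^510 and 10^431 directly.
a, b = 7**510, 10**431
print(len(str(a)), len(str(b)))   # both have 432 digits
print((a - b) / b)                # relative error, roughly 9.4e-7
```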
Awesome video. Thank you for sharing :)
This is neat,
but in the background, how is your calculator calculating log(7)?
Won't it be using a Taylor series or something that would more directly get you a fractional approximation?
7² = 49 ≈ 10²/2
2¹⁰ = 1024 ≈ 10³
7²⁰ ≈ 10²⁰ / 2¹⁰ ≈ 10²⁰ / 10³ = 10¹⁷
7⁵¹⁰ = (7²⁰)^25.5 ≈ (10¹⁷)^25.5 = 10^433.5
Too far: 7⁵¹⁰ has 432 digits and 10^433.5 has 434, which makes that estimate roughly 300 times too big
Wow, what a great video - a really interesting idea, delivered masterfully
Thank you!
i will remember this next time i need to approximate how much a 1 with 431 zeros is
Are continued fractions taught in A-Level maths?
If they are not, then why aren't they?
Very interesting!
Just for fun I wrote a script that finds the best rational approximation for log(7) (using a bisection method, given a number of iterations), and 431/510 is incredibly good compared to the other first thousand iterations.
Also my program found that 60175773 / 71205671 could be an even better approximation!
I could be wrong, as closer fractions require more precision and I don't really know how to work with variable precision in Python, so if anyone knows whether this is actually correct, let me know :)
use the decimal library for higher precision, just set decimal.getcontext().prec or something
if you switch your numbers to Decimal you get arbitrarily high (configurable) precision
You can use the Decimal library for more precision, or rewrite +-/* to work with ints where python has unbounded precision
Using Wolfram Alpha, I found that your fraction gives a relative error of about 1.6e-8. It may be the first fraction that gives a smaller relative error for 10^m - 7^n, compared to 431/510. There are smaller fractions than yours that are a better approximation to log(7) than 431/510, but the relative error for 10^m - 7^n in at least a couple cases was much greater than for 431/510.
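One way to settle the precision worry is the standard decimal module; here's a hedged sketch (the larger fraction is the one proposed above, taken at face value) comparing how close each candidate is to log10(7):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60                 # plenty of digits for this comparison
log7 = Decimal(7).log10()              # high-precision log10(7)

for p, q in [(431, 510), (60175773, 71205671)]:
    err = abs(log7 - Decimal(p) / Decimal(q))
    print(f"{p}/{q}: |error| = {err:.3E}")
```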
2^(46÷46)=2, 2^(73÷46)≈3, 2^(107÷46)≈5, 2^(129÷46)≈7, 2^(159÷46)≈11, 2^(170÷46)≈13, 2^(188÷46)≈17
First seven primes in powers of 2^(1÷46)
There is a somewhat important omission in this video that can be a bit misleading - we should be trying to make m as big as possible, as long as n/m is a convergent (a continued fraction approximation) of log_10(7).
Continued fractions are absolutely overpowered as approximations to irrational numbers, as they give us "the most bang per buck" most of the time in return for making the denominator as big as possible. What this means is that quite often (at least one in three) approximations by continued fractions will behave in the following way: if n/m is our continued fraction approximation, then our error is no more than 1/(sqrt(5)m^2). This is a theorem due to Hurwitz (7.17 in 5th edition of Number Theory by Niven, et al).
What this means for our problem is that by making m gigantic, we have good odds to improve on the value of k, because now log(k)/m is smaller than 1/(sqrt(5)m^2). In other words, log(k) is no bigger in size than 1/(sqrt(5)m), and by a series approximation to the log, we can approximate k to be 1 +/- 1/(sqrt(5)m).
As an example, if we take the first 44 terms in the continued fraction expansion of log(7), we will get k to be 5e-9 away from 1, as opposed to 9e-7. So making m bigger, at least one third of the time will result in a substantial increase in the "accuracy" of these approximations. And by taking only 42, we can make our approximation better to get 4e-10 as an error.
The two exponents in that case are 7^3674335653184836132224 and 10^3105173858861009154451
Yes I had to go back and rewatch that bit as well. He must have meant log10(k)/m small presumably?
The "m small" part was so strange I couldn't continue watching the video at that point
My understanding is, yes we want m small, but not "as small as possible", rather, "no larger than necessary". We're looking for a value of m that's reasonably easy to calculate with by hand, while providing as much accuracy as possible within that limitation - the most "bang for your buck". This is the point of using a continued fraction, which guarantees that efficiency, as opposed to just chopping off the decimal at some arbitrary place.
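For a concrete feel for the Hurwitz-style bound mentioned a few comments up, here's a minimal sketch (my own code) checking it for the convergent 431/510 from the video:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
log7 = Decimal(7).log10()

# Hurwitz-type bound: for (at least one in three) convergents n/m,
# |log10(7) - n/m| < 1 / (sqrt(5) * m^2). Check it for n/m = 431/510.
n, m = 431, 510
err = abs(log7 - Decimal(n) / Decimal(m))
bound = 1 / (Decimal(5).sqrt() * m * m)
print(err)            # about 8e-10
print(bound)          # about 1.7e-6
print(err < bound)    # True (because the next CF term, 4813, is so large)
```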
1024 = 2^10
1000 = 10^3
So approx 2=10^0.3
0.7 approx= sqrt(2)/2 = 2^-0.5
So 0.7 = 10^-0.15 approx
7 = 10*0.7 = 10^0.85 approx
7^510 = 10^433 approx
I mean, 7 = 10^(431/510) is superior, very precise, especially as it uses just 3-digit numbers.
What I showed is just a quick, simple approximation which does not require a calculator
2^(41÷41)=2, 2^(65÷41)≈3, 2^(95÷41)≈5, 2^(115÷41)≈7, 2^(142÷41)≈11, 2^(152÷41)≈13
First six primes in powers of 2^(1÷41)
I know you want to keep m small, but I can't see why the continued fraction technique will give you a better approximation than taking the first 3 or 6 or 9 decimal places and hoping something might cancel down. And do continued fractions always work better than a regular approximation using powers of 10?
It's a well-known theorem on rational approximations which roughly says that, if you fix a denominator bound m and an irrational number k, the minimal absolute error over all fractions with denominator less than or equal to m is achieved by cutting the continued fraction expansion of k at the right point (these truncations are called "convergents"). See for example https://sites.math.rutgers.edu/~sk1233/courses/ANT-F14/lec2.pdf
They always work better indeed.
Approximating a random number by a fraction a/b by choosing some b and then optimizing for a gives an error of order 1/b, whereas continued fractions have error of only order 1/b².
So you get better approximations, or can pick small denominators.
He mentioned in the video, though only briefly, that these continued fraction approximations provably give the closest approximation without having to increase the denominator (And will often be closer than those even with greater denominators). As such, they're the best way to generate increasingly fine rational approximations.
For example, the commonly known sequence of approximations for pi (3, 22/7, 333/106, 355/113, ...) is generated through this same process. Note that if we look specifically at 22/7 (roughly 3.142857), it's significantly more accurate than if we had resorted to decimal truncations: 31/10 (i.e. 3.1) and even 314/100 (i.e. 3.14) are both further off, and 22/7 has the added benefit of keeping the denominator as small as possible. If we look at the last one I listed, 355/113 (roughly 3.14159292), the decimal approximation doesn't get closer until 3.1415926, i.e. 31415926/10000000.
I hope this helps clear up why he took that approach.
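To put numbers on that comparison, a quick sketch in the same spirit as the comment above:

```python
from math import pi

# Convergents of pi vs decimal truncations of comparable size.
candidates = [("22/7", 22/7), ("3.14", 3.14),
              ("355/113", 355/113), ("3.141592", 3.141592)]
for label, value in candidates:
    print(f"{label:>10}: error = {abs(pi - value):.2e}")
# 22/7 beats 3.14 despite the smaller denominator, and 355/113
# beats the 6-decimal truncation 3.141592 by a wide margin.
```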
Feels like cheating to use the decimal expansion of log 7. Is there any way to pull this approximation out of the ether?
Slightly different route: 7^510 = (7^2)^255 ~ (100/2)^255 = 10^510 / 2^255 = 10^510 / ((2^10)^25 * 32) ~ 10^510 / ((10^3)^25 * 32) ~ 10^(510-75) / (100/3) = 10^435 / (100/3) = 3 * 10^433
Approximations used:
7^2 ~ 50 = 100/2
2^10 ~ 10^3
32 ~ 100/3
The video's approximation is way closer than your derivation implies. You're accidentally relying on almost all of the error in your approximations miraculously cancelling out. Similar approaches would be expected to get something like 3-5% error, as opposed to the one part in ten million or so error (all of this is estimated).
Yes, 7^510 ≈ 10^431 is a great find.
However, consider that:
7^303 ≈ 46^154 is about 5 times better
65^1942 ≈ 3^7379 about 138 times better
13^54353 ≈ 2^201130, about 1548 times better
and 75^95792 ≈ 2^596671, about 30380 times better.
and by better, I mean that the 2 results have an error ratio that many times smaller.
Furthermore,
29^863859 ≈ 37^805576 is probably about 1476400 times better.
In fact, 29^863859 is so close to 37^805576 that Excel can't even tell the difference.
And how to calculate the exact difference between these numbers?
93777653550411595210802362918755003975459590643380209104661183007174725629025164267575411351038190653865785147280048260730260174481888634961017465832153767083905055973115171463147305117967991362274707998617076906864856409342232579989850809934789297204951088829488795931267365054891926973026731194699108560193240351188856787007789047064042449190113938271461053770539111553700620401009514948932630689691333216918712082217175249
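With Python's arbitrary-precision integers, getting that exact difference is a one-liner (a minimal sketch, not anyone's posted code):

```python
# Exact difference between 7^510 and 10^431 using Python's unbounded integers.
diff = 7**510 - 10**431
print(len(str(diff)))     # 425 digits, i.e. about 9.38 * 10^424
print(str(diff)[:20])     # 93777653550411595210, matching the digits quoted above
```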
@@michaeljones1686 Can it be factorized? 😂
@@ПавелКуликов-м9м Probably not in any reasonable amount of time with current algorithms unless there's a specialized trick
@@ПавелКуликов-м9м why dont you check
@@ПавелКуликов-м9м Possibly? It's definitely 3*3*37*something, but that something has no prime factor less than 200000000...
Why is he writing the decimal point in the middle of the number height, like multiplication? Confusing and weird.
That's how many Europeans do it.
When I saw the title I assumed this was a non-calculator exercise. Starting from 2^10 x (7^2)^10 =! (10^2)^10 (where the symbol =! means approximately equals) yields 7^20 =! 10^17... And 7^500 =! 10^425, hence 7^510 =! 10^425 x 7^10. Now I am stuck, and the approximation is not very good anyway (cumulative errors). Is there a better (non-calculator) way?
i had a small panic attack watching this.....i kept seeing this presentable young gentleman who is a mathematical prodigy with a ph d but he can't find a way to keep his own hair out of his eyes .... while writing logarithmic puzzles about one plus zero point one eight three dot dot dot and the reciprocal of n over m to the k and log to the base 10 over seven but can't find a comb to reveal his eyebrows to the world
Truly this video seems useless. But I like this very much, awesome! I don’t know why.
Beautiful
2^(12÷12)=2, 2^(19÷12)≈3, 2^(28÷12)≈5
wow, very impressive
So a close percentage means that their first digits are the same. Thus 7⁵¹⁰ = 100000....49
Yep, about 1000000938...
Are they best consecutive rational approximations?
What do you mean by consecutive in that context?
@@dmondot sequential
Pyrocynical?
Could you have tried another way, without logs, decimals, etc.?!
Leet approximation: π ≈ ∛62602 - √1337.
Ridiculous approximation: π ≈ root5(7/10) *10 - ∛√55166. Also my favourite: π ≈ ∜44445 - ∛1473. 😂
*@ Dr Barker* -- I disagree with you. These numbers are far off from each other on an absolute scale, but they are close to each other on a percentage scale. So, it makes sense to write (7^510)/(10^431) ~ 1, that is, their ratio is a good approximation to 1. However, one is not a good approximation to the other.
This is a fair point - there is some abuse of notation/language here. It's interesting that a/b ≈ 1 doesn't imply that a ≈ b (on an absolute scale). Similarly, we could say that the converse isn't true either, e.g. 0.0001 and 0.000000001 have a small difference, but their ratio is not close to 1.
Check this: 251^205381 vs 337^194984, much better approximation!
Those were primes, and these are not, but even better: 828^11 vs 473^12. Dividing the first by the second gives us 1.00000003633... - that's 7 zeroes, almost 8. While 7^510 / 10^431 has barely 6 zeroes.
pretty nice approximation indeed, m8
A few more gems.
Primes:
2^35 vs 3251^3 is 34359738368 vs 34359822251 which gives us 5-zero quality: 1.000002441..
61^1530 vs 1201^887 gives us 9-zero quality: 1.0000000002208..
And here comes the boss: 5039^133901815598/11813^121735457259 gives us 18-zero quality: 1.00000000000000000017504..
Coprimes:
470^282 / 2087^227 gives us 10-zero quality: 1.0000000000328..
587^16684 / 14316^11115 gives us 13-zero quality: 1.0000000000000978898....
They are not better approximations. Their *ratio* can be a good approximation to 1.
@@robertveith6383 what’s the difference?
7 has another fun 'coincidental' property, namely that 7^3 = 18 * 19 + 1, and hence 17 * 7^3 = 18^3 - 1 and 20 * 7^3 = 19^3 + 1.
Move your hair out from your eyes! 😂
Thanks Dr Barker, I'm first
:24 relative approximate
The first 7 digits are the same :) Technically, log base 10 is just "log" - no need to write the base
Mathematicians typically mean natural log when writing "log." It is a "lie to children" to teach that the natural log is written "ln." This is usually done because the idea of a log with an irrational base makes something that's already difficult to grasp (logarithms) even more difficult to grasp. So educators lie and say that log means log_10 because that is the easiest log to understand (for a power of ten, just count the zeros) when you first learn logs. "ln" (short for logarithmus naturalis) still enjoys limited use when we want to emphasize that we are working with base e, but base e is so natural that it goes unspoken by almost all mathematicians in almost all contexts.
I used Google
This guy could apply for chinese citizenship with his math skills.
how dumb....but also brilliant
7^510 - 10^431 = 9.377765355........ × 10^424
Even so, if you just subtract them, the difference is still huge...
100000093777653550411595210802362918755003975459590643380209104661183007174725629025164267575411351038190653865785147280048260730260174481888634961017465832153767083905055973115171463147305117967991362274707998617076906864856409342232579989850809934789297204951088829488795931267365054891926973026731194699108560193240351188856787007789047064042449190113938271461053770539111553700620401009514948932630689691333216918712082217175249 vs 100000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000 not even close