You only have to test each number for divisibility by four prime numbers, and three of those are trivial tests (evens end in 0, 2, 4, 6 or 8; multiples of 5 end in 0 or 5; multiples of 3 have digits that sum to a multiple of 3), so it's easier than it sounds. You'd also start testing each time with the smallest prime, 2, and work up, so only rarely would you rule out 2, 3 and 5 as common factors; at that point, if you've also ruled out one or both being prime, the only option left is that they share a prime factor of 7.
Having gone to the 'all the rolls' video, then picked a time point about Matt's emotional state from the comments underneath it, I was immediately shown a pair of rolls that proved my last comment incorrect: a pair whose only common prime factor was 23. I now think you would need to test for divisibility by all primes up to 37, but each prime factor rules out a large-ish percentage of the remaining possible combinations, so you'd only get to the largest primes a vanishingly small percentage of the time (assuming you'd memorised and eliminated any actual primes first). I make it 37 because you'd need the pair to share a prime factor without either number actually being prime, so each must have at least one other prime factor. To make it to the largest shared prime, the prime factors they don't share must be as small as possible: keep to one extra prime factor each, and make them the smallest two primes available, so that one number is a multiple of 2 and the other a multiple of 3, while both are multiples of some largest shared prime. The maximum limit is thus 1/3 of 120, or 40, and the largest prime under that limit is 37. So yeah, it was potentially more impressive than I first thought.
In my original, and somewhat incorrect, comment I had assumed that any combination where one or both numbers were prime was an essentially trivial case, so I had calculated the largest prime test only for the situation where two composite numbers had been rolled. A pair of primes is automatically coprime, but I hadn't fully considered the situation where one number was prime and one was composite. OK, so add an extra rule: for all instances where the one prime is 41, 43, 47, 53 or 59, double-check that the other number isn't twice the value of the prime. If your one prime is 29, 31 or 37, you need to check both twice and thrice the number. If it's 19 or 23, you check twice, thrice and five times. If it's 11, 13 or 17, you need to check twice, thrice, five times and seven times. But for all combinations where neither number is itself prime, the largest prime test (I think) is 37. And where one is prime, it's easier to multiply the prime up (by the multiples mentioned above) and see if that matches the composite one, rather than trying successive prime tests on the composite one.
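If you fancy checking that 37 claim, here's a quick brute-force sketch (mine, not from the video): it scans every pair of composite numbers from 1 to 120 that share a factor, and finds the largest "smallest shared prime" you could ever be forced to test for.

```python
from math import gcd

def is_prime(n):
    # simple trial division; fine for numbers up to 120
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

# largest "first shared prime" over all non-coprime composite pairs
worst = 0
for a in range(2, 121):
    for b in range(2, 121):
        if is_prime(a) or is_prime(b):
            continue
        g = gcd(a, b)
        if g > 1:
            worst = max(worst, smallest_prime_factor(g))
print(worst)  # 37 (e.g. the pair 74 = 2*37 and 111 = 3*37)
```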
"I probably made some mistakes... I'm kinda hoping the mistakes have averaged out." It's good coming back to older videos and seeing that the spirit of Matt hasn't changed one bit in essence.
Fun fact: Euler got the right answer by using a completely incorrect method. As I recall, he wrote the sine function as a product of its roots, which isn't accurate, but it worked out correctly. Since then, we've proven it several other ways and concurred that it is actually (pi^2)/6
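For anyone who wants to see that series crawl toward π²/6 numerically, here's a quick sketch (one million terms; the missing tail is roughly 1/N, so the estimate is good to about six digits):

```python
import math

# partial sum of the Basel series 1/1^2 + 1/2^2 + ... + 1/N^2
N = 10**6
s = sum(1.0 / k**2 for k in range(1, N + 1))
basel_pi = math.sqrt(6 * s)
print(basel_pi)  # close to pi, off by roughly 1e-6
```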
As Pierre Roux suggested, I have used the digits of Pi as a source of random numbers from which to calculate pi! Taking the first 10^6 digits of pi, and splitting them into 10^5 integers of length 10, I got 5*10^4 pairs of random integers (assuming the digits of pi are truly random - this is unproven). Comparison of these integers yields the estimate 3.14099105107796, which is accurate to a 0.00019% error :) PI-Ception!
I'm probably not the first person to mention this, but in Excel you can have it just count a column in its entirety. You don't need to define a set range, you can just put COUNT(C:C) and then you can add more rows at will without having to edit that function.
You can also use more complex setups to do more complex things: references to the maximum cell, or simply keep a counter and use its maximum as the reference. In the end, with index() you can do pretty much anything.
O.K. I did the whole column method, worked out a value of pi and compared it to the actual value of pi and took the difference. I then ran the thing 100 times (selected in advance so as to not bodgy up the value to be close as explained in the actual video). My total difference from pi was -1.8*10^-5. This used 1,048,575 rows and a maximum integer of 1,000,000.
For those following along at home, these are the methods of calculation he has used in order of decreasing error: calculating the first terms of an infinite series (off by 3.1%), rolling two dice 500 times (off by 2.8%), timing a pendulum of known length (off by 0.433%), weighing a cardboard circle (off by 0.31%), and measuring a circle using pies (off by 0.1035%).
Actually, the random function gives you a number in the interval [0;1). So the random number is between zero and one and can take on the value 0, but not the value 1. When you multiply by 120, you get a number in [0;120). *The correct way to proceed* at this point is to *round down and add one*. This gives a number in the range [0;119] plus 1, which is in the interval [1;120]. If on the other hand, you round up, you mess with the probabilities, because in case the random number is 0, you get roundup(0·120)=roundup(0)=0, which is outside of your desired range. Also, drawing 120 is slightly less likely because the event of the "direct hit" (where no rounding is required) is missing (which is kind of a sloppy explanation, but I hope y'all get what I mean).
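For what it's worth, here's the floor-and-add-one recipe as a quick Python sketch (same logic as the spreadsheet formula):

```python
import math
import random

def roll_d120():
    # random() is in [0, 1), so this is uniform over 1..120
    return math.floor(random.random() * 120) + 1

rolls = [roll_d120() for _ in range(200_000)]
# every face from 1 to 120 is reachable, including both endpoints
```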
Mathematician: But the measure of any finite set is zero! So the probability of hitting 0 is zero Engineer: According to the IEEE 754 specification, there are exactly 2^52 double-precision floating point values in the range [1, 2). This range is used for the initial rand() computation, since they all fall between two consecutive powers of 2, and therefore all have the same “exponent” under the standard (unlike numbers from 0 to 1, which must worry about cross-over points such as 1/2, 1/4, 1/8, etc… on their way down to zero, with the smallest possible exponent being -1022) This initial random number is then adjusted by subtracting 1, which gives a uniformly pseudorandom number in the range [0, 1). Thus the probability of returning exactly zero is 1 in 2^52 (certainly far smaller than the experimental variance in our calculation of pi, but it’s still worth writing correct code)
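You can poke at that 2^52 figure from Python (math.nextafter needs 3.9+); treat this as a sketch of the argument above, not part of the original comment:

```python
import math

# all doubles in [1, 2) share one exponent, so the spacing is constant:
step = math.nextafter(1.0, 2.0) - 1.0   # distance to the next double after 1.0
count = round(1.0 / step)               # number of doubles in [1, 2)
print(step == 2**-52, count == 2**52)
```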
No @@malharr9993, you don't get it. 31*3 = 93. Now, I know you were probably a little child when you made this, so I'll tell you: 31*3 = 30*3 + (1*3) = 90 + 3 = 93. He said 63. 31*3 = 93, not 63.
If you happened to get really lucky and select exactly 304 dice, it would give an approximation for pi of about 3.1414043..., which is astronomically close :).
Another simpler way: - Take random pairs of numbers in [0, n] and interpret them as x and y coordinates within an n*n square - For each pair, calculate the distance to the origin. - Count the number of pairs for which that distance is less than or equal to n Effectively, you've just determined whether or not that point is inside of a quarter-circle centered at the origin, with a radius n. So, the probability that a pair will be in that group is: p = ((area of circle) / 4) / (area of square) = ((πn^2) / 4) / n^2 = π / 4
Gravity Falls called. The infinity-sided dice is a type of apeirohedron die that was brought back by Ford from another dimension. These types of dice are outlawed in 9,000 other dimensions due to the properties of their roll. Because of their infinite sides, anything can happen when one of these dice is rolled. Our faces could melt into jelly. The world could turn into an egg! Or, you could just roll an eight. Who knows?
that reminds me, I created what I call parker cosine(cosp): cosp[subscript n](x) = 2 * integral from -n/2 to ∞ of (x^2T)cos(πT)/(2T)! dT At n=1, it is just ever so slightly off!
Right now I'm gonna write some code so that the range of random numbers generated is much larger. Thanks Matt, this will be a good test since I'm only a beginner in coding.
The probability of 2 random numbers being coprime is only 6/π² if you consider _ALL_ numbers, like, from 1 to infinity. If you only consider numbers within a specific range (such as from 1-120 or 42-63360 or whatever), the probability will be slightly higher than 6/π², and you'll get an experimental approximation that likely undershoots pi. Not sure if that was mentioned or not. Edit: yes it was 8:41
Hey Matt, I know I'm months behind on seeing your awesome video, but I have a question; Could one potentially generate more results with fewer rolls by, say, rolling 3 'random' numbers (a, b, and c) and comparing 'ab', 'ac', and 'bc' for cofactors and coprimes? Or, would that introduce some kind of error that defeats the purpose of such a method?
Amazingly, this does not change the expected number of coprime pairs. This is a consequence of the fact that expected values simply add up regardless of dependency. It does, however, change the standard deviation.
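A quick simulation of that overlapping-pairs scheme (my sketch, not from the video; hi is an arbitrary cap on the random integers):

```python
import math
import random

def chained_pi(n, hi=10**6):
    # reuse every roll in two pairs: gcd(x0,x1), gcd(x1,x2), ...
    xs = [random.randint(1, hi) for _ in range(n + 1)]
    hits = sum(math.gcd(xs[i], xs[i + 1]) == 1 for i in range(n))
    return math.sqrt(6 * n / hits)

print(chained_pi(200_000))  # lands near 3.14, same expectation as independent pairs
```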
2 years and almost 2,400 comments later I'm sure someone has already suggested this, but you can use =RANDBETWEEN(1,120) for a much easier way to get a random integer between 1 and 120. You can also simplify your coprime formula to =(GCD(A1,B1)=1)*1 -Excelphile
So, I just had to pull out a cpp file and do it myself. Using numbers ranging from 1 to 2^16 and 1000000 samples, I was able to get PI with 3 digit precision. Kinda impressive considering you can't even notice the time it takes to execute.
Weird, I almost didn't notice his accent. Something tells me he's on the other side of the world. OR he was just nice and allowed people to prepare a day in advance in order to shine on actual PI day ;)
When I got here there were 1459 comments. Did you know if you take the sum of the cubes of the digits of 1459, and then take the sum of the cubes of the digits of the result, you'll get back to 1459? Try it!
As soon as you said 6/π², I thought of the Basel problem, but I couldn't prove it before you got to that point in the explanation. But once again, we see that familiar numbers do indeed indicate a connection.
I wrote a python script for the same which did 100k iterations with random numbers in the range 0 to 1M, and got 3.1416 as the result. It was amazing to see the result converge towards 3.14 as the number of iterations increased ☺
1992jamo The function RANDBETWEEN is just a simpler way to do what Matt did in the video. I don't mean to say that a computer can generate a truly random number.
Marginal return on effort seems to be limited here. By that I mean, increasing the max number only gets you a few fractions of a percent closer to the real value. Which compares with the infinite sum from last year, where adding a lot of terms adds only a minor and diminishing correction each time. For next year's Pi day, divide 32 by 10, then come to America and visit the Indiana legislature and try and convince them that it should be changed to 3.2 so you'll be correct. Again.
What if you alternated rolling dice so each roll was used in 2 calculations, i.e. gcd(a,b), gcd(b,c), then gcd(c,d), etc. Would that change the probability?
jonlottgaming, I do not know, but SorenEragon had a similar question and according to SmileMPV the probability is the same, but there is a different standard deviation.
Thank you Matt. This is going to make a terrific activity when my Year 7 class studies Probability at the end of the year - it links together the work we will have done on prime numbers and on pi - and we'll have an Excel exploration at the same time.
The dice have only faces 8 and 3. What I want to know is what purpose they serve; why would anyone need a set of 500 d2's or d3's that only show 3 or 8?
My guess is in rolling pairs. 2d6 typically averages exactly 7. You can get this by adding the lowest and highest possible rolls from each die. 3 and 8 make that 11, however. While that would generate some very powerful D&D characters, I don't know what the exact purpose of these is.
The moment that 6/pi^2 formula came up, I was laughing my ass off, because it occurred to me why you were rolling giant dice, and how you would approach the problem. Pi is used in a lot of really weird stuff, lol.
You could use floor(rand()/rand()) to get a random positive integer that goes up as high as the precision of rand allows. I'm not certain if the probability is uniform, though.
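It isn't uniform. If U and V are independent uniforms on [0, 1), then P(floor(U/V) = 0) = 1/2 and P(floor(U/V) = k) = 1/(2k(k+1)) for k ≥ 1 (my quick calculation, worth double-checking). A simulation sketch:

```python
import random
from collections import Counter

# sample int(rand()/rand()) many times and look at the distribution
samples = 200_000
draws = Counter(int(random.random() / random.random()) for _ in range(samples))
# about half of all draws are 0, a quarter are 1, and the tail decays like 1/k^2
print(draws[0] / samples, draws[1] / samples)
```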
Hey Matt, keyboard shortcut for ya. To lock a cell in a formula, in lieu of typing the "$" , you can use F4. Multi press of F4 will cycle through the locking options (both row and column, column, row).
Very cool video! Here's a suggestion for next Pi Day: Run a Monte Carlo simulation of pairs (x, y) of random real numbers from -1 to 1 (or 0 to 1 if you prefer) and count how many are inside the unit circle, i.e., with x^2 + y^2 < 1. The fraction should be π/4, so multiply the final result by 4 to obtain an estimate of π.
you can also put points in a grid in a circle/square and then do the ratio between points where x^2+y^2 is smaller/bigger than 1. funnily enough this actually corresponds to the integral of the volume of a tube. (unit circle 1 unit above the ground and then volume underneath that)
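Both versions of the idea boil down to the same few lines; here's a sketch using random points in the unit square against the quarter circle (the trial count is arbitrary):

```python
import random

def monte_carlo_pi(trials):
    inside = 0
    for _ in range(trials):
        x, y = random.random(), random.random()  # random point in the unit square
        if x * x + y * y < 1.0:                  # inside the quarter circle
            inside += 1
    return 4.0 * inside / trials                 # the area ratio is pi/4

print(monte_carlo_pi(200_000))
```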
You could probably work out the functions you'd need in a couple hours at most. Then just know that you can drag to duplicate code, other than using a different row.
Happy PI-Day. This video was awesome! I actually replicated it with slightly bigger max. rnd.numbers (100Mio.) and some more trials (5k) - estimation error is around 0.00-something. Loving it! Thx :)
What happens if instead of rolling both dice each time, you alternate between the red and the blue? (In effect, suppose you have a sequence of random numbers X_i: instead of comparing X_(2n) and X_(2n+1), what happens if you compare X_n and X_(n+1)?) This introduces a dependence; does it wreck the whole convergence thing?
Does the fact that 1 shows up on your dice 1/120 of the time affect the number of coprimes, since it is coprime to all numbers, and for any larger subset [n] the rate of 1s is 1/n? Or does this not really matter? I'm not sure what the rate of coprimes is, so I guess if coprimes are very common this wouldn't matter much, but I think you get too many coprimes using smaller subsets if so.
Another way to do it with two dice and probability would be: 1. Draw a circle in a square. 2. Split the whole area into an N by N mesh, where N is the number of sides on a die. 3. Roll the coordinates for as many shots as you want. 4. Mark your hits. 5. Calculate pi from the expected hit-miss ratio. A totally random shot has a chance of hitting equal to the area of the circle over the area of the square (you can play with it, e.g. use 1/4 of a circle and modify the formulas). It's similar to this, but uses less fancy maths (still approximating three infinities). I think it's a Monte Carlo method for pi. Not sure.
From a practical perspective, I’m not sure how you would actually generate a random, unbounded integer (via a computer, or otherwise). However, if you’re interested in what “probability” means for an infinite, countable set (like the integers), then see the following: en.m.wikipedia.org/wiki/Natural_density (Which is studied as a part of en.m.wikipedia.org/wiki/Probabilistic_number_theory ) As an example, the probability of randomly choosing an even number turns out to be exactly 50% with this definition, as we might hope (However, if your infinite set is “uncountable,” such as the continuous number line, then you would want to learn about Measure Theory and Probability Theory)
Note: Every finite set will have zero probability, but various infinite sets will have non-zero probability Basically, you just have to make sure that your number generator results in probabilities that match the Natural Density for every subset of the Natural Numbers (You could easily include negative integers by randomizing the sign, but Natural Density itself only cares about the Natural Numbers, which start from zero)
I had this quite fun assignment for maths class at my uni where they asked us to approximate pi any way you wished, as long as you could back up your methods. I decided to go by the circumference of a circle being pi*d, or in this case pi*2r, and then used the circumference of polygons of "radius" 1 with increasingly more edges.

To make it easy on yourself, start with a triangle: you can divide it up into 3 pieces by drawing a line from the middle to each corner. Each of these sections has an angle of 360/3 degrees. For the circumference, name the two "radii" of 1 a and b and slap the cosine rule against it: sqrt(1^2 + 1^2 - 2cos(120)) =~ 1.73. Triple this for the number of edges and you've got your first-order approximation of pi*2r = 2pi = 5.19 -> pi = 2.595. Not that great, but for a triangle, not all that bad either.

Now, I found that for any polygon with N edges you can write its circumference as a function of N: the 360/3 logically becomes 360/N for an N-sided polygon, and at the end you multiply by N instead of 3. This leaves us with this beauty of an equation: N*sqrt(2 - 2cos(360/N)). That allowed me to quickly drag and drop a column in excel showing the progress of polygons approximating pi for each value of N.

To have a nice little taste of the result, plug in N = 100: 100*sqrt(2 - 2cos(3.6)) = 6.28215... -> pi = 6.28215/2 = 3.141075. A hundred terms further we suddenly have pi accurate to the third decimal. Obviously that's slow progress, but considering I had a very simple algorithm to follow, I could easily bump up the value of N and reach quite a decent accuracy this way. For example, at N = 1000 we get 3.141587, which rounds to two more decimals already. As we allow N -> infinity we of course approach the exact value of pi.
Nothing too groundbreaking there, just thought it was a fun assignment and I was pretty stoked about my method actually working out :p
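The recipe above fits in a few lines of Python if you want to play with it (a sketch of the same formula, just with 2π/N in radians instead of 360/N degrees):

```python
import math

def polygon_pi(n):
    # half the perimeter of a regular n-gon inscribed in a unit circle;
    # side length via the cosine rule: sqrt(1 + 1 - 2*cos(angle))
    side = math.sqrt(2.0 - 2.0 * math.cos(2.0 * math.pi / n))
    return n * side / 2.0

print(polygon_pi(3), polygon_pi(100), polygon_pi(1000))
```

Inscribed polygons always undershoot, so the estimates climb toward pi from below as n grows.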
Enjoyed this. Thanks. Terminology quibble, which may just be a US vs UK thing: you kept referring to gcd as "greatest common denominator", whereas in my experience it should be called the "greatest common divisor". In fact I have more often seen gcf, for "greatest common factor" used in this context.
Jonathan Beeson How are there not more comments on this verbal gaffe? There's no such thing as a greatest common denominator; the LEAST common denominator is what you use to add fractions like 1/6+3/10 (least common denominator is 30; 5/30+9/30=14/30=7/15). Using a greater common denominator than 30, like 60 or 300 or one of the other infinitely numerous multiples of 30, is just a silly waste of time. I'm starting to think Matt Parker doesn't actually know anything about math. I bet he learned the Pythagorean Theorem from the Scarecrow in The Wizard of Oz.
Matt was obviously thinking "gcd", as in greatest common divisor, but he muddled it with lcd, least common denominator, so "divisor" came out "denominator". After he said it once he just kept repeating the same mistake.
+Bill Schlafly No one in the class tried to correct the teacher's error? It would have been fairly easy to point out that the text book said "divisor" or "factor", not "denominator".
Here in New Zealand we were taught it as greatest common denominator, but I admit that it doesn't really make sense. What we were taught instead of least common denominator is least common multiple. It's just a regional thing, Matt clearly knows what it is mathematically, it's just that the words are different.
This is, I have to say, a mind-blowing proof. I'm actually from an Electronics and Communication background, and Fourier series and transforms are the basis of our job 😒. I know that using Fourier we can get pi^2/6 = 1/1^2 + 1/2^2 + ..., and we use it many times in electronics. When you first showed the probability, it reminded me of that result, which I'm accustomed to, so I stopped your video and tried to solve it using that. I failed, and then went through your proof. It seemed mind-blowing to me.
Hey Matt, so how quickly does this method converge? As in, how many significant digits do you expect (with say 3 sigma certainty) as a function of the amount of numbers you generated?
If somebody wants to try it in python:

# pi from gcd(m, n)
import math as m
import random as rand

max = 10 ** 6   # maximum random number
n = 10 ** 3     # number of tries
count = 0
for i in range(n):
    if m.gcd(rand.randint(1, max), rand.randint(1, max)) == 1:
        count = count + 1

pi = (6 / (count / n)) ** (1 / 2)
error = (pi - m.pi) / m.pi * 100
print('my PI = ' + str(pi))
print('error = ' + str(error) + '%')
I don't know about python 3, but this doesn't work in python 2. You have to import gcd from fractions, and (1/2) would evaluate to 0 because that's integer division. Same for count/n.
32067556468961617291928917468944691078415889440957409598868331169124195615386172222400691736659976927447959192663550973155779593483783518986239862993732189384258981097738376910872618253601364990649785047655561578439433859272532542871522801494427160944487116591211782322166488236650521840085772472656412214283230617489294013328478334799218901917766043346082439328693640169069953256877821050587468 and 011690831556953000989847442314410871432230909476675100232411566806574488300892786239522736583734007653900529228180796640848882957660173525446800753714978489251028666274472699627503741492379637273565861446114509245232628696508135939285048928469828921910574977735772997911418785521728794326552869187542144085179959435570594507773038760153138244929159635570575708613582411261344112771313501943697453506367167380426164736331755769549405946769197807456491445508576658187580875653158758957854660090951
62318965723465975028560832450120234568264506127856218075623578061328462356015270129456784652387456716864509189475982759856235786328021608952475876578215017596345871509846534601963471687265248956057325604651647486503845634892654106501574650475618623956475065870162578036786123456785071625089640591723496590124765456452375047567465206572398569023650892658268266194568712578463276052346586195687647836576235762634890170397845769120573265816509126589216841265761897298562184712895612985679016512658937143061927655873964873946821365013651289583651334676754586575327591234534574575612751051320698763062190653658926598365928652185217850357102532582175918581295897213574821546752564575647021517286371547120502562548523478602578678531282538660403124721367457850105124586587352106736017567325712546756460735678350365787581508736570328672345327848205672562946165407275378452378056367812050836578365070620375389451763459153103587645062785640576056170865782367635726587256465178506713649612572657801256098216596130526528561098561835612605963218963589 And 
12085761034639586012756129736312850782561736509476589061980650981789060980872346979824798671265890620897409172495806092562980315629813659823659826578360123658124905769845619823658921365983267589327589265784756980174857289175891026590685967328957239185985618265091256986248095027984578327509817589265237598659026859627562109560918265902136589623587385016258057350896235782605928650945628957892568957689107586802649856298165418956895627856029562987895175896456589162958465019826592465029630946315986034650916589478561285694601641289567059281658092658965289568962145612502185629805789416589216589216586405612896589427195647601617246598214568926589365801265824916518965908659083714986309568326589231658213651650126521856890569148658175871594605912865821651265086509216570564658956109562856986509816591265913265975636504650984783798789326946895689157894659046068685219806125763745091246504598641504619856456984569865980568914569146574502685061756479562856286509426598659704295871209385612905609281659081236589012759865987215985217988725740328
Fun way to get a very close estimate with excel: Do a target-value analysis of the final value for pi and make the variable some cell that has no influence on the result. Excel will keep searching until it gets within 0.001 of the target value (PI)
this is a Python script that does the same thing:

import random
import math
from fractions import gcd  # on Python 3.5+ use math.gcd instead

intsize = 1000000
a = []
b = []
length = 10000
portion = 0
for i in range(length):
    a.append(random.randint(0, intsize))
    b.append(random.randint(0, intsize))
    c = gcd(a[i], b[i])
    if c == 1:
        d = 1
        portion = portion + 1
    else:
        d = 0
    print(a[i], b[i], d)

pi = math.sqrt(6 / (portion / length))
print("")
print(length, portion)
print("Pi is", pi)
# Here's how I did it in Ruby. There might be a more compact way do create the Enumerator but I've never worked with streams of randoms before so I just wrote it by hand. A = 39393939 # max random value N = 39393939 # iteration count coprimes = Enumerator.new { |y| loop { y
here's the python code:

import random
import math

def gcd(x, y):
    while y != 0:
        (x, y) = (y, x % y)
    return x

def est(n, random_dist=10**6):
    cofactor = 0
    coprime = 0
    for i in range(n):
        a, b = random.randint(1, random_dist), random.randint(1, random_dist)
        if gcd(a, b) == 1:
            coprime += 1
        else:
            cofactor += 1
    print(coprime, cofactor)
    pi = math.sqrt(6 * (coprime + cofactor) / coprime)
    return pi

when calling the est (estimate) function you have to tell it how many tries (n), and the max for the random integers (optional, default = 10^6)
Here's my attempt. Made it as small as I could and added the ability to easily change two of the important parameters: how many numbers are generated and what the maximum number can be. Gets to ~3.14 most of the time with those parameters.

import random, math, fractions  # on Python 3.5+ use math.gcd

cofactorcount = 0
rangenum = 100000
randmax = 1000000
for i in range(rangenum):
    x, y = random.randint(1, randmax), random.randint(1, randmax)
    if fractions.gcd(x, y) == 1:
        cofactorcount += 1

print(str(math.sqrt(6 / (cofactorcount / rangenum))))
The rand() function in excel is limited to a precision of 1 x 10^-9, meaning the pool of numbers you can randomly choose from gets capped once n > 1 x 10^9. You will see the number of coprimes found decrease dramatically if you step above 1 x 10^12, significantly distorting your results if using the rand() function. The max number compatible with the gcd() function is 2^53, so if you want to push excel to its limits you will need to use the randbetween() function for your trials, i.e. randbetween(1, 2^53).
I'm actually more impressed by the ability to look at two random numbers and know if they are coprime within a few seconds.
Ha. The Euclidean Algorithm is O(log n) for gcds. But since he's limiting n
Yeah I think he knows the primes pretty much, but the coprime part is harder
Me: Can you calculate pi?
Matt: 3.04
Me: I think it's a bit higher.
Matt: 3.05?
payton richards 9 left
It’s a Parker pi!
this comment was too funny, it absolutely made my day 😂😂
Right after your last sentence, UA-cam answers "Less" :)
3.14≈4×(1-(1÷3)+…
a suggestion for next year:
1. at a constant rate, dig a tunnel through the center of the earth
2. at the same rate, walk along a path formed by a great circle around the earth
3. divide the walking time by the digging time
you can expect some error but it would make for great video
mthielssalvo
Please delete your comment, he might literally just do it!!
mthielssalvo
Your method is flawed!! Considering the fact that the Earth is flat surrounded by the great circle of iceberg wall, how would digging a tunnel through the center of the earth help??
😂
Your Pal Nurav Well in that case, walk around along that ice ridge and then walk from the ridge to the center of the circle. But of course, the Earth is neither flat nor round, but is in fact a figment of your imagination. Wake up, child. You've slept for so long.
Your Pal Nurav not to mention the earth isn't a perfect sphere.
Mephostopheles
🕔
🎵Beep Beep Beep Beep Beep Beep🎵
Good morning Mephostopheles, thanks for waking me up I can see the reality now
From that day forward it was forever known that Parker pi is equal to 3.052338478336799.
Actually Parker pi changes each year, making it even more Parker. Also sorry if I'm a bit late l o l
When you're doing a time lapse like this in the future, can you please put an analog wall clock in the frame behind you so we can see it whirl around when you're working?
Yes. Also, put a baby next to you and then, we'll get to see it grow.
@@solidwaterslayer How about put a 5 year old in the same room but give them nothing to do, then we can watch frustration grow
@@silverjohnson3163 I do not wish a bored 5 year old on anyone, let alone themselves. Nooo thank you!
There are 120^2=14400 possible pairs of numbers between 1 and 120.
8771 out of them are coprime pairs.
This gives a value of pi~sqrt(6*14400/8771)~3.13857.
'8771 out of them are coprime pairs.' how did you figure it out? bruteforce or not?
@@f0kes32 You can use the excel method that Matt uses in his video but instead of stopping at 1000, you keep going til you hit 14400. Also, instead of having it give you 14400 random numbers, you have it give you one of every possible option. Count them up and, voila.
@@f0kes32 Euler's phi function. You just have to prime factorize the number 14400. Then use phi function.
@@anirbanmallick8502 thank you
If you bump this up to 200-sided dice, there are 40,000 combinations
24463 of them are coprime
This gives you pi = 3.132, which interestingly is even less accurate.
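These counts can be reproduced by brute force in a few lines (a sketch of my own, not the commenters' code) — count every ordered pair (a, b) in the range and check gcd(a, b) = 1:

```python
# Reproduce the coprime-pair counts for d120 and d200 quoted above.
from math import gcd, sqrt

def coprime_pairs(n):
    """Number of ordered pairs (a, b) with 1 <= a, b <= n and gcd(a, b) = 1."""
    return sum(1 for a in range(1, n + 1)
                 for b in range(1, n + 1) if gcd(a, b) == 1)

for n in (120, 200):
    c = coprime_pairs(n)
    estimate = sqrt(6 * n * n / c)
    print(n, c, estimate)
```

For n = 120 this gives 8771 coprime pairs and an estimate near 3.1386, matching the figures above.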
"I probably made some mistakes... I'm kinda hoping the mistakes have averaged out."
It's good coming back to older videos and seeing that the spirit of Matt hasn't changed one bit in essence.
Euler strikes again. great video!
LOVE your channel.
Fun fact: Euler got the right answer by using a completely incorrect method. As I recall, he wrote the sine function as a product of its roots, which isn't accurate, but it worked out correctly. Since then, we've proven it several other ways and concurred that it is actually (pi^2)/6
Wait wtf are you doing here i just left your channel stop following me help
Never would I think you to be a mathsy person lol
I from 1 year future hi past
As Pierre Roux suggested, I have used the digits of Pi as a source of random numbers from which to calculate pi! Taking the first 10^6 digits of pi, and splitting them into 10^5 integers of length 10, I got 5*10^4 pairs of random integers (assuming the digits of pi are truly random - this is unproven). Comparison of these integers yields the estimate 3.14099105107796, which is accurate to a 0.00019% error :)
PI-Ception!
you mean 0.019% error... You haven't multiplied by 100.
That is goddamn brilliant.
beautiful idea !
Adam Brown so you used pi to find pi
So pi begets pi... interesting, lol.
I'm probably not the first person to mention this, but in Excel you can have it just count a column in its entirety. You don't need to define a set range, you can just put COUNT(C:C) and then you can add more rows at will without having to edit that function.
oh wow it’s the real 12tone
You can also use more complex systems to do more complex things, like references to the maximum cell, or simply have a counter and use its maximum as the reference. In the end, with index() you can do pretty much anything.
Could we do some programming 🤔🤔
I never knew that, actually
O.K. I did the whole column method, worked out a value of pi and compared it to the actual value of pi and took the difference. I then ran the thing 100 times (selected in advance so as to not bodgy up the value to be close as explained in the actual video). My total difference from pi was -1.8*10^-5. This used 1,048,575 rows and a maximum integer of 1,000,000.
the montage starts at 3:14, coincidence? i think not
Which is less than pi and therefore a Parker Timing
No, actually it is slightly more (like literally about 0.16(yes, that was modified to best fit pi)) seconds after 3:14.
Pi minutes would be actually 3:08
Ah so that’s why the audio cuts off.
Circle every where
For those following along at home, these are the methods of calculation he has used in order of decreasing error: calculating the first terms of an infinite series (off by 3.1%), rolling two dice 500 times (off by 2.8%), timing a pendulum of known length (off by 0.433%), weighing a cardboard circle (off by 0.31%), and measuring a circle using pies (off by 0.1035%).
Love that the most accurate results came from the method he explicitly stated was inaccurate.
I know all PI digits, not in the correct order though
Wicchi so do I.
I know (Binary) 1000 digits of pi.
Gideon McKinlay, do you mean 1000 (1111101000) digits of pi in binary or 8 (1000) digits of pi in decimal form?
@@nathanderhake839 3.1415926. So yes, decimal 8 (Binary 1000) digits of pi in decimal form
Vihart: 3.14199999
Actually, the random function gives you a number in the interval [0;1). So the random number is between zero and one and can take on the value 0, but not the value 1. When you multiply by 120, you get a number in [0;120). *The correct way to proceed* at this point is to *round down and add one*. This gives a number in the range [0;119] plus 1, which is in the interval [1;120]. If on the other hand, you round up, you mess with the probabilities, because in case the random number is 0, you get roundup(0·120)=roundup(0)=0, which is outside of your desired range. Also, drawing 120 is slightly less likely because the event of the "direct hit" (where no rounding is required) is missing (which is kind of a sloppy explanation, but I hope y'all get what I mean).
this, entirely
Mathematician: But the measure of any finite set is zero! So the probability of hitting 0 is zero
Engineer: According to the IEEE 754 specification, there are exactly 2^52 double-precision floating point values in the range [1, 2). This range is used for the initial rand() computation, since they all fall between two consecutive powers of 2, and therefore all have the same “exponent” under the standard (unlike numbers from 0 to 1, which must worry about cross-over points such as 1/2, 1/4, 1/8, etc… on their way down to zero, with the smallest possible exponent being -1022)
This initial random number is then adjusted by subtracting 1, which gives a uniformly pseudorandom number in the range [0, 1). Thus the probability of returning exactly zero is 1 in 2^52 (certainly far smaller than the experimental variance in our calculation of pi, but it’s still worth writing correct code)
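The floor-plus-one mapping described above can be demonstrated directly (an illustrative sketch; `floor_way` and `ceil_way` are my own names, not from the video or the comment):

```python
# floor(r*120) + 1 maps [0, 1) uniformly onto 1..120; ceil(r*120) can
# return 0 and makes a "direct hit" on 120 slightly less likely.
import math

def floor_way(r):
    return math.floor(r * 120) + 1

def ceil_way(r):
    return math.ceil(r * 120)

print(floor_way(0.0), floor_way(0.9999))  # 1 120 -- full desired range
print(ceil_way(0.0))                      # 0 -- outside the desired range
```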
12:20 "63 has a 3 in it, it is 31*3" XD, You need a nap Matt
He said 61; I thought that too, then double-checked it
Parker division
No @@malharr9993, you don't get it. 31*3=93. Now, I know you were probably a little child (when you made this) so I'll tell you: 31*3 = 30*3 + (1*3) = 90+3 = 93. He said 63. 31*3=93, not 63.
If you happened to get really lucky and select exactly 304 dice, it would give an approximation for pi of about:
3.1414043... which is astronomically close :).
Another simpler way:
- Take random pairs of numbers in [0, n] and interpret them as x and y coordinates within an n*n square
- For each pair, calculate the distance to the origin.
- Count the number of pairs for which that distance is less than or equal to n
Effectively, you've just determined whether or not that point is inside of a quarter-circle centered at the origin, with a radius n. So, the probability that a pair will be in that group is:
p = ((area of circle) / 4) / (area of square)
= ((πn^2) / 4) / n^2
= π / 4
i.e., in C:

#include <stdint.h>
#include <stdlib.h> /* arc4random_uniform() -- BSD/macOS */
#include <stdio.h>
#include <math.h>

static const uint32_t n = 1000000;

int main(void)
{
    uint32_t count = 0;
    for (uint32_t i = 0; i < n; ++i) {
        double x = arc4random_uniform(n);
        double y = arc4random_uniform(n);
        double distance = sqrt(x * x + y * y);
        if (distance <= n) /* the original comment was cut off here; the
                              ending below is reconstructed from the prose */
            ++count;
    }
    printf("pi is approximately %f\n", 4.0 * (double)count / n);
    return 0;
}
10X10X5 is a Parker cube
I know right, I was like "why did he call that a 'cuboid' instead of a rectangular prism," but now I understand ;)
so if you wanted to do this perfectly, you would need 2 infinitely sided dice, which are spheres... PI strikes again
Gravity Falls called.
The infinity sided dice is a type of apeirohedron die that was brought back by Ford from another dimension. These type of dice are outlawed in 9,000 other dimensions due to the properties of their roll. Because of their infinite sides, anything can happen when one of these dice is rolled.
Our faces could melt into jelly. The world could turn into an egg! Or, you could just roll an eight. Who knows?
…is that actually why there’s a pi in there???
Are all of the weird pi appearances like that???
3:47 Yes, 169 is a great darts score. It's also impossible to score 169 in just one visit.
how long until the Parker square jokes when he doesn't perfectly calculate pi
Sammy Saperstein Δt=0.00000000314s
Sammy Saperstein
probably a Parker square minute or something...
Parker's Pi
he parker squared the circle.
that reminds me, I created what I call parker cosine(cosp):
cosp[subscript n](x) = 2 * integral from -n/2 to ∞ of (x^2T)cos(πT)/(2T)! dT
At n=1, it is just ever so slightly off!
Right now I'm gonna write some code so that the range of random numbers generated is much larger. Thanks Matt, this will be a good test since I'm only a beginner in coding
Please notify me when your code is done. I want to see it ;)
Shambo Basu in for results
Shambo Basu hahaha that was my idea too, we can compare our programs when we're finished :)
Mathematica makes things so fast: twitter.com/IsYitzack/status/841333878774222848
IsYitzach Reaching the answer is not my objective, it is just to test my Java coding skills
correction 12:28 63 is 3*21 not 3*31
+Nathan James You're right! How silly of me. I've added that to the corrections.
and you still wrote it down incorrectly ;>
+standupmaths now you made a mistake in the corrections, you wrote 61=3*21 haha
+standupmaths you've completely fucked up that edit in the comments, makes absolutely no sense.
+standupmaths Now the correction in the description is wrong, it says 61 instead of 63.
The probability of 2 random numbers being coprime is only 6/π² if you consider _ALL_ numbers, like, from 1 to infinity. If you only consider numbers within a specific range (such as from 1-120 or 42-63360 or whatever), the probability will be slightly higher than 6/π², and you'll get an experimental approximation that likely undershoots pi. Not sure if that was mentioned or not. Edit: yes it was 8:41
The proof for the sum of 1/k^2 is such a beautiful proof, it's absolutely amazing
Next year throw darts at a circle
Carrot Slice ua-cam.com/video/M34TO71SKGk/v-deo.html
Related videos list: Calculating Pi with Darts from Physics Girl
Tip: If you press F9, Excel will recalculate your table and update the randoms.
Or press delete
Hey Matt,
I know I'm months behind on seeing your awesome video, but I have a question; Could one potentially generate more results with fewer rolls by, say, rolling 3 'random' numbers (a, b, and c) and comparing 'ab', 'ac', and 'bc' for cofactors and coprimes? Or, would that introduce some kind of error that defeats the purpose of such a method?
Amazingly, this does not change the expected number of coprime pairs. This is a consequence of the fact that expected values simply add up regardless of dependency. It does, however, change the standard deviation.
2 years and almost 2,400 comments later I'm sure someone has already suggested this, but you can use =RANDBETWEEN(1,120) for a much easier way to get a random integer between 1 and 120.
You can also simplify your coprime formula to =(GCD(A1,B1)=1)*1
-Excelphile
"Possibly the most convoluted way I've ever done it" - and then 3blue1brown appears with calculating Pi by number of collisions.
So, I just had to pull out a cpp file and do it myself. Using numbers ranging from 1 to 2^16 and 1000000 samples, I was able to get PI with 3 digit precision. Kinda impressive considering you can't even notice the time it takes to execute.
Yeah how accurate does it do it for a billion samples? Or 4 billion? Or 2^32 (0xFFFFFFFF and one more)
but this was uploaded on the 13'th
matt you had one job
Classic Parker Square
Well, he underestimated the value of Pi, so he compensated for it with the upload date
Time zones. There's a possibility that he uploaded it when you were still on the 13th.
Weird, I almost didn't notice his accent. Something tells me he's on the other side of the world. OR he was just nice and allowed people to prepare a day in advance in order to shine on actual PI day ;)
Pharaoh on LFS
It's most likely the former though
When I got here there were 1459 comments. Did you know if you take the sum of the cubes of the digits of 1459, and then take the sum of the cubes of the digits of the result, you'll get back to 1459?
Try it!
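The 1459 claim checks out (a quick verification sketch of my own):

```python
# Sum of the cubes of the digits, applied twice, returns to 1459.
def cube_digit_sum(n):
    return sum(int(d) ** 3 for d in str(n))

once = cube_digit_sum(1459)   # 1^3 + 4^3 + 5^3 + 9^3 = 919
twice = cube_digit_sum(once)  # 9^3 + 1^3 + 9^3 = 1459
print(once, twice)  # 919 1459
```

So 1459 and 919 form a 2-cycle under this map.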
21:25 pro-tip: use C1:C to specify entire C column starting with C1; or C1:1 to specify entire 1 row starting with C1
As soon as you said 6/π², I thought of the Basel problem, but I couldn't prove it before you got to that point in the explanation. But once again, we see that familiar numbers do indeed indicate a connection.
I wrote a Python script for the same, which did 100k iterations with random numbers in the range 0 to 1M, and got 3.1416 as the result. It was amazing to see how the result converged towards 3.14 as the number of iterations increased ☺
Cool!
Nice one. I guess the pi estimate would improve with a larger sample size. Now try again with 5000 dice.
So can we use Parker pi to calculate the circumference of a Parker circle?
I was so happy when you were at the science fair I was like isn't that the guy from standupmaths? Thank you you really made my day :)
Weird question, where did you get that shirt?
pst... Use RANDBETWEEN in Excel to generate a random integer between two numbers.
Glad someone else pointed this out. Didn't wanna be that guy. ;oP
that what i use as well :)
Psst... Most non cryptography random number generators are not that random...
@1992jamo - Just how random do you want for this. I mean, you're not wrong but...
1992jamo The function RANDBETWEEN is just a simpler way to do what Matt did in the video. I don't mean to say that a computer can generate a truly random number.
To recalculate or re-run formulas in excel, simply press F9. At least it works in windows which has a Function Key row.
Selenium Glow thx it is so much quicker now
Stand-up maths 2030: calculating pi by baking 100 pies of random diameters
I ran it on a D120 2500 times and got within 0.6% on the first try. You're the best mathematician of our time Matt! I love watching your videos!
Absolutely love you dude, makes maths so much more interesting
Brown paper is too mainstream; it's all about that brown table, dawg!
Who the heck uses brown paper....
A buncha filthy nerds, that's who.
NERD PRIDE
Marginal return on effort seems to be limited here. By that I mean, increasing the max number only gets you a few fractions of a percent closer to the real value. Which compares with the infinite sum from last year, where adding a lot of terms adds only a minor and diminishing correction each time.
For next year's Pi day, divide 32 by 10, then come to America and visit the Indiana legislature and try and convince them that it should be changed to 3.2 so you'll be correct. Again.
What if you alternated rolling dice so each roll was used in 2 calculations. i.e gcd(a,b) , gcd(b,c) , then gcd(c,d) etc. Would that change the probability?
jonlottgaming, I do not know, but SorenEragon had a similar question and according to SmileMPV the probability is the same, but there is a different standard deviation.
Thank you Matt. This is going to make the terrific activity when my Year 7 class studies Probability at the end of the year - it links together the work we will have done on prime numbers, and on pi - and we'll have an Excel exploration at the same time.
I have actually a nostalgia feeling to this video... one of many and many I watched in one stage of my life, but it took me straight back
Can anyone explain why those dice have only III and VIII?
No one else wanted them, so he got them super cheap?
They are part of the non-transitive Grime dice sold on maths gear. I'm sure there is a video on them.
x is equal to 6 over the parker square of pi
Matt Parker to be the next Doctor!!
If only he had been.
Nope. We are supposed to suffer through another season. 13 is an unlucky number.
Who?
I wish you'd talk more about the sneaky Pi proof. Your explanations make things much easier for me to understand!
Waaaay more entertaining than this should have been. Glad you pulled in the Excel how-to. Cheers!
Wow, just take a look at the side of the two boxes! Look at how many dice with 3 are facing us!
What are the odds of that?
Look at how many 3's are facing us on the pile they are picked from. Not so surprising really :)
Colin Mackinlay but he throws them in...
The dice have only faces 8 and 3. What I want to know is what purpose they serve; why would anyone need a set of 500 d2's or d3's that only show 3 or 8?
My guess is in rolling pairs. 2d6 typically averages exactly 7. You can get this by adding the lowest and highest possible rolls from each die. 3 and 8 make that 11, however. While that would generate some very powerful D&D characters, I don't know what the exact purpose of these is.
The moment that 6/pi^2 formula came up, I was laughing my ass off, because it occurred to me why you were rolling giant dice, and how you would approach the problem.
Pi is used in a lot of really weird stuff, lol.
"that may have been my largest understatement" literal lol
You could use floor(rand()/rand()) to get a random positive integer that goes up as high as the precision of rand allows. I'm not certain if the probability is uniform, though.
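The commenter's open question can be settled by simulation (my own sketch, assuming a standard PRNG): the distribution is far from uniform, since floor(rand()/rand()) is 0 whenever the numerator draw is the smaller of the two, which happens about half the time.

```python
# floor(rand()/rand()) gives 0 with probability ~1/2, so it is not uniform.
import random

random.seed(0)  # fixed seed so the run is reproducible
N = 100_000
zeros = sum(1 for _ in range(N)
            if int(random.random() / random.random()) < 1)
print(zeros / N)  # close to 0.5
```

(More precisely, P(floor(U/V) = 0) = P(U < V) = 1/2 for independent uniforms, and the remaining mass decays like 1/k², so larger integers are progressively rarer.)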
Hey Matt, keyboard shortcut for ya. To lock a cell in a formula, in lieu of typing the "$" , you can use F4. Multi press of F4 will cycle through the locking options (both row and column, column, row).
"Not a very rigorous proof..."
A Parker proof? ;)
PI DAY OMG I FORGOT
JiaMing Lim Still one day.
Yeah, one day to go here
JiaMing Lim u suck
Thomas Southgate, Rude
It's okay, just wait until UK pi day which is on the third of Fourteember.
Oh God, it hurts so much seeing someone not closing the marker pens they're not using.
Very cool video! Here's a suggestion for next Pi Day: Run a Monte Carlo simulation of pairs (x, y) of random real numbers from -1 to 1 (or 0 to 1 if you prefer) and count how many are inside the unit circle, i.e., with x^2 + y^2 < 1. The fraction should be π/4, so multiply the final result by 4 to obtain an estimate of π.
done it. It works. I think Matt prefers something a bit more spectacular though. ;-)
you can also put points in a grid in a circle/square and then do the ratio between points where x^2+y^2 is smaller/bigger than 1. funnily enough this actually corresponds to the integral of the volume of a tube. (unit circle 1 unit above the ground and then volume underneath that)
"exactly the same thing yourself..."
me: lol, too late I did it in c
him: "...in excel"
He would have had to count less if he counted just the dice in the cofactor box.
Err... I do know how to program but I don't know how to use Excel lol :D
same
You could probably work out the functions you'd need in a couple hours at most. Then just know that you can drag to duplicate code, other than using a different row.
Happy PI-Day. This video was awesome! I actually replicated it with slightly bigger max. rnd.numbers (100Mio.) and some more trials (5k) - estimation error is around 0.00-something. Loving it! Thx :)
What happens if instead of rolling both dice each time, you alternate between the red and the blue ? (In effect, suppose you have a sequence of random numbers X_i, instead of comparing X_(2n) and X_(2n+1), what happens if you compare X_n and X_(n+1) ? This introduces a dependance; does it wreck the whole convergence thing ?
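A quick simulation (my own sketch, assuming a decent PRNG) suggests the overlapping-pairs estimate still converges; as another reply in this thread notes, each pair's expectation is unchanged and only the variance differs:

```python
# Compare consecutive values X_n, X_{n+1} instead of disjoint pairs and
# estimate pi from the coprime fraction, as in the video.
import random
from math import gcd, sqrt

random.seed(1)  # fixed seed so the run is reproducible
SAMPLES = 100_000
xs = [random.randint(1, 10**6) for _ in range(SAMPLES + 1)]
hits = sum(1 for i in range(SAMPLES) if gcd(xs[i], xs[i + 1]) == 1)
print(sqrt(6 / (hits / SAMPLES)))  # close to pi
```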
What I'm taking away from this is that trying to calculate pi makes you crazy.
Or that being crazy makes you calculate pi?
@@rosuav or both
This is a little bit of a Parker pi, isn't it?
"169... which is a great score in darts"
It is an impossible score (with three darts). No idea if that's why you smile while saying it?
Does the fact that 1 shows up on your dice with probability 1/120 affect the number of coprimes, since it is coprime to all numbers, and for any larger subset [n] the rate of 1s is 1/n? Or does this not really affect it? I'm not sure what the rate of coprimes is, so I guess if coprimes are very common then this wouldn't matter much, but I think you'd get too many coprimes using smaller subsets if so
Another way to do it with two dice and probability would be:
1. Draw a circle in a square
2. Split the whole area into a mesh N by N where N is the number of sides you have on a dice.
3. Roll the coordinates for shooting how many times you want.
4. Mark your hits.
5. Calculate pi based on expected hit-miss ratio.
A totally random shot has a chance of hitting equal to the area of the circle over the area of the square (you can play with it, e.g. use 1/4 of a circle and modify the formulas). It's similar to this, but uses less fancy maths (still approximating three infinities). I think it's a Monte Carlo method for pi. Not sure.
Something I can't wrap my mind around: how can you describe fairly and randomly picking a number from the unbounded set of all integers?
From a practical perspective, I’m not sure how you would actually generate a random, unbounded integer (via a computer, or otherwise). However, if you’re interested in what “probability” means for an infinite, countable set (like the integers), then see the following:
en.m.wikipedia.org/wiki/Natural_density
(Which is studied as a part of en.m.wikipedia.org/wiki/Probabilistic_number_theory )
As an example, the probability of randomly choosing an even number turns out to be exactly 50% with this definition, as we might hope
(However, if your infinite set is “uncountable,” such as the continuous number line, then you would want to learn about Measure Theory and Probability Theory)
Note: Every finite set will have zero probability, but various infinite sets will have non-zero probability
Basically, you just have to make sure that your number generator results in probabilities that match the Natural Density for every subset of the Natural Numbers
(You could easily include negative integers by randomizing the sign, but Natural Density itself only cares about the Natural Numbers, which start from zero)
@@Muhahahahaz thanks
Wait did he say 3 and 31 is 63?
3+31=34, Matt
12:24 "63, which has a 3 in it, Its 3 by 31..." I don't know if I trust his math.
I had this quite fun assignment for a maths class at my uni where they asked us to approximate pi any way we wished, as long as we could back up our methods.
I decided to go by the circumference of a circle being pi*d or in this case we'll use pi*2r and then continued to use the circumference of polygons of "radius" 1 with increasingly more edges.
So if you make it easy on yourself you'd have a triangle for example you can divide it up into 3 pieces by drawing a line from the middle to each corner. Now every one of these sections would have an angle of 360/3 degrees. For the circumference, you can name the two "radii" of 1 a and b and slap the cosine rule against it. So in this case sqrt(1^2+1^2-2cos(120)) =~ 1.73. Now triple this for the number of edges and you got yourself your first-order approximation of pi*2r = 2pi = 5.19 -> pi = 2.595. Not that great. But for a triangle, not all that bad either.
Now I found that for any given polygon with N edges you can write up this formula for its circumference as a function of N. So the 360/3 would logically become 360/N for an Nth polygon and at the end instead of multiplying it by 3 you'd multiply it by N instead. This leaves us with this beauty of an equation: N*sqrt(2-2cos(360/N)). This allowed me to quickly drag and drop a column in excel showing the progress of polygons approximating pi for each order of N. To have a nice little taste of the result we'll plug in N = 100 and see where we get:
100*sqrt(2-2cos(3.6°)) = 6.2821... -> pi = 6.28215/2 = 3.141075. A hundred terms in, we suddenly have pi accurate to the third decimal. Now obviously that's very slow progress, but considering I had a very simple algorithm to follow I could easily bump up the value of N and reach quite a decent accuracy this way. For example, at the 1000th term we get 3.141587, which rounds to two more decimals already. As we allow N -> infinity we of course find the exact value of pi eventually.
Nothing too groundbreaking there, just thought it was a fun assignment and I was pretty stoked about my method actually working out :p
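The commenter's inscribed-polygon formula can be checked in a few lines (my own sketch; note the angle is written in radians here, 2π/N instead of 360/N):

```python
# Perimeter of an inscribed N-gon of circumradius 1 is N*sqrt(2 - 2cos(2pi/N));
# half of that approaches pi as N grows.
import math

def polygon_pi(n):
    return n * math.sqrt(2 - 2 * math.cos(2 * math.pi / n)) / 2

for n in (3, 100, 1000):
    print(n, polygon_pi(n))  # 3 -> ~2.598, 100 -> ~3.141076, 1000 -> ~3.1415875
```

(Using math.pi to compute the angle is admittedly a little circular, but it matches what the commenter's spreadsheet would have done with 360/N degrees.)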
WOW! Escher _AND_ origami in the background? Truly a man of taste!
"To prove..." - we believe you!!!
Enjoyed this. Thanks. Terminology quibble, which may just be a US vs UK thing: you kept referring to gcd as "greatest common denominator", whereas in my experience it should be called the "greatest common divisor". In fact I have more often seen gcf, for "greatest common factor" used in this context.
Jonathan Beeson How are there not more comments on this verbal gaffe? There's no such thing as a greatest common denominator; the LEAST common denominator is what you use to add fractions like 1/6+3/10 (least common denominator is 30; 5/30+9/30=14/30=7/15). Using a greater common denominator than 30, like 60 or 300 or one of the other infinitely numerous multiples of 30, is just a silly waste of time.
I'm starting to think Matt Parker doesn't actually know anything about math. I bet he learned the Pythagorean Theorem from the Scarecrow in The Wizard of Oz.
Matt was obviously thinking "gcd", as in greatest common divisor, but he muddled it with lcd, least common denominator, so "divisor" came out "denominator". After he said it once he just kept repeating the same mistake.
Midwest USA here...I learned it as largest common denominator.
+Bill Schlafly No one in the class tried to correct the teacher's error? It would have been fairly easy to point out that the text book said "divisor" or "factor", not "denominator".
Here in New Zealand we were taught it as greatest common denominator, but I admit that it doesn't really make sense. What we were taught instead of least common denominator is least common multiple.
It's just a regional thing, Matt clearly knows what it is mathematically, it's just that the words are different.
pi^2 / 6
... surely something about the zeta function...
jorcyd Well of course, zeta(2) is defined to be the sum of the reciprocals of the squares. This isn't a mysterious connection, afaik.
Yes. The probability that n positive integers are coprime is simple 1/zeta(n). So for n=2, we have 1/zeta(2)=6/pi^2. Nothing special.
Lol
^
This is, I have to say, a mind-blowing proof. I am actually from an Electronics and Communication background, and Fourier series and transforms are the basis of our jobs 😒. I know that using Fourier we can get pi^2/6 = 1/1^2 + 1/2^2 + ..., and we use it many times in electronics. When you first showed the probability, it reminded me of that result, which I was accustomed to, so I stopped your video and tried to solve it using that. I failed, and then went through your proof. It seemed mind-blowing to me.
Hey Matt, so how quickly does this method converge? As in, how many significant digits do you expect (with say 3 sigma certainty) as a function of the amount of numbers you generated?
if he keeps doing this every year and averages his results they will get there eventually
If somebody wants to try it in python:
#pi from gcd(m,n)
import math as m
import random as rand
max = 10 ** 6 #maximum random numbers
n = 10 ** 3 #number of tries
count = 0
for i in range(n):
    if m.gcd(rand.randint(1, max), rand.randint(1, max)) == 1:
        count = count + 1
pi = (6 /(count/n)) ** (1/2)
error = (pi - m.pi) / m.pi * 100
print('my PI = ' + str(pi))
print('error = ' + str(error) + '%')
I don't know about python 3, but this doesn't work in python 2. You have to import gcd from fractions, and (1/2) would evaluate to 0 because that's integer division. Same for count/n.
cryptexify
it's a py3 code, yeah
EVERYONE WRITE TWO RANDOM INTEGERS AND WE CALCULATE PI
Armageddon2k 110, 987654321
194763; 98572159876645
32067556468961617291928917468944691078415889440957409598868331169124195615386172222400691736659976927447959192663550973155779593483783518986239862993732189384258981097738376910872618253601364990649785047655561578439433859272532542871522801494427160944487116591211782322166488236650521840085772472656412214283230617489294013328478334799218901917766043346082439328693640169069953256877821050587468
and
011690831556953000989847442314410871432230909476675100232411566806574488300892786239522736583734007653900529228180796640848882957660173525446800753714978489251028666274472699627503741492379637273565861446114509245232628696508135939285048928469828921910574977735772997911418785521728794326552869187542144085179959435570594507773038760153138244929159635570575708613582411261344112771313501943697453506367167380426164736331755769549405946769197807456491445508576658187580875653158758957854660090951
9434554 and 344325675532
62318965723465975028560832450120234568264506127856218075623578061328462356015270129456784652387456716864509189475982759856235786328021608952475876578215017596345871509846534601963471687265248956057325604651647486503845634892654106501574650475618623956475065870162578036786123456785071625089640591723496590124765456452375047567465206572398569023650892658268266194568712578463276052346586195687647836576235762634890170397845769120573265816509126589216841265761897298562184712895612985679016512658937143061927655873964873946821365013651289583651334676754586575327591234534574575612751051320698763062190653658926598365928652185217850357102532582175918581295897213574821546752564575647021517286371547120502562548523478602578678531282538660403124721367457850105124586587352106736017567325712546756460735678350365787581508736570328672345327848205672562946165407275378452378056367812050836578365070620375389451763459153103587645062785640576056170865782367635726587256465178506713649612572657801256098216596130526528561098561835612605963218963589
And
12085761034639586012756129736312850782561736509476589061980650981789060980872346979824798671265890620897409172495806092562980315629813659823659826578360123658124905769845619823658921365983267589327589265784756980174857289175891026590685967328957239185985618265091256986248095027984578327509817589265237598659026859627562109560918265902136589623587385016258057350896235782605928650945628957892568957689107586802649856298165418956895627856029562987895175896456589162958465019826592465029630946315986034650916589478561285694601641289567059281658092658965289568962145612502185629805789416589216589216586405612896589427195647601617246598214568926589365801265824916518965908659083714986309568326589231658213651650126521856890569148658175871594605912865821651265086509216570564658956109562856986509816591265913265975636504650984783798789326946895689157894659046068685219806125763745091246504598641504619856456984569865980568914569146574502685061756479562856286509426598659704295871209385612905609281659081236589012759865987215985217988725740328
Mathologer did a very good video about the Basel problem, using Euler's product formula over the primes to derive where 6/pi^2 comes from.
Fun way to get a very close estimate with excel: Do a target-value analysis of the final value for pi and make the variable some cell that has no influence on the result. Excel will keep searching until it gets within 0.001 of the target value (PI)
this is a Python script that does the same thing:
import random
import math

intsize = 1000000
a = []
b = []
length = 10000
portion = 0
for i in range(length):
    a.append(random.randint(1, intsize))
    b.append(random.randint(1, intsize))
    c = math.gcd(a[i], b[i])  # fractions.gcd was removed in Python 3.9; math.gcd replaces it
    if c == 1:
        d = 1
        portion = portion + 1
    else:
        d = 0
    print(a[i], b[i], d)
pi = math.sqrt(6 /(portion / length))
print("")
print(length , portion)
print ("Pi is" , pi)
from random import randint as x; from math import gcd; print((lambda m, n: (6 / (sum(gcd(x(1, m), x(1, m)) == 1 for i in range(n)) / n)) ** 0.5)(1000000, 10000))
Bendik Mihle Hansen
soooo.... you print out 20 000 random numbers...
WHY?!?!?
oh, and a pi approximation ofc.
# Here's how I did it in Ruby. There might be a more compact way to create the Enumerator but I've never worked with streams of randoms before so I just wrote it by hand.
A = 39393939 # max random value
N = 39393939 # iteration count
coprimes = Enumerator.new { |y| loop { y << [rand(1..A), rand(1..A)] } }
count = coprimes.take(N).count { |a, b| a.gcd(b) == 1 }
puts Math.sqrt(6.0 * N / count)
K 466 Thanks
It's pseudo-random. It would be better to use numbers from random.org; they're more random.
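For what it's worth, if the pseudo-randomness bothers you, Python's standard secrets module draws from the OS entropy source (a CSPRNG). That's still not "true" randomness like random.org, but here's a sketch of the same experiment with the stronger source swapped in (parameters are just my choices):

```python
import math
import secrets

trials = 10_000
intsize = 1_000_000
# secrets.randbelow(n) returns an integer in [0, n); add 1 to get [1, n]
coprime = sum(
    math.gcd(secrets.randbelow(intsize) + 1, secrets.randbelow(intsize) + 1) == 1
    for _ in range(trials)
)
pi_est = math.sqrt(6 * trials / coprime)
print(pi_est)  # lands near 3.14
```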
I tried this on my computer 3.14 billion times and got 3.1416. I think I messed up someplace.
Isn't GCD greatest common *divisor* and not greatest common denominator?
Well you can use it for both anyway. Greatest common denominator is done using greatest common divisor (via the least common multiple, whatever)
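To illustrate the point above with a hedged two-liner (math.gcd is in the standard library): the lowest common denominator of two unit fractions is the least common multiple of the denominators, which you get via the gcd.

```python
import math

a, b = 8, 12
g = math.gcd(a, b)  # greatest common divisor of 8 and 12
lcd = a * b // g    # least common multiple, i.e. the common denominator of 1/8 and 1/12
print(g, lcd)       # 4 24
```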
I think this is the first time I've ever heard someone say "cuboid" instead of the bland "box". Definitely an underused and underrated word.
Many sided dice are also notoriously uneven in the frequency of landing on particular sides.
6:32 “2 divides half of all numbers.” Someone needs to brush up on his Georg Cantor.
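To be fair, "half" does make sense as a natural density rather than a cardinality: the fraction of integers up to n that are even settles at 1/2 as n grows. A quick sketch of that limit:

```python
def density_of_multiples(k, n):
    """Fraction of the integers 1..n that are divisible by k."""
    return sum(1 for i in range(1, n + 1) if i % k == 0) / n

for n in (10, 1_000, 100_000):
    print(n, density_of_multiples(2, n))  # settles at 0.5 as n grows
```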
March 13, that's quite a parker square pi day
i did this in python with a million tries and a million as max integer and i got 3.14189 :)
here's the python code:
def gcd(x, y):
    while y != 0:
        (x, y) = (y, x % y)
    return x

def est(n, random_dist=10**6):
    import random
    import math
    cofactor = 0
    coprime = 0
    for i in range(n):
        a, b = random.randint(1, random_dist), random.randint(1, random_dist)
        if gcd(a, b) == 1:
            coprime += 1
        else:
            cofactor += 1
    print(coprime, cofactor)
    pi = math.sqrt(6 * (coprime + cofactor) / coprime)
    return pi
when calling the est (estimate) function you have to tell it how many tries (n), and optionally the max for the random integers (default=10^6)
there is a math.gcd already there for you to use!
everything else looks good!
thanks :) this method is the fastest possible, so it's as fast as the default one or quicker :) and it's always fun to define functions yourself :)
Here's my attempt. Made it as small as I could and added the ability to easily change two of the important parameters: how many numbers are generated and what the maximum number can be. Gets to ~3.14 most of the time with those parameters.
import random, math
coprimecount = 0  # counts coprime pairs (was misleadingly named "cofactorcount")
rangenum = 100000
randmax = 1000000
for i in range(rangenum):
    x, y = random.randint(1, randmax), random.randint(1, randmax)
    if math.gcd(x, y) == 1:  # fractions.gcd was removed in Python 3.9
        coprimecount += 1
print(math.sqrt(6 / (coprimecount / rangenum)))
The only issue here is that the random generator is **pseudo**-random
I'm so glad you show the proofs.
The rand() function in Excel is limited to a precision of 1 x 10^-9, meaning the pool of numbers to randomly choose from effectively caps out once you ask for integers between 1 and n > 1 x 10^9. You will see the number of coprimes found decrease dramatically if you step above 1 x 10^12, significantly distorting your results if you use rand(). The largest number compatible with the gcd() function is 2^53, so if you want to push Excel to the limit you will need to use the randbetween() function for your trials, i.e. randbetween(1, 2^53).
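For comparison, Python's random.randint has no such granularity cap, so the same experiment runs fine right at that 2^53 ceiling. A rough sketch (not the spreadsheet itself; trial count is my choice):

```python
import math
import random

trials = 20_000
top = 2 ** 53  # the gcd()-compatible ceiling mentioned for Excel
coprime = sum(
    math.gcd(random.randint(1, top), random.randint(1, top)) == 1
    for _ in range(trials)
)
pi_est = math.sqrt(6 * trials / coprime)
print(pi_est)  # still lands near 3.14 at this range
```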