In primitive recursive functions you have a set of building blocks where you can compute the arguments to functions lower down in the hierarchy to produce the values of functions higher up. Something similar happens once you have enough cards in Rado's scheme to define parameterised machines like "take a computable function f and a busy beaver b of n-c cards and use c extra cards to compute f(b), write that many ones onto the tape, and then halt". This is potentially a busy beaver for n cards, and these are higher-order recursive functions that are being computed. These higher-order recursive functions have been studied by William Tait, I think, and the logical (intuitionistic) consequences of their existence are explored by Girard _et al_ in an infuriatingly obtuse book called "Proofs and Types". I would love to see a video about that!
Watching this video in 2020 makes me feel as if I am from the future. The record for n = 6 is 10^2879. For n = 5 the current record is 47,176,870, but for some machines it is not known whether they are in a loop, so that record could still be broken.
So to get that 10^10500 number for a score estimate for the 6-card machine case, you had to evaluate a computable function, otherwise you would not have had an answer. It sounds like you can estimate how fast the busy beaver function grows, but it will always be faster than the estimate and you will never know exactly how fast it grows because you can't compute it.
+Cooper Gates The BB function is perfectly computable for any particular value, in the same way that it's perfectly reasonable to determine whether a particular program will halt. It's non-computable in the sense that no algorithm can output the correct BB number for an *arbitrary* input.
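To see this concretely: for n = 2 the space of machines is small enough to enumerate by brute force. A minimal Python sketch (assumptions: the video's card format, encoded here as (write, move, next) triples with 'H' for halt, and a step budget of 100, which is safely above the known maximum of 6 steps for a halting 2-state machine):

```python
from itertools import product

def run(machine, max_steps=100):
    """Simulate a 2-state machine on a blank tape; return its 1s count if it halts."""
    tape = {}
    pos, state = 0, 0
    for _ in range(max_steps):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == 'H':          # the halting transition still writes first
            return sum(tape.values())
        state = nxt
    return None  # did not halt within the budget

# Each (state, symbol) pair maps to one of 2 writes x 2 moves x 3 targets = 12 rules.
entries = list(product((0, 1), (-1, 1), (0, 1, 'H')))
best = 0
for rules in product(entries, repeat=4):
    machine = {(s, r): rules[2 * s + r] for s in (0, 1) for r in (0, 1)}
    score = run(machine)
    if score is not None:
        best = max(best, score)
print(best)  # 4, the 2-state busy beaver value
```

All 12^4 = 20,736 machines are checked in well under a second; the non-computability only bites once no finite step budget can be justified.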
+James Davis Yes, though calculating 10^18500 (I think the bound has been raised already!) directly, by running all those Turing machines until the highest halting score is found, would be very impractical, so that wasn't what was used to find the result.
+James Davis A polynomial versus an exponential is a much less painstaking problem. For instance, if you wanted to know when 2^x overtakes x^45, you just set them equal to each other and solve for x, and you know that the solution is greater than 45 in this case because 2^45 < 45^45.
+Cooper Gates The solution is a transcendental that's arguably non-trivial to compute, but my point was that you don't have to compute where 2^x overtakes *every* polynomial to know that it eventually will overtake *any* polynomial. That's what defines the rate of growth of a function, not where it overtakes another class of functions, but the fact that it does at some point. The proof that bb overtakes any computable function is extremely easy, even finding an upper bound like you did in your example is very easy. But yes, the exact point at which bb overtakes a particular computable function isn't easy.
+James Davis I said that I found a *lower* bound because 45 for x gives 2^45 for the exponential and 45^45 for the polynomial. Anyway, what about comparing the growth rates of the Xi, Rayo, and FOOT functions?
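As an aside, the crossover discussed above can be pinned down with exact integer arithmetic rather than by solving the transcendental equation. A rough sketch (the claim that the first crossing found after x = 45 is permanent relies on the ratio 2^x / x^45 being increasing once x exceeds 45/ln 2, roughly 65):

```python
def crossover(k):
    """Smallest integer x > k at which 2**x overtakes x**k for good."""
    x = k  # at x = k the polynomial is still ahead: 2**45 < 45**45
    while 2 ** x <= x ** k:
        x += 1
    return x

print(crossover(45))
```

Python's arbitrary-precision integers make the comparison exact, so there is no floating-point doubt about which side of the crossing a given x sits on.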
6-card is 232,218,265,089,212,416 machines 7-card is 1,180,591,620,717,411,303,424 machines 8-card is 7,958,661,109,946,400,884,391,936 machines Can't imagine how the large the "best" machine is from any of those considering the growth rate.
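Those counts follow from the encoding in the video: each of the 2n (card, symbol) pairs maps to a (write, move, next card) triple, where "next" can also be Halt, giving 2 x 2 x (n+1) choices per pair. A quick check reproduces the numbers above:

```python
def machine_count(n):
    """Number of n-card, 2-symbol machines: (2 writes * 2 moves * (n+1) targets)^(2n)."""
    return (4 * (n + 1)) ** (2 * n)

for n in (6, 7, 8):
    print(n, machine_count(n))
```

The 7-card count is exactly 32^14 = 2^70, which is why it looks so suspiciously round in binary terms.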
So if you had a supercomputer that can run a million Turing machines a second, the 6-card case alone would take about 7,000 years (I'm not sure how fast they can really do it.)
Does this have to do with the fact that Turing machines can run any algorithm? So the busy beaver problem is essentially "what is the largest number of 1s that can come out of an algorithm that only requires n cards", therefore after a point the busy beaver can always exceed a given algorithm, because it will at least include that algorithm (and every one that exceeds it).
The proof that the busy beaver function is incomputable is actually quite straightforward. Let _BB_ denote the busy beaver function; _BB(n)_ is the busy beaver for _n_ states. Now assume _BB_ is a computable function. In this case, we have a computable function _BBT_ which computes the maximum number of transitions that a *halting* Turing machine with _n_ states can go through. This gives us a solution to the Halting Problem. Given a Turing machine with _n_ states, simply run it until either: * it halts; or * it has gone through _BBT(n)_ transitions. If it is not in a halting state by the end of _BBT(n)_ transitions, then it will never halt, because _BBT(n)_ is the upper bound on the number of transitions that a *halting* Turing machine of _n_ states can go through. But the Halting Problem has been proven to have no solution. Thus, by contradiction, the busy beaver function is not computable.
So if busy beaver is a function, say b(n), that grows faster than any computable function, it also should grow faster than f(n) = b(b(n)). How? Edit: I found my error; apparently b(n) wouldn't be a computable function, so f(n) is clearly faster, but there's no problem with that. I find it really hard to see that b overtakes TREE, not that I really understand how big a number like TREE(3) is anyway.
Another statistic that I think would be interesting would be for Turing machines that actually halt, what is the maximum excursion of the read/write pointer in both the positive and negative directions? In other words, how much tape was needed?
Oh look, the ground is opening up beneath me. And there are Gödel, Cantor, Turing, Conway, Mandelbrot, Heisenberg, Chaitin et al already falling, all screaming as they descend.
The proof for why the BB function eventually grows faster than any computable function is beautiful in its simplicity. It's because of the infinite tape. Because the beaver has infinite tape, it can create any computable function on the tape. It can create anything on the tape that can be expressed in binary code, like the works of Shakespeare.
Someone once explained to me that BB(768) is such an unimaginably huge number that if we dedicated all the atoms in the universe to computing it, we still wouldn't be able to - and yet it is a finite integer. That boggled my mind. Such a short description of such a gargantuan quantity.
The reason that at some point BB numbers simply cannot be calculated with our current understanding of computers, even with infinite time, space, and resources, is precisely the halting problem, where you cannot know whether any given Turing machine will stop or not.
+NeonsStyle Assuming all the laws of physics are Turing computable, which seems like a safe bet, then learning the size of the program our physics runs on lets us derive the size of the multiverse, and potentially even simulate ourselves. Granted, that would be of somewhat limited value for the most part, but it would still be an awesome datum of which to claim possession.
jetison333 Actually, over on Numberphile it was mentioned that fairly recently there was discovered a computationally efficient perfect test for primality.
eodguy83 I believe Tor Diryc'Goyust is referring to an efficient primality test that only works for Mersenne primes (numbers of the form 2^p-1 where p is prime). I'm not sure if these kinds of primes are used in cryptography, but I would suspect not.
I watched this, then built one of these things in R at work. Great video, fascinating problem.
Love the Busy Beaver Problem, a close relative of the Halting Problem. However, when he says the machine head moves to the right on the tape, in the animation that was actually the Beaver's left, so I was confused when he said "right": was it our right (the viewer's) or the Beaver's right? I think he meant our right, but it was confusing lol.
Some beavers are actually putting in a lot of work, but others are just repeating the same thing pretending they’re working every time the boss comes past.
Are there any online groups dealing with the busy beaver problem? This video was made 8 years ago and at that time, he said that there were 40 five-card busy beaver machines still running. I was wondering if any of those machines had been resolved in these past 8 years.
Yes, there is; he mentioned the club, I just forget the timestamp. There's also T. Rado's group if you search for it. There are other independent researchers too. And then there's you and I, who can create busy beaver programs.
So are there Busy Beaver equivalents for other models of computing? Say, lambda calculus? Or process algebras? Or type theory? Or whatever other magical models there are out there? What about Conway's Game of Life, for instance? You could have something like: "Which n by k pattern that actually becomes static in finite time (after some finite time it may only contain periodic or static patterns; although, since busy beavers completely rule out dynamics, perhaps periodic patterns are not allowed either) produces the largest number of live cells?"
I haven't heard of one. I think that for any calculus the tape would be the axioms, the program the inference rules, the 1s the number of theorems it can output, and the halt state a fixed point. It would be a little weird for the Game of Life. I think the board would be the tape, the program would be the ruleset, the 1s would be the live cells, and the halt state... I guess an idempotent board state like you said, but that does seem a little arbitrary.
Ferroneoboron san, actually a quick Google search suggested that this idea indeed exists for cellular automata of sorts, namely those which are also 2D Turing machines: turmites. ( ***** please do a video on those too :) ) Though I was unable to find anything for other computational models, at least not from a 5-minute search.
Kram1032 Oh yeah, hi! You should get yourself a profile pic, good for recognition value :) I guess they didn't go into detail because it's rather trivial to construct these things for any given model and the fundamental argument will be basically the same as for the original one.
I get that the tape is infinite, I get that the head moves in either direction, and I get that each cell can be a 1 or 0. What I don't get is what the tape starts with. Is it all 1s, all 0s, or a random mixture of the two?
I saw this and just said to myself "well it would loop forever", but when he said you have to find a finite number of 1's, I thought " well how the f**k am I supposed to do that?" I'm still smiling about it now
Remember watching this for my A-level studies. I graduated from university 2 years ago and remember learning about the term busy beaver all those years ago. Came back to this video after all this time, after coming across the term "busy beaver" again but not quite remembering what it was about. Suffice it to say this video jogged my memory.
Sure that it grows faster than Loader's number? Edit: Loader's number is computable, so it's answered right there. But what about Rayo's Number? What about the FOOT function?
That's because you can completely recreate the busy beaver game in Rayo's function in something like 2,000 symbols, so with something like 3,000 symbols you overtake every single BB number below that.
So... if I have the busy beaver function BB(n), which for any n calculates the maximum number of 1s writable by an n-card halting Turing machine, would the function F(n) = BB(n)*n grow faster than the busy beaver function?
In the instruction layout, for the 2-card case, how can the final column represent 3 possibilities -- 0 to halt, 1 to repeat the instruction on the current card, and 2 to go to the other card -- when a single bit can only represent two states?
There is no requirement that that final column be a single bit wide. For the 2- or 3-card cases, you would need a 2-bit-wide field; for 4-7 cards, you would need a 3-bit-wide field.
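The widths above are just the ceiling of log2 of the number of jump targets (the n cards plus the halt state). A one-line check, where the field layout itself is this thread's assumption rather than anything canonical:

```python
import math

def next_field_bits(n_cards):
    """Bits needed to name any one of n_cards jump targets plus halt."""
    return math.ceil(math.log2(n_cards + 1))

print([next_field_bits(n) for n in range(2, 9)])  # [2, 2, 3, 3, 3, 3, 4]
```

The width only grows at n = 4 and again at n = 8, i.e. whenever n + 1 first exceeds a power of two.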
I think a difficulty I have with this video is that, when I hear "looping," I think of periodic behavior -- the machine returning to a previous state. What makes this so nasty is that it can turn out non-periodic. It might never repeat a state. If it got into a loop (as I think of loops) the condition could be detected. But if all programs that failed to halt periodically repeated their state, the halting problem would be solvable.
eNSWE "I haven't ever really heard anyone referring to the tape as part of the machine's state." You have now. And the reasoning is simple: the contents of the tape can influence whether a Turing Machine halts or not and how long it takes, if it does halt. (It also influences the final "answer.") "these are loops in the sense that the machine is in a certain state c, reads some input symbol a, and the transition function returns the same state and moves the head so that the same thing will keep happening over and over again." The problem with that analysis is that the machines that are still undecided don't quite do that. They only keep repeating their state number in the same sense that the decimal expansion of pi keeps repeating the digit '1'. Their behavior (as near as can be determined) is non-periodic. Even if you only have a "partial state" that you know must return after n moves, you have still identified a type of periodic behavior and can rule that the machine does not halt. If all Turing Machines either halted or exhibited periodic behavior, the halting problem would be decidable.
Look, I didn't make up the fact that what this video refers to as "cards" is actually what is referred to as "states" in the formal definition. Of course I understand that the input can affect whether the TM halts or not (if it is a partial recursive function, otherwise it doesn't matter), but the "state" of the machine is explicitly defined as what "rules" the machine is operating under at the moment (and thus how the transition function behaves). It's just terminology. The transition function takes the current state and the next character on the tape as its arguments.
eNSWE What you said is that you had not previously encountered anyone who regarded the contents of the tape as part of the state. I took that at face value and simply told you that you have now. I also told you why I consider it part of the state. " it's just terminology." Language is just terminology. The point of my original comment is that it is evident that he is trying to convey some idea other than what the words used mean to me. That is a difficulty as I am not able to determine conclusively what he is really trying to say. Your apparent position is that it should be self-evident what he really means. That's the trouble with natural language. It doesn't always work that way.
Mat M Well, there are a few "tricks." If you can identify a partial state that must repeat itself after some finite number of turns, you can prove non-halting. It was once believed that any set of shapes that could tile the plane could do so periodically. And then it was proven that that assumption would make the halting problem decidable.
You take the next left, go 2 blocks, and then go into the nearest corner store. If you look in the aisle containing toilet paper, remove the third one in the fourth column. The problem will be waiting for you there.
P = problems solvable in polynomial time. NP = problems checkable in polynomial time, or solvable in non-deterministic polynomial time. BB is not computable, so it does not live in any class of computable functions.
+Julian Goulette Maybe use a massive loop of tape with a highly composite number of slots so that loops could be detected easily; the problem is that if the tape is linked to itself, the results will be different, since it can wrap around and reach 1s it made that would otherwise be very far away. A Mobius strip would be fun ;)
+Julian Goulette Doesn't take away from the fact that it changes the problem. First of all, a practical loop of tape wouldn't have enough slots for the winning machine with 7 cards - it would already need to print way more than a googolplex 1s.
+Julian Goulette A looped tape is one way of expressing a function that terminates, if I recall correctly. I was doing some basic reading on the various types of Turing machines, and looped tape was a thing for some of them. I am very tired right now and am probably remembering exactly what they are used for wrong.
There already exist a couple of functions that grow faster than the Busy Beaver. The Busy Beaver is only the biggest function for a Turing machine, and with that also for a computer. But functions like Rayo's function are not even possible to calculate with any tool. We just know their values exist because of formal logic.
I had a thought which went like: "Well, would a quantum computer be much help here? Could you program it to calculate the max number of 1s for a 10-card Turing machine, and have it take some time (but not centuries) and give you the correct answer?" Or is there maybe just some mathematical trick to the busy beaver problem, so that it can be put into a rather simple formula like "beaver of x" (with x being the number of cards) equals y (with y being the max number of 1s)? I think such a thing should be possible (for a mathematical genius like Turing it might not even have posed much of a problem, but unfortunately not many people like him are alive today, if any)...
Seegal Galguntijak You don't need a genius for that. Here: BB(n). There is your mathematical formula. The thing is, you can write formulas for anything, but that doesn't make them computable. What you are looking for is an algorithm, and because algorithms are just as powerful as TMs, you cannot construct a computable function growing faster than BB, by definition.
In a Turing machine the head can't stay still, because that would be redundant: you can simulate a still head with other sequences of instructions. And because the Turing machine is meant to be the simplest example of a machine capable of computing all computable functions, you can't have redundancy.
I'd be interested to know whether a computation is actually following a Mandelbrot/fractal pattern when it goes on forever. We have a situation of self-modifying code (which is what a Turing machine is), and if you change the program, you can in essence set up a chain of events which can recursively become the original program (or become a form of cyclic code such as a CRC algorithm).
Shift left and right and change to a '1' or '0' can be thought of as a logic function. "If '1' change to a '0'; if '0' change to a '1'" is an XOR. Shifting is how a cyclic redundancy code (or LFSR) works. So surely a program can be broken down into logical and LFSR instructions and recompiled to figure out the order of the LFSR.
Integrals are usually used on continuous functions, because they multiply differentials by the value of the original function at infinitely many points. BB(n) is a step function, so I don't think it would work. But you might be able to use other forms of infinite sums.
Oh, I see what you mean on a real computer. No, you can write a program which extends the size of the numbers that can be dealt with. You are not limited by the limits of the MPU or operating system.
BB(n)? Remember that it's been proven that it is not computable, so there's no formula for it that you can plug into a computer. Besides, all algorithms are O(BB(n)).
It is always so amazing to me, that Prof. Brailsford is not only _immensely_ brilliant academically, but even _moreso_ brilliant at story telling.
I could listen to him on podcast daily.
+Facey Neck Yes please, we need this.
Let's start a request for this podcast to exist.
sounds a bit like Winnie the Pooh
Pretty sure he could stand in front of a freshly painted wall describing the paint drying, and I would still be captivated.
Isn't he a teacher also?
Bet he's one of those actually great and memorable ones.
I don't know about you, but "faster than any computable function" sends a chill up my spine.
No it does not. There are no busy beavers, there are just electrical signals switching from an on state to an off state.
@@jeffcarroll1990shock ?
@@renomado8616 zeros and ones.
i wonder if there is a function that is faster than any computable function, period....
Busy Beavers are only faster than all computable functions *eventually*, so they don't qualify
@@MABfan11 I mean, any function we think of as faster than another function is typically only faster eventually, for some value of eventually. Like, x! grows faster than x^2 "only" for x > 3.
As it turns out, the universe is just a Busy Beaver program running on an extra-dimensional supercomputer and the higher-ups don't know whether or not it will halt yet.
+Vecht Our Universe is just a G64-card Busy Beaver program.
+Vecht Let's hope it doesn't halt. And really, if you want to think that way, it will loop eventually, when entropy runs out.
Sounds like something Douglas Adams would say.
The year is 2020, the machine is slowly approaching its halt state
@Vecht haha
Busy Beaver (5) is 47,176,870. It was verified recently in the Busy Beaver Challenge.
More specifically, 47176870 is the maximum number of steps that a 5-state machine can run before halting; the number of 1s produced is the 4098 mentioned in the video.
Also BB(6) was shown in 2022 to be at least 10 ↑↑ 15 (i.e. 10^10^10^…^10, with 15 10s)
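For anyone unfamiliar with Knuth's up-arrow notation, ↑↑ is an iterated power tower. A tiny sketch, where only the smallest cases are representable, which is rather the point:

```python
def power_tower(base, height):
    """Compute base ** base ** ... ** base, a right-nested tower of given height."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

print(power_tower(2, 3), power_tower(2, 4))  # 16 65536
```

power_tower(10, 15) is the BB(6) lower bound mentioned above; even power_tower(10, 3) already has ten billion digits, so don't try to print it.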
Yesterday people were celebrating Independence Day. People should've been celebrating BB(5) being proven.
I hope Computerphile or Numberphile will make a video report and interview some of the folks who proved it.
Yes
BB(6) may never be solved because we’ve already found machines that would require solving a version of the Collatz conjecture to determine if they halt or not
I really enjoy seeing someone who talks so passionately about their subject. It really motivates you to want to learn more.
false.
To explain why the busy beaver numbers ultimately outpace any computable function: for any computable function, some sufficiently complex busy beaver with a finite number of cards can be programmed to calculate an arbitrary value of that function, then print that many plus one 1s and halt.
It turns out that if you can compute all Busy Beaver numbers, you can solve the Halting problem.
Proof:
Suppose that M is a machine such that for any n, M can compute BB(n). Let's now show that M can solve the halting problem. Suppose that T is a Turing machine with n instructions. All M has to do is run T for BB(n) steps. If T halts before then, then T of course halts. If T does not halt in that time, then T doesn't halt at all, because, by definition, BB(n) is the longest running halting Turing machine with n instructions. Therefore M solves the halting problem.
Since no Turing machine can solve the Halting problem, no Turing machine can compute BB(n) for arbitrary n. Since computable functions are exactly those that Turing machines can compute, BB(n) grows faster than any computable function.
Note that we can always build a sufficiently complex Turing machine to compute a particular Busy Beaver number. But no fixed Turing machine can compute all Busy Beaver numbers.
@@redjr242 An immortal, sufficiently smart computer scientist with infinite time is just a sophisticated Turing machine, isn't it?
Couldn't it compute all BB(n) in increasing order?
@@redjr242 It's very interesting, but where did you get this proof?
It seems incorrect here:
> by definition, BB(n) is the longest running halting Turing machine with n instructions
No, by definition, BB(n) is the biggest number of 1s produced by a halting Turing machine with n instructions.
It might take much longer than BB(n) steps to produce such a number of 1s.
> Note that we can always build a sufficiently complex Turing machine to compute a particular Busy Beaver number. But no fixed Turing machine can compute all Busy Beaver numbers.
If we were able to make a Turing machine that computes a particular busy beaver number, then (it seems like, needs more thought) we would also be able to make a Turing machine "builder": a Turing machine that builds a Turing machine that computes a particular busy beaver number and then executes it, and thus be able to compute all busy beaver numbers.
Chris Brenan at what value of n does BB outpace the TREE function? Considering TREE(3) is already unfathomably large, while BB by comparison hasn't even left the starting line at 3. Or 6. Or 7.
Sorry for a slow reply. No clue. But early values of functions can be very misleading. I believe it is generally agreed that Loader's Number is much greater than TREE(3) (or, for that matter, TREE(TREE(3))), and is achieved using a 512-character C program. Obviously that could be written as a Turing machine with some odd thousands of states. To circle back to your question, the exact point at which BB(a) overtakes TREE(a) is probably beyond our determining at present, but it isn't too difficult to wrap your head around the proof of why it has to happen. The fact that the very first few BB numbers are unimpressive only reflects the fact that tiny computers can't do much. But, to use an analogy, the fact that X^2 < X for X < 1 tells you nothing about which function wins in the long run.
I'm the new busy beaver: I'm gonna replay this video until I understand it.
Not the most efficient algorithm.
Unfortunately that would be an invalid configuration; the Turing machine is required to halt.
BoiledHam woah, you didn't have to do him like that.
But he is a 1-state machine that writes 1s to the right each time he doesn't understand.
Make sure you watch "turing machine primer" in the description :)
ok?
The David Attenborough of Computer Science!
false.
Hello from the year of Busy Beaver 5!!! super excited :D
Professor Brailsford has such a pleasant voice. I could listen to him talk for hours. He should record some audiobooks or pick up voice acting.
false.
This man is so charismatic. More videos with him, please. Not many people can explain something with so much enthusiasm that it translates to you. Other guys are good too but he is definitely ahead of many.
It’s been 9 years.. are u over your crush?
BB(7918) is independent of ZFC, proven in 2016 by Adam Yedidia. BB(4888) depends on Goldbach's Conjecture; BB(5372) depends on the Riemann Hypothesis.
now that's an interesting thing to know. Is the proof less than 50 pages of invented symbols and obfuscated predicate logic?
@@imadhamaidi Probably? I don't know about the page count, but Yedidia didn't hand-write the TMs himself or do any sort of weird indirect existence proof; instead he designed a variant of a programming language that compiled to TM specifications for an existing academic TM simulator, with optimizations focused on reducing the number of states used.* Then he wrote programs with the basic logic of "if [interesting unprovable/unproved statement], return true, else loop forever." I would guess that the papers mostly focus on discussing his compiler, which is the real meat of his work, although I know that the TMs produced were published and I think checked somehow.
I got this info from Scott Aaronson's blog, where he posted about it around when Yedidia (at the time one of Aaronson's grad students / post docs, IIRC) was publishing; you should check those posts out yourself if you're curious.
* A fun lesson in tradeoffs, and the dangers of code golf: the TMs Yedidia's compiler produces are crazily slow. Aaronson talks about their test program, which checked that 3^3 = 27. If I remember right, the resulting TM took something like three days to return a result. Thus, while I believe Aaronson did say they had one of those TMs running, I wouldn't hold out hope for an answer on either of those two conjectures from this quarter.
@@gazeboist4535 just checked the blog post out and I'm blown away. So if you could find BB(7918) you could literally check the consistency of the ZFC axioms, if only we had some magical way to do so
the bound has been lowered to BB(748)
7918 to 1919 to 748 to 745 and now 643 as of september 1st 2024
University of Nottingham has the best professors.
I could listen to this guy all day.
In primitive recursive functions you have a set of building blocks where you can compute the arguments to functions lower down in the hierarchy to produce the values of functions higher up. Something similar happens once you have enough cards in Rado's scheme to define parameterised machines like "take a computable function f and a busy beaver b of n-c cards and use c extra cards to compute f(b), write that many ones onto the tape, and then halt." This is potentially a busy beaver for n cards, and these are higher-order recursive functions that are being computed. These higher-order recursive functions have been studied by William Tait, I think, and the logical (intuitionistic) consequences of their existence are explored by Girard _et al_ in an infuriatingly obtuse book called "Proofs and Types". I would love to see a video about that!
Channel is so deeply into Turings work which is really helpful
watching this video in 2020 makes me feel as if I am from the future. The record for n = 6 is 10^2879. For n = 5 the current record
is 47 176 870, but it is not known about some machines whether they are in a loop, so
that record could still be broken.
Those machines could never halt even if they're not in a loop. Something equivalent to computing π, for example.
Watching this video in 2022, the new record for BB(6) is 10↑↑15.
47 176 870 is the number of steps, not the number of 1's. The n = 5 record is still 4098 1's (I think).
Lol
ok?
At 2 mins the video proves he is a hologram.
At 14:30 it would appear he is a T1000 =P
Anders Öhlund That was scary
Anders Öhlund That can only mean one thing he is the worlds 1st transhuman. (can't wait for hate comments because I said 1st.)
@@asdf30111 it looks like you have waited.
If this isn't the best science video on YouTube it's certainly in the top five. I've never seen Turing machines described more clearly.
John K Clark
These lectures are really nice! They are, in their own kind, masterpieces. Thank you to all involved!
He needs a roll of Brady's brown paper.
schel sullivan that's just for Numberphile, Sean has his computer printer paper!
>Brady
Arghh missed some compression glitches at 2mins in 😠
It looks kind of cool, though.
almost thought it was intended.
There's something really funky at 14:30 as well =P
Anders Öhlund ah yes, but that was 'by design' :)
You're just trying to hide the fact that he's a T1000 or a hologram :-P
7:01 "Wow, bound to win an award."
Oh man, this is priceless.
BB(5) was proven to be 47176870 this year
Really cool video. It reminds me a little of Langton's ant. A video about that would be cool too.
So to get that 10^10500 number for a score estimate for the 6-card machine case, you had to evaluate a computable function, otherwise you would not have had an answer. It sounds like you can estimate how fast the busy beaver function grows, but it will always be faster than the estimate and you will never know exactly how fast it grows because you can't compute it.
+Cooper Gates The BB function is perfectly computable for any particular value, in the same way that it's perfectly reasonable to determine if a particular program will halt. It's non-computable in the sense that no algorithm can output the correct BB number for an *arbitrary* input.
+James Davis Yes, though calculating 10^18500 (I think the bound has been raised already!) directly by running all those turing machines until the highest halting score is found would be very impractical, so that wasn't what was used to find the result.
+James Davis A polynomial versus an exponential is a much less painstaking problem, such as if you wanted to know when 2^x overtakes x^45, you just set them equal to each other and solve for x, and you know that the solution is greater than 45 in this case because 2^45 < 45^45.
+Cooper Gates The solution is a transcendental that's arguably non-trivial to compute, but my point was that you don't have to compute where 2^x overtakes *every* polynomial to know that it eventually will overtake *any* polynomial. That's what defines the rate of growth of a function, not where it overtakes another class of functions, but the fact that it does at some point. The proof that bb overtakes any computable function is extremely easy, even finding an upper bound like you did in your example is very easy. But yes, the exact point at which bb overtakes a particular computable function isn't easy.
+James Davis I said that I found a *lower* bound because 45 for x gives 2^45 for the exponential and 45^45 for the polynomial. Anyway, what about comparing the growth rates of the Xi, Rayo, and FOOT functions?
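Since Python has arbitrary-precision integers, the crossover point in the 2^x versus x^45 example above can actually be found exactly by brute force. A quick sketch (the example is from this thread; the loop itself is just an illustration):

```python
# Brute-force the point where 2**x overtakes x**45 for good (x > 1).
# Python integers are arbitrary-precision, so every comparison is exact.
# The ratio 2**x / x**45 is increasing for all x above ~65, so once the
# exponential gets ahead it stays ahead.
x = 2
while 2**x <= x**45:
    x += 1
print(x)  # smallest integer x > 1 with 2**x > x**45
```

The same idea works for any exponential-versus-polynomial pair, though of course nothing like it can exist for BB versus a computable function.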
Finally I comprehend what the fuss is about the halting problem.
ok?
"Get yourself a super computer"- hmmm, I would love to, great insights Prof.
For shits and giggles, I calculated the number of 5-card Turing machines. It's 63,403,380,965,376 machines, or 6.34*10^13 if that floats your boat.
But can they run crysis? XD
6-card is 232,218,265,089,212,416 machines
7-card is 1,180,591,620,717,411,303,424 machines
8-card is 7,958,661,109,946,400,884,391,936 machines
Can't imagine how large the "best" machine is from any of those, considering the growth rate.
@@jrg3213 The number isn't computing power tho
So if you had a supercomputer that can run a million Turing machines a second, it would take about 2 years (I'm not sure how fast they can really do it).
That number is so large, there is no way my boat will stay afloat after it boards
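Those counts follow a simple closed form: each of the 2n (state, symbol) table entries independently picks a symbol to write (2 choices), a direction (2 choices), and a next state out of n+1 including halt, giving (4(n+1))^(2n) machines. A one-liner reproduces the figures in this thread:

```python
# Number of distinct n-state, 2-symbol Turing machines in the busy beaver game:
# each of the 2n table entries has 2 (write) * 2 (move) * (n+1) (next state,
# including halt) = 4(n+1) choices, all independent.
def num_machines(n: int) -> int:
    return (4 * (n + 1)) ** (2 * n)

print(num_machines(5))  # 63403380965376, matching the 5-card count above
```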
Does this have to do with the fact that Turing machines can run any algorithm? The busy beaver problem is essentially "what is the largest number of 1s that can come out of an algorithm that only requires n cards", so after a point the busy beaver can always exceed a given algorithm, because it will at least include that algorithm (and every one that exceeds it).
The proof that the busy beaver function is incomputable is actually quite straightforward.
Let _BB_ denote the busy beaver function; _BB(n)_ is the busy beaver for _n_ states.
Now assume _BB_ is a computable function. In this case, we have a computable function _BBT_ which computes the maximum number of transitions that a *halting* Turing machine with _n_ states can go through.
This gives us a solution to the Halting Problem. Given a Turing machine with _n_ states, simply run it until either:
* it halts; or
* it has gone through _BBT(n)_ transitions.
If it is not in a halting state by the end of _BBT(n)_ transitions, then it will never halt, because _BBT(n)_ is the upper bound on the number of transitions that a *halting* Turing machine of _n_ states can go through.
But the Halting Problem has been proven to have no solution. Thus, by contradiction, the busy beaver function is not computable.
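The argument in this comment can be sketched directly in code. Everything here is hypothetical: `bbt` is the impossible oracle, and the `machine` object with a `step()` method is an interface made up for illustration (no such oracle can exist, which is the whole point):

```python
# Hypothetical: if bbt(n), the max number of transitions of any *halting*
# n-state Turing machine, were computable, halting would be decidable.
def halts(machine, n_states, bbt) -> bool:
    bound = bbt(n_states)  # the impossible oracle call
    for _ in range(bound):
        if machine.step() == "HALT":
            return True  # halted within the bound
    # No halting n-state machine runs longer than bbt(n_states) steps,
    # so exceeding the bound means it never halts.
    return False
```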
??
so if busy beaver is a function, say b(n), that grows faster than any computable function,
it also should grow faster than f(n) = b(b(n)).
how?
edit - I found out my error: apparently b(n) wouldn't be a computable function, so f(n) is clearly faster, but there's no contradiction in that.
I find it really hard to see that b overtakes TREE, not that I really understand how big of a number such as TREE(3) is anyways.
Another statistic that I think would be interesting would be for Turing machines that actually halt, what is the maximum excursion of the read/write pointer in both the positive and negative directions? In other words, how much tape was needed?
The wiki page on Busy beaver has a function S(n) for the maximum shifts. It's kind of close to what you have in mind.
??
Oh look, the ground is opening up beneath me. And there is Godel, Cantor, Turing, Conway, Mandelbrot, Heisenberg, Chaitin et al already falling, all screaming as they descend.
The proof for why the BB function eventually grows faster than any computable function is beautiful in its simplicity. It's because of the infinite tape. Because the beaver has infinite tape, it can compute any computable function on the tape. It can create anything on the tape that can be expressed in binary code, like the works of Shakespeare.
Someone once explained to me that BB(768) is such an unimaginably huge number that if we dedicated all the atoms in the universe to computing it, we still wouldn't be able to - and yet it is a finite integer. That boggled my mind. Such a short description of such a gargantuan quantity.
Don't even need to go that high. It was proved BB(18) > Graham's number.
@@ben_spiller Whoah. Didn't know that, thanks.
I would have killed to have him as my compsci professor.
This man could make drying paint entertaining.
I could listen to David all day!
I love how the beaver teeth hang over the tape sometimes, a nice detail.
The reason that, at some point, BB numbers simply cannot be calculated with our current understanding of computers, even with infinite time, space, and resources, is precisely the halting problem: you cannot know whether any given Turing machine will stop or not.
Is there an equivalent problem that can be stated in terms of the lambda calculus instead of a Turing machine?
Given that this races up faster than any other computable process, could it be used for something other than as a curiosity?
+NeonsStyle Assuming all the laws of physics are Turing computable, which seems like a safe bet, then learning the size of the program our physics runs on lets us derive the size of the multiverse, and potentially even simulate ourselves.
Granted, that would be of somewhat limited value for the most part, but it would still be an awesome datum of which to claim possession.
+eodguy83 you can write a program for a turing machine that calculates primes. it would just be horribly inefficient.
jetison333 Actually, over on Numberphile it was mentioned that fairly recently there was discovered a computationally efficient perfect test for primality.
***** Well that's as clear as mud!
eodguy83 I believe Tor Diryc'Goyust is referring to an efficient primality test that only works for Mersenne primes (numbers of the form 2^p-1 where p is prime). I'm not sure if these kinds of primes are used in cryptography, but I would suspect not.
14:30 I love that you can see how many people pressed replay for this part XD
Consider Σ(n); where n= ackermann(g64,g64)
i watched this then built one of these things in R at work. great video, fascinating problem
Love the Busy Beaver Problem - a close relative of the Halting Problem. However, when he says the machine head moves to the right on the tape, in the animation that was actually the Beaver's left, so I was confused: did he mean our right (the viewer's right) or the Beaver's right? I think he meant our right, but it was confusing lol.
I suspect I could be mind-blown by this if I understood.
Mmmhn, not really
Absolutely Glorious.
Who is here after watching George Hotz's busy beaver stream?
me
me too :D
same
Before about to watch :D
I have no way of computing whether I got here by that method
Some beavers are actually putting in a lot of work, but others are just repeating the same thing pretending they’re working every time the boss comes past.
Is there a more slowly growing uncomputable function?
Are there any online groups dealing with the busy beaver problem? This video was made 8 years ago and at that time, he said that there were 40 five-card busy beaver machines still running. I was wondering if any of those machines had been resolved in these past 8 years.
leaving this here in case someone answers so i get a notification
Same
Yes there is; he mentioned the club. I just forget the time stamp.
There's also T Rado's group if you search for it.
There are other independent researchers too.
Then there's also you and I which can create busy beaver programs.
@@asagiai4965 Thank You!
So are there Busy Beaver equivalents for other models of computing? Say, lambda calculus? Or process algebras? Or type theory? Or what ever other magical models there are out there?
What about Conway's Game of Life, for instance? You could ask something like: which n by k pattern that actually becomes static in finite time (after some finite time it may only contain periodic or static patterns; although busy beavers completely rule out dynamics, so perhaps periodic patterns are not allowed either) produces the largest number of live cells?
I haven't heard of one.
I think that for any calculus the tape would be the axioms, the program the inference rules, the 1s are the number of theorems it can output and the halt state is a fixed point.
It would be a little weird for games of life. I think the board would be the tape, the program would be the ruleset, the 1s would be the live cells and the halt state ... I guess an idempotent board state like you said but that does seem a little arbitrary.
Ferroneoboron san actually, a quick google search suggested that this idea indeed exists for cellular automata of sorts - namely those which are also 2D turing machines. Turmites.
( ***** please do a video on those too :) )
Though I was unable to find something for other computational models, at least not from a 5min search.
Yep. en.wikipedia.org/wiki/Busy_beaver#Generalizations
"For any model of computation there exist simple analogs of the busy beaver."
Penny Lane damn, sometimes one should just try the obvious thing
Kram1032 Oh yeah, hi! You should get yourself a profile pic, good for recognition value :)
I guess they didn't go into detail because it's rather trivial to construct these things for any given model and the fundamental argument will be basically the same as for the original one.
i get that the tape is infinite, i get that the head moves in either direction, i get that each cell can be a 1 or 0. What i don't get is what the tape starts with: is it all 1s, all 0s, or a random mixture of the two?
For the Busy Beaver problem, the tape starts with all zeros by definition.
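To make that concrete, here's a minimal simulator sketch (a plain Python dict as the tape, so every unwritten cell reads as 0, i.e. the all-zero start) running the standard 2-state, 2-symbol champion:

```python
# Transition table of the 2-state busy beaver champion:
# (state, symbol read) -> (symbol to write, head move, next state)
RULES = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "HALT"),
}

def run(rules, max_steps=10_000):
    """Run from an all-zero tape; return (number of 1s written, steps taken)."""
    tape, pos, state, steps = {}, 0, "A", 0
    while state != "HALT" and steps < max_steps:
        write, move, state = rules[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
    return sum(tape.values()), steps

print(run(RULES))  # (4, 6): four 1s written in six steps
```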
So the longest thing to compute and print is Busy Beaver with the input as Busy Beaver output starting from 1
This is one charismatic prof.
I saw this and just said to myself "well it would loop forever", but when he said you have to find a finite number of 1's, I thought " well how the f**k am I supposed to do that?"
I'm still smiling about it now
ok?
I wanna know how long the longest-running busy beaver machine with a given number of cards runs.
siprus from wikipedia, for 2-symbol machines: the 5-state record is ≥ 47,176,870 steps and the 6-state record is > 7.4 × 10^36534 steps
" join the Beaver club!
Get your own super computer! "
??
The Wikipedia entry for Busy Beaver has a (very weak) lower bound. It uses arrow notation, of course.
I wish this guy had been my tutor at Uni. What fun!
People have their pc's cracking hashes all day. We need them to be finding busy beavers!
They can’t; that’s rather the point
Remember watching this for my A-level studies.
I graduated from university 2 years ago and remember learning about the term busy beaver all them years ago.
Came back to this video after all this time, after coming across the term "busy beaver" again but not quite remembering what it was about. Suffice to say this video jogged my memory.
love the glitching effect! how does one do that?!
Sure that it grows faster than Loader's number?
Edit: Loader's number is computable, so it's answered right there. But what about Rayo's Number? What about the FOOT function?
Slower than Rayo
Foot is ill-defined btw and has no output
That's because you can completely recreate the busy beaver game in Rayo's function in like 2000 symbols, so with like 3000 symbols you overtake every single BB number below that.
this video is underrated
I would be interested to learn a little more about Bremermann's limit.
@Marian Gherca You forgot to add: "in 24 bits" You can easily define a 48 bit RGB palette or a 12 bit one.
so ... If I have the busy beaver function BB(n), which for any n calculates the maximum number of 1s writable by an n-card halting Turing machine, would the function F(n) = BB(n)*n grow faster than the busy beaver function?
are there any numbers of cards for which any finite number of 1s may be produced?
I may have missed it in the video, but what is the point of solving the problem? What would it accomplish?
It's fun.
Welcome to mathematics.
maybe if we develop computers that are not turing machines we can calculate BB(n) and use that to solve the halting problem
Another excellent video!
Correct me if I'm wrong but 256*10^8 is 25.6 billion in the short scale and 25.6 milliards in the long scale? Not 25.6 trillion...
yeah he must have misspoke
In the instruction layout, for the 2-card case, how can the final column represent 3 possibilities -- 0 to halt, 1 to repeat the instruction on the current card, and 2 to go to the other card -- when a single bit can only represent two states?
There is no requirement that that final column be a single bit wide. For the 2 or 3 card cases, you would need a 2-bit wide field. for 4-7 cards, you would need a 3-bit wide field.
I think a difficulty I have with this video is that, when I hear "looping," I think of periodic behavior -- the machine returning to a previous state. What makes this so nasty is that it can turn out non-periodic. It might never repeat a state. If it got into a loop (as I think of loops) the condition could be detected. But if all programs that failed to halt periodically repeated their state, the halting problem would be solvable.
eNSWE
"I haven't ever really heard anyone referring to the tape as part of the machine's state,"
You have now. And the reasoning is simple. The contents of the tape can influence whether a Turing Machine halts or not and how long it takes, if it does halt. (It also influences the final "answer.")
"these are loops in the sense that the machine is in a certain state c, reads some input symbol a, and the transition function returns the same state and moves the head so that the same thing will keep happening over and over again."
The problem with that analysis is that the machines that are still undecided don't quite do that. They only keep repeating their state number in the same sense as the decimal expansion of pi keeps repeating the digit '1'. Their behavior (as near as can be determined) is non-periodic.
Even if you have a "partial state" that you know must return after n moves, you still determine a type of periodic behavior and can rule that it does not halt. If all Turing Machines either halted or exhibited periodic behavior, the halting problem would be decidable.
look, I didn't make up the fact that what this video refers to as "cards" is actually what is referred to as "states" in the formal definition.
of course I understand that the input can affect whether the TM halts or not (if it is a partial recursive function, otherwise it doesn't matter), but the "state" of the machine is explicitly defined as what "rules" the machine is operating under at the moment (and thus how the transition function behaves). it's just terminology. the transition function takes the current state and the next character on the tape as its arguments.
eNSWE
What you said is that you had not previously encountered anyone who regarded the contents of the tape as part of the state. I took that at face value and simply told you that you have now. I also told you why I consider it part of the state.
" it's just terminology."
Language is just terminology. The point of my original comment is that it is evident that he is trying to convey some idea other than what the words used mean to me. That is a difficulty as I am not able to determine conclusively what he is really trying to say.
Your apparent position is that it should be self-evident what he really means. That's the trouble with natural language. It doesn't always work that way.
Mat M
Well, there are a few "tricks." If you can identify a partial state that must repeat itself after some finite number of turns, you can prove non-halting. It was once believed that any set of shapes that could tile the plane could do so periodically. And then it was proven that that assumption would make the halting problem decidable.
Where does this problem lie in the P-NP space?
you take the next left, go a 2 blocks and then go into the nearest corner store. If you look in the aisle containing toilet paper, remove the third one in the fourth column. The problem will be waiting for you there.
P = problems computable in polynomial time. NP = checkable in poly time or computable in non-deterministic poly time. BB is not computable so does not live in any set of computable functions.
you can have infinite tape, but what about loops of tape?
+Julian Goulette Maybe use a massive loop of tape with a highly composite number of slots so that loops could be detected easily; the problem is that if the tape is linked to itself, the results will be different, since it can wrap around and reach 1s it made that would otherwise be very far away. A Mobius strip would be fun ;)
loop of tape = finite + unbounded topology
+Julian Goulette Doesn't take away from the fact that it changes the problem. First of all, a practical loop of tape wouldn't have enough slots for the winning machine with 7 cards - it would already need to print way more than a googolplex 1s.
I was just curious,
+Julian Goulette A looped tape is one way of expressing a function that terminates, if I recall correctly. I was doing some basic reading on the various types of Turing Machines and looped tape was a thing for some of them. I am very tired right now and am probably remembering exactly what they are used for wrong.
Great vid, thank you so much!!!
sounds like someone needs to create a function that grows so fast it puts the Busy Beaver to shame
There already exist a couple of functions that grow faster than Busy Beaver. The Busy Beaver is only the biggest function for a Turing machine and with that also for a computer.
But functions like Rayo's function are not even possible to calculate with any tool. We just know they exist because of formal logic.
I find it amazing that ZFC cannot prove what the 745th BB number is.
Yay, Professor Brailsford!
More of this please ;)
Take a shot every time he says Busy Beaver
LOL. This prof is awesome!
Can someone explain the multi card turing machine please?
I had a thought which went like: "Well, would a quantum computer be much help here? Could you program it to calculate the max number of 1s for a 10-card Turing machine, and it would take some time (but not like centuries) and give you the correct answer?"
Or if so, is there maybe just some mathematical trick to the busy beaver problem, so that it can be put into a rather simple formula like "beaver of x" (with x being the number of cards) equals y (with y being the max number of 1s)? I think such a thing should be possible (for a mathematical genius like Turing it might not even have posed much of a problem, but unfortunately not many people like him are alive today, if any)...
Seegal Galguntijak
You don't need a genius for that. Here BB(n). There is your mathematical formula.
Thing is that you can write formulas for anything, but that doesn't make them computable. What you are looking for is an algorithm and because algorithms are just as powerful as TMs, you cannot construct a computable function growing faster than BB by definition.
Why cant the head stay still, what will happen if the head stayed still? Can all move left, move right and stay still exists in busy beaver game?
What?
In a Turing machine the head can't stay still, because that would be redundant: you can simulate a still head with other sequences of instructions. And because the Turing machine is meant to be the simplest machine capable of computing all computable functions, you can't have redundancy.
Well you can make a busy beaver out of that.
It will be your own game version.
irl the programs that win the BB game are similar to ackermann functions
The David Attenborough of CS
I’d be interested to know if a compute problem that goes on forever is actually following a Mandelbrot/fractal pattern. We have a situation of self-modifying code (which is what a Turing machine is), and if you change the program, you can in essence set up a chain of events which can recursively become the original program (or become a form of cyclic code such as a CRC algorithm).
Shift left and right and change to a ‘1’or ‘0’ can be thought of as a logic function. If ‘1’ change to a ‘0’/If ‘0’ change to a ‘1’ is an XOR. Shifting is how a cyclic redundancy code (or LFSR) works. So surely a program can be broken down to logical and LFSR instructions and recompiled to figure out the order of LFSR.
By 3 card, does he mean 3 states?
Yeah, dunno why he says cards. States is much more obvious.
Can't we take integral or some other operation to calculate directly without need of turing machine?
Integrals are usually used on continuous functions, due to the need of multiplying differentials with the value of the original function at infinitely many points. BB(n) is a step function, so I don't think it would work. But you might be able to use other forms of infinite sums.
The professors face at 14:30 scared me a bit
reptile shapeshifter conspiracy theorists incoming ( ͡° ͜ʖ ͡°)
that was fascinating! great vid.
try to program them in piet
was this filmed in Mexico?
Is this concept similar to finite automata?
if numbers get too big exponentially, do you not get a buffer overflow or out-of-range error?
Not if the tape is infinite.
Oh I see what you mean, on a real computer. No, you can write a program which extends the size of numbers that can be dealt with. You are not limited by the limits of the MPU or operating system.
So, would higher busy beavers eventually overtake something like TREE(TREE(TREE(TREE(TREE(...(X))))))?
Yeah easily. SCG is vastly bigger than TREE, Loader is vastly, vastly bigger than SCG, Busy Beaver is a whole other ballpark to Loader.
how would one express the time complexity of this function?
BB(n) - remember that it's been proven that it is not computable, so there's no formula for it that you can plug into a computer. Besides, all algorithms are O(BB(n)).
Well this function grows faster than any computable function, but it also means it's proven that it's uncomputable, correct?
Yes, it's uncomputable
Go to red light district if you want to find the busiest beaver.