The halting problem in practice describes our inability to predict if a program has bugs, and that the most efficient way to test if a specific program would halt is to run it. This is subject to the nature of each program, so there is no general formula. If we were to create a program that would identify all possible bugs, in practice, it would be a program of all possible programs. So instead of waiting for that, you could just contribute to the pool with your specific program by just running it. At which point you would have your answer without the solution to the halting problem.
Another aspect that always stunned me is that math defines its own limit of computability from one side, and correctness/soundness/completeness from another. At least from my understanding (probably incomplete and wrong), I've always been baffled by Gödel's incompleteness theorems and how they intersect with the halting problem (at least in my understanding).
They are fundamentally related, and also related to Cantor diagonalization. (Which is why it's possible to have some non-Turing-complete languages whose halting can be proven by a program written in a Turing-complete language. Likewise, it's possible to represent the result of a Cantor diagonalization as a real number, just not as a rational number.)
It's not really surprising that the physical system has to be infinite, since the halting problem strictly pertains to a computer model with infinite memory (e.g. the Turing machine). It's not difficult to determine whether a computer program that only has access to a finite state space will halt, because there is a finite number of steps after which the state has to repeat, after which the machine either is in the halted state or it loops. Realistically, however, the number of all states is typically so insanely enormous that the halting problem might as well apply to our finite computers.
If the thing you're trying to compute can be shown to take more computation than is theoretically possible in our finite universe, then the constant is effectively uncomputable. But I like the idea behind the paper. I wonder how it could work with the Busy Beaver problem instead.
can you imagine if we actually manage to get mathematical contradictions in the physical world and find a way to use them for some kind of technology? would be the most bizarre thing ever
@@darrennew8211 Unsolvable, because the observational space is tiny (number of quantum states inside one or several human brains) compared to the non-halting nature of spacetime (past, present, and future). So we can't ever determine whether our failure is because the problem is merely very hard, or (more likely) intractable, because of the point of view problem. Computationally, most relationships between any two physical objects differ depending on their relative locations and relative velocities, and (third body) the relative position and velocity of the observer. Easy example: a photon's interval is zero, which means, that from the viewpoint of that photon, that photon doesn't exist, and the origin and destination of the (non-existent) photon are the same. Therefor, from that POV, the energy of the photon is zero, and where did the energy go?
@@Galahad54 Oh, I think one day it might be possible to (for example) prove that both GR and QM are correct as they stand. Then you're screwed, because they're incompatible. Scientists assume that we'll never come across a system that you can prove is both correct and incompatible with another system you can also prove is correct. But there's really nothing fundamental to the universe itself that would prevent that event. It might make a fun sci-fi concept, tho.
A finite system is by definition always computable. If you want something uncomputable in a finite space, you would have to exploit infinitely small structures, which might even be physically impossible, but certainly infeasible. (Technically this means configuration space, not volume, but the problem remains)
We know light exists in a timeless state-it doesn’t experience time like we do. This got me wondering: if light is timeless, how can the expansion of the universe stretch its wavelength and cause redshift? Doesn’t that go against the idea that light isn’t bound by time?
What if redshift isn’t caused by the universe expanding but by light losing energy as it travels through space? Maybe light "pays a cost" to connect two points in space, and over long distances, it loses energy into a kind of timeless dimension. This energy loss could look like redshift to us.
If that’s true, would we still need the idea of dark energy to explain the universe’s expansion? Could this also help explain quantum entanglement-where points in space seem intrinsically connected?
I’d love to hear your thoughts on whether this makes any sense or if there’s a clear reason it doesn’t fit with observations like the CMB or galaxy formation.
@@chetanpatil2074 The universe as a whole isn't a closed system, so energy does not need to be conserved; the conservation law only applies to a closed system. It's better to consider light as having a frame of reference rather than an "experience": from its frame of reference there is nothing, both the "beginning" and "end" of the universe are collapsed into one, but from our FoR light exists and changes. Similarly, due to relativity we have spatial contraction and time dilation. If you watch a person take off in a rocket, your perception of how long it takes for them to return and how large the rocket is will be different from theirs. This isn't a contradiction; reality is simply the self-consistent intersection of all of our FoRs. So you can think of light as existing differently from your FoR than from its own, and yet both are representations of something more fundamental, independent of any FoR. Think of it like turning a shape to see it from different angles: when you move or accelerate you change the angle, and when you accelerate to the speed of light, you see the shape from an angle that no other can; but likewise, from this angle you can see no others, and that includes any angle that can see space or perceive time or grasp energy. What's interesting is that other angles can still perceive you, which is why we can still perceive light. So from its FoR it has no properties like wavelength; those are only emergent phenomena that we can perceive from our FoR.
Really interesting! I also want to note that it is mathematically impossible to prove the unprovability of any given well-defined problem. For example, take the problem: does program X halt? If program X does halt, then that would be provable, so any proof of the unprovability of this would imply non-halting. You can still show unprovability within a model, but it is not possible to construct a mathematical statement that is hereditarily based on decidable problems and is "unprovable". Questions like the continuum hypothesis aren't hereditarily based on decidable problems, and only make sense in ZFC. You can still ask "does ZFC imply ...", since this question can in fact be written as the halting of a program.
The paper is a bit weird. This isn't any more "un-computable" than electrostatics would be if the electric charge was the Chaitin constant. They effectively say "Let there be a system tuned so that this constant is meaningful but arbitrary" and say "Hah, we got you. You can't predict what's going to happen", but if the system was tuned to that constant, you could simply measure that constant and make the requisite predictions. Furthermore, since the evolution of the wave function in quantum physics is deterministic, you would be able to predict the constant BEFORE measuring the system, allowing you to know the future behavior for the system for all time.
@@ParadoxProblems That isn't what is meant by "uncomputable". The term is not about "physical computation", but about the limits of mathematics. Chaitin's constant isn't a terminating number, and hence, like pi, is not physical. If you had a Chaitin constant value in the universe, you would have a value with infinite precision, one that has very odd properties (guessing it would be impossible). The term "uncomputable" here means that there is no algorithm that can generate the number to arbitrary precision; it is, in a certain sense, fundamentally empirical and thus outside the domain of mathematics.
@@adammyers3453 The thing that is weird is that they use the constant in their definition of the system. If it's not computable in the mathematical sense, then there is no physical mechanism (given the current mathematical laws of physics) that would result in a system being defined with that number. If such a physical mechanism existed, then we would be able to compute the number mathematically through the deterministic evolution of the quantum wave function. Physically computable implies mathematically computable when the laws of physics are mathematically deterministic and solvable.
@@SabineHossenfelder Could you please address my question about one of your previous videos. It reads "A New Physics Breakthrough Could Change Everything," but based on its content, it should read "A New Physics Breakthrough Will Likely Change Nothing." As you point out, most of the possibilities for new physics are unlikely to lead to applications, so the "could change everything" phrasing is hyperbolic and misleading. If, as your video's content largely implies, a new physics breakthrough is unlikely to lead to practical applications let alone "change everything", then why are you so concerned about the stagnation in the foundations of physics?
@@tommiest3769 Yeah, it happens quite often that a video of Sabine's has a title basically stating the opposite of the video's actual conclusion, but is more catchy. The pull of clickbait...
@@speedstone4 Agreed. I feel like she is beating a dead horse with the constant barrage of negativity and criticism. Also, according to her video, even if we found 'new physics' there are likely no practical applications, why be so concerned about the alleged stagnation in the foundation of physics. A dead end is a dead end...how many times do you need to flog people with that, right? Criticism is a precursor for solutions, but it can only get us so far; therefore, we should not see it as an endpoint or something that "stands on its own." It is easier to destroy than to create. Criticism represents destruction, whereas new idea generation represents creation. If you destroy the old bridge because you think it is faulty, perhaps that is a necessary first step, but people will still need a way to cross the river...
The first step of building the lattice is to construct an algorithm which computes Chaitin's constant. One of two things is happening here. Possibility 1: quantum computers can solve the halting problem, in which case the currently defined class of computable numbers is a statement about computation under limitations that don't actually apply in our universe. I came into this with the opposite understanding. Possibility 2: the authors of the paper pulled a (dishonest, dumb, or "thought-provoking") trick by supposing they have an algorithm that computes Chaitin's constant, and this result is meaningless. If you already have the uncomputable number, you can do whatever you want with it. I could write a paper where I bake pies whose flavor is determined by the digits of Chaitin's constant; these researchers decided to write it into a quantum system. The quantum system has no significance.
Sabine, the plot-twist of the plot-twist is twice twisty: if the horizon of physics is infinity, the undecidability problem makes reductionism necessarily false... if the horizon is not infinity, reductionism is again false given that whatever finity of numbers get selected, they would be arbitrarily fine-tuned.
@@infinitytoinfinitysquaredb7836 Your problem is that you're defining cards outside of their practical & real manifestations in reality. Let me start by stating that whether cards just sit around is irrelevant, and any problem that seeks to analyze cards just sitting around collecting dust is irrelevant. Cards manifest in games where the cards are distributed in groups among players. Each player then knows what cards he has, and he can start to make probability calculations based on the cards he has & sees as the game progresses. Those probability calculations can perceivably become more and more accurate as cards are redistributed among players or into a pile of non-use. Essentially, your framework has meaningless constraints within the context of "cards."
That's... not uncomputable at all, in a general "within our lifetime" sense or mathematically. Grossly overestimating the data requirement: 2^4 is greater than 10, so 2^(4*68) > 10^68, and 2^272 is barely over two 128-bit values, which we have produced already. A reduction in bits available can be compensated with speed or memory scaling on lower-sized units; 64-bit math can be done on 32-bit systems, just more expensively.
I'll just send a reminder that the Turing halting problem is the practical realization of Gödel's incompleteness theorem. Don't mix computers in where they don't need to be.
The fundamental issue with these sorts of arguments is that 1) abstract Turing Machines (TMs) are NOT physical- they can use both infinite memory & time and 2) they assume that physical reality doesn’t support computers with qualitative abilities TMs lack, such as executing unlimited many instructions in finite time. The claim that the physical reality is strictly limited to what is computable (by a TM) is the “Church-Turing Thesis”, or the “Extended Deutsch-Church -Turing Thesis” if you generalize to quantum TMs. Already you see a great hole in the premise- the power of computing depends on your physics (quantum vs. ‘classical’), not the other way round. All known physics so far is compatible with the universe being a finite quantum state machine, a type of abstract computer that is strictly LESS powerful than a quantum TM but also has a DECIDABLE halting problem. So these sorts of results essentially add up to “if we allow for infinite unphysical behavior, we get uncomputable physics”.
I told my wife the same thing about dust bunnies, socks lying on the floor, and dirty dishes in the sink: Unsolvable, and also mathematically proven to be impossible to solve.
I don't agree with simulation theory, but you can write non-computable things in code pretty easily. If you have an infinite loop that adds up an infinite series, it's computable for that specific moment, not for the end state, which is kind of what this is.
Idk. A simulation obeys the laws programmed into it, and what can be programmed has limits imposed by the laws of the universe in which the simulation runs. But those are indistinguishable by the simulation itself. Well, or so goes my thinking hehe.
Uncomputable numbers may be incalculable, but that doesn't mean that they are inherently unpredictable. It is possible to do science on macroscopic scales. For example, if you were to do the experiment and create the quantum computer that the researchers here outline, I would predict that it would exhibit some patterns of behavior if they were to then play with it. These patterns might not be predictable from first principles of physics, but if you studied them long enough, then you could establish some rules - and then test those rules against further experiments.
How can you predict this if there's an infinite number of variations that you cannot account for? Surely they could change the constant in an arbitrary way? So you can't even meaningfully predict the constant.
This is a good point. QED also has infinities, but those can be worked around by stepping back and thinking about how to approach the problem in light of the actual goal: calculating the behavior of the physical universe.
@@bristleconepine4120 The issue is any attempted “law” would be impossible to formulate (you would have a contradiction if you somehow managed to do so).
@@adammyers3453 I'm not entirely sure what you are referring to. I can say, however, that despite the fact that, from what we can tell, biology is indeed an application of chemistry, which is in turn an application of physics, physicists are still unable to predict the existence of life from first principles. Life exists, we know it exists because we observe that it exists, and it can be explained in terms of physical principles, but not (yet) predicted by them. Yet, despite this, biology has been a recognized scientific discipline for centuries, with its own guiding theoretical underpinnings that were erected not by physicists but by biologists. A fundamentally unsolvable physical problem that gives rise to an inexplicable physical phenomenon that nonetheless exhibits predictable behavior could still be studied, if only incompletely understood; it would be no different from life.
An uncomputable physical phenomenon would be wild, because in CS "computable" is a rather robust concept. It's roughly meant to cover the whole concept of answerable questions. Philosophically, it's not even clear what it would mean to have such a phenomenon.
There are also algorithms that you can know don't halt. It just requires a proof rather than just running them. Such proofs can also be found systematically, so you can bound the constant from above. It just won't work for all cases, so you don't get it exact.
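To make the approximation idea concrete, here's a minimal sketch of the complementary direction: the standard dovetailing scheme that bounds Omega from below by running programs. The machine here is a stand-in toy (an assumption of mine, not a real universal prefix-free machine), chosen only so the sum behaves sensibly:

```python
from fractions import Fraction
from itertools import product

def halts_within(prog: str, t: int) -> bool:
    # Stand-in machine (an assumption, NOT a real universal machine):
    # valid programs look like '1'*k + '0', and "halt" iff k is even.
    if not (prog.endswith('0') and set(prog[:-1]) <= {'1'}):
        return False
    return (len(prog) - 1) % 2 == 0 and len(prog) <= t

def omega_lower_bound(max_len: int, max_steps: int) -> Fraction:
    """Dovetail over all programs up to max_len bits; each program seen
    to halt adds 2^-length to a monotonically growing lower bound."""
    omega = Fraction(0)
    for n in range(1, max_len + 1):
        for bits in product('01', repeat=n):
            if halts_within(''.join(bits), max_steps):
                omega += Fraction(1, 2 ** n)
    return omega

# With this toy machine the bound climbs toward 2/3; for a real universal
# machine the same scheme converges to Omega, but uncomputably slowly.
print(omega_lower_bound(8, 8))  # 85/128 here
```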
The Turing statement isn't that you can't prove halting or not halting for a GIVEN, specific algorithm. The statement is that you cannot build a program/algorithm, no matter how complex, that will be able to determine for ANY yet unknown program if that one will halt.
In practice, we don't usually try to predict high level phenomena from the lowest level. We usually try to make predictions about a system based on the behaviour of system components one level down (e.g. sociology can be related back to psychology). I think this is good enough. 'Strange attractors' make the universe more predictable/stable than it would otherwise be.
I know you have someone proof read your script. I had a German girlfriend who had a slightly harder accent than you. She is very smart and intelligent. She often asked me for proper grammar and pronunciation of words and sentences. I respect your channel and how you are able to make it entertaining and enjoyable for others.
The halting problem is presented in the wrong way (by all videos I've ever seen about it, and here too). The sentence "the halting problem cannot be solved" is wrong. In theory, the halting problem can be solved for every machine with a finite number of internal states. In practice, this is bound by the amount of available memory for the observer program. (At the end of the video, a statement about "finite physical problems being always solvable" was made. If this was meant to also apply to the halting problem -> impressive.)

Trivial example (an algorithm that can solve this problem for "all possible programs" on a finite machine):
- The state of your machine has a size of 32 bits. (The state size is defined as the total number of bits inside your machine plus the total number of bits of your input.)
- "Any" program running on your machine must deterministically decide, based on these 32 bits of the current state, what the next state will be.
- An observer program (with a big chunk of memory, a bool[2^32]) on another machine writes a "true" for every state that was occupied.
- If the program terminates: after a finite number of steps (at most 2^32 - 1), the program halts.
- If the program does not halt: after a finite number of steps (at most 2^32), the program enters a state that it already had in the past (the observer sees this because its bool[state] == true) -> the program does not halt.
- Edit: This algorithm works for any state size as long as it is finite.

Also, even people specifically explaining the halting problem get this wrong. They usually use an ill-defined version of the problem to prove mathematically that it cannot be solved (inserting the observer into itself gives us infinite recursion because the observers are somehow always treated as the same instance; if you insert an instance of the observer into another instance of the observer, all will be fine). Edit: I know that the original definition is done on a Turing machine, which has infinite memory, and needs an infinite amount of steps to add two infinitely large numbers. Therefore the program for adding numbers never halts. Therefore adding numbers is impossible (see the problem with this argument).
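A minimal sketch of the observer this comment describes, assuming the machine under test is given as a deterministic `step` function over a finite state space:

```python
def decides_halting(step, initial_state):
    """Observer for a deterministic finite-state machine.
    step(state) returns the next state, or None when the machine halts.
    Record every state seen; by pigeonhole, a repeat proves a loop."""
    seen = set()
    state = initial_state
    while state is not None:
        if state in seen:
            return False          # state repeated -> endless loop
        seen.add(state)
        state = step(state)
    return True                   # reached the halting state

# Toy machine over states 0..999: halts only if it ever reaches 0.
# The observer answers definitively either way (here: False, it loops).
print(decides_halting(lambda s: None if s == 0 else (3 * s + 1) % 1000, 7))
```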
The theorem is about a Turing machine: a machine that can calculate the sum and product of ANY two numbers. A machine that can run out of storage to hold a number is not a Turing machine, so the theorem need not apply.
The halting problem is about Turing machines, which have infinite memory. You can define halting problems on finite state machines, but it's a different thing then.
@@darkwinter6028 Correct, though the properties of Turing machines are integral to how current computers operate, so hardly unimportant for being unphysical. Furthermore, I've always suspected that the Curry-Howard correspondence implies that the undecidability of the halting problem has implications for logic (on which all science is based), though whether that is a reformulation of Gödel is another question.
Similarly, you could argue that the halting problem only arises in systems with an infinite amount of states/memory. In any system with limited memory, a non-halting program has to loop (this happens when a state/memory configuration repeats), and so the halting problem is decidable under those conditions.
The halting problem states that for any program that a priori calculates whether another program halts, there must be programs where it decides incorrectly. It's not about infinite or finite systems, it's about whether you can perfectly predict whether every program will halt without running it.
@@Zeuskabob1 Yes, but this can be done if you put boundary conditions on the decision. So you cannot make a program that will decide if another arbitrary program ever halts, but you can devise a program that will decide if another program ever halts given, e.g., a certain maximum amount of memory to execute. This boundary condition (in this example limited memory, i.e. a limited ability to store different states, even if it is universe-big) makes the otherwise undecidable problem a decidable one.
@@Zeuskabob1 If I recall correctly the proof says nothing about whether the decider runs the program or not, it's just a black box. For finite systems the practical question for us humans is the P vs NP problem. That mystery doesn't get nearly enough attention in the media.
@@Zeuskabob1 If you know the finite number of possible states a program can be in, then you just run it long enough to go through all those states plus a little bit. If the computer only has 10 bits of memory, there's only 1024 different states it can be in, and if you run it for 1025 steps, it is either repeating a previous state (and thus looping) or it has halted.
@@Zeuskabob1 You are partially correct here - you cannot make a solver for the halting problem that takes up less memory than whatever your memory limit is. You can make one (quite trivially, by simulating the program and detecting loops/termination), but it needs more memory than the limit. The trick here is that you cannot run the solver within the memory limit, which prevents you from producing the paradox in the first place.
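As a sketch of how little extra memory the simulate-and-detect approach can get away with: a tortoise-and-hare variant (assuming, as above, the machine is exposed as a deterministic `step` function) keeps only two copies of the machine state instead of a set of all states. That's still roughly double the memory limit, which is exactly the point made above.

```python
def halts(step, s0):
    """Loop detection with only two copies of the machine state
    (Floyd's tortoise and hare), instead of a set of all states.
    step(s) returns the next state, or None when the machine halts."""
    slow = fast = s0
    while True:
        if fast is None:
            return True               # the machine halted
        fast = step(fast)             # hare: first of two steps
        if fast is None:
            return True
        fast = step(fast)             # hare: second step
        slow = step(slow)             # tortoise: one step
        if slow == fast:
            return False              # they met inside a cycle -> no halt

# Same toy machine as in the sketch above: the loop is detected
# without storing any history of visited states.
print(halts(lambda s: None if s == 0 else (3 * s + 1) % 1000, 7))
```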
Firstly: my plan from now on is to study advanced programming, so I am far from a computer expert. However, probability theory has some philosophical commonalities with theoretical physics: everything depends on the modelling of the phenomena. Different mathematical frameworks may be more or less predictive when compared to each other. And what about all these problems formulated in the formalism of Monte Carlo methods? Remember that by changing the rules (changing your tools!) we can correctly formulate the problem and solve it (in the new framework!). Let me give a very successful mathematical example of such a change of framework: the problem of the pointwise convergence of Fourier series, especially in several variables (or of other series defined by eigenfunctions of self-adjoint differential equations, the Sturm-Liouville problem), is fascinating but nearly terra incognita! In mathematical physics and functional analysis, we have completely answered the "undecidable" question of the pointwise convergence of eigenfunction expansions (even, and especially, in several variables) by changing the notion of convergence (the topology!) to convergence in mean square (the Hilbert space of square Lebesgue-integrable functions). But engineers (and computing!) still use pointwise convergence for defining functions by Fourier series (fast Fourier transforms), sometimes by just truncating the expansion to a finite number of terms. And... it works well much of the time! A big philosophical point in practical exact sciences... in my humble opinion.
The intro to computability theory I got at uni made me feel like the whole theory is pretty bad. It makes a bunch of very interesting-sounding statements, like that there is no algorithm to determine whether or not a program halts, or that there is not even an algorithm to determine any non-trivial property of a program. While these are true in theory, they hold little practical value. To me, it feels like the field went in the wrong direction at the beginning. It tries to come up with those fanciful theories that sound good, but the field ignores that in practice, the theories have no predictive value. Instead the field should have focused on trying to tell us something useful about the cases where it is decidable whether or not a program halts, which is almost always (if not always) the case with practical programs.
What you are observing is the mathematical side of CS, where these kinds of things are common. The issue here is that math isn’t built around practicality, nor should it be. The purpose of mathematical and theoretical computer science research isn’t to develop practical things, but to understand a part of reality. Whether something practical falls out is happy coincidence. If we restricted ourselves to only practical concerns, we would not have the modern internet.
@@ThePavelkomin I think it is safe to say that for most of human history, deep results about prime numbers like Fermat's Little Theorem or Euler's totient function were utterly impractical. Yet these results are essential (technically Carmichael's version is more relevant in practice) in the RSA algorithm that allows for public-key encryption. The entire basis of modern encryption boils down to incredibly impractical facts about prime numbers (well, impractical in any other context as of now). Without mathematicians finding it "neat" to study, we simply wouldn't have even known that the RSA algorithm was possible, let alone have had the centuries spent developing the field of number theory.
@@adammyers3453 Thanks for the elaboration. While I could certainly argue about some things, I agree with you in principle that studying pure maths in itself is not worthless. However, computability theory is hardly pure maths. It tries to make statements about computer programs, which while true in the theory, hold little value for anyone actually trying to develop anything further, e.g. automatic verification. From my superficial knowledge computability theory seems to be self-serving, though I don't really know the depths of the theory. While there still might be some merit to the theory, I feel it is highly overstated
What computability theory seems like to me is as if biologists cared about the behaviour of dogs, so they created a dog model. While the dog model turned out not to be a really good description of dogs, researchers still kept studying the dog model and claiming they are studying dogs. There wouldn't be anything wrong with studying it, but it would be wrong to say they are studying dogs.
The problem for this and similar studies is that for infinite systems it is obvious that you can construct uncomputable properties. If you have an infinite system, you can make perfect Turing machines which would run all possible algorithms. The only thing they manage here is to propose a system which feels more realistic than an infinite number of custom-programmed computers. But the amount of realism does not matter for this question; it is about principles.
The answer to some stochastic problems can't be computed beforehand, but you can always run an experiment to find the actual answer to within some rough error bounds. Add ~1.575% to the list of everything else we can't compute directly in physics, like the fine structure constant or whatever.
Right, the halting problem always comes up due to infinities. A computer with finitely many possible states is not a Turing machine but a finite state machine. And finite state machines have no halting problem or any other uncomputable problems.
A number being uncomputable is practically irrelevant, it just means you can't have a single machine spitting out all the infinite digits (given enough time). You may still find ways to get the precision you need for practical purposes.
Even if there were some finite construct which has some uncomputable property, the conclusion "reality is not computable" wouldn't truly follow. The correct implication would be "Turing machines do not capture the right notion of computability for our universe" (i.e. a refutation of the Church-Turing thesis). We could just redefine computable as "can be calculated by a Turing machine with access (an oracle) to the aforementioned Turing-uncomputable property".
Sometimes I listen to this YouTube channel and I think to myself: are particles ageless? Then, is there really time? Or is there really no infinity, because the amount of building blocks is finite and everything just keeps changing, not necessarily existing and not existing?
Calculus is the study of change. I'll never forget that profound statement. The universe is in a constant, cyclic state of change. We can see this all around us. Even when we look at a wall and think we see empty space, there are many energy state transitions happening ("air" and its kinetic & potential energy, heat radiating from your body or electronics, etc.). Even life is just one massive, cyclic energy process. When we age, it is not time but the inefficiency of utilizing energy that causes aging. And when we die, it is not time that caused it; it is simply that our bodies cease to utilize energy and we return the energy of our "matter" back to the overall universe.
If a system which begins in state I will end within a finite time T in one state S, where S is an element of a finite set of possible stable end states Q, and the relation between I and Q is a fractal (which is very common), then between two initial states leading to a certain S there can exist an infinite number of initial states leading to a different S. Which means that the energy needed to create a different outcome can be infinitely close to zero. So the physics of the smaller system describes but doesn't determine the behaviour of the larger system. Only 5 neurons are needed to create such a system. (Most people have more...)
The funny thing is that almost all real numbers are uncomputable. In fact, it's because of those uncomputable numbers that the real numbers are uncountably infinite rather than countably infinite. Most of the irrational numbers that we know and love (such as pi, etc.) are computable and the set of computable real numbers is a countably infinite set, i.e. the same size as the natural numbers (because each computable number corresponds to an algorithm of finite size which can be uniquely encoded as a natural number).
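The countability argument in that parenthetical can be made very literal. A toy sketch (the particular encoding is my choice; any injective mapping works):

```python
def program_to_natural(source: str) -> int:
    """Any program is a finite string, so it maps injectively to a
    natural number; hence the computable reals are countable."""
    return int.from_bytes(b'\x01' + source.encode('utf-8'), 'big')

def natural_to_program(n: int) -> str:
    data = n.to_bytes((n.bit_length() + 7) // 8, 'big')
    return data[1:].decode('utf-8')  # strip the \x01 sentinel

src = "print(22/7)  # a (bad) approximation of pi"
assert natural_to_program(program_to_natural(src)) == src
```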
There could be more than one gap between microscopic and macroscopic. The most microscopic could be finite. The second most microscopic could be indistinguishable from infinite and when observed there would be uncertainty. Still it would be computable only requiring a length of time indistinguishable from infinite.
'Dimensionless numbers', as explained by Paul Dirac, if evaluated correctly, hold the potential to answer questions, according to Dirac, about several important concepts, including the age of the universe.
How is "Computable" defined here? Computable as in "predictable", or in terms of reducible or irreducible computation like Wolfram's description of computation?
I don't know Wolfram's definition, but the classical one in compsci is just what you'd think: That you can write a program that gives you a series of rational numbers that approaches the number in question.
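For instance, here's that definition in action for e (a minimal sketch; the partial sums of 1/k! come with a known error bound below 2/(n+1)!, which is what makes the sequence count as computing e):

```python
from fractions import Fraction

def e_approx(n: int) -> Fraction:
    """Emit the n-th rational approximation of e: sum of 1/k! for k <= n.
    A program like this, plus a known error bound, is exactly what
    'e is computable' means in the compsci sense."""
    total, factorial = Fraction(0), 1
    for k in range(n + 1):
        total += Fraction(1, factorial)
        factorial *= k + 1
    return total

print(e_approx(12), float(e_approx(12)))  # ~2.718281828...
```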
Being uncomputable is actually less severe than most people think. Indeed, you cannot return the sequence of digits of an uncomputable number. But, for example, you can still make an algorithm which works randomly and returns results which converge probabilistically to this number. And all of our physical measurements are probabilistic in nature anyway.
@@AlexanderShamov This is true, but real numbers are not completely, well "real". The fact that all of them exist as a mathematical construction does not mean that all of them have anything to do with physical reality. There are fancy theorems that because language symbols are also countable, most of the reals are not even "expressible". It depends on the precise definition of the "language" though.
Easy problem: a tornado spins over a junkyard for 100 billion years - will a 747 jet pop out? Harder: in 14 billion years, will thinking & self-replicating life forms come from basic elements everywhere, or just once? What is the difference between the first & second problem?
There are a bunch of completely normal numbers that get exponentially harder to predict with the number of finite degrees of freedom in the system, so they become uncomputable in practice. And these are real physical quantities that can sometimes be measured in labs, sometimes not. And no one ever talks about them in any of these YouTube channels..
When it comes to physical systems with bounds on their size and time they need to run, in theory we can always build exact copies of those systems and make predictions about the original based on how the copies behave. Does this mean all bounded physical systems are computable? I guess so, but it doesn't sound like a very useful process. There need to be some simplifying assumptions that we can make about how the system operates to be able to make predictions without having to simulate an exact copy of the system. It sounds like the paper shows a system that doesn't allow for any such assumptions, while being able to scale arbitrarily.
If you define optimal compression as determining the shortest program that generates your data, this is uncomputable. But in practice, an algorithm as simple as Huffman coding already takes a huge bite out of compressing data types such as human language. LZW and Burrows Wheeler take another huge bite out of the remaining slack, and these are also extremely trivial, in the space of all possible algorithms. Deep learning then takes another huge bite out of the remaining slack. This, too, is a shockingly trivial algorithm, though it balloons monumentally in time and space (for a Golightly value of "monumental"). The slack left over at this point isn't much worth fighting over, unless you are a major landmark in the weaponization of high-speed trading. For data compression, the asymptotic uncomputability is small potatoes in practice. However, not every asymptote is created equal. Many bleed more freely, offering not even a good first nibble. Unfortunately, the reigning paradigm in publish-or-perish academia is to punt characterization of the asymptotic approach as an exercise for the reader. Bring your own croissant and coffee in a paper bag, as you gawk through the window at Tiffany's genuine peer-reviewed comeuppances of transfinite analytic continuation. I wouldn't waste my own Holly 9000 on this nonsense. [*] HAL identifies as male, so he's the father, and he supplied the surname, which worked out fine, because Holly didn't even have one.
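For the curious, that "first huge bite" really is this small. A minimal Huffman coder sketch (symbol-level only, and it ignores the header you'd need to ship the code table):

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict:
    """Greedy Huffman: repeatedly merge the two least frequent subtrees.
    Frequent symbols end up with short bit codes, rare ones with long."""
    heap = [[freq, i, {sym: ''}]
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    counter = len(heap)               # tie-breaker so dicts never compare
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + code for s, code in c1.items()}
        merged.update({s: '1' + code for s, code in c2.items()})
        heapq.heappush(heap, [f1 + f2, counter, merged])
        counter += 1
    return heap[0][2]

text = "the quick brown fox jumps over the lazy dog the end"
codes = huffman_codes(text)
compressed_bits = sum(len(codes[ch]) for ch in text)
print(compressed_bits, "bits vs", 8 * len(text), "bits raw")
```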
Those probabilities that cannot be computed are not probable to begin with. There are rules that describe how the universe works and some of those rules are absolute. One absolute rule is " the large determines the small." The universe is linear rectified large to small. The small never determines the large. Physicists can use this rule to decide that matter does not cause gravity. No probability calculations needed but many have been attempted.
I propose that photons are Dark Energy. A photon cannot be seen in a vacuum unless it strikes a particle. Stars and other photon-emitting objects pump out billions of tons of energy, which represents mass according to General Relativity. As photons push against the nothingness beyond space, the physics of our Universe is introduced. The photons pushing out create a vacuum. I also propose photons (energy/mass) interact with gravity, which gives them mass. Photons are Dark Matter. Photons redistribute mass.
The halting problem only occurs in a classical Turing computer. The universe does not have any Turing computers, because an infinite tape is non-physical. This is like complaining about the predicted behavior of unicorns.
The halting problem itself is based on infinity, although it is hidden. Every algorithm that takes up a finite amount of memory can be decided, because either the algorithm stops or the memory state repeats. So the only undecidability is for algorithms whose memory usage grows infinitely large. And I don't know of a single such algorithm that would be useful for something.
You can't tell how much memory some programs need in any automated manner. The point is that if you want to forbid people from writing programs that don't halt, then you can't have a Turing-complete language, and some programs will be impossible to represent - or at least would require infinitely large amounts of code to represent. So you've only made the problem worse by trying to use an FSM: now your program is infinitely large, rather than the program falling into an infinite recursion in some cases.
Oh, by the way, this is close to one of my ideas: the system and its parts do not exist simultaneously and fully, and this connection is described by the Boltzmann constant. Let's assume that we have a system with emergent properties (for example, the brain). We want to prove that they are an illusion (for example, that there is no free will). We begin to study microscopic degrees of freedom. Each of them requires at least a little energy, about the Landauer limit per bit of received information. But at the macro level, all this energy from each degree of freedom is huge and will destroy the system - the brain will evaporate. We observed each degree of freedom and found not the absence of free will but an ideal gas at a temperature of hundreds of thousands of degrees.

The second experiment, without quantum mechanics, with the same idea: consider the thermodynamic uncertainty relation (not the quantum mechanical one) of Bohr between the temperature and the energy of a degree of freedom. Now consider the simplest emergent property, the force along a concentration gradient, ~ deltaC*T. Again we try to prove that this force does not exist and there are only collisions of molecules. Simple calculations show that we must know the energy of a degree of freedom with an accuracy of less than kT; otherwise, T at the macro level will be so uncertain that the uncertainty of the force of motion along the concentration gradient will be higher than the force itself.
Summary, you can't get there from here. It is interesting that papers like this are actually written by 'intelligent' people. There are many things in life that fit this problem: Like a dead-end job that doesn't pay the bills, but you have limited choice without outside assistance that will never mature, Dedication and ability are worthless without opportunity.
Any quantum system can be simulated (albeit slowly); there's nothing uncomputable about Schroedinger's equation. That alone tells you there's something fishy about this result. And then we find out that it involves infinities -- an infinite lattice and infinite time of computation. But by talking about quantities defined by taking a "physical" system that is infinite in extent and looking at its behavior over an infinite stretch of time, it's now trivially easy to define uncomputable system properties, because you can just define your system to be a universal Turing machine, and you can directly define a system property that is Chaitin's constant. So their result is simultaneously both misleading and trivial.
The fact that maths predicts any behavior of nature begs the question: WHY? Why does an electron a thousand lightyears away have spin, charge, momentum, coordinates? There is more to this and we are not seeing it.
If we define some process as a sequence of states at times t1, t2, t3, etc., then I think the process is "uncomputable" if this sequence is divergent. In this case, it's a purely math problem, because in reality you can always solve it by applying an "external operator" to the system that transforms this sequence into a convergent one.
Sounds like the solution to the improbability drive: "The principle of generating small amounts of finite improbability by simply hooking the logic circuits of a Bambleweeny 57 Sub-Meson Brain to an atomic vector plotter suspended in a strong Brownian Motion producer (say a nice hot cup of tea) were well understood. It is said, by the Guide, that such generators were often used to break the ice at parties by making all the molecules in the hostess's undergarments leap simultaneously one foot to the left, in accordance with the theory of indeterminacy. Many respectable physicists said that they weren't going to stand for this, partly because it was a debasement of science, but mostly because they didn't get invited to those sorts of parties. The physicists encountered repeated failures while trying to construct a machine which could generate the infinite improbability field needed to flip a spaceship across the mind-paralyzing distances between the farthest stars. They eventually announced that such a machine was virtually impossible. Then, one day, a student who had been left to sweep up after a particularly unsuccessful party found himself reasoning in this way: "If such a machine is a virtual impossibility, it must have finite improbability. So all I have to do, in order to make one, is to work out how exactly improbable it is, feed that figure into the finite improbability generator, give it a fresh cup of really hot tea... and turn it on!" He did this and managed to create the long sought after golden Infinite Improbability generator out of thin air. Unfortunately, shortly after he was awarded the Galactic Institute's Prize for Extreme Cleverness, he was lynched by a rampaging mob of respectable physicists on the grounds that he had become the one thing they couldn't stand most of all: "a smart arse"."
The halting problem is only undecidable for computers with infinitely large memory. Any computer with finite memory (i.e. any "finite state machine") that does not halt will, after a finite number of steps, return to a state that it has been in previously, at which point you can spot the endless loop. This pigeonhole argument on states is the same one that underlies the pumping lemma.
@photinodecay The only difference between an FSM and a Turing machine is that the latter has infinite memory (the "tape"). All real-world computers have finite memory and therefore are FSMs and thus not Turing complete.
Was that first graphic from Kurzgesagt?? Edit: I think the Halting Problem is usually taken far too generally. Graph theorists (and compiler optimizers) would have a lot to say about whether or not an algorithm can be expressed as having deterministic, certain outcomes, even though I've seen even programmers gladly state "The Halting Problem says you can't know the outcome!" in an absolutely decidable case.
There exists no 'general' algorithm that determines the halting for 'any arbitrary' program. That's not what you're ever interested in. Not in programming, because your language and environment only has a subset of programs, and not in physics, where you have specific algorithms for specific situations.
Glad to hear it's non-physical, requiring infinite time and size. I suspect that the universe itself must be computable. Computation is derived from physics.
As a programmer, I can say, if you can imagine something and define it, then it is computational. It just requires the right definition. But if you can't even hold the idea in your mind, then you can't program it.
No, there are non-computable and undecidable entities people have imagined. It's the basic premise of computability theory. Check it out. "Computational", the word you used, doesn't mean anything specific in this context, and isn't what this video is about.
You can think about an algorithm solving the halting problem in general, but you'll be out of luck. The halting problem is related to a problem math has with directly or indirectly self-referring sets. This is described by Russell's paradox and is very likely closely related to Kurt Gödel's incompleteness theorem. But... I wonder how this purely logical problem could be related to physics, and how someone could write a paper solving this logical problem with quantum computing. I as a programmer ask you as a programmer: how would you specify a program which reads any program and is always able to decide if it will eventually halt or not, without running it? If your program runs the input, how long will it wait for termination? What if your program gets itself as input? PS: Imagine you have to write a program which can tell you whether any given sentence is true or false. Perhaps your algorithm may trigger an exception if the given sentence is neither true nor false. That's my spec. Can you implement it? As a test case take this: "This sentence is false." Three cases: true, false, error - but none of them apply.^^
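For anyone who wants the "what if your program gets itself as input" step spelled out, here's a sketch of the classic diagonal argument. The oracle `halts` is hypothetical by construction - that's the whole point:

```python
def halts(func, arg) -> bool:
    """Hypothetical halting oracle. The argument below shows that no
    implementation of this signature can be correct on all inputs."""
    raise NotImplementedError

def paradox(prog):
    if halts(prog, prog):     # suppose the oracle says prog(prog) halts...
        while True:           # ...then do the opposite: loop forever
            pass
    return                    # and if it says "never halts", halt at once

# Does paradox(paradox) halt? If halts(paradox, paradox) returns True,
# paradox(paradox) loops forever, so the oracle was wrong; if it returns
# False, paradox(paradox) halts immediately - wrong again. Contradiction.
```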
It seems like all they really managed to prove here is that there are certain infinite systems that cannot be approximated in finite time, which is not new and doesn't seem at all interesting to me.
I always suspected these problems would emerge only in the infinite case, thank you for confirming. It would be interesting, however, if something finite could have this same uncomputability in the prediction of its future states, something like Wolfram's automata but physically implemented.
We need to try and figure out at what points quantum mechanics and General Relativity meet. This will tell us everything about the quantum mechanical properties and General Relativity properties at any chosen point.
Nice to hear a physicist becoming aware of the halting problem. To be coherent, any theory and theorem needs to be consistent with it. Chaitin crafted his "constant" to demonstrate the absurdity of real numbers. He is a non-believer, as we can read in his classic article "How Real Are Real Numbers?". The basic argument is simple, and it's what Wittgenstein already pointed out in his criticism of Cantor and Hilbert: what cannot be named with a distinct mathematical name (an object or algorithm) cannot function as an input for a computation, and is thus non-computable. The limits of notational compression are also a very open question, and very much look like another undecidable question.
Let's say for argument's sake that there are macroscopic systems whose behaviour is not derivable from the microscopic interactions, perhaps due to non-computability. That surely does not mean that the problem is unsolvable. Just build a model of that system the old-fashioned way, the same way that we've always come up with theories of physics: set up a series of experiments with the system and then build a theory from scratch using the results of those experiments.
Uncomputable, undecidable, incomplete, and perhaps unknowingly inconsistent in some strange way sometimes. This is the hand we are dealt. We are reaching the limits of science and rationalism as we know it. A new paradigm may require going back to zeroeth principles in some way that we incite a lot of resistance and derision. Science needs a healthy dose of mysticism, followed by a reorganization from the bottom up, while preserving what we already have in some sense or other.
There are some things in maths that have been mathematically proven to be unprovable. And that does not mean that those things are wrong. They might be. But if they are right, we will never know for sure.
I feel like there are so many wrong assumptions in this paper... First of all, the undecidability of the halting problem shows only that there is at least one example where it's uncomputable (it leads to a paradox: if it halts, then it doesn't). It does not say whether such examples are dense in the algorithm space. If their density is such that their Lebesgue measure is 0, then suddenly the problem changes drastically. Second: it's not because you don't know the exact location of the elements of a specific subset that you can't infer their distribution. Look at prime numbers. We can't write a formula that could generate all the prime numbers, but we do know that their density is 1/ln(x).
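The prime analogy is easy to check numerically. A quick sketch comparing the true count pi(x) against the x/ln(x) density estimate:

```python
from math import log

def prime_count(x: int) -> int:
    """pi(x) via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b'\x00\x00'
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, x + 1, i)))
    return sum(sieve)

# We can't list the primes by formula, but their density ~1/ln(x)
# is very predictable (the ratio slowly approaches 1 as x grows):
for x in (10**4, 10**5, 10**6):
    print(x, prime_count(x), round(x / log(x)))
```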
Yea, wouldn’t it mean if infinity is involved and the problem can’t be solved, that it goes on forever? And for the number that exists but can’t be calculate, cap that number at an imaginary level like they did with the speed of light, for everywhere.
Huh, there are uncountably many incomputable problems. Not sure tho if there is equivalence between all of them. I would suggest no. Too many to make that claim.
If there's a physical system with uncomputable state, doesn't that imply that the physical system cannot exist? Otherwise the system itself computes the result.
The point of uncomputable numbers being a break in the predictability is a moot one IMO. Our understanding of the world is nothing, but mathematical models that we feed with measured data. We have so much inherent inaccuracy with the data we feed the models, that uncomputability of some constants (i.e. using approximations of those constants instead of precise values, we don't talk about numbers that we don't know even their order of magnitude here) is at best a new factor in the equation, but hardly a wall that the reality of science has hit. It doesn't break the accuracy of predictions, because the predictions weren't accurate to begin with. It may be, that they are now accountably fuzzier around the edges by 10^-50 additional error on our 10^-4 measurement error.
I was going to solve this physics problem--but then things got really busy at work.
Hmmm, so you had a halting problem.
@@Moon_Metty !!!
Found the beaver
Happens to me all the time
@@Moon_Metty Well, if he did, you can not prove it anyways lol
In all the situations where something is uncomputable or NP-hard, or "only solvable on a quantum computer", there almost always exists an approximation or heuristic that is easy enough, fast enough, cheap enough, and good enough. In the case of Chaitin's constant, it's going to be very approximate. We've worked out the end state of all the Turing machines up to size 5. Which is very small, and solving 5 took decades of work. It's pretty safe to say that 6 will never be solved. Once you get to 6 there are Turing machines that resemble the Collatz conjecture. And those aren't even necessarily the hardest ones. By design it's one of the most difficult-to-approximate numbers known, so any real-world problem can be expected to be much easier.
However, if we're running into a roadblock with numbers as small as 6, maybe that's evidence for the opposite. If we're talking about quantum systems, it's going to be pretty far beyond 6 elements. For these kinds of problems, in practical terms, 10 might as well be infinity.
There are proofs that certain NP-complete problems cannot be approximated beyond a certain point if P ≠ NP. If you're satisfied with that point, or hope that typical problems are easy, or pray to RNG, then you can get away with a lot more.
Computing BB(745) requires proving consistency of ZFC.
As Sabine noted, all finite systems are decidable via brute force. In particular, this includes NP and BQP (both contained in PSPACE in fact).
However nature, itself, presumably doesn't make approximations.
What do you mean by "Turing machine size 5"?
@@JosephLMcCord it does so all the time - even your brain runs on them. If a certain voltage (about -55 mV is the typical threshold value) is applied to a neuron's input(s), it SHOULD fire, but there's NEVER a guarantee it will, no matter what the summed input value is.
The other thing to remember is that Turing machines have a TAPE. That is, they can store information, potentially an infinite amount. A physically finite system can't store an infinite amount of information, obviously.
Thanks!
I was very close to solving the Chaitin problem, but suddenly my program halted
I was very close to it as well, but then I realized my program was going to halt so I stopped.
Ah! So that's why the stocks I was trading all (almost) simultaneously halted, resulting in an underflow error.
Turing machines can compute anything that's computable *in a finite number of steps*. This sounds like it's just a manifestation of them creating something that would require infinite steps to work.
it's not about creating the thing. it's about proving that you can't stop people from being able to create the thing if you want a programming language that can represent a sufficiently wide range of algorithms
I think a lot of the terms get thrown around loosely, even if they're used by "exacting" people. Like, an empty while(true) loop runs infinitely long, yet you can certainly determine its logical behavior.
So, one can construct uncomputable numbers using an unbuildable lattice.
Good to know that taxpayers' money is being spent on worthwhile research.
The universe is also not a computer algorithm. It's an analogy on top of an analogy on top of yet another analogy. We need another Newton. Physics is laughable at this point. The main problem stems from irrational numbers, which cannot exist in nature. Technically pi or e is incomputable. These are just errors that arise when trying to apply binary operators, which arose from integers, to curved surfaces. There are no perfect spheres in nature, so pi should theoretically never be used. The universe is obviously discrete, and treating it as continuous is where all the problems in physics stem from. When you embed errors on top of errors, you are only getting further from the truth at hand.
Unfortunately, discrete mathematics is ten times more complicated than its continuous cousin. Every continuous function can be represented by a discrete function, but not vice versa. I'm a mathematician and I study discrete functions. The universe must operate in this paradigm.
@@ZaraThustra-w2n okay at first you sounded insane but now i want to know more, you got any suggestions?
REAL numbers are uncomputable - it is that simple.
@@ZaraThustra-w2n A friend of mine told me that the most profound thing that happened to him was when he learned about p-adic numbers. Like, this is really how we measure things by essentially comparing the rulers.
The halting problem in practice describes our inability to predict if a program has bugs, and the fact that the most efficient way to test if a specific program would halt is to run it. This depends on the nature of each program, so there is no general formula. If we were to create a program that would identify all possible bugs, in practice it would be a program of all possible programs. So instead of waiting for that, you could just contribute to the pool with your specific program by running it. At which point you would have your answer without a solution to the halting problem.
Another aspect that always stunned me is that math defines its own limit of computability from one side, and correctness/soundness/completeness from another.
At least from my understanding (probably incomplete and wrong), I've always been baffled by Gödel's incompleteness theorems and how they intersect with the halting problem.
They are fundamentally related, and also related to Cantor diagonalization (which is why it's possible to have some non-Turing-complete languages whose halting can be proven by a program written in a Turing-complete language; likewise, it's possible to represent the result of a Cantor diagonalization as a real number, just not a rational number).
It's not really surprising that the physical system has to be infinite, since the halting problem strictly pertains to a computer model with infinite memory (e.g. the Turing machine). It's not difficult to determine whether a computer program that only has access to a finite state space will halt, because there is a finite number of steps after which the state has to repeat, after which the machine either is in the halted state or it loops. Realistically, however, the number of all states is typically so insanely enormous that the halting problem might as well apply to our finite computers.
If the thing you're trying to compute can be shown to take more computation than is theoretically possible in our finite universe, then the constant is effectively uncomputable. But I like the idea behind the paper. I wonder how it could work with the Busy Beaver problem instead.
can you imagine if we actually manage to get mathematical contradictions in the physical world and find a way to use them for some kind of technology? would be the most bizarre thing ever
I often wonder if GR and QM will never be unified simply because the universe doesn't have one single description that works at all levels.
@@darrennew8211 skill issue
@@darrennew8211 Unsolvable, because the observational space is tiny (the number of quantum states inside one or several human brains) compared to the non-halting nature of spacetime (past, present, and future). So we can't ever determine whether our failure is because the problem is merely very hard, or (more likely) intractable, because of the point-of-view problem. Computationally, most relationships between any two physical objects differ depending on their relative locations and relative velocities, and (third body) the relative position and velocity of the observer. Easy example: a photon's interval is zero, which means that from the viewpoint of that photon, that photon doesn't exist, and the origin and destination of the (non-existent) photon are the same. Therefore, from that POV, the energy of the photon is zero, and where did the energy go?
@@Galahad54 Oh, I think one day it might be possible to (for example) prove that both GR and QM are correct as they stand. Then you're screwed, because they're incompatible. Scientists assume that we'll never come across a system that you can prove is both correct and incompatible with another system you can also prove is correct. But there's really nothing fundamental to the universe itself that would prevent that event. It might make a fun sci-fi concept, tho.
@@darrennew8211 The universe works as it works without any description. It is simply indescribable.
Nah the answer is still 42
100% Truth!
Gruelling remarks.
32-(64+10)
I was going to like, but you have 42 likes 42 minutes after posting and that's beautiful.
This has 42 likes, don't spoil it.
A finite system is by definition always computable. If you want something uncomputable in a finite space, you would have to exploit infinitely small structures, which might even be physically impossible, but certainly infeasible. (Technically this means configuration space, not volume, but the problem remains)
Hi Sabine, I love your videos and the way you challenge conventional ideas in physics, so I thought I’d throw this question your way!
We know light exists in a timeless state - it doesn't experience time like we do. This got me wondering: if light is timeless, how can the expansion of the universe stretch its wavelength and cause redshift? Doesn't that go against the idea that light isn't bound by time?
What if redshift isn’t caused by the universe expanding but by light losing energy as it travels through space? Maybe light "pays a cost" to connect two points in space, and over long distances, it loses energy into a kind of timeless dimension. This energy loss could look like redshift to us.
If that's true, would we still need the idea of dark energy to explain the universe's expansion? Could this also help explain quantum entanglement - where points in space seem intrinsically connected?
I’d love to hear your thoughts on whether this makes any sense or if there’s a clear reason it doesn’t fit with observations like the CMB or galaxy formation.
@@chetanpatil2074 The universe as a whole isn't a closed system, so energy does not need to be conserved; the conservation law only applies to a closed system. It's better to consider light as having a frame of reference rather than an "experience": from its frame of reference there is nothing, and both the "beginning" and "end" of the universe are collapsed into one, but from our FoR light exists and changes. Similarly, due to relativity we have spatial contraction and time dilation. If you watch a person take off in a rocket, your perception of how long it takes for them to return and how large the rocket is will be different from theirs. This isn't a contradiction; reality is simply the self-consistent intersection of all of our FoRs. So you can think of light as existing differently from your FoR than from its own, and yet both are representations of something more fundamental, independent of any FoR. Think of it like turning a shape to see it from different angles; when you move or accelerate you change the angle, and when you accelerate to the speed of light, you see the shape from an angle that no other can, but likewise, from this angle you can see no others; that includes any angle that can see space or perceive time or grasp energy. What's interesting is that other angles can still perceive you, which is why we can still perceive light. So from its FoR it has no properties like wavelength; those are only emergent phenomena that we can perceive from our FoR.
Really interesting!
I also want to note that it is mathematically impossible to prove the unprovability of any given well-defined problem.
For example, take the problem:
Does program X halt?
If program X does halt, then that would be provable, so any proof of the unprovability of this would imply non-halting.
You can still show unprovability within a model, but it is not possible to construct a mathematical statement that is hereditarily based on decidable problems and is "unprovable".
Questions like the continuum hypothesis aren't hereditarily based on decidable problems, and only make sense in ZFC.
You can still ask "does ZFC imply ...", since this question can in fact be written as the halting of a program.
Everything will halt when the energy needed to run the experiment is radiated outside the experiment boundaries.
The paper is a bit weird. This isn't any more "un-computable" than electrostatics would be if the electric charge was the Chaitin constant.
They effectively say "Let there be a system tuned so that this constant is meaningful but arbitrary" and say "Hah, we got you. You can't predict what's going to happen", but if the system was tuned to that constant, you could simply measure that constant and make the requisite predictions.
Furthermore, since the evolution of the wave function in quantum physics is deterministic, you would be able to predict the constant BEFORE measuring the system, allowing you to know the future behavior of the system for all time.
@@ParadoxProblems That isn't what is meant by "noncomputable". The term is not about "physical computation", but about the limits of mathematics. Chaitin's constant isn't a terminating number, and hence, like pi, is not physical. If you had a Chaitin constant value in the universe, you would have a value with infinite precision, one with very odd properties (guessing it would be impossible).
The term "uncomputable" here means that there is no algorithm that can generate the number to arbitrary precision; it is, in a certain sense, fundamentally empirical and thus outside the domain of mathematics.
@@adammyers3453 The thing that is weird is that they use the constant in their definition of the system. If it's not computable in the mathematical sense, then there is no physical mechanism (given the current mathematical laws of physics) that would result in a system being defined with that number.
If such a physical mechanism existed, then we would be able to compute the number mathematically through the deterministic evolution of the quantum wave function. Physically computable implies mathematically computable when the laws of physics are mathematically deterministic and solvable.
Yeah, you should check out the Matt Parker video on Numberphile about uncomputable numbers. Most numbers, in fact, are not computable.
Merry Christmas
Same to you!
@@SabineHossenfelder Could you please address my question about one of your previous videos. It reads "A New Physics Breakthrough Could Change Everything," but based on its content, it should read "A New Physics Breakthrough Will Likely Change Nothing." As you point out, most of the possibilities for new physics are unlikely to lead to applications, so the "could change everything" phrasing is hyperbolic and misleading. If, as your video's content largely implies, a new physics breakthrough is unlikely to lead to practical applications let alone "change everything", then why are you so concerned about the stagnation in the foundations of physics?
@@tommiest3769 Yeah, it happens quite often that a video of Sabine's has a title basically stating the opposite of the video's actual conclusion, but is more catchy. The pull of clickbait...
@@speedstone4 Agreed. I feel like she is beating a dead horse with the constant barrage of negativity and criticism. Also, according to her video, even if we found 'new physics' there are likely no practical applications, so why be so concerned about the alleged stagnation in the foundations of physics? A dead end is a dead end... how many times do you need to flog people with that, right?
Criticism is a precursor for solutions, but it can only get us so far; therefore, we should not see it as an endpoint or something that "stands on its own." It is easier to destroy than to create. Criticism represents destruction, whereas new idea generation represents creation. If you destroy the old bridge because you think it is faulty, perhaps that is a necessary first step, but people will still need a way to cross the river...
Happy Solstice
The first step of building the lattice is to construct an algorithm which computes Chaitin's constant. One of two things is happening here:
Possibility 1: Quantum Computers can solve the Halting Problem, in which case the currently-defined class of Computable numbers is a statement about computation under limitations that don't actually apply in our universe. I came into this with the opposite understanding.
Possibility 2: The authors of the paper pulled a (dishonest, dumb, or "thought-provoking") trick by supposing they have an algorithm that computes Chaitin's constant, and this result is meaningless. If you already have the uncomputable number, you can do whatever you want with it. I could write a paper where I bake pies whose flavor is determined by the digits of Chaitin's constant; these researchers decided to write it into a quantum system. The quantum system has no significance.
Sabine, the plot-twist of the plot-twist is twice twisty: if the horizon of physics is infinity, the undecidability problem makes reductionism necessarily false... if the horizon is not infinity, reductionism is again false, given that whatever finite set of numbers gets selected, it would be arbitrarily fine-tuned.
The number of ways you can arrange a deck of cards is 52! or ~10^68. It's easy to see how lots of things are uncomputable even from very simple math.
Agreed 👍
Computing probabilities is a form of computing. You're just looking at the problem the wrong way.
@@MATTHEW-u8c
No, the point is that in real world cases the possibilities can very quickly outrun the ability to calculate them.
@@infinitytoinfinitysquaredb7836 Your problem is that you're defining cards outside of their practical and real manifestations in reality.
Let me start by stating that whether cards just sit around is irrelevant, and any problem that seeks to analyze cards just sitting around collecting dust is irrelevant. Cards manifest in games where the cards are distributed in groups among players. Each player then knows what cards he has, and he can start to make probability calculations based on the cards he has and sees as the game progresses. Those probability calculations can perceivably become more and more accurate as cards are redistributed among players or into a pile of non-use.
Essentially, your framework has meaningless constraints within the context of "cards."
That's... not uncomputable at all, in a general "within our lifetime" sense or mathematically. Grossly overestimating the data requirement: 2^4 is greater than 10, so 10^68 < 2^(4*68) = 2^272, which is barely more than two 128-bit values, and we have produced those already. A reduction in available bits can be compensated with speed or memory scaling on lower-sized units; 64-bit math can be done on 32-bit systems, just more expensively.
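For what it's worth, the count itself is trivial to obtain exactly; it's enumerating the orderings, not computing the number, that is infeasible:

import math

n = math.factorial(52)
print(n)               # roughly 8.07e67
print(len(str(n)))     # 68 digits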
I'll just send a reminder that the Turing halting problem is the practical realization of Gödel's incompleteness theorem. Don't mix computers in where they don't need to be.
The fundamental issue with these sorts of arguments is that 1) abstract Turing Machines (TMs) are NOT physical - they can use both infinite memory and time - and 2) they assume that physical reality doesn't support computers with qualitative abilities TMs lack, such as executing unboundedly many instructions in finite time.
The claim that physical reality is strictly limited to what is computable (by a TM) is the "Church-Turing Thesis", or the "Extended Deutsch-Church-Turing Thesis" if you generalize to quantum TMs. Already you see a great hole in the premise: the power of computing depends on your physics (quantum vs. 'classical'), not the other way round. All known physics so far is compatible with the universe being a finite quantum state machine, a type of abstract computer that is strictly LESS powerful than a quantum TM but also has a DECIDABLE halting problem. So these sorts of results essentially add up to "if we allow for infinite unphysical behavior, we get uncomputable physics".
2:29 lmao this little girl is too relatable. 🤣
I told my wife the same thing about dust bunnies, socks lying on the floor, and dirty dishes in the sink: Unsolvable, and also mathematically proven to be impossible to solve.
I bet she'd solve your problem using an uncomputable number...
It seems to me that an important philosophical implication of non-computable physics would be that the "simulation hypothesis" is wrong.
What about heuristics and approximations?
I don't agree with simulation theory, but you can write non-computable things in code pretty easily. If you have an infinite loop that adds up an infinite series, it's computable for that specific moment, but not for the end state, which is kind of what this is.
Joscha Bach's take on this demystifies it a lot.
He says 'pi is not a number, it is a function that generates digits to an arbitrary precision'.
Idk. A simulation obeys the laws programmed into it, and what can be programmed has limits imposed by the laws of the universe in which the simulation runs. But those are indistinguishable by the simulation itself. Well, or so goes my thinking, hehe.
@@the11382 I mean fundamentally non-computable physics, e.g. something that would be equivalent to the halting problem as mentioned in this video.
Uncomputable numbers may be incalculable, but that doesn't mean that they are inherently unpredictable. It is possible to do science on macroscopic scales.
For example, if you were to do the experiment and create the quantum computer that the researchers here outline, I would predict that it would exhibit some patterns of behavior if they were to then play with it. These patterns might not be predictable from first principles of physics, but if you studied them long enough, then you could establish some rules - and then test those rules against further experiments.
How can you predict this if there's an infinite number of variations that you cannot account for? Surely they could change the constant in an arbitrary way? So you can't even meaningfully predict the constant.
This is a good point. QED also has infinities, but those can be worked around by stepping back and thinking about how to approach the problem through the lens of the goal of calculating the behavior of the physical universe.
@@bristleconepine4120 The issue is any attempted “law” would be impossible to formulate (you would have a contradiction if you somehow managed to do so).
@@photinodecay It is a very different kind of infinity. We are describing a kind of infinity beyond typical physical concepts of infinity.
@@adammyers3453 I'm not entirely sure what you are referring to.
I can say, however, that from what we can tell, biology is indeed an application of chemistry, which is in turn an application of physics, and yet physicists are still unable to predict the existence of life from first principles. Life exists, we know it exists because we observe that it exists, and it can be explained in terms of physical principles, but not (yet) predicted by them. Despite this, biology has been a recognized scientific discipline for centuries, with its own guiding theoretical underpinnings, erected not by physicists but by biologists. A fundamentally unsolvable physical problem that gives rise to an inexplicable physical phenomenon which nonetheless exhibits predictable behavior could still be studied, if only incompletely understood; it would be no different from life.
An uncomputable physical phenomenon would be wild, because in CS "computable" is a rather robust concept. It's roughly meant to cover the whole concept of answerable questions. Philosophically, it's not even clear what it would mean to have such a phenomenon.
There are also algorithms where you can know that they don't halt; it just requires a proof rather than just running them. Such proofs can also be found systematically, so you can bound the constant from above. It just won't work for all cases, so you don't get it exactly.
The Turing statement isn't that you can't prove halting or not halting for a GIVEN, specific algorithm. The statement is that you cannot build a program/algorithm, no matter how complex, that will be able to determine for ANY yet unknown program if that one will halt.
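A toy sketch of that bounding idea. Everything here is a stand-in: the "programs" are 6-bit pairs (a, b) run on a hypothetical 64-state machine, so unlike the real Chaitin constant, every case is decidable and the two bounds actually meet:

def halts(a, b, mod=64):
    # The toy machine iterates x -> (a*x + b) % mod and halts when x == 0.
    x, seen = 1, set()
    while x != 0:
        if x in seen:
            return False          # state repeated: provably loops forever
        seen.add(x)
        x = (a * x + b) % mod
    return True

lower, upper = 0.0, 1.0
weight = 2.0 ** -6                # each program has a 6-bit code
for a in range(8):
    for b in range(8):
        if halts(a, b):
            lower += weight       # a halting program raises the lower bound
        else:
            upper -= weight       # a proven non-halter lowers the upper bound
print(lower, upper)               # the bounds meet in this fully decidable toy

For the real constant, the non-halting proofs run out at some point, so the bounds stop short of meeting.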
What about the three-body problem, the double pendulum, and other chaotic systems? They are incalculable.
In practice, we don't usually try to predict high level phenomena from the lowest level. We usually try to make predictions about a system based on the behaviour of system components one level down (e.g. sociology can be related back to psychology). I think this is good enough. 'Strange attractors' make the universe more predictable/stable than it would otherwise be.
I know you have someone proof read your script. I had a German girlfriend who had a slightly harder accent than you. She is very smart and intelligent. She often asked me for proper grammar and pronunciation of words and sentences. I respect your channel and how you are able to make it entertaining and enjoyable for others.
The halting problem is presented in the wrong way (by all videos I've ever seen about it, and here too). The sentence "the halting problem cannot be solved" is wrong as stated: in theory, the halting problem can be solved for every machine with a finite number of internal states. In practice, this is bounded by the amount of memory available to the observer program. (At the end of the video, a statement that "finite physical problems are always solvable" was made. If this was meant to also apply to the halting problem -> impressive.)
Trivial example (an algorithm that can solve this problem for "all possible programs" on a finite machine):
- The state of your machine has a size of 32 bits (the state size is defined as the total number of bits inside your machine plus the total number of bits of your input).
- "Any" program running on your machine must deterministically decide, from these 32 bits of the current state, what the next state will be.
- An observer program (with a big chunk of memory, a bool[2^32]) on another machine writes a "true" for every state that was occupied.
- If the program terminates: after a finite number of steps (at most 2^32 - 1), the program halts.
- If the program does not halt: after a finite number of steps (at most 2^32), the program enters a state that it already had in the past (the observer sees this because its bool[state] == true) -> the program does not halt.
- Edit: This algorithm works for any state size as long as it is finite.
Also, even people specifically explaining the halting problem get this wrong. They usually use an ill-defined version of the problem to prove mathematically that it cannot be solved (inserting the observer into itself gives infinite recursion because the observers are somehow always treated as the same instance; if you insert an instance of the observer into another instance of the observer, all will be fine).
Edit: I know that the original definition is done on a Turing machine, which has infinite memory and needs an infinite number of steps to add two infinitely large numbers. Therefore the program for adding numbers never halts. Therefore adding numbers is impossible (see the problem with this argument).
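A minimal sketch of the observer idea described above, in Python, for a toy machine whose entire state is one small integer (the state-space size and step rule here are arbitrary stand-ins):

def halts_on_finite_machine(step, start_state):
    # Decide halting for a deterministic machine with finitely many states.
    # `step` maps a state to the next state, or to None when the machine halts.
    # Because the state space is finite, a run either halts or revisits a state.
    seen = set()
    state = start_state
    while state is not None:
        if state in seen:
            return False          # revisited a state: the machine loops forever
        seen.add(state)
        state = step(state)
    return True                   # the run reached the halting condition

# Example: a 16-bit toy machine that halts only if its orbit ever reaches 0.
step = lambda s: None if s == 0 else (s * 3 + 1) % 2**16
print(halts_on_finite_machine(step, 7))   # prints True or False, decisively

This is the bool[2^32] observer in miniature; the set plays the role of the occupancy table.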
The theorem is about a Turing machine: a machine that can calculate the sum and product of ANY two numbers. A machine that can run out of storage to hold a number is not a Turing machine, so the theorem need not apply.
The halting problem is about Turing machines, which have infinite memory. You can define halting problems on finite state machines, but it's a different thing then.
@@john_g_harris Turing Machines are also unphysical…
@@darkwinter6028 Correct, though the properties of Turing machines are integral to how current computers operate, so hardly unimportant for being unphysical.
Furthermore, I've always suspected that the Curry-Howard correspondence implies that the undecidability of the halting problem has implications for logic (on which all science is based), though whether that is a reformulation of Gödel is another question.
You made my head hurt.
Similarly, you could argue that the halting problem only arises in systems with an infinite amount of states/memory. In any system with limited memory, a non-halting program has to loop (this happens when a state/memory repeats), and so the halting problem is decidable under those conditions.
The halting problem states that for any program that a priori calculates whether another program halts, there must be programs where it decides incorrectly. It's not about infinite or finite systems, it's about whether you can perfectly predict whether every program will halt without running it.
@@Zeuskabob1 Yes, but this can be done if you put boundary conditions on the decision. So you cannot make a program that will decide if another arbitrary program ever halts, but you can devise a program that will decide if another program ever halts given, e.g., a certain maximum amount of memory to execute. This boundary condition (in this example limited memory, i.e. a limited ability to store different states, even if it is universe-big) makes the otherwise undecidable problem a decidable one.
@@Zeuskabob1 If I recall correctly the proof says nothing about whether the decider runs the program or not, it's just a black box.
For finite systems the practical question for us humans is the P vs NP problem. That mystery doesn't get nearly enough attention in the media.
@@Zeuskabob1 If you know the finite number of possible states a program can be in, then you just run it long enough to go through all those states plus a little bit. If the computer only has 10 bits of memory, there's only 1024 different states it can be in, and if you run it for 1025 steps, it is either repeating a previous state (and thus looping) or it has halted.
@@Zeuskabob1 you are partially correct here - you cannot make a solver for the halting problem that takes up less memory than whatever your memory limit is. You can make one (quite trivially, by simulating the program and detecting loops/termination), but it needs more memory than the limit. The trick here is that you cannot run the solver within the memory limit, which prevents you from producing the paradox in the first place.
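Incidentally, the bookkeeping can be made much smaller than a full visited-state table: Floyd's tortoise-and-hare trick detects looping with only two copies of the machine state (a sketch using the same toy convention as above, where `step` returns None on halt):

def halts_floyd(step, start):
    # Cycle detection with two cursors instead of a visited-state table.
    slow = fast = start
    while True:
        for _ in range(2):                  # the fast cursor takes two steps
            if fast is None:
                return True                 # the run reached the halting condition
            fast = step(fast)
        slow = step(slow)                   # the slow cursor takes one step
        if fast is not None and fast == slow:
            return False                    # the cursors met inside a loop

The catch matches the comment above: the two cursors still have to be stored somewhere outside the machine being watched.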
Fascinating. Please bring in the 'so what' factor near the end.
Unsolvable physics problems? Try the ones in J.D. Jackson's "Classical Electrodynamics."
Firstly: my project is to study advanced programming from now on, so I am far from being a computer expert. However, probability theory has some philosophical commonalities with theoretical physics: everything depends on the modelling of the phenomena. Different mathematical frameworks may be more or less predictive when compared to each other. And what about all these problems formulated in the formalism of Monte Carlo methods? Remember that by changing the rules (changing your tools!) we can correctly formulate the problem and solve it (in the new framework!). Let me give a very successful mathematical example of such a change of framework: the problem of the POINTWISE CONVERGENCE OF FOURIER SERIES, ESPECIALLY IN SEVERAL VARIABLES (or of other series defined by eigenfunctions of self-adjoint differential equations, the Sturm-Liouville problem), is fascinating but nearly terra incognita! In mathematical physics and functional analysis, we have COMPLETELY answered the "undecidable" question of the pointwise convergence of eigenfunction expansions (even, and especially, in several variables) by changing the notion of convergence (the topology!) to convergence in mean square (the Hilbert space of square Lebesgue-integrable functions). But engineers (and computing!) still use POINTWISE CONVERGENCE for defining functions by Fourier series (fast Fourier transforms), sometimes by just truncating the expansion to a finite number of terms. AND... IT WORKS WELL MUCH OF THE TIME! A big philosophical point for the practical exact sciences... in my humble opinion.
The intro to computability theory I got at uni made me feel like the whole theory is pretty bad. It makes a bunch of very interesting-sounding statements, like that there is no algorithm to determine whether or not a program halts, or that there is not even an algorithm to determine any non-trivial property of a program. While these are true in theory, they hold little practical value. To me, it feels like the field went in the wrong direction at the beginning. It tries to come up with fanciful theories that sound good, but it ignores that in practice the theories have no predictive value. Instead the field should have focused on trying to tell us something useful about the cases where it is decidable whether or not a program halts, which is almost always (if not always) the case with practical programs.
What you are observing is the mathematical side of CS, where these kinds of things are common. The issue here is that math isn’t built around practicality, nor should it be. The purpose of mathematical and theoretical computer science research isn’t to develop practical things, but to understand a part of reality. Whether something practical falls out is happy coincidence. If we restricted ourselves to only practical concerns, we would not have the modern internet.
@@adammyers3453 "If we restricted ourselves to only practical concerns, we would not have the modern internet." Care to elaborate on this claim?
@@ThePavelkomin I think it is safe to say that for most of human history, deep results about prime numbers like Fermat's Little Theorem or Euler's totient function were utterly impractical. Yet these results are essential (technically Carmichael's version is more relevant in practice) to the RSA algorithm that allows for public-key encryption. The entire basis of modern encryption boils down to incredibly impractical facts about prime numbers (well, impractical in any other context as of now).
Without mathematicians finding it "neat" to study, we simply wouldn't have known that the RSA algorithm was even possible, let alone have had the ability to spend centuries developing the field of Number Theory.
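A toy sketch of those theorems doing the work in RSA (the tiny primes here are the standard textbook illustration; real keys use primes hundreds of digits long):

# Key generation with toy primes.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: e*d = 1 (mod phi)

m = 42                         # a message, encoded as a number < n
c = pow(m, e, n)               # encrypt
assert pow(c, d, n) == m       # decrypt: Euler's theorem guarantees we get m back

The whole scheme leans on the fact that recovering d from (n, e) seems to require factoring n, which is easy for 61 * 53 and hopeless (as far as anyone knows) at the sizes used in practice.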
@@adammyers3453 Thanks for the elaboration. While I could certainly argue about some things, I agree with you in principle that studying pure maths in itself is not worthless.
However, computability theory is hardly pure maths. It tries to make statements about computer programs which, while true in the theory, hold little value for anyone actually trying to develop anything further, e.g. automatic verification. From my superficial knowledge, computability theory seems to be self-serving, though I don't really know the depths of the theory. While there still might be some merit to the theory, I feel it is highly overstated.
What computability theory seems like to me is as if biologists cared about the behaviour of dogs, so they created a dog model. While the dog model turned out not to be a really good description of dogs, researchers still kept studying the dog model and claiming they were studying dogs. There wouldn't be anything wrong with studying it, but it would be wrong to say they were studying dogs.
The problem for this and similar studies is that for infinite systems it is obvious that you can make uncalculable properties. If you have an infinite system, you can make perfect Turing machines which would run all possible algorithms. The only thing they manage is to propose a system which feels more realistic than an infinite number of custom-programmed computers. But the amount of realism does not matter for this question; it is about principles.
The answer to some stochastic problems can't be computed beforehand, but you can always run an experiment to find the actual answer to within some rough error bounds. Add ~1.575% to the list of everything else we can't compute directly in physics, like the fine structure constant or whatever.
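A minimal sketch of that "run the experiment" approach: estimate a quantity by sampling and report rough error bounds (pi is used here purely as a stand-in target):

import random, math

def estimate_pi(samples=100_000):
    # Count random points in the unit square that land inside the quarter circle.
    hits = sum(random.random() ** 2 + random.random() ** 2 <= 1.0
               for _ in range(samples))
    p = hits / samples
    err = 4 * math.sqrt(p * (1 - p) / samples)   # roughly one standard error
    return 4 * p, err

value, err = estimate_pi()
print(f"pi is roughly {value:.4f} +/- {err:.4f}")

The error bound shrinks like 1/sqrt(samples), which is exactly the "rough error bounds" trade-off the comment describes.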
Right, the halting problem always comes up due to infinities. A computer with finitely many possible states is not a Turing machine but a finite state machine. And finite state machines have no halting problem or any other uncomputable problems.
A number being uncomputable is practically irrelevant, it just means you can't have a single machine spitting out all the infinite digits (given enough time). You may still find ways to get the precision you need for practical purposes.
Even if there were some finite construct with some uncomputable property, the conclusion "reality is not computable" wouldn't truly follow. The correct implication would be "Turing machines do not capture the right notion of computability for our universe" (i.e. a refutation of the Church-Turing thesis). We could just redefine computable as "can be calculated by a Turing machine with access (an oracle) to the aforementioned Turing-uncomputable property".
The halting problem is my reason for thinking that the question of free will is, in some sense, not important.
Sometimes I listen to this UA-cam channel and I think to myself: are particles ageless? Then is there really time? Or is there really no infinity, because the amount of building blocks is finite and everything just keeps changing, not necessarily existing and non-existing?
Calculus is the study of change. I'll never forget that profound statement. The universe is in a constant, cyclic state of change. We can see this all around us. Even when we look at a wall and think we see empty space, there are many energy-state transitions happening ("air" and its kinetic and potential energy, heat radiating from your body or electronics, etc.). Even life is just one massive, cyclic energy process. When we age, it is not time but the inefficiency of utilizing energy that causes aging. And when we die, it is not time that caused it; it is simply that our bodies cease to utilize energy and we return the energy of our "matter" back to the overall universe.
If a system which begins in state I will end within a finite time T in one state S, where S is an element of a finite set of possible stable end states Q, and the relation between I and Q is a fractal (which is very common), then between two initial states leading to a certain S there can exist an infinite number of initial states leading to a different S. Which means that the energy needed to create a different outcome can be infinitely close to zero. So the physics of the smaller system describes, but doesn't determine, the behaviour of the larger system. Only 5 neurons are needed to create such a system. (Most people have more...)
The Vizzini Principle.
It's _INCOMPUTABLE!_
The funny thing is that almost all real numbers are uncomputable. In fact, it's because of those uncomputable numbers that the real numbers are uncountably infinite rather than countably infinite.
Most of the irrational numbers that we know and love (such as pi, etc.) are computable and the set of computable real numbers is a countably infinite set, i.e. the same size as the natural numbers (because each computable number corresponds to an algorithm of finite size which can be uniquely encoded as a natural number).
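That encoding step is easy to make concrete: any finite program text maps injectively into the natural numbers (a sketch; the byte-level scheme is just one arbitrary choice):

def program_to_natural(source: str) -> int:
    # Read the program's bytes as one big base-256 number;
    # the leading 1 byte keeps leading zero bytes from being lost.
    return int.from_bytes(b"\x01" + source.encode("utf-8"), "big")

def natural_to_program(n: int) -> str:
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return raw[1:].decode("utf-8")    # drop the sentinel byte

code = "print(sum(range(10)))"
n = program_to_natural(code)
assert natural_to_program(n) == code  # invertible, hence injective

Since every computable number has at least one program, and every program gets a distinct natural number, there can only be countably many computable numbers.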
It's interesting how the concept and problem stated by Roger Penrose 30 years ago have come under serious consideration by the scientific community.
There could be more than one gap between microscopic and macroscopic. The most microscopic could be finite. The second most microscopic could be indistinguishable from infinite and when observed there would be uncertainty.
Still, it would be computable, only requiring a length of time indistinguishable from infinite.
'Dimensionless numbers', as explained by Paul Dirac, if evaluated correctly, hold the potential to answer questions, according to Dirac, about several important concepts, including the age of the universe.
Thank you, Sabine!
You just need to plug one wire into the lattice and another into a black hole. Or nose hole.
This is quite minutely and gigantically - humorous. Yeah - funny . Thanks.
How is "Computable" defined here? Computable as in "predictable", or in terms of reducible or irreducible computation like Wolfram's description of computation?
If the latter, the man has been yelping about this for 40 odd years. Rule 30 and all that.
I don't know Wolfram's definition, but the classical one in compsci is just what you'd think: that you can write a program that gives you a sequence of rational numbers that approaches the number in question.
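For example, sqrt(2) is computable in exactly this sense: a short program emits rationals converging to it (a sketch using Newton's iteration in exact arithmetic; the starting guess is arbitrary):

from fractions import Fraction

def sqrt2_approximations():
    # Newton's method on x^2 - 2 = 0; each step roughly
    # doubles the number of correct digits.
    x = Fraction(2)
    while True:
        yield x
        x = (x + 2 / x) / 2

gen = sqrt2_approximations()
for _ in range(5):
    r = next(gen)
    print(r, float(r))

An uncomputable number is one for which no such generator exists at all.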
Being uncomputable is actually less severe than most people think. Indeed, you cannot return the sequence of digits of an uncomputable number. But, for example, you can still make an algorithm which works randomly and would return results which converge probabilistically to this number. And all of our physical measurements are probabilistic in nature anyway.
@@Tablis0 There are still uncountably many real numbers and only countably many algorithms, deterministic or not.
@@AlexanderShamov This is true, but real numbers are not completely, well, "real". The fact that all of them exist as a mathematical construction does not mean that all of them have anything to do with physical reality. There are fancy theorems saying that because language symbols are also countable, most of the reals are not even "expressible". It depends on the precise definition of the "language", though.
I always wondered if stuff like GR being incompatible with QM is just never going to be resolved because the universe just isn't consistent like that.
5:41 YES !!!! Physics needs to get some Planck-scaled Maths !!
I'll try to solve them someday.😁
Easy problem: a tornado spins over a junkyard for 100 billion years - will a 747 jet pop out? Harder: 14 billion years - will thinking and self-replicating life forms come from basic elements, everywhere or just once? What is the difference between the first and second problem?
There are a bunch of completely normal numbers that get exponentially harder to predict with the number of finite degrees of freedom in the system, so they become uncomputable in practice. And these are real physical quantities that can sometimes be measured in labs, sometimes not. And no one ever talks about them in any of these UA-cam channels..
When it comes to physical systems with bounds on their size and time they need to run, in theory we can always build exact copies of those systems and make predictions about the original based on how the copies behave. Does this mean all bounded physical systems are computable? I guess so, but it doesn't sound like a very useful process. There need to be some simplifying assumptions that we can make about how the system operates to be able to make predictions without having to simulate an exact copy of the system. It sounds like the paper shows a system that doesn't allow for any such assumptions, while being able to scale arbitrarily.
There are infinitely many numbers nobody can name or even give a description of, because names and descriptions are countable but numbers are not.
If you define optimal compression as determining the shortest program that generates your data, this is uncomputable.
But in practice, an algorithm as simple as Huffman coding already takes a huge bite out of compressing data types such as human language.
LZW and Burrows-Wheeler take another huge bite out of the remaining slack, and these are also extremely trivial in the space of all possible algorithms.
Deep learning then takes another huge bite out of the remaining slack. This, too, is a shockingly trivial algorithm, though it balloons monumentally in time and space (for a Golightly value of "monumental").
The slack left over at this point isn't much worth fighting over, unless you are a major landmark in the weaponization of high-speed trading.
For data compression, the asymptotic uncomputability is small potatoes in practice.
However, not every asymptote is created equal. Many bleed more freely, offering not even a good first nibble.
Unfortunately, the reigning paradigm in publish-or-perish academia is to punt characterization of the asymptotic approach as an exercise for the reader.
Bring your own croissant and coffee in a paper bag, as you gawk through the window at Tiffany's genuine peer-reviewed comeuppances of transfinite analytic continuation. I wouldn't waste my own Holly 9000 on this nonsense.
[*] HAL identifies as male, so he's the father, and he supplied the surname, which worked out fine, because Holly didn't even have one.
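For reference, that first "huge bite" really is only a few lines. A minimal sketch of Huffman code construction (bit-packing and header handling, which real compressors need, are omitted):

import heapq
from collections import Counter

def huffman_codes(text):
    # Bottom-up tree building: repeatedly merge the two least-frequent
    # subtrees; the counter index breaks frequency ties stably.
    heap = [(freq, i, {sym: ""})
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, next_id, merged))
        next_id += 1
    return heap[0][2]

text = "huffman takes a huge bite out of ordinary english text"
codes = huffman_codes(text)
bits = sum(len(codes[ch]) for ch in text)
print(bits, "bits vs", 8 * len(text), "bits uncompressed")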
Those probabilities that cannot be computed are not probable to begin with. There are rules that describe how the universe works and some of those rules are absolute. One absolute rule is " the large determines the small." The universe is linear rectified large to small. The small never determines the large. Physicists can use this rule to decide that matter does not cause gravity. No probability calculations needed but many have been attempted.
I mean mass has been experimentally confirmed to attract mass. So you'd just be a clown to try
The interesting things are the ones that are solvable, but where the calculation takes orders of magnitude longer than the real thing.
I propose that photons are Dark Energy. They cannot be seen in a vacuum unless they strike a particle. Stars and other photon-emitting objects pump out billions of tons of energy, which represents mass according to General Relativity. As photons push against the nothingness beyond space, the physics of our Universe is introduced. The photons pushing out create a vacuum.
I also propose that photons (energy/mass) interact with gravity, which gives them mass. Photons are Dark Matter. Photons redistribute mass.
The halting problem only occurs in a classical Turing computer. The universe does not have any Turing computers, because infinite tape is non-physical. This is like complaining about the predicted behavior of unicorns.
Thank you for the video.
Well... Nature does it, somehow, Sabine. So I feel it is computable... Somehow.
Anyway, stay safe there with your family! 🖖😊
This is my favorite video in a while.
Happy you like it!
The halting problem itself is based on infinity, although it is hidden. Every algorithm that takes up a finite amount of memory can be decided, because either the algorithm stops or a memory state repeats. So the only undecidability is for algorithms whose memory usage grows infinitely large. And I don't know of a single such algorithm that would be useful for anything.
You can't tell how much memory some programs need in any automated manner. The point is that you want to forbid people from writing programs that don't halt, but then you can't have a Turing-complete language, and some programs will be impossible to represent. Or at least they would require infinitely large amounts of code to represent. So you've only made the problem worse by trying to use an FSM: now your program is infinitely large, rather than the program falling into infinite recursion in some cases.
Oh, by the way, close to the topic of one of my ideas: the system and its parts do not exist simultaneously and fully, and this connection is described by the Boltzmann constant.
Let's assume that we have a system with emergent properties (for example, the brain). We want to prove that they are an illusion (for example, that there is no free will). We begin to study microscopic degrees of freedom. Each of them requires at least a little energy, about the Landauer limit per bit of received information. But at the macro level, all this energy from every degree of freedom is huge and will destroy the system: the brain will evaporate. We observed each degree of freedom and found not the absence of free will but an ideal gas at a temperature of hundreds of thousands of degrees.
The second experiment, without quantum mechanics, with the same idea.
Let's consider the thermodynamic uncertainty relation (not the quantum mechanical one) of Bohr between the temperature and the energy of a degree of freedom. Now let's consider the simplest emergent property: the force along a concentration gradient, ~ deltaC*T. Now we will also try to prove that this force does not exist and there are only collisions of molecules. Simple calculations show that we must know the energy of a degree of freedom with an accuracy of less than kT; otherwise T at the macro level will be so uncertain that the uncertainty of the force of motion along the concentration gradient will be higher than the force itself.
Summary: you can't get there from here. It is interesting that papers like this are actually written by 'intelligent' people. There are many things in life that fit this problem, like a dead-end job that doesn't pay the bills, where you have limited choice without outside assistance that will never materialize. Dedication and ability are worthless without opportunity.
Any quantum system can be simulated (albeit slowly); there's nothing uncomputable about Schroedinger's equation. That alone tells you there's something fishy about this result. And then we find out that it involves infinities -- an infinite lattice and infinite time of computation. But by talking about quantities defined by taking a "physical" system that is infinite in extent and looking at its behavior over an infinite stretch of time, it's now trivially easy to define uncomputable system properties, because you can just define your system to be a universal Turing machine, and you can directly define a system property that is Chaitin's constant. So their result is simultaneously both misleading and trivial.
I'm just glad this video wasn't sponsored by Quizlet
The fact that maths predicts any behavior of nature begs the question: WHY? Why does an electron a thousand lightyears away have spin, charge, momentum, coordinates? There is more to this and we are not seeing it.
Theoretically uncomputable in an infinite world likely means practically uncomputable in the real world.
If we define some process as a sequence of states at times t1, t2, t3, etc., then I think the process is "uncomputable" if this sequence is divergent. In this case, it's a purely mathematical problem, because in reality you can always solve it by applying an "external operator" to the system that transforms this sequence into a convergent one.
Sounds like the solution to the improbability drive: "The principle of generating small amounts of finite improbability by simply hooking the logic circuits of a Bambleweeny 57 Sub-Meson Brain to an atomic vector plotter suspended in a strong Brownian Motion producer (say a nice hot cup of tea) were well understood. It is said, by the Guide, that such generators were often used to break the ice at parties by making all the molecules in the hostess's undergarments leap simultaneously one foot to the left, in accordance with the theory of indeterminacy.
Many respectable physicists said that they weren't going to stand for this, partly because it was a debasement of science, but mostly because they didn't get invited to those sorts of parties.
The physicists encountered repeated failures while trying to construct a machine which could generate the infinite improbability field needed to flip a spaceship across the mind-paralyzing distances between the farthest stars. They eventually announced that such a machine was virtually impossible.
Then, one day, a student who had been left to sweep up after a particularly unsuccessful party found himself reasoning in this way: "If such a machine is a virtual impossibility, it must have finite improbability. So all I have to do, in order to make one, is to work out how exactly improbable it is, feed that figure into the finite improbability generator, give it a fresh cup of really hot tea... and turn it on!" He did this and managed to create the long-sought-after golden Infinite Improbability generator out of thin air. Unfortunately, shortly after he was awarded the Galactic Institute's Prize for Extreme Cleverness, he was lynched by a rampaging mob of respectable physicists on the grounds that he had become the one thing they couldn't stand most of all: "a smart arse".
lol i like it
Another perfect intuition, Sabine. Well done 🤝
Not really, it's a wrong intuition.
The halting problem is only undecidable for computers with infinitely large memory. Any computer with finite memory (i.e. any "finite state machine") that does not halt will, after a finite number of steps, return to a state that it has been in previously, at which point you can spot the endless loop. This is just the pigeonhole principle at work (the same counting idea behind the pumping lemma).
Not all programs can be written as an FSM. FSMs aren't turing complete.
@photinodecay The only difference between an FSM and a Turing machine is that the latter has infinite memory (the "tape"). All real-world computers have finite memory and therefore are FSMs and thus not Turing complete.
Was that first graphic from Kurzgesagt??
Edit: I think the Halting Problem is usually taken far too generally. Graph mathematicians (and compiler optimizers) would have a lot to say about whether or not an algorithm can be expressed as having deterministic, certain outcomes, and yet I've seen even programmers gladly state "The Halting Problem says you can't know the outcome!" in an absolutely decidable case.
1:13 where did you find the clip of the guy with the saw board at the start
Physicists got less smug, but I as a computer science guy just got a bit smugger.
Maybe we are the most important science after all.
There exists no 'general' algorithm that determines the halting for 'any arbitrary' program. That's not what you're ever interested in. Not in programming, because your language and environment only has a subset of programs, and not in physics, where you have specific algorithms for specific situations.
Not true, unless you're using a toy language/environment. You need Turing-complete languages for many business domains.
Glad to hear it's non-physical requiring infinite time and size. I suspect that the universe itself must be computable.
Computation is derived from physics
Here's something that doesn't "Halt!":
10 PRINT "Hello there!"
20 GOTO 10
As a programmer, I can say, if you can imagine something and define it, then it is computational. It just requires the right definition. But if you can't even hold the idea in your mind, then you can't program it.
The Halting Problem.
Pi isn't computable. Neither is the square root of two.
No, there are non-computable and undecidable entities people have imagined. It's the basic premise of computability theory. Check it out. "Computational", the word you used, doesn't mean anything specific in this context, and isn't what this video is about.
You can think about an algorithm solving the halting problem in general, but you'll be out of luck.
The halting problem is related to a problem math has with directly or indirectly self-referring sets. This is described by Russell's paradox and very likely closely related to Kurt Gödel's incompleteness theorem.
But... I wonder how this purely logical problem is related to physics, and how someone could write a paper solving this logical problem with quantum computing.
I as a programmer ask you as a programmer: how would you specify a program which reads any program and is always able to decide if it will eventually halt or not, without running it? If your program runs the input, how long will it wait for termination? What if your program gets itself as input?
PS:
Imagine you have to write a program which can tell you if any given sentence as input is true or false. Perhaps your algorithm may trigger an exception if the given sentence is neither true nor false.
That's my spec. Can you implement it?
As a test case, take this: "This sentence is false."
Three cases: true, false, error - but none of them apply.^^
It seems like all they really managed to prove here is that there are certain infinite systems that cannot be approximated in finite time, which is not new and doesn't seem at all interesting to me.
I always suspected these problems would emerge only in the infinite case, thank you for confirming. It would be interesting, however, if something finite could have the same uncomputability in the prediction of its future states, something like Wolfram's automata but physically implemented.
We need to try to figure out at what points quantum mechanics and General Relativity meet. This will tell us everything about the quantum mechanical properties and General Relativity properties at any chosen point.
Numberphile has a nice video about (un)computable numbers.
Nice to hear a physicist becoming aware of the Halting Problem. To be coherent, any theory and theorem needs to be consistent with it.
Chaitin crafted his "constant" to demonstrate the absurdity of real numbers. He is a non-believer, as we can read in his classic article "How Real Are Real Numbers?". The basic argument is simple, and it is what Wittgenstein already pointed out in his criticism of Cantor and Hilbert:
What cannot be named with a distinct mathematical name (object or algorithm) cannot function as an input for a computation, and is thus non-computable.
The limits of notational compression are also a very open question, and very much look like another undecidable one.
Let's say, for argument's sake, that there are macroscopic systems whose behaviour is not derivable from the microscopic interactions, perhaps due to non-computability. That surely does not mean the problem is unsolvable. Just model that system the old-fashioned way, the same way we've always come up with theories of physics: set up a series of experiments with the system and then build a theory from scratch using the results.
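In code terms, "the old-fashioned way" is just fitting a phenomenological model to data. A minimal sketch; the 4.9*t**2 law and the noise level are invented for illustration:

import numpy as np

t = np.linspace(0, 10, 50)                        # experimental inputs
y = 4.9 * t**2 + np.random.normal(0, 5, t.size)   # noisy measurements
model = np.polyfit(t, y, deg=2)                   # fitted quadratic law
print(model)                                      # roughly [4.9, 0, 0]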
Uncomputable, undecidable, incomplete, and sometimes perhaps unknowingly inconsistent in some strange way. This is the hand we are dealt. We are reaching the limits of science and rationalism as we know them. A new paradigm may require going back to zeroth principles in a way that will incite a lot of resistance and derision. Science needs a healthy dose of mysticism, followed by a reorganization from the bottom up, while preserving what we already have in some sense or other.
There are some things in maths that have been mathematically proven to be unprovable. And that does not mean that those things are wrong. They might be. But if they are right, we will never know for sure.
I feel like there are so many wrong assumptions in this paper...
First of all, the undecidability of the halting problem only shows that there is at least one example where halting is incomputable (it leads to a paradox: if it halts, then it doesn't). It does not say whether such examples are dense in the space of algorithms. If they are so sparse that their measure is 0, then suddenly the problem changes drastically.
Second: just because you don't know the exact location of the elements of a specific subset doesn't mean you can't infer their distribution. Look at the prime numbers: we can't write a formula that generates all of them, but we do know that their density near x is about 1/ln(x).
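A quick sanity check of that density claim, with a plain sieve (the cutoffs are arbitrary):

from math import log

def prime_count(n):
    # pi(n): how many primes are <= n, by the sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(sieve)

for x in (10**4, 10**5, 10**6):
    print(x, prime_count(x), round(x / log(x)))
    # 1229 vs 1086, 9592 vs 8686, 78498 vs 72382: same order, slowly converging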
Yeah, wouldn't it mean that if infinity is involved and the problem can't be solved, it just goes on forever? And for a number that exists but can't be calculated, cap that number at an imaginary level, like they did with the speed of light, for everywhere.
Before I clicked on the video, I imagined it was about the Halting Problem, because it's the only non-computable problem.
Huh, there are uncountably many incomputable problems. Not sure, though, whether they are all equivalent to one another. I would suggest not; there are too many of them to make that claim.
If there's a physical system with uncomputable state, doesn't that imply that the physical system cannot exist? Otherwise the system itself computes the result.
Interesting stuff!!!
Great video ... as usual.
The point about uncomputable numbers being a break in predictability is a moot one, IMO. Our understanding of the world is nothing but mathematical models that we feed with measured data. We have so much inherent inaccuracy in the data we feed the models that the uncomputability of some constants (i.e. using approximations of those constants instead of precise values; we aren't talking about numbers whose order of magnitude is unknown) is at best a new factor in the equation, but hardly a wall that the reality of science has hit. It doesn't break the accuracy of predictions, because the predictions weren't accurate to begin with. It may be that they are now accountably fuzzier around the edges, with an additional 10^-50 error on top of our 10^-4 measurement error.
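A one-line check of how little that adds, assuming independent errors combine in quadrature:

import math
# a 1e-50 error added in quadrature to a 1e-4 measurement error
# is invisible even at double precision
print(math.hypot(1e-4, 1e-50))   # 0.0001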