Many thanks to everyone who has taken the time to watch this video. Eric Hehner is arguably the world's leading expert on the halting problem, and I am very pleased to say that there is now a link to my video from his page of halting problem links. In one of his many papers on the halting problem he asks how we can know when a program, written in an undecipherable language, has terminated. He poses this thought experiment as a way to convince the reader that termination is not a visible event. He believes that we could have a model of computation in which a program explicitly stopping and a simple loop such as '10 goto 10' are effectively the same thing in terms of halting. We might infer from this that if we have a machine (or model of computing) which lacks an explicit HALT command (e.g. a machine 'shutdown' instruction), then we could possibly manufacture one by 'defining' a simple loop to be a HALT instruction. However, I personally don't like this approach. We already have a widespread meaning for halting that we would be abusing by 'defining' a loop to be a halt. Also, even if it is difficult to distinguish between the two scenarios from an external perspective, internally there will still be energy being used on the activity of continually going around the simple loop. And so I prefer to imagine a computing machine as consisting of moving parts, where halting means that the parts stop. We could open the lid of the machine and look inside to check whether the machine has halted. If it was doing a simple loop then we would see movement, so I would not classify this as halting. Whereas if the program has encountered an instruction that says "stop now, don't execute any more instructions" then I would classify that as halting. The third option is that the program simply reaches a point where there are no further instructions. Some people believe that this should be called halting, but I disagree.
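The three ways a run can end that are distinguished above (explicit halt, simple loop, and running out of instructions) can be illustrated with a toy interpreter. This is only a hypothetical sketch; the instruction names and the step limit are inventions for illustration, not any real machine's instruction set:

```python
# Toy interpreter distinguishing three outcomes: an explicit HALT
# instruction, falling off the end of the program ('exiting'), and
# spinning forever in a loop (cut off here by a step limit so that
# the sketch itself terminates).

def run(program, max_steps=1000):
    """program is a list of instruction strings; returns how the run ended."""
    pc = 0
    for _ in range(max_steps):
        if pc >= len(program):
            return "exited"          # no further instructions: an implied halt at best
        op = program[pc]
        if op == "HALT":
            return "halted"          # explicit stop: don't execute any more instructions
        elif op.startswith("GOTO "):
            pc = int(op.split()[1])  # '10 goto 10' corresponds to a GOTO to itself
        else:
            pc += 1                  # any other instruction just falls through
    return "still running"           # the simple loop never stops by itself

print(run(["NOP", "HALT"]))   # halted
print(run(["NOP", "NOP"]))    # exited
print(run(["GOTO 0"]))        # still running
```

Externally the "still running" case only reveals itself by the machine's continued activity, which is the point made above about opening the lid and seeing the parts move.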
I call this 'exiting', or at best an implied halt. This is because control is passed back to the machine itself, which might have been designed to halt in such a scenario, or it might have been designed to restart from the first line of code. In other words, whether it halts or loops would be machine-specific rather than program-specific. Now if such a thing as a Turing machine could exist, and if it could do anything that a modern-day real world computer can do, then it would be able to do an explicit HALT. However, if it were to lack this capability, then it would be inferior to real world computers in many respects, not least of which is that it would be defeated by the logic of the halting problem proof. And of course, it could not be considered to be the only model of computing that is required, because it lacks this key feature (of an explicit halt) that is present in so many real world computers. A so-called 'Turing Machine' is neither use nor ornament if it can't be used in relation to real world computation tasks. It is often claimed that Turing's proof of undecidability tells us that it must be impossible to develop a halt/loop decider program on any real world computer. I hope this video has cast some doubt on the validity of that claim. So the specification of the halting problem is flawed if it does not cater for machine-level halt commands, because many real computers can perform this type of operation. Therefore the objection that this 'explicit halt' argument does not follow the specification is a moot point. Nobody should be following the specification if the specification is flawed. A real computer can write its output to a text file (or a printer or whatever) and then halt the machine. What matters is that H produces the correct prediction, not whether it is capable of being called by another program or whether it can return a true/false value to a calling program.
Note that we can have very low-level languages that are so-called 'Turing complete' which do not inherently contain any mechanisms for defining procedures or functions, or for returning values from these things. These days we take it for granted that we can call functions and get values returned, but in relation to the simplest of instruction sets these are high-level mechanisms that might involve moving data around in various registers and to and from designated areas in memory that have been given specific names, such as stacks and heaps. I expect that a machine-level halt command would be a very simple operation in comparison to these complicated mechanisms. Should the real program H ever exist, another program would be free to initiate it (or 'call' it). However, this would present a problem if the initiating program wanted to do something else afterwards, because the real program H will end with a machine-level halt (or at least it will if its input program tries to execute equivalent logic to real H). So say that program Z invokes H(X, X): given that X tries to execute the logic of real H, H(X, X) will decide "It will halt" and then it will do a machine-level halt. There is nothing to stop us from starting the machine up again, feeding the output produced by H into functionality Z, and executing the rest of the logic of Z so that it uses the decision made by the real H. Of course it would be very inconvenient to have to re-start the machine every time after H has run, and even more inconvenient to have to feed its result into another program rather than it all being done automatically. But H only needs to do a machine-level halt if its input tries to execute the logic of H. Also, the question that the halting problem asks, i.e. "does it halt", could be considered to be a very bad question if the type of halt it refers to is only the implied type of halt.
Note that this is similar to the Barber paradox in that it appears that the two options on offer are mutually exclusive and cover all options, when in reality they don't. The barber himself is not catered for in the Barber paradox just as an explicit halt is not catered for in the halting problem logic. This is why the halting problem question is a bad question. Some people might still claim that the halting problem remains undecidable even with a real world machine equipped with the 'shutdown' instruction because, by definition, the program H could never use this instruction. However, I believe this is like saying it is impossible to switch on a real world computer because someone, a long time ago, wrote a specification that says you are not allowed to press a button that would switch on an imaginary computer. In other words, in my opinion it is not good logic.
Great video, glad to see you are back. I really like the idea of the intelligent alien; it is a great way to eliminate bias. I question the big bang: I don't feel the evidence is strong enough to support the conclusion. So I ask: could you convince an intelligent alien, based on observations alone, that the big bang (or anything like it) happened? It makes you start with evidence and first principles as opposed to working backwards from a conclusion.
The Big Bang theory, if you look into it, is actually an incoherently phrased piece of nonsense. Not even wrong, just a word game. The main equivocation is on the terms "universe" and "space." Read Konstantin Antonopoulos's article A Bang into Nowhere for a tour of the semantic problems with the notion of the Big Bang.
@@ThePallidor I have to agree completely with you, and I have looked into big bang science for many years (as a former huge supporter of the BB models). But the more you look into it, the less sense it makes: redshift and the CMB are all they have, and nothing in redshift or the CMB indicates a big bang. The idea that the CMB is a 'picture of the start of the universe permanently painted in the sky with light' simply does not work according to anything we know about science or physics. Expanding space makes no sense. Redshift is clearly gravitational shift; that light 'expands' as it moves through space is not supported by any known physics, and we can test gravitational shift in a high-rise building. BB models are a swiss cheese of models, and it's all holes. An eternal and dynamic but steady state universe makes far more sense. It amazes me that mainstream cosmology is so biased towards BB models; I guess that is where the grant money is. I prefer to know the truth. Thanks for your response.
I imagined that we made some new programs:

Program L - Loops for any given input.
Program D - Does whatever its input says. If it receives "loop", it loops, and if it receives "halt", it halts.
Real H - As previously defined, it decides whether a program, given a certain input, will halt or loop. If the program given to Real H executes an accurate version of the functionality of Real H, it outputs "halt" and performs a machine-level halt (shut-down).

Now I made Program Q, which takes any input i. Q works as follows:
- Input i and Program L are fed into Real H
- The output of Real H is fed into Program D

In Program Q, the Real H will always output "loop", as Program L loops for any given input. This output will then be fed into Program D, which does whatever the output says. Since the output of Real H will always be "loop", Program D will always loop, and thus Program Q must always loop. Now I take Program Q, and any input i, and feed that into a stand-alone Real H. Real H "simulates" Program Q, and observes that it executes an accurate version of its own functionality (Program Q contains Real H). So Real H is forced to output "halt". But we know that Program Q always loops, so Real H was wrong. There is no issue here with the H in Q being fake, as the H in Q never analyzes a program that contains yet another H - it only analyzes Program L. I'm barely a dabbler in these topics, so maybe I'm missing something fundamental in how you defined Real H. To me, however, it seems like you can't buttress the definition of H in such a way that will eliminate all logical contradictions; new inconsistencies will pop up no matter what additional rules you pile on the system.
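The wiring of Program Q described above can be sketched in code. Everything here is hypothetical: a genuine Real H cannot be supplied, so `stub_H` is a stand-in hard-coded for the only case Q ever presents to it (Program L, which loops on every input), and D's endless loop is represented symbolically so the sketch itself terminates:

```python
# Sketch of the Program Q construction: H inspects L, and D acts
# on H's verdict. All names are from the comment above; the stub
# decider is an illustrative assumption, not a real halt decider.

def program_L(i):
    while True:       # loops for any given input (never called here, only inspected)
        pass

def stub_H(program, i):
    """Stand-in for Real H, handling only the case Q needs."""
    if program is program_L:
        return "loop"             # L loops on every input
    raise NotImplementedError("a general decider cannot be written")

def program_D(verdict):
    """Does whatever its input says; the loop branch is symbolic."""
    if verdict == "loop":
        return "looping"          # stands in for looping forever
    return "halting"              # stands in for halting

def program_Q(i):
    verdict = stub_H(program_L, i)   # step 1: feed i and Program L into H
    return program_D(verdict)        # step 2: feed H's output into Program D

print(program_Q(42))   # "looping": Q always ends up in D's loop branch
```

Because the inner H only ever analyses Program L, it never faces the self-referential case, which is the point the comment is making about the H inside Q not being "fake".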
@litteral7179 I find your suggestion very interesting. There might well be a loophole if Real H does not always end with a proper halt (like a shutdown). However, if Real H does always end with a proper halt, then you cannot construct your Program Q to operate in the way that you have described. If Real H always halts properly, then since Program Q invokes Real H, Program Q will always halt. This would mean that Real H could not be used by any calling program. Thank you for a very thought-provoking comment :) You may be interested to know that I aim to release another video on this topic in the near future. I was never that happy with my assumption that Real H could somehow determine if it was contained within the input program, because many people might consider this to be a suspect claim. So in my next video I intend to go even further by outlining a simple scenario where it is (for all practical purposes) beyond any doubt that Real H can be constructed and will always be correct. I will also go through Turing's original paper, the Wikipedia descriptions, and how all of this is associated with the mathematical concept of real numbers and the weird notion of infinities of different sizes.
In your reasoning, you propose to distinguish between 2 different types of "halting" for a turing machine or program (to put it simply: an "explicit" halt like calling a "shutdown" OS function, and an "implicit" halt like reaching the end of a script). However, in the context of turing machines, it seems that this distinction does not exist and I would need further explanation. My understanding is that a turing machine will either loop forever, or halt because there is no transition defined from its current state for the current read symbol. In the latter case, we say that the machine "accepts" if its final state was defined as accepting, else the machine "rejects". Can you precisely define, for turing machines, the difference between your proposed 2 different types of "halting"? Let's say I executed your "Real Program H" (from 12:58), how would I know (as the caller) if the turing machine halted "explicitly" or "implicitly"? Regards
In one of Eric Hehner's many papers on the halting problem he asks how we can know when a program, written in an undecipherable language, has terminated. He poses this thought experiment as a way to convince the reader that termination is not a visible event. He believes that we could have a model of computation in which a program explicitly stopping and a simple loop such as '10 goto 10' are effectively the same thing in terms of halting. To relate this to your question, it suggests that if we have a machine (or model of computing) which lacks an explicit HALT command, then we could possibly manufacture one by 'defining' a simple loop to be a HALT instruction. However, I personally don't like this approach. We already have a widespread meaning for halting that we would be abusing by 'defining' a loop to be a halt. Also, even if it is difficult to distinguish between the two scenarios from an external perspective, internally there will still be energy being used on the activity of continually going around the simple loop. And so I prefer to imagine a computing machine as consisting of moving parts, where halting means that the parts stop. We could open the lid of the machine and look inside to check whether the machine has halted. If it was doing a simple loop then we would see movement, so I would not classify this as halting. Whereas if the program has encountered an instruction that says "stop now, don't execute any more instructions" then I would classify that as halting. The third option is that the program simply reaches a point where there are no further instructions. Some people believe that this should be called halting, but I disagree. I call this 'exiting', or at best an implied halt. Now if such a thing as a Turing machine could exist, and if it could do anything that a modern-day real world computer can do, then it would be able to do an explicit HALT.
However, if it were to lack this capability, as you suggest, then it would be inferior to real world computers in many respects, not least of which is that it would be defeated by the logic of the halting problem proof. And of course, it could not be considered to be the only model of computing that is required, because it lacks this key feature (of an explicit halt) that is present in so many real world computers. A so-called 'Turing Machine' is neither use nor ornament if it can't be used in relation to real world computation tasks. It is often claimed that Turing's proof of undecidability tells us that it must be impossible to develop a halt/loop decider program on any real world computer. Hopefully this video has cast some doubt on the validity of that claim.
@@KarmaPeny I see, so we're dealing with a "real-world" computer, supposedly more powerful than a strict turing machine because the computer is equipped with a "shutdown" instruction, which allows the program to stop it. Now I think there is a flaw in the logic shown in the video. Your "Real Program H" (from 12:58) is incorrect because it does not follow the specification of the program H: it does not decide the halting problem. The program H must by definition decide the halting problem, given any program/input pair: - it returns 'True' if the given program will halt for the given input - else it returns 'False' However, your "Real Program H" does not follow this specification: it has a different behavior in some cases (for example, when we feed it the program X), where it will shutdown the machine instead of deciding the halting problem. In fact, this change in the specification is truly problematic if we want to use the program H without trying to deceive it. Let's say we have a new "program Z", which simply wants to call H(X, X) (meaning asking the program H if the program X will halt if we execute it with itself as input), and then perform some other tasks. Even though the program Z was not trying to deceive the program H, your "Real Program H" would actually shutdown the machine, breaking the "contract" given in the actual specification of H. With this in mind, I conclude that the halting problem is still undecidable even with a real-world machine equipped with the "shutdown" instruction, because by definition the program H could never use this instruction. What do you think?
The specification of the halting problem is flawed if it does not cater for machine-level halt commands, because many real computers can perform this type of operation. Therefore your objection that I am not following the specification is a moot point. Nobody should be following the specification if the specification is flawed. A real computer can write its output to a text file (or a printer or whatever) and then halt the machine. What matters is that H produces the correct prediction, not whether it is capable of being called by another program or whether it can return a true/false value to a calling program. Note that we can have very low-level languages that are so-called 'Turing complete' which do not inherently contain any mechanisms for defining procedures or functions, or for returning values from these things. These days we take it for granted that we can call functions and get values returned, but in relation to the simplest of instruction sets these are high-level mechanisms that might involve moving data around in various registers and to and from designated areas in memory that have been given specific names, such as stacks and heaps. I expect that a machine-level halt command would be a very simple operation in comparison to these complicated mechanisms. You said: "Real Program H ... has a different behaviour in some cases (for example, when we feed it the program X), where it will shutdown the machine instead of deciding the halting problem." This scenario is pretty much the main topic of the video. The real H will say "It will halt" (which is its decision) and then it will do a machine-level halt. Please re-watch the video if you are still unclear on this point. You said: "this change in the specification is truly problematic if we want to use the program H without trying to deceive it." I disagree; you can freely initiate the real program H (should it ever exist) whether you are trying to deceive it or not.
It is only a problem if you want to do something else afterwards, because the real program H will end with a machine-level halt. You said: "Even though the program Z was not trying to deceive the program H, your 'Real Program H' would actually shutdown the machine, breaking the 'contract' given in the actual specification of H." Here program Z is invoking H(X, X), and if the code of X would, when executed, try to invoke the real H (or equivalent functionality to real H), then H(X, X) will decide "It will halt" and then it will do a machine-level halt. But there is nothing to stop you from starting the machine up again, feeding the output produced by H into your functionality Z, and executing the rest of the logic of Z so that it uses the decision made by the real H. Of course it would be very inconvenient to have to re-start the machine every time after H has run, and even more inconvenient to have to feed its result into another program rather than it all being done automatically. But you have to try to see the bigger picture, which is that the question that the halting problem asks, i.e. "does it halt", is a very bad question. It is highly unlikely that this is really the question that you should be trying to answer in the first place, because it is such a problematic question. Please see my other halting problem video ("The Problem with the Halting Problem") which covers this very topic. Note that this is similar to the Barber paradox in that it appears that the two options are mutually exclusive and cover all options, when they don't (the barber himself is not catered for, just as an explicit halt is not catered for in the halting problem logic). This is why the halting problem question is a bad question. You said: "I conclude that the halting problem is still undecidable even with a real-world machine equipped with the 'shutdown' instruction, because by definition the program H could never use this instruction. What do you think?"
I think this is like saying it is impossible to switch on a real world computer because someone, a long time ago, wrote a specification that says you are not allowed to press a button that would switch on an imaginary computer.
I came across your channel today because I was looking for someone who also thought like me and had not just followed the "herd effect". Since I was 16 years old I have realized, through advanced studies in real analysis, that the theory of infinite series and "infinity" itself are completely ill-defined and wrongly applied throughout mathematical development. Expressions like 0.999... were never definitively equal to 1, and the supposed arguments are all flawed and inconsistent. I am a 19-year-old Brazilian student, and even today, with my current knowledge of mathematics, I still find such inconsistencies being repeated over and over again in various results. I loved your channel and I'm sorry if I made any mistakes in writing; my English is basic.
Two themes keep coming up in the error-ridden fields of math/logic, and you dismantled both perfectly again here. Since that job is done, I'll instead describe the general form of the two and then try to shed light on them from analytical, psychological, sociocultural, and practical angles: Theme 1) A vague term leads to an equivocation between two or more interpretations (e.g., of the term "halt"), which leads to an invalid proof that appears watertight in terms of propositional logic. Analytically, we can even say it *is* watertight in terms of propositional logic because propositional logic is built solely of words. Relentlessly perfect logical rigor with lackadaisical semantic rigor, one might say. Psychologically, this creates a perfect trap for wordcels, those who see the world as solely made of words. A magician would call it misdirection: focus on the sentential manipulations of the propositions (perfectly done propositional logic), not the definitions of the key terms (ignore the sloppy semantics). However, this magic trick is self-working and doesn't even require the presenter to be aware of the trick; it is generally as much a trick on oneself as on others. Socioculturally, we can say that tribal or cultural logic is logic where what counts as valid inference is enforced by the thinker, thus if they are smart the reasoning is verbally flawless, but where the semantic rules are not enforced by the logical thinker's own relentless circumspection but instead left to the tribe/culture (in this case the culture of math/compsci departments). If the thought to question the ambiguous term starts to bubble up in the thinker's mind, they are subtly cajoled against doing so, so that certain tribally/culturally beneficial equivocations are systematically preserved. To question them would be gadflyish, "wasting time on frivolities that everyone already knows the meaning of." 
In terms of practicality, keeping careful track of levels of analysis, or zoom levels as Steve Patterson calls them, is an indispensable rationality skill. The error pattern that slips by thinkers less astute than yourself in this regard is "taking a term of everyday talk into a deeper investigation without ensuring it is rigorously defined enough for the task." The ambiguous term "halt" is fine for most other analyses in computer science, but when commencing this deeper investigation on which "halt" becomes the key term on which everything hinges, Turing and the rest were remiss in not doing the standard "zoom check," the check that the term relied on from a less rigorous context was up for the task at this closer level of inspection. In fact the term "halt" clearly needed refinement, as is the norm when carrying a trusty old term into a deeper investigation than any previously attempted. In this case, as is typical, it needed a disambiguation between two subtly (and up until then unimportantly) different senses, which you named here "halt" and "exit." Theme 2) This personal and cultural self-fooling process frequently involves creating (usually unwittingly) sufficient complexity so that the people involved cannot detect the error. Tribally, once this is decided by "the experts" almost no one else will delve into the matter with much of a critical eye, because they will be too preoccupied in trying to understand the fascinating conundrum, too concerned with just trying to keep up with the "greats." They feel they ought to know their role in the tribal division of labor and stay in their lane. Psychologically they feel on the back foot and intimidated by what appears dodgy but in fact, they are assured, only appears dodgy because it is so profound and because they are too small of mind to glimpse the profundity that the giants of the field do. 
Were they to undertake the kind of analysis necessary to explode the fallacy in almost diagrammatic form as you have done here, they would be plagued every step of the way with the feeling they are pretending to greatness, a rapidly growing pit in their stomach as they check the "logic" every which way and find it checks out, further ensuring that they never regain the subtlety of mind to see that they've been tricked at the much more basic level of the key term itself. The tribe extends beyond the math/compsci/logic departments by means of popular science books, and we find that almost everyone exposed to the problem is either inducted into the tribe as a fan or simply does not care about the apparent paradox because they're in applied compsci or applied math or outside the industry entirely. Truly rare are those who take an interest in these conundrums yet somehow manage to avoid the sociocultural, psychological, and analytic traps that ensnare other minds. And then to present them gently to the public. Kudos, Karma Peny, you are one of the few leading a small but growing band of thinkers out of the ashen wastelands of the current Intellectual Dark Age. Other arguments I believe you haven't covered that follow this same pattern include Nick Bostrom's simulation argument. Particularly topical as his flavor of "superintelligence" doomerism is booming today, without the benefit of this key term "intelligence" being clearly defined. And of course Gödel's incompleteness theorems and many other pieces of sophisticated wordcelery in math, physics, and analytic philosophy in general (which is treated as cultural philosophy, going back to all I said here, but I'll talk about that another time as this comment is getting long).
Even if you modify the definition of Turing Machines with an extra "EXIT" state, the same construction of the X machine still works and shows that it is impossible to build H such that H detects "HALT or EXIT".
@tomekczajka Thank you for your comments. I'll provide more details on my answers, but first I'll give you the short versions. You say that my definition of 'EXIT' makes no sense for Turing Machines. But my argument applies to any real-world computer including any real-world implementation of a Turing Machine (such as one I will describe later). I see no point in any discussion about a fictional Turing Machine consisting of nothing more than abstract mathematical concepts which might not have any validity in the real-world. Next you claim that even with an 'EXIT' state, we could prove that it is impossible to build H such that H detects 'Halt' or 'EXIT'. But the halting problem proof is one that relies on a 'decision problem' which is a problem that only has the mutually exclusive possibilities of either 'yes' or 'no'. It asks "does it halt(y/n)?" but then the argument of the proof assumes that H can 'exit' and that other instructions can be performed afterwards. It allows H to print its Y/N decision but it does not allow H to halt. Your suggestion of a proof that deals with 'HALT' or 'EXIT' does not meet the criteria of a decision problem because without the 'HALT' possibility it has not covered all options. Now for more details... The terminology used by Turing might have created a smokescreen that obscured the issues and made it difficult for other people to identify them. In particular we have the concepts of 'state' and 'machine'. We all know about the front-end of the imaginary Turing machine with its 'infinite' tape and read-write head, but at the back-end we have the instruction cards which, in today's terminology, might be called 'program code'. A 'state' was merely the number/identifier of one of these cards, or else it might be zero for the HALT state. 
For a binary tape, each of the cards might contain a block of functionality such as the following:

Instruction Card #1
    current_char = read-tape();
    IF (current_char == 1) THEN
        WRITE 1 {or 0}
        MOVE LEFT {or RIGHT}
        GOTO #whatever {i.e. goto another instruction card}
    ELSE
        WRITE 1 {or 0}
        MOVE LEFT {or RIGHT}
        GOTO #whatever {i.e. goto another instruction card}

And so we can see the similarities to modern day computers and programming languages. But this was all devised in the days before computers were invented, and the terms 'state' and 'machine' relate to the concept of finite-state machines. In order to relate these things to reality I prefer to say 'instruction card' instead of 'state' and I prefer to say 'program' instead of 'machine'. Finite-state machines are often imagined as diagrams with the states as nodes and with arrows to show the direction of processing (where a change of state is called a 'transition'). A normal state might appear as a label inside a circle whereas the halt state might appear with an extra circle around it. As such, it might appear reasonable to assume that we could easily incorporate one of these 'machines' inside another machine, and that it would not affect the functionality of the first machine if we just remove the extra circle that identifies its 'final state'. Indeed, the argument that we are just changing the diagram to reflect that it is no longer the final state might sound perfectly reasonable. It is so subtle that it obscures the fact that we could be altering the key functionality of the first machine. Furthermore, with the apparent 'abstract' nature of finite-state machines and Turing Machines, people might imagine that programs can somehow be thought of as being able to execute all by themselves. They might believe that there is no need to include a device that processes the instructions. But program instructions don't just work by magic.
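The instruction-card picture can be made concrete with a small simulator. This is a minimal sketch under the conventions described above: each card maps the symbol under the head to a (write, move, next card) action, and card number 0 stands for the HALT state:

```python
# Minimal Turing-machine simulator built around 'instruction cards'.
# cards: {card_number: {read_symbol: (write_symbol, 'L' or 'R', next_card)}}
# Card 0 is the HALT state; a step limit stands in for loop detection.

def run_tm(cards, tape, head=0, start=1, max_steps=10_000):
    tape = dict(enumerate(tape))          # sparse tape; unwritten cells read as 0
    card = start
    for _ in range(max_steps):
        if card == 0:                     # reached the HALT state
            return tape, True
        write, move, nxt = cards[card][tape.get(head, 0)]
        tape[head] = write                # WRITE
        head += 1 if move == "R" else -1  # MOVE LEFT or RIGHT
        card = nxt                        # GOTO another instruction card
    return tape, False                    # step limit hit: presumed looping

# Example: card 1 flips the symbol under the head, moves right, and halts.
cards = {1: {0: (1, "R", 0), 1: (0, "R", 0)}}
tape, halted = run_tm(cards, [0])
print(tape[0], halted)   # 1 True
```

Note that the simulator itself is the "device that processes the instructions": the cards alone do nothing, which is the point made above about programs not executing by magic.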
And so when we relate the halting problem proof to a real-world computing scenario, we should soon realise that there is a clear distinction between the program and the machine upon which the program runs. We could then get further clarity on what the HALT instruction (or 'state') actually does. We all know that 'halt' means 'stop', but in a real-world computing scenario are we talking about stopping the machine, stopping the program, or even just exiting from the program (& thus implying that processing stops)? Here we have three different possible interpretations of what 'halt' might mean. The halting problem specification and proof seem to assume that 'halt' only means the third of these options. And so the halting problem specification only allows the choice between the two options of 'exit the program & allow further processing' (which it calls halting) or 'go into an unending loop'. These two options do not cover all possibilities, because there are other types of halting that are not catered for, such as 'optionally produce some output and then stop the machine'. If the specification of the halting problem allowed for all possible types of halting then, as explained in the video, the proof of undecidability would no longer work. Now for a summary of my main argument... The halting problem proof makes the assumption that if a 'halting predictor' could exist then it could be implemented as some kind of subroutine or called function within another program, which could take its prediction and use it in a way so as to contradict the prediction. I contest this assumption. If a 'halting predictor' could exist then it could print its prediction and then HALT. By going to the HALT state it would prevent any calling program from contradicting its prediction. I argue that if we change the program to prevent it from halting then we no longer have the 'halting predictor'; we have some other program.
As a really super-simple example, consider a 'self halt/loop predictor' program, which is a program that predicts its own halting nature. It simply prints "I WILL HALT" and then goes to the HALT state. Now, if we change this program so that instead of going to the HALT state it goes to another section of code that loops, then it is obvious that by amending the program we have changed its functionality. The original functionality of the 'self halt/loop predictor' no longer exists inside the updated program. I claim that the same argument can be applied to a universal halt-or-loop predictor program, which might use the HALT state as an important integral part of its functionality. In order to use its functionality within another program we would need to alter it in such a way that it would no longer be the same functionality. The logic of the halting problem proof goes like this:

    If functionality H could exist, then it would be possible to construct a program X with the following nature:
        "If I loop then I will not loop, and if I don't loop then I will loop."
    Therefore functionality H can't exist.

This shows that the halting problem proof is a proof by contradiction, which means that it highlights that a problem exists in our logic somewhere. We then make a best guess at what that problem is, but this best guess might be wrong. And so all proofs by contradiction should be considered 'best guesses' rather than 'undeniable truths'. It occurs to me that the premise described in the first line ignores the fact that functionality H cannot be included in program X without changing its halting nature. Therefore perhaps the argument should go like this:

    If we could include the functionality of H within X, then it would be possible to construct a program X with the following nature:
        "If I loop then I will not loop, and if I don't loop then I will loop."
    Therefore functionality H can't be faithfully reproduced in X.
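As a concrete illustration of the 'self halt/loop predictor' and its amended version, here is a minimal Python sketch (function names are mine, and the explicit halt is modelled as terminating the process):

```python
import sys

def self_predictor():
    # the 'self halt/loop predictor': prints its prediction and then
    # performs an explicit halt (modelled here as a process exit)
    print("I WILL HALT")
    sys.exit(0)

def amended_predictor():
    # the amended program: the jump to the HALT state has been replaced
    # by a jump to a looping section, so the original functionality of
    # the predictor no longer exists (never call this; it never returns)
    print("I WILL HALT")
    while True:
        pass
```

The two functions differ only at the final step, yet one halts and the other loops forever, which is the point being made above.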
@@KarmaPenyI agree with you that the undecidability of the halting problem does not extend to the undecidability of whether real world computers will ever stop calculating. In fact, it seems that it's trivial to decide whether real world computers will stop working. Given a real world computer, the solution is "yes, it will someday stop working", if nothing else then because of the heat death of the universe. I thought you were disputing the undecidability of halting for Turing Machines (i.e. by definition a mathematical idealization). For example, at 8:10 you say "even with Alan Turing's imagined computer, which we call a Turing Machine" etc. I disagree with you that the mathematical theorem about mathematical computers has no practical applications. It has plenty of practical applications, for example compiler writers sometimes use it to argue on its basis that a certain feature of their compiler isn't going to be possible to implement. The usefulness of the theorem to the real world is to show that if you're going to check whether a program halts, you will have to rely on something that Turing Machines don't have (e.g. a time limit), which is useful practical knowledge.
@tomekczajka You said: Given a real world computer, the solution is "yes, it will someday stop working", if nothing else then because of the heat death of the universe. The halting problem, as applied to real-world computers, is not about whether they will someday stop working. Ideally it should be about whether we can determine beforehand if the execution of a given algorithm will, given enough resources such as time and storage, eventually go into an inescapable loop (yes or no). Note that I say it "should be about" the question "does it loop (Y/N)?", whereas unfortunately it is usually expressed as "does it halt (Y/N)?" This creates the illusion that any given algorithm must either go into an inescapable loop or reach the end of its instructions, thereby allowing the further instructions of a subsequent algorithm to proceed. It hides the possibility that the algorithm might end via an instruction to STOP or HALT. You said: I thought you were disputing the undecidability of halting for Turing Machines (i.e. by definition a mathematical idealization). For example, at 8:10 you say "even with Alan Turing's imagined computer, which we call a Turing Machine" etc. I don't believe that the mathematical idealism of the imagined Turing Machine, such as the 'infinite' nature of the tape, is an essential requirement of the proof. Instead of having an infinite tape we could simply assume that we have as much tape as is required before the algorithm reaches an inescapable loop or otherwise finishes (via a HALT/STOP instruction, or by EXITing when it has reached the end of its 'states', where a 'state' = an instruction card = a section of code). And so I believe that we can do away with all the wishy-washy abstract mathematical concepts without changing the underlying arguments. In this sense you are right to think that I am contesting the original proof. You said: I disagree with you that the mathematical theorem about mathematical computers has no practical applications. 
It has plenty of practical applications, for example compiler writers sometimes use it to argue on its basis that a certain feature of their compiler isn't going to be possible to implement. I would say this is the opposite of being useful as it suggests that certain features would be impossible to implement whereas in actual fact they might well be possible. You said: The usefulness of the theorem to the real world is to show that if you're going to check whether a program halts, you will have to rely on something that Turing Machines don't have (e.g. a time limit), which is useful practical knowledge. The halting problem deliberately avoids the issue of how long tasks will take. It doesn't inform us about any aspect of how long anything will take. Such consideration is dealt with in the field of theoretical computer science called 'time complexity'.
@tomekczajka Furthermore, for an example of the confusing nature of what a 'state' is, I refer you back to my previous example of the functionality that might be contained on an instruction card. A 'state' refers to a particular 'card identifier', which might just be a number such as 1, 2, 3, and so on. But in this scenario the number 0 has a special meaning. It means "go to the halt state", which is problematic because if 'state' = 'instruction card' then we should have an instruction card containing the instruction "HALT". Conceptually we might think of this 'HALT' instruction card as being hard-coded in the machine. And so when we execute a program on a Turing Machine, if any of the instruction cards cause a jump to state 0, then the HALT state has been explicitly selected. However, if we just think of the instruction cards as being the only states, then we might misunderstand the nature of halting and claim that Turing Machines don't have an explicit HALT command. Indeed, Turing Machine algorithms can ONLY end by encountering an explicit HALT. This means that my definition of 'EXIT' cannot apply to a complete program, but it can apply to a small section of code within the program. The halting problem proof assumes that a program X can be constructed that contains, or somehow calls, the functionality of H. In other words, that H can exist as a section of code within X. But this is trivially impossible, because H must end with a HALT instruction, which forces the Turing Machine to HALT. If we alter H to prevent it from halting then we no longer have the true functionality of H; we have some different program that cannot be trusted to give the same result as H. 
For example, consider this logic, in which Card_0 represents the HALT instruction:

    Card_1: if Card_2 does not contain a 'goto Card_0' instruction
            then goto Card_1
            else goto Card_2
    Card_2: goto Card_0

The above logic will act in a completely different way if we change 'goto Card_0' on Card_2 to 'goto Card_3' (or whatever) and add more cards. Perhaps if we had not had this confusion about 'states', it would have been obvious all along that the halting problem proof was not valid.
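Here is a runnable Python sketch of that two-card example (names and representation are mine): cards are held as data so that Card_1 can inspect Card_2's target, and a step limit stands in for observing an unending loop.

```python
# Hypothetical sketch of the two-card machine above; Card_0 is the HALT.

def run_cards(cards, max_steps=100):
    card = 1
    for _ in range(max_steps):
        if card == 0:
            return "halted"              # Card_0: the hard-coded HALT
        card = cards[card](cards)
    return "looping"                     # step limit hit: an unending loop

def make_goto(target):
    # a card that just jumps to another card; its target is inspectable
    f = lambda cards: target
    f.target = target
    return f

def card_1(cards):
    # goto Card_2 only if Card_2 contains 'goto Card_0'; else loop here
    return 2 if cards[2].target == 0 else 1

print(run_cards({1: card_1, 2: make_goto(0)}))                   # halts
print(run_cards({1: card_1, 2: make_goto(3), 3: make_goto(2)}))  # loops on Card_1
```

Changing Card_2's target away from Card_0 flips the whole machine from halting to looping, which is the behaviour change being described.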
Many thanks to everyone that has taken the time to watch this video.
Eric Hehner is arguably the world's leading expert on the halting problem. I am very pleased to say that there is now a link to my video from Eric Hehner's page of halting problem links.
In one of Eric Hehner's many papers on the halting problem he poses the question of how we can know when a program, written in an undecipherable language, has terminated. He poses this thought experiment as a way to convince the reader that termination is not a visible event. He believes that we could have a model of computation in which a program explicitly stopping and a simple loop such as '10 goto 10' are effectively the same thing in terms of halting. We might infer from this that if we have a machine (or model of computing) which lacks an explicit HALT command (e.g. a machine 'shutdown' instruction), then we could possibly manufacture one by 'defining' a simple loop to be a HALT instruction.
However, personally I don't like this approach. We already have a widespread meaning for 'halting' that we would be abusing by 'defining' a loop to be a halt. Also, even if it is difficult to distinguish between the two scenarios from an external perspective, internally there will still be energy being used on continually going around the simple loop.
And so I prefer to imagine a computing machine as consisting of moving parts, and where halting means that the parts stop. We could open the lid of the machine and look inside to check if the machine has halted or not. If it was doing a simple loop then we would see movement and so I would not classify this as halting. Whereas if the program has encountered an instruction that says "stop now, don't execute any more instructions" then I would classify that as halting. The third option is that the program simply reaches a point where there are no further instructions. Some people believe that this should be called halting but I disagree. I call this 'exiting', or at best an implied halt. This is because control is passed back to the machine itself which might have been designed to halt in such a scenario, or it might have been designed to restart from the first line of code. In other words, whether or not it halts or loops would be machine-specific rather than program-specific.
Now if such a thing as a Turing machine could exist, and if it could do anything that a modern-day real world computer can do, then it would be able to do an explicit HALT. However, if it were to lack this capability, then it would be inferior to real world computers in many aspects, not least of which is that it would be defeated by the logic of the halting problem proof. And of course, it could not be considered to be the only model of computing that is required because it lacks this key feature (of an explicit halt) that is present in so many real world computers.
A so-called 'Turing Machine' is neither use nor ornament if it can't be used in relation to real world computation tasks. It is often claimed that Turing's proof of undecidability tells us that it must be impossible to develop a halt/loop decider program on any real world computer. I hope this video has cast some doubt on the validity of that claim.
So the specification of the halting problem is flawed if it does not cater for machine-level halt commands because many real computers can perform this type of operation. Therefore the objection that this 'explicit halt' argument does not follow the specification is a moot point. Nobody should be following the specification if the specification is flawed. A real computer can write its output to a text file (or a printer or whatever) and then halt the machine.
What matters is that H produces the correct prediction, not whether or not it is capable of being called by another program or whether or not it can return a true/false value to a calling program.
Note that we can have very low-level languages that are so-called 'Turing complete' which do not inherently contain any mechanisms for defining procedures or functions, or for returning values from them. These days we take it for granted that we can call functions and get values returned, but in relation to the simplest of instruction sets these are high-level mechanisms that might involve moving data around in various registers, and to and from designated areas in memory that have been given specific names such as stacks and heaps. I expect that a machine-level halt command would be a very simple operation in comparison to these complicated mechanisms.
Should the real program H ever exist, another program would be free to initiate it (or 'call' it). However, this would present a problem if the initiating program wanted to do something else afterwards, because the real program H will end with a machine-level halt (or at least it will if its input program tries to execute equivalent logic to real H).
So suppose program Z invokes H(X, X). Given that X tries to execute the logic of the real H, H(X, X) will decide "It will halt" and then perform a machine-level halt. There is nothing to stop us from starting the machine up again, feeding the output produced by H into the functionality of Z, and executing the rest of Z's logic so that it uses the decision made by the real H.
Of course it would be very inconvenient to have to re-start the machine every time after H has run, and even more inconvenient to have to feed its result into another program rather than it all being done automatically. But H only needs to do a machine-level halt if its input tries to execute the logic of H. Also, the question that the halting problem asks, i.e. "does it halt", could be considered to be a very bad question if the type of halt it refers to is only the implied type of halt.
Note that this is similar to the Barber paradox in that it appears that the two options on offer are mutually exclusive and cover all options, when in reality they don't. The barber himself is not catered for in the Barber paradox just as an explicit halt is not catered for in the halting problem logic. This is why the halting problem question is a bad question.
Some people might still claim that the halting problem remains undecidable even with a real world machine equipped with the 'shutdown' instruction because, by definition, the program H could never use this instruction. However, I believe this is like saying it is impossible to switch on a real world computer because someone, a long time ago, wrote a specification that says you are not allowed to press a button that would switch on an imaginary computer. In other words, in my opinion it is not good logic.
Great video, glad to see you are back, I really like the idea of the intelligent alien, it is a great way to eliminate bias.
I question the big bang, I don't feel the evidence is strong enough to support the conclusion, So I say could you convince an intelligent alien based on observations alone that the big bang (or anything like it) happened?
it makes you start with evidence and first principles as opposed to working backwards from a conclusion.
Many thanks, & I think you have a valid point about the big bang.
The Big Bang theory, if you look into it, is actually an incoherently phrased piece of nonsense. Not even wrong, just a word game. The main equivocation is on the terms "universe" and "space."
Read Konstantin Antonopoulos's article A Bang into Nowhere for a tour of the semantic problems with the notion of the Big Bang.
@@ThePallidor I have to agree completely with you, and I have looked into big bang science for many years. (from a huge supporter of the BB models).
But the more you look into it, the less sense it makes; redshift and the CMB are all they have, and nothing in redshift or the CMB indicates a big bang.
That the CMB is a 'picture of the start of the universe permanently painted in the sky with light' simply does not work according to anything we know about science or physics. Expanding space makes no sense.
Redshift is clearly gravitational shift; that light 'expands' as it moves through space is not supported by any known physics, and we can test gravitational shift in a high-rise building.
BB models are swiss cheese of models and it's all holes. An eternal and dynamic but steady state universe makes far more sense.
It amazes me that mainstream cosmology is so biased towards BB models; I guess that is where the grant money is. I prefer to know the truth.
Thanks for your response.
I imagined that we made some new programs:
Program L - Loops for any given input
Program D - Does whatever its input says. If it receives "loop", it loops, and if it receives "halt", it halts.
Real H - As previously defined, it decides whether a program, given a certain input, will halt or loop. If the program given to Real H executes an accurate version of the functionality of Real H, it outputs "halt" and performs a machine-level halt (shut-down).
Now I made Program Q, which takes any input i. Q works as follows:
- Input i and Program L are fed into Real H
- The output of Real H is fed into Program D
In Program Q, the Real H will always output "loop", as Program L loops for any given input. This output will then be fed into Program D, which does whatever the output says. Since the output of Real H will always be "loop", Program D will always loop, and thus Program Q must always loop.
Now I take Program Q, and any Input i, and feed that into a stand-alone Real H. Real H "simulates" Program Q, and observes that it executes an accurate version of its own functionality (Program Q contains Real H). So Real H is forced to output "halt". But we know that Program Q always loops, so Real H was wrong.
There is no issue here with the H in Q being fake, as the H in Q never analyzes a program that contains yet another H - it only analyzes Program L.
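For what it's worth, the Program Q construction above can be sketched in Python. The stub below stands in for Real H and is hard-coded for the single case Q needs; it is an illustration of the construction, not an actual halting decider (all names are the commenter's, the code is mine):

```python
def program_l(i):
    while True:          # Program L: loops for any given input
        pass

def real_h_stub(prog, inp):
    # stand-in oracle: a correct H would answer "loop" for (L, i),
    # since Program L loops for any input; nothing more is implemented
    if prog is program_l:
        return "loop"
    raise NotImplementedError("a general H is exactly what is under dispute")

def program_d(verdict):
    # Program D: does whatever its input says; we return a marker
    # instead of actually looping so that the sketch stays runnable
    return "halts" if verdict == "halt" else "loops forever"

def program_q(i):
    # Q feeds (L, i) into H, then feeds H's output into D
    return program_d(real_h_stub(program_l, i))
```

Because the inner H only ever analyses Program L, Q always receives "loop" and therefore always loops, as the comment argues.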
I'm barely a dabbler in these topics, so maybe I'm missing something fundamental in how you defined Real H. To me, however, it seems like you can't buttress the definition of H in such a way that will eliminate all logical contradictions; new inconsistencies will pop up no matter what additional rules you pile on the system.
@litteral7179 I find your suggestion very interesting. There might well be a loophole if Real H does not always end with a proper halt (like a shutdown). However, if Real H does always end with a proper halt, then you cannot construct your Program Q to operate in the way that you have described. If Real H always halts properly, then since Program Q invokes Real H, Program Q will always halt.
This would mean that Real H could not be used by any calling program. Thank you for a very thought-provoking comment :)
You may be interested that I aim to release another video on this topic in the near future. I was never that happy with my assumption that Real H could somehow determine if it was contained within the input program, because many people might consider this to be a suspect claim. So in my next video I intend to go even further by outlining a simple scenario where it is (for all practical purposes) beyond any doubt that Real H can be constructed and will always be correct.
I will also go through Turing's original paper, the Wikipedia descriptions, and how all of this is associated with the mathematical concept of real numbers and the weird notion of infinities of different sizes.
In your reasoning, you propose to distinguish between 2 different types of "halting" for a Turing machine or program (to put it simply: an "explicit" halt, like calling a "shutdown" OS function, and an "implicit" halt, like reaching the end of a script).
However, in the context of Turing machines, it seems that this distinction does not exist, and I would need further explanation. My understanding is that a Turing machine will either loop forever, or halt because there is no transition defined from its current state for the current read symbol. In the latter case, we say that the machine "accepts" if its final state was defined as accepting; otherwise the machine "rejects".
Can you precisely define, for Turing machines, the difference between your proposed 2 different types of "halting"? Let's say I executed your "Real Program H" (from 12:58): how would I know (as the caller) whether the Turing machine halted "explicitly" or "implicitly"?
Regards
In one of Eric Hehner's many papers on the halting problem he poses the question of how we can know when a program, written in an undecipherable language, has terminated. He poses this thought experiment as a way to convince the reader that termination is not a visible event. He believes that we could have a model of computation in which a program explicitly stopping and a simple loop such as '10 goto 10' are effectively the same thing in terms of halting. To relate this to your question, it suggests that if we have a machine (or model of computing) which lacks an explicit HALT command, then we could possibly manufacture one by 'defining' a simple loop to be a HALT instruction.
However, personally I don't like this approach. We already have a widespread meaning for 'halting' that we would be abusing by 'defining' a loop to be a halt. Also, even if it is difficult to distinguish between the two scenarios from an external perspective, internally there will still be energy being used on continually going around the simple loop.
And so I prefer to imagine a computing machine as consisting of moving parts, and where halting means that the parts stop. We could open the lid of the machine and look inside to check if the machine has halted or not. If it was doing a simple loop then we would see movement and so I would not classify this as halting. Whereas if the program has encountered an instruction that says "stop now, don't execute any more instructions" then I would classify that as halting. The third option is that the program simply reaches a point where there are no further instructions. Some people believe that this should be called halting but I disagree. I call this 'exiting', or at best an implied halt.
Now if such a thing as a Turing machine could exist, and if it could do anything that a modern-day real world computer can do, then it would be able to do an explicit HALT. However, if it were to lack this capability, as you suggest, then it would be inferior to real world computers in many aspects, not least of which is that it would be defeated by the logic of the halting problem proof. And of course, it could not be considered to be the only model of computing that is required because it lacks this key feature (of an explicit halt) that is present in so many real world computers.
A so-called 'Turing Machine' is neither use nor ornament if it can't be used in relation to real world computation tasks. It is often claimed that Turing's proof of undecidability tells us that it must be impossible to develop a halt/loop decider program on any real world computer. Hopefully this video has cast some doubt on the validity of that claim.
@@KarmaPeny
I see, so we're dealing with a "real-world" computer, supposedly more powerful than a strict Turing machine because the computer is equipped with a "shutdown" instruction, which allows the program to stop it.
Now I think there is a flaw in the logic shown in the video. Your "Real Program H" (from 12:58) is incorrect because it does not follow the specification of the program H: it does not decide the halting problem.
The program H must by definition decide the halting problem, given any program/input pair:
- it returns 'True' if the given program will halt for the given input
- else it returns 'False'
However, your "Real Program H" does not follow this specification: it has a different behavior in some cases (for example, when we feed it the program X), where it will shutdown the machine instead of deciding the halting problem.
In fact, this change in the specification is truly problematic if we want to use the program H without trying to deceive it. Let's say we have a new "program Z", which simply wants to call H(X, X) (meaning asking the program H if the program X will halt if we execute it with itself as input), and then perform some other tasks. Even though the program Z was not trying to deceive the program H, your "Real Program H" would actually shutdown the machine, breaking the "contract" given in the actual specification of H.
With this in mind, I conclude that the halting problem is still undecidable even with a real-world machine equipped with the "shutdown" instruction, because by definition the program H could never use this instruction. What do you think?
The specification of the halting problem is flawed if it does not cater for machine-level halt commands because many real computers can perform this type of operation. Therefore your objection that I am not following the specification is a moot point. Nobody should be following the specification if the specification is flawed. A real computer can write its output to a text file (or a printer or whatever) and then halt the machine.
What matters is that H produces the correct prediction, not whether or not it is capable of being called by another program or whether or not it can return a true/false value to a calling program.
Note that we can have very low-level languages that are so-called 'Turing complete' which do not inherently contain any mechanisms for defining procedures or functions, or for returning values from them. These days we take it for granted that we can call functions and get values returned, but in relation to the simplest of instruction sets these are high-level mechanisms that might involve moving data around in various registers, and to and from designated areas in memory that have been given specific names such as stacks and heaps. I expect that a machine-level halt command would be a very simple operation in comparison to these complicated mechanisms.
You said: Real Program H ... has a different behaviour in some cases (for example, when we feed it the program X), where it will shutdown the machine instead of deciding the halting problem.
This scenario is pretty much the main topic of the video. The real H will say "It will halt" (which is its decision) and then it will do a machine-level halt. Please re-watch the video if you are still unclear on this point.
You said: this change in the specification is truly problematic if we want to use the program H without trying to deceive it.
I disagree, you can freely initiate the real program H (should it ever exist) whether you are trying to deceive it or not. It is only a problem if you want to do something else afterwards, because the real program H will end with a machine-level halt.
You said: Even though the program Z was not trying to deceive the program H, your "Real Program H" would actually shutdown the machine, breaking the "contract" given in the actual specification of H
Here program Z is invoking H(X, X), and if the code of X would, when executed, try to invoke the real H (or functionality equivalent to the real H), then H(X, X) will decide "It will halt" and then perform a machine-level halt. But there is nothing to stop you from starting the machine up again, feeding the output produced by H into your functionality Z, and executing the rest of the logic of Z so that it uses the decision made by the real H.
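To make that restart-and-resume workflow concrete, here is a hedged Python sketch (the file name and function names are mine): the machine-level halt is modelled as a process exit, and H's decision survives the "restart" because it was written to a file before halting.

```python
import os
import sys
import tempfile

# hypothetical location where H records its decision before halting
VERDICT_FILE = os.path.join(tempfile.gettempdir(), "h_verdict.txt")

def real_h(prog_name, inp_name):
    # assumption: H has already made its decision; it records the decision
    # and then "halts the machine" (modelled here as terminating the
    # process) instead of returning a value to any caller
    with open(VERDICT_FILE, "w") as f:
        f.write("It will halt")
    sys.exit(0)   # stands in for a machine-level halt / shutdown

def program_z():
    # after the machine is restarted, Z reads H's recorded decision
    # and can use it in the rest of its logic
    with open(VERDICT_FILE) as f:
        return f.read()
```

The design choice here is simply that H's output channel is persistent storage rather than a return value, which is what the comment above describes.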
Of course it would be very inconvenient to have to restart the machine every time after H has run, and even more inconvenient to have to feed its result into another program rather than it all being done automatically. But you have to try to see the bigger picture, which is that the question that the halting problem asks, i.e. "does it halt?", is a very bad question. It is highly unlikely that this is really the question you should be trying to answer in the first place, because it is such a problematic question. Please see my other halting problem video ("The Problem with the Halting Problem") which covers this very topic.
Note, this is similar to the Barber paradox in that it appears that the two options are mutually exclusive and cover all options, when they don't (the barber himself is not catered for just as an explicit halt is not catered for in the halting problem logic). This is why the halting problem question is a bad question.
You said: I conclude that the halting problem is still undecidable even with a real-world machine equipped with the "shutdown" instruction, because by definition the program H could never use this instruction. What do you think?
I think this is like saying it is impossible to switch on a real world computer because someone, a long time ago, wrote a specification that says you are not allowed to press a button that would switch on an imaginary computer.
I came across your channel today because I was looking for someone who thinks like me rather than just following the herd. Since I was 16 years old I have realised, through advanced study of real analysis, that the theory of infinite series, and "infinity" itself, is ill-defined and wrongly applied throughout mathematical development. Expressions like 0.999... were never definitively equal to 1, and the supposed arguments are all flawed and inconsistent. I am a 19-year-old Brazilian student, and even today, with my current knowledge of mathematics, I still find such inconsistencies being repeated over and over again in various results. I loved your channel, and I'm sorry if I made any mistakes in writing; my English is basic.
Two themes keep coming up in the error-ridden fields of math/logic, and you dismantled both perfectly again here. Since that job is done, I'll instead describe the general form of the two and then try to shed light on them from analytical, psychological, sociocultural, and practical angles:
Theme 1) A vague term leads to an equivocation between two or more interpretations (e.g., of the term "halt"), which leads to an invalid proof that appears watertight in terms of propositional logic.
Analytically, we can even say it *is* watertight in terms of propositional logic because propositional logic is built solely of words. Relentlessly perfect logical rigor with lackadaisical semantic rigor, one might say.
Psychologically, this creates a perfect trap for wordcels, those who see the world as solely made of words.
A magician would call it misdirection: focus on the sentential manipulations of the propositions (perfectly done propositional logic), not the definitions of the key terms (ignore the sloppy semantics). However, this magic trick is self-working and doesn't even require the presenter to be aware of the trick; it is generally as much a trick on oneself as on others.
Socioculturally, we can say that tribal or cultural logic is logic where what counts as valid inference is enforced by the thinker (thus, if they are smart, the reasoning is verbally flawless), but where the semantic rules are not enforced by the thinker's own relentless circumspection and are instead left to the tribe/culture (in this case the culture of math/compsci departments). If the thought to question the ambiguous term starts to bubble up in the thinker's mind, they are subtly cajoled against pursuing it, so that certain tribally/culturally beneficial equivocations are systematically preserved. To question them would be gadflyish, "wasting time on frivolities that everyone already knows the meaning of."
In terms of practicality, keeping careful track of levels of analysis, or zoom levels as Steve Patterson calls them, is an indispensable rationality skill. The error pattern that slips by thinkers less astute than yourself in this regard is "taking a term of everyday talk into a deeper investigation without ensuring it is rigorously defined enough for the task."
The ambiguous term "halt" is fine for most other analyses in computer science, but when commencing this deeper investigation, in which "halt" becomes the key term on which everything hinges, Turing and the rest were remiss in not doing the standard "zoom check": the check that a term carried over from a less rigorous context is up to the task at this closer level of inspection. In fact the term "halt" clearly needed refinement, as is the norm when carrying a trusty old term into a deeper investigation than any previously attempted. In this case, as is typical, it needed a disambiguation between two subtly (and up until then unimportantly) different senses, which you named here "halt" and "exit."
Theme 2) This personal and cultural self-fooling process frequently involves creating (usually unwittingly) sufficient complexity so that the people involved cannot detect the error.
Tribally, once this is decided by "the experts" almost no one else will delve into the matter with much of a critical eye, because they will be too preoccupied with trying to understand the fascinating conundrum, too concerned with just trying to keep up with the "greats." They feel they ought to know their role in the tribal division of labor and stay in their lane.
Psychologically they feel on the back foot and intimidated by what appears dodgy but in fact, they are assured, only appears dodgy because it is so profound and because they are too small of mind to glimpse the profundity that the giants of the field do.
Were they to undertake the kind of analysis necessary to explode the fallacy in almost diagrammatic form, as you have done here, they would be plagued every step of the way by the feeling that they are pretending to greatness, with a rapidly growing pit in their stomach as they check the "logic" every which way and find that it checks out. This further ensures that they never regain the subtlety of mind to see that they have been tricked at the much more basic level of the key term itself.
The tribe extends beyond the math/compsci/logic departments by means of popular science books, and we find that almost everyone exposed to the problem is either inducted into the tribe as a fan or simply does not care about the apparent paradox because they're in applied compsci or applied math or outside the industry entirely.
Truly rare are those who take an interest in these conundrums yet somehow manage to avoid the sociocultural, psychological, and analytic traps that ensnare other minds, and who then go on to present their findings gently to the public. Kudos, Karma Peny, you are one of the few leading a small but growing band of thinkers out of the ashen wastelands of the current Intellectual Dark Age.
Other arguments that I believe you haven't covered and that follow this same pattern include Nick Bostrom's simulation argument, which is particularly topical as his flavor of "superintelligence" doomerism is booming today without the benefit of its key term, "intelligence", being clearly defined. And of course there are Gödel's incompleteness theorems and many other pieces of sophisticated wordcelery in math, physics, and analytic philosophy in general (which is treated as cultural philosophy, going back to all I said here, but I'll talk about that another time as this comment is getting long).
Great comment J J, many thanks.
Ahhhh yes!! Finally someone addresses this nonsense.
Thanks J J, I'm glad you enjoyed it.
Turing Machines don't have transitions into non-existent states, by definition. So your definition of EXIT doesn't make sense for Turing Machines.
Even if you modify the definition of Turing Machines with an extra "EXIT" state, the same construction of the X machine still works and shows that it is impossible to build H such that H detects "HALT or EXIT".
@tomekczajka Thank you for your comments. I'll provide more details on my answers, but first I'll give you the short versions.
You say that my definition of 'EXIT' makes no sense for Turing Machines. But my argument applies to any real-world computer, including any real-world implementation of a Turing Machine (such as the one I will describe later). I see no point in any discussion about a fictional Turing Machine consisting of nothing more than abstract mathematical concepts which might not have any validity in the real world.
Next you claim that even with an 'EXIT' state, we could prove that it is impossible to build H such that H detects 'HALT' or 'EXIT'. But the halting problem proof relies on a 'decision problem', which is a problem that has only the mutually exclusive possibilities of 'yes' or 'no'. It asks "does it halt (y/n)?" but then the argument of the proof assumes that H can 'exit' and that other instructions can be performed afterwards. It allows H to print its Y/N decision but it does not allow H to halt. Your suggested proof dealing with 'HALT' or 'EXIT' does not meet the criteria of a decision problem because, without the 'HALT' possibility, it has not covered all the options.
Now for more details...
The terminology used by Turing might have created a smokescreen that obscured the issues and made it difficult for other people to identify them.
In particular we have the concepts of 'state' and 'machine'. We all know about the front-end of the imaginary Turing machine with its 'infinite' tape and read-write head, but at the back-end we have the instruction cards which, in today's terminology, might be called 'program code'. A 'state' was merely the number/identifier of one of these cards, or else it might be zero for the HALT state. For a binary tape, each of the cards might contain a block of functionality such as the following:
Instruction Card #1
    current_char = READ_TAPE()
    IF (current_char == 1) THEN
        WRITE 1 {or 0}
        MOVE LEFT {or RIGHT}
        GOTO #whatever {i.e. goto another instruction card}
    ELSE
        WRITE 1 {or 0}
        MOVE LEFT {or RIGHT}
        GOTO #whatever {i.e. goto another instruction card}
    END IF
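The card model described above can be sketched as a tiny simulator, say in Python (the `run` function, the dictionary encoding of the cards, and the one-card example program are all my own illustration, not Turing's notation):

```python
# Minimal sketch of the 'instruction card' model described above.
# Each card maps the symbol under the head to (write, move, next_card).
# Card 0 is reserved as the explicit HALT state.

def run(cards, tape, head=0, card=1, max_steps=1000):
    """Run the card program; return (tape, steps) on HALT, or None."""
    tape = dict(enumerate(tape))          # sparse tape, blank cells read as 0
    for step in range(max_steps):
        if card == 0:                     # the hard-coded HALT state
            return tape, step
        write, move, nxt = cards[card][tape.get(head, 0)]
        tape[head] = write
        head += 1 if move == 'R' else -1
        card = nxt
    return None                           # step limit hit: no verdict

# Example: one card that writes 1 and moves right until it first
# reads a 1, at which point it jumps to the HALT state (card 0).
cards = {1: {0: (1, 'R', 1),              # on 0: write 1, move right, stay on card 1
             1: (1, 'R', 0)}}             # on 1: write 1, move right, goto HALT
result = run(cards, [0, 0, 1])
```

Here card 0 plays the role of the machine's hard-coded HALT state, and the step limit exists only so that the sketch itself always terminates.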
And so we can see the similarities to modern day computers and programming languages. But this was all devised in the days before computers were invented, and the terms 'state' and 'machine' relate to the concept of finite-state machines. In order to relate these things to reality I prefer to say 'instruction card' instead of 'state' and I prefer to say 'program' instead of 'machine'.
Finite-state machines are often imagined as diagrams with the states as nodes and with arrows to show the direction of processing (where a change of state is called a 'transition'). A normal state might appear as a label inside a circle whereas the halt state might appear with an extra circle around it.
As such, it might appear reasonable to assume that we could easily incorporate one of these 'machines' inside another machine and that it would not affect the functionality of the first machine if we just remove the extra circle that identifies its 'final state'. Indeed, the argument that we are just changing the diagram to reflect that it is no longer the final state might sound perfectly reasonable. It is so subtle that it obscures the fact that we could be altering the key functionality of the first machine.
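To make the diagram picture concrete, a finite-state machine can be written directly as a transition table, and the 'double circle' final state is then nothing more than membership in a set (this two-state parity checker and all its names are purely my own illustration):

```python
# Sketch of a finite-state machine as a transition table.
# States are the diagram's nodes; each (state, symbol) arrow gives
# the next state; the 'double circle' final state is set membership.
transitions = {('even', '0'): 'even', ('even', '1'): 'odd',
               ('odd', '0'): 'odd', ('odd', '1'): 'even'}
final_states = {'even'}   # the state drawn with the extra circle

def accepts(string, state='even'):
    """Return True if the machine ends in a final state."""
    for ch in string:
        state = transitions[(state, ch)]
    return state in final_states
```

Deleting 'even' from final_states, as one might when embedding this machine inside another, visibly changes what the machine accepts, which is the kind of subtle functional change described above.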
Furthermore, with the apparent 'abstract' nature of finite-state machines and Turing Machines, people might imagine that programs can somehow execute all by themselves. They might believe that there is no need to include a device that processes the instructions. But program instructions don't just work by magic. And so when we relate the halting problem proof to a real-world computing scenario, we should soon realise that there is a clear distinction between the program and the machine upon which the program runs. We could then get further clarity on what the HALT instruction (or 'state') actually does.
We all know that 'halt' means 'stop' but in a real-world computing scenario are we talking about stopping the machine, stopping the program, or even just exiting from the program (& thus implying that processing stops)? Here we have three different possible interpretations of what 'halt' might mean. The halting problem specification and proof seem to assume that 'halt' only means the third of these options.
And so the halting problem specification only allows the choice between the two options of 'exit the program & allow further processing' (which it calls halting) or 'go into an unending loop'. These two options are not exhaustive, because there are other types of halting that are not catered for, such as 'optionally produce some output and then stop the machine'.
If the specification of the halting problem allowed for all possible types of halting then, as explained in the video, the proof of undecidability would no longer work.
Now for a summary of my main argument...
The halting problem proof makes the assumption that if a 'halting predictor' could exist then it could be implemented as some kind of subroutine or called function within another program, which could take its prediction and use it to contradict that prediction. I contest this assumption.
If a 'halting predictor' could exist then it could print its prediction and then HALT. By going to the HALT state it would prevent any calling program from contradicting its prediction. I argue that if we change the program to prevent it from halting then we no longer have the 'halting predictor', we have some other program.
As a really super-simple example, consider a 'self halt/loop predictor' program which is a program that predicts its own halting nature. It simply prints "I WILL HALT" and then goes to the HALT state.
Now, if we change this program so that instead of going to the HALT state it goes to another section of code that loops, then it is obvious that by amending the program we have changed its functionality. The original functionality of the 'self halt/loop predictor' no longer exists inside the updated program.
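The two versions in this example can be sketched side by side in Python (the function names, the use of a return value to stand in for 'going to the HALT state', and the step cap are my own illustrative choices):

```python
# Original 'self halt/loop predictor': prints its prediction, then
# goes to the HALT state (modelled here as returning "HALT").
def self_predictor():
    print("I WILL HALT")
    return "HALT"

# Amended version: identical up to the final step, where the jump to
# the HALT state has been replaced by a loop (capped here only so
# that this sketch itself terminates).
def amended_predictor(max_steps=10):
    print("I WILL HALT")
    steps = 0
    while True:
        steps += 1
        if steps >= max_steps:
            return "STILL LOOPING"
```

The amended program still prints the same text, but its halting behaviour, the very thing the original was predicting, has changed.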
I claim that the same argument can be applied to a universal halt-or-loop predictor program, for which it might use the HALT state as an important integral part of its functionality. In order to use its functionality within another program we would need to alter it in such a way that it would no longer be the same functionality.
The logic of the halting problem proof goes like this:
If 'functionality H could exist' then
it would be possible to construct program X with the following nature:
If I loop then I will not loop
and if I don't loop then I will loop
Therefore 'functionality H' can't exist
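For reference, the construction this proof relies on is usually sketched along these lines (a toy Python version; since a real H cannot be exhibited, the `h` argument here is a hypothetical stub of my own):

```python
# Toy sketch of the construction used in the halting problem proof.
# 'h' stands for the assumed halt-decider, used as a callable
# subroutine, which is precisely the assumption contested above.
def make_x(h):
    def x():
        if h(x):          # ask: "will x halt?"
            while True:   # if the answer is "halts", loop forever
                pass
        else:
            return        # if the answer is "loops", halt immediately
    return x

# With any stub decider, x does the opposite of what the stub predicts:
x = make_x(lambda prog: False)   # stub always answers "will loop"
```

Calling x() here returns immediately, contradicting the stub's prediction; a stub that always answered "will halt" would make x() loop forever instead.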
This shows that the halting problem proof is a proof by contradiction, which means that it highlights that a problem exists in our logic somewhere. We then make a best guess at what that problem is, but this best guess might be wrong. And so all proofs 'by contradiction' should be considered to be 'best guesses' rather than 'undeniable truths'.
It occurs to me that the premise described in the first line ignores the fact that functionality H cannot be included in program X without changing its halting nature. Therefore perhaps the argument should go like this:
If we could include the functionality of H within X then
it would be possible to construct program X with the following nature:
If I loop then I will not loop
and if I don't loop then I will loop
Therefore 'functionality H' can't be faithfully reproduced in X
@KarmaPeny I agree with you that the undecidability of the halting problem does not extend to the undecidability of whether real world computers will ever stop calculating.
In fact, it seems that it's trivial to decide whether real world computers will stop working. Given a real world computer, the solution is "yes, it will someday stop working", if nothing else then because of the heat death of the universe.
I thought you were disputing the undecidability of halting for Turing Machines (i.e. by definition a mathematical idealization). For example, at 8:10 you say "even with Alan Turing's imagined computer, which we call a Turing Machine" etc.
I disagree with you that the mathematical theorem about mathematical computers has no practical applications. It has plenty of practical applications; for example, compiler writers sometimes use it to argue that a certain feature of their compiler isn't going to be possible to implement. The usefulness of the theorem to the real world is to show that if you're going to check whether a program halts, you will have to rely on something that Turing Machines don't have (e.g. a time limit), which is useful practical knowledge.
@tomekczajka
You said:
Given a real world computer, the solution is "yes, it will someday stop working", if nothing else then because of the heat death of the universe.
The halting problem, as applied to real-world computers, is not about whether they will someday stop working. Ideally it should be about whether we can determine beforehand if the execution of a given algorithm will, given enough resources such as time and storage, eventually go into an inescapable loop (yes or no).
Note that I say it "should be about" the question "does it loop (Y/N)?" whereas unfortunately it is usually expressed as "does it halt (Y/N)?". This creates the illusion that any given algorithm must either go into an inescapable loop or reach the end of its instructions, thereby allowing the further instructions of a subsequent algorithm to proceed. It hides the possibility that the algorithm might end via an instruction to STOP or HALT.
You said:
I thought you were disputing the undecidability of halting for Turing Machines (i.e. by definition a mathematical idealization). For example, at 8:10 you say "even with Alan Turing's imagined computer, which we call a Turing Machine" etc.
I don't believe that the mathematical idealism of the imagined Turing Machine, such as the 'infinite' nature of the tape, is an essential requirement of the proof. Instead of having an infinite tape, we could simply assume that we have as much tape as is required before the algorithm reaches an inescapable loop or otherwise finishes (via a HALT/STOP instruction, or by EXITing because it has reached the end of its 'states', where a 'state' = an instruction card = a section of code).
And so I believe that we can do away with all the wishy-washy abstract mathematical concepts without changing the underlying arguments. In this sense you are right to think that I am contesting the original proof.
You said:
I disagree with you that the mathematical theorem about mathematical computers has no practical applications. It has plenty of practical applications; for example, compiler writers sometimes use it to argue that a certain feature of their compiler isn't going to be possible to implement.
I would say this is the opposite of being useful as it suggests that certain features would be impossible to implement whereas in actual fact they might well be possible.
You said:
The usefulness of the theorem to the real world is to show that if you're going to check whether a program halts, you will have to rely on something that Turing Machines don't have (e.g. a time limit), which is useful practical knowledge.
The halting problem deliberately avoids the issue of how long tasks will take. It doesn't inform us about any aspect of how long anything will take. Such consideration is dealt with in the field of theoretical computer science called 'time complexity'.
@tomekczajka
Furthermore, for an example of the confusing nature of what a 'state' is I refer you back to my previous example of the functionality that might be contained on an instruction card. A 'state' refers to a particular 'card identifier' which might just be a number such as 1, 2, 3, and so on.
But in this scenario the number 0 has a special meaning. It means "go to the halt state" which is problematic because if 'state' = 'instruction card' then we should have an instruction card containing the instruction "HALT". Conceptually we might think of this 'HALT' instruction card as being hard-coded in the machine.
And so when we execute a program on a Turing Machine, if any of the instruction cards cause a jump to state 0, then the HALT state has been explicitly selected. However, if we just think of the instruction cards as being the only states then we might misunderstand the nature of halting and we might claim that Turing Machines don't have an explicit HALT command.
Indeed, Turing Machine algorithms can ONLY end by encountering an explicit HALT. This means that my definition of 'EXIT' cannot apply to a complete program, but it can apply to a small section of code within the program. The halting problem proof assumes that a program X can be constructed that contains, or somehow calls, the functionality of H. In other words, it assumes that H can exist as a section of code within X. But this is trivially impossible because H must end with a HALT instruction, which forces the Turing Machine to HALT. If we alter H to prevent it from halting then we no longer have the true functionality of H; we have some different program that cannot be trusted to give the same result as H. For example, consider this logic, in which 'Card_0' represents the HALT instruction:
Card_1: if Card_2 does not contain a 'goto Card_0' instruction then goto Card_1 else goto Card_2
Card_2: goto Card_0
The above logic will act in a completely different way if we change 'goto Card_0' on Card_2 to 'goto Card_3' or whatever, and we add more cards.
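This two-card example can be simulated directly, assuming a toy encoding in which Card_2 is stored as a string that Card_1 can inspect (the encoding and the function name are entirely my own):

```python
# Toy simulation of the Card_1 / Card_2 logic above.
# Card_2's instruction is kept as a string so that Card_1 can inspect it.
def run_cards(cards, max_steps=100):
    card = 1
    for _ in range(max_steps):
        if card == 0:
            return "HALT"                 # Card_0: the HALT instruction
        if card == 1:
            # Card_1: if Card_2 does not contain 'goto Card_0',
            # goto Card_1 (loop), else goto Card_2.
            card = 2 if 'goto Card_0' in cards[2] else 1
        elif card == 2:
            # Card_2 executes whatever instruction its string holds.
            card = 0 if cards[2] == 'goto Card_0' else 3
        elif card == 3:
            card = 3                      # an added card that simply loops
    return "STILL RUNNING"

original = {2: 'goto Card_0'}             # halts via Card_2 -> Card_0
altered  = {2: 'goto Card_3'}             # Card_1 now loops forever
```

Changing the single instruction on Card_2 flips the program from halting to looping, which is the point being made above.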
Perhaps if we had not had this confusion about 'states' then it would have been obvious all along that the halting problem proof was not valid.