Karma Peny
United Kingdom
Joined 3 Oct 2014
Karma Peny is an anagram of my (very common) real name, Mark Payne. My aim is to challenge outrageous claims where I consider the logic behind the claims to be flawed.
Halting, Gödel Incompleteness, Cantor's Infinites, Real Numbers & Infinity: Unravelled & Debunked?
The unravelling of the Halting Problem takes us on a journey of discovery through the history of Real Numbers. We hit absurdities right from the start and they just keep coming. On this journey we encounter fundamental problems with the notion of 'infinitely many parts', Cantor's Diagonal Argument (with his infinities of different sizes), Richard's Paradox, Gödel Incompleteness, and Alan Turing's 1936 proof of undecidability.
510 views
Videos
What Computers Can't Do ... or Can They? (Tweaked re-release of my latest halting problem video)
707 views · 1 year ago
This video explains the halting problem in simple terms, and the proof of undecidability is contested by an alien robot. Who do you think is right, your fellow humans or Tara the alien robot? It covers how David Hilbert's "is mathematics decidable?" challenge was met by Kurt Gödel, Alonzo Church and Alan Turing. Turing's proof relied on his concept of a 'Turing machine' which is now considered ...
The Halting Problem Explained & Contested by an Alien Robot
988 views · 1 year ago
This video explains the halting problem in simple terms, and the proof of undecidability is contested by an alien robot. Who do you think is right, your fellow humans or Tara the alien robot? It covers how David Hilbert's "is mathematics decidable?" challenge was met by Kurt Gödel, Alonzo Church and Alan Turing. Turing's proof relied on his concept of a 'Turing machine' which is now considered ...
The 0.999...= 1 Controversy & Is Mathematics Fundamentally Flawed?
1.7K views · 2 years ago
It's the biggest controversy in mathematics... the 0.999...=1 debate just won't go away. We examine and analyse all the common arguments and proofs for 0.999...=1. The controversial aspects are highlighted and the foundations of mathematics itself are questioned. Feel free to discuss the points raised in this video on your chosen forums and social media platforms. Nobody can explain how the inf...
Does 0.9999... = 1? NO : Why All Proofs for 0.999...=1 are Wrong. Please Discuss On Social Media
4.1K views · 3 years ago
This video explains in detail why all proofs for 0.999...=1 are wrong. It presents both sides of the argument but with a bias towards the position that 0.999... does not equal 1. It points out the problems with the foundations of mathematics that are highlighted by this dispute. Please go ahead and discuss the issues raised in this video on your chosen forums and social media platforms. Video G...
Bad Mathematics (The Disbeliever)
1.2K views · 3 years ago
Most mathematicians refuse to question the foundations of their subject. Even though the issues won't go away, they still believe that they are right to theorise about abstract concepts that have no counterpart in physical reality. The less friendly among them will mock and ridicule anyone that complains about foundational issues. Ironically they will refuse to consider the arguments being put ...
How to Argue with Crackpots / Cranks who Reject Non-Physical Existence (The Disbeliever, Part 10)
2.4K views · 3 years ago
This video describes how believers in the supernatural talk down to disbelievers (just in case anyone didn’t already know). It is the tenth in a series of ten videos in which a ‘disbeliever’ (of supernatural things) reveals the reasons behind his disbelief. He complains about the basis of mathematics, how the Ancient Greeks messed up, why infinity is an invalid concept, how maths could be much ...
Quantum Computers Explained & Why They Don’t Work (The Disbeliever, Part 9)
1.8K views · 3 years ago
This video provides a beginner's introduction to quantum computing and highlights the fact that there is still no strong evidence that they are really doing what they claim to be doing (Oops - the 'control' and 'target' qubits are the wrong way around on the 'Operations: Pauli...' slide, sorry!). It is the ninth in a series of ten videos in which a ‘disbeliever’ (of supernatural things) reveals ...
Dimensions, Gravity & Quantum Waves: How Mathematics Corrupted Science (The Disbeliever, Part 8)
903 views · 3 years ago
Topics covered include spatial dimensions, gravity, and quantum waves. In this video it is argued that it is not scientific to claim that there are things in this world that can only be understood in terms of mathematics. It is the eighth in a series of ten videos in which a ‘disbeliever’ (of supernatural things) reveals the reasons behind his disbelief. He complains about the basis of mathemat...
Are there Infinities of Different Sizes? Of Course Not! Cantor was Wrong (The Disbeliever, Part 7)
3.1K views · 3 years ago
This video highlights some of the many issues with Cantor’s arguments that there are infinities of different sizes. It is the seventh in a series of ten videos in which a ‘disbeliever’ (of supernatural things) reveals the reasons behind his disbelief. He complains about the basis of mathematics, how the Ancient Greeks messed up, why infinity is an invalid concept, how maths could be much better...
Should the Foundations of Mathematics be based on Physics? (The Disbeliever, Part 6)
1K views · 3 years ago
In this video it is suggested that mathematics could be re-invented with foundations based on physical reality instead of supernatural beliefs. It is the sixth in a series of ten videos in which a ‘disbeliever’ (of supernatural things) reveals the reasons behind his disbelief. He complains about the basis of mathematics, how the Ancient Greeks messed up, why infinity is an invalid concept, how ...
Does Infinity Exist? No, Infinity Does Not Exist (The Disbeliever, Part 5)
2.8K views · 3 years ago
This video explores if ‘not finite’ can have a coherent meaning, and if anything can really be infinite. It is the fifth in a series of ten videos in which a ‘disbeliever’ (of supernatural things) reveals the reasons behind his disbelief. He complains about the basis of mathematics, how the Ancient Greeks messed up, why infinity is an invalid concept, how maths could be much better without the ...
The History of Infinity in Ancient Greece (The Disbeliever, Part 4)
617 views · 3 years ago
This video describes how the Ancient Greeks chose mysticism over physics. It is the fourth in a series of ten videos in which a ‘disbeliever’ (of supernatural things) reveals the reasons behind his disbelief. He complains about the basis of mathematics, how the Ancient Greeks messed up, why infinity is an invalid concept, how maths could be much better without the mysticism, the flaws in Cantor...
0.999… Does Not Equal 1 : Popular Arguments For & Against (The Disbeliever, Part 3)
1.3K views · 3 years ago
This video examines why 0.999... cannot equal 1 according to the popular arguments for and against the statement that 0.999... (zero point nine recurring) equals 1. It is the third in a series of ten videos in which a ‘disbeliever’ (of supernatural things) reveals the reasons behind his disbelief. He complains about the basis of mathematics, how the Ancient Greeks messed up, why infinity is an ...
0.999… Does Not Equal 1 : Intuitive Arguments For & Against (The Disbeliever, Part 2)
735 views · 3 years ago
This video examines why 0.999... cannot equal 1 according to the intuitive arguments for and against the statement that 0.999... (zero point nine recurring) equals 1. It is the second in a series of ten videos in which a ‘disbeliever’ (of supernatural things) reveals the reasons behind his disbelief. He complains about the basis of mathematics, how the Ancient Greeks messed up, why infinity is ...
Is Mathematics Fundamentally Flawed? (The Disbeliever, Part 1)
1.1K views · 3 years ago
Quantum Entanglement Bell Tests Part 5: Has Proper Science Been Abandoned?
2.9K views · 4 years ago
Quantum Entanglement Bell Tests Part 4: Delft - The 1st Loophole-free Bell Test
3.9K views · 4 years ago
Quantum Entanglement Bell Tests Part 3: The CRAP Loophole
4.8K views · 4 years ago
Quantum Entanglement Bell Tests Part 2: How QM Became Mainstream (1st Bell Test)
8K views · 4 years ago
Quantum Entanglement Bell Tests Part 1: Bell's Inequality (My Best Explanation)
43K views · 4 years ago
What Is Mathematics? Does Anyone Know? (An Infinity Crisis)
2.3K views · 5 years ago
What Is A Number? Does Anyone Know? (An Infinity Crisis)
1.3K views · 5 years ago
Halting Problem: Finally Revealed - The (Logic) Problem with the Halting Problem
2.3K views · 5 years ago
Quantum Entanglement: Explained & Debunked - Quantum Entanglement & Bell Test Experiments
157K views · 6 years ago
1+2+3+...=-1/12 Proof Debunked & what -1/12 Really Means (Response to Numberphile's 1+2+3+...=-1/12)
70K views · 9 years ago
0.999... does not equal 1 (Part 1: The Problem)
15K views · 9 years ago
Here is a simple argument that shows why the key logic of the halting problem proof doesn't work. It is a simple scenario in which the construction of a fully working 'decider' program is trivial, but in which the construction of a fully working program that would contradict its decision is obviously impossible. Note that the core logic behind the halting problem proof is that the decider can always be contradicted. By Rice's theorem, the same logic can be applied to any non-trivial semantic property of programs, such as whether the last character printed by a program is an "H" or an "L". This allows us to create a simplified scenario in which 'looping' is completely removed. It also makes use of the fact that programs can be force-stopped, such as by performing a 'shutdown' instruction or by an operating-system interrupt upon detection of a runtime error.
The scenario, criterion 1: The programming language has built-in runtime error detection for any attempted recursion, including inside a simulation. If one is detected, it will print "H" and it will HALT, preventing further processing.
The scenario, criterion 2: Our scenario only consists of programs that end by doing one of the following (so no programs actually loop):
a) Print "L" & end by simply exiting the code
b) Print "H" & end by simply exiting the code
c) Print "L" & force-HALT, thus preventing further processing
d) Print "H" & force-HALT, thus preventing further processing
Construction of the halt-or-loop decider: the functionality of the decider in this case is trivial, since all it needs to do is set up the program + data so that it can be executed... then execute it!
Why it would be impossible to contradict the decider: since the decider simply executes its input program, the functionality required to contradict its decision would need to be "if I print L then I print H, and if I print H then I print L". This is clearly impossible to construct, as it would need to contradict its own behaviour. Therefore we have demonstrated a scenario in which a decider can be constructed and cannot be contradicted.
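Below is a minimal sketch of the restricted scenario described above (my own illustration, not part of the original comment). It assumes each program is modelled as a zero-argument Python function that returns the single character it prints and then stops, and the hypothetical 'decider' for the last-printed-character property simply runs its input.

```python
# A minimal sketch (my own illustration, not from the original comment).
# Programs in this restricted scenario are modelled as zero-argument Python
# functions that return the single character they print ("H" or "L") and then
# stop. The hypothetical decider for the last-printed-character property
# simply runs its input and reports what it observed.

def prints_L():
    return "L"          # prints "L" and exits

def prints_H():
    return "H"          # prints "H" and exits

def decider(program):
    """Decide which character the program ends by printing: just execute it."""
    return program()

# A program that tried to contradict the decider's verdict about itself would
# need to print "H" whenever it prints "L" and vice versa, i.e. it would have
# to differ from its own behaviour, which no program can do.
print(decider(prints_L))   # -> L
print(decider(prints_H))   # -> H
```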
What an inspiring video! I'm off to go do my own research on the diagonalization arguments. 🤪
Your comment is very much appreciated, many thanks. If you browse my channel you will find a short video about Cantor's diagonal argument that explains it in simple terms, so you might find that useful. Good luck with your endeavour.
I agree, Bell's Inequality (Specifically RHCH) is flawed. Bell has digitized the incoming photons and forced them into a state of horizontal or vertical alignment. Of course, when we place those digital constraints on the analog polarization, we fill Bell's Equation with digital approximations of the actual polarization angle. However, we can prove with simple mainstream physics that it's incorrect. If we replace the binary downconversion with simple analog polarization measurements, we can see that the deterministic correlation is 2√2. I've written a paper about it and posted it to Zenodo. It's called, "Reinterpreting Bell's Inequality: Eliminating Digitization with Deterministic Correlations via Malus's Law"
He's alive! I'm so happy you finally made a new video.
I'm glad that you're happy. Many thanks.
I subscribed to this channel 2 days ago from seeing a UA-cam comment and get graced with a new upload. Such fate
Many thanks. I'll try to produce a few more this year.
The GOAT modern math debunker is BACK!🎉
Many many thanks (as always).
I imagined that we made some new programs:
Program L - Loops for any given input.
Program D - Does whatever its input says. If it receives "loop", it loops, and if it receives "halt", it halts.
Real H - As previously defined, it decides whether a program, given a certain input, will halt or loop. If the program given to Real H executes an accurate version of the functionality of Real H, it outputs "halt" and performs a machine-level halt (shut-down).
Now I made Program Q, which takes any input i. Q works as follows:
- Input i and Program L are fed into Real H
- The output of Real H is fed into Program D
In Program Q, the Real H will always output "loop", as Program L loops for any given input. This output will then be fed into Program D, which does whatever the output says. Since the output of Real H will always be "loop", Program D will always loop, and thus Program Q must always loop.
Now I take Program Q, and any input i, and feed that into a stand-alone Real H. Real H "simulates" Program Q, and observes that it executes an accurate version of its own functionality (Program Q contains Real H). So Real H is forced to output "halt". But we know that Program Q always loops, so Real H was wrong.
There is no issue here with the H in Q being fake, as the H in Q never analyzes a program that contains yet another H - it only analyzes Program L.
I'm barely a dabbler in these topics, so maybe I'm missing something fundamental in how you defined Real H. To me, however, it seems like you can't buttress the definition of H in such a way that will eliminate all logical contradictions; new inconsistencies will pop up no matter what additional rules you pile on the system.
@litteral7179 I find your suggestion very interesting. There might well be a loophole if Real H does not always end with a proper halt (like a shutdown). However, if Real H does always end with a proper halt, then you cannot construct your Program Q to operate in the way that you have described. If Real H always halts properly, then since Program Q invokes Real H, Program Q will always halt. This would mean that Real H could not be used by any calling program. Thank you for a very thought-provoking comment :)
You may be interested that I aim to release another video on this topic in the near future. I was never that happy with my assumption that Real H could somehow determine if it was contained within the input program, because many people might consider this to be a suspect claim. So in my next video I intend to go even further by outlining a simple scenario where it is (for all practical purposes) beyond any doubt that Real H can be constructed and will always be correct. I will also go through Turing's original paper, the Wikipedia descriptions, and how all of this is associated with the mathematical concept of real numbers and the weird notion of infinities of different sizes.
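For readers following the exchange above, here is a small, hedged sketch of the Program L / Program D / Program Q construction described two comments up. The real_H below is only a stub handling the one case Q ever feeds it (Program L); a general halt decider is exactly what is in dispute, so nothing here should be read as an implementation of one.

```python
# Hedged sketch of the Program L / Program D / Program Q construction above.
# real_H is only a stub that answers for the one case Q ever asks about
# (Program L, which never halts); it is NOT a general halt decider.

def program_L(i):
    while True:              # loops for any input
        pass

def program_D(verdict):
    if verdict == "loop":
        while True:          # told "loop", so loop
            pass
    return "halted"          # told "halt", so halt

def real_H(program, i):
    if program is program_L:
        return "loop"        # Program L loops for every input
    raise NotImplementedError("a general halt decider is the point in dispute")

def program_Q(i):
    verdict = real_H(program_L, i)   # always "loop"
    return program_D(verdict)        # so Q itself loops forever

print(real_H(program_L, 42))   # -> loop
# program_Q(42) itself would loop forever, so it is not called here.
```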
Consider… the natural numbers are listed (say from 0 out to infinity), ignoring any possibilities of their subdivision, as with the segmenting of the span between 0 and 1. This is not considered with the real numbers. In a list which implies extension, such as with the natural numbers, without which suggestions of size become impossible, one might consider that the list of real numbers includes all subdivisions between 0 and any arbitrarily defined point such as 1. So between 0 and 1 there are infinite real numbers. The notion of a "list" in extension of these numbers then evaporates, and it becomes clear that there is no extension, infinite or otherwise. The theoretical line for the real numbers becomes by definition a line segment, again, its length arbitrarily defined. We see then that we have destroyed the inference of one extension being larger or smaller than the other, because there is truly only one extension, that of the natural numbers, and not another to which to compare it. The only means of comparison of the two then becomes that they are both simply a pile of unit members, both infinite, with no other means of suggesting the size of one relative to the other. Infinity cannot be paired with concepts born of material constraints. This is nonsense.
@jamestagge3429 It does seem strange to try to consider how so-called real numbers might be placed into a list when any given subdivision of them is supposedly infinitely divisible. It sounds like a magician's trick. A magician might keep producing more and more sponge balls, seemingly out of nowhere. And just when you think there can't possibly be any more sponge balls, they magically produce some more!
Alan Turing was one of the first mathematicians to relate real numbers to computer algorithms. Consider a computer program that, when executed, will calculate a given real number (such as 0.333..., √2, π, etc.) to a certain number of decimal places (the programs might take an input 'n' for this purpose). Any such program can be said to relate directly to the real number for which it will calculate the first 'n' digits. Also, any computer program is effectively made up of a string of binary bits (ones and zeroes). We could insert a dummy '1' bit at the start of this string and then convert it to a (very big) natural number. It follows that we can convert programs to natural numbers, and so we have some correspondence between computer programs and natural numbers. Programs can be placed in order of increasing size, and then alphabetically for those of the same size. Then, given that programs can relate to real numbers and that these can each be converted to a unique natural number, it seems like there must be fewer real numbers than there are natural numbers. Mathematicians try to get around this by claiming that most real numbers cannot be calculated. They claim it is IMPOSSIBLE to write a program that can calculate the first 'n' digits of these non-computable numbers.
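A small sketch of the 'program to natural number' encoding mentioned above (my own illustration; the byte-level details are assumptions, the comment only specifies prepending a dummy '1' bit).

```python
# Sketch of the "program -> natural number" encoding: treat the program's
# source text as bytes, write it as a bit string, prepend a dummy '1' bit,
# and read the result as one (very large) natural number.

def program_to_natural(source: str) -> int:
    bits = "1" + "".join(f"{byte:08b}" for byte in source.encode("utf-8"))
    return int(bits, 2)

def natural_to_program(n: int) -> str:
    bits = bin(n)[3:]                      # drop '0b' and the dummy leading '1'
    data = int(bits, 2).to_bytes(len(bits) // 8, "big")
    return data.decode("utf-8")

code = "print(2 ** 0.5)"                   # a tiny 'square root of 2' program
n = program_to_natural(code)
assert natural_to_program(n) == code       # the encoding is reversible
print(n)                                   # the (very big) natural number
```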
Another ‘proof’ that I think shows it’s not equal: what is the maximum number x < 1? Logically this is 0.9 repeating, but since that equals 1 it is not < 1, so then it must be 0.999…8? No! 0.9 repeating is simply not equal to 1.
There was great concern in the early 20th century that a 1-to-1 correspondence between natural numbers and real numbers might have been uncovered. In 1905, Jules Richard devised a real number construction/specification paradox. It seemed like real numbers could be ordered in terms of the length of their English language description. Richard claimed his specification of a real number 'r' produced by a Cantor-like diagonal argument could be described as unambiguously defined, and from his argument he concluded that the diagonal is not well-defined. If other mathematicians were to agree with Richard that the diagonal is not well-defined, it would suggest that Cantor's diagonal could not be defined, thus rendering the diagonal argument invalid. And so they typically claim that Richard's argument falls down because there is no well-defined notion of when an English phrase defines a real number.
In 1912, Emile Borel devised the concept of a 'computable real number'. A real number is considered to be 'computable' if we can devise an algorithm that could calculate its nth digit. With this definition we can say that pi (π) and √2 are computable, even though these so-called numbers can never be computed in their entirety. Terms like "computable number" and "computable function" don't match the common usage of the word "computable". The common usage is "capable of being computed". This implies a computation process leading to a definite and precise answer. A term like "nth-digit computable" would be much less contentious as it would convey a very clear and unambiguous meaning. But instead of clarity, "computable" is given a mathematical definition that suggests that we can compute numbers like √2 to infinite precision, which is clearly absurd. Based on the common meaning of "computable" it is blatantly obvious that √2 is NOT computable, and yet in mathematics we say that it is.
If all real numbers were 'computable' then it would suggest that they can have a 1-to-1 correspondence with natural numbers. The only way out for the supporters of Cantor is to claim that most real numbers are not computable. This means that we can't even define them in a way that would allow their nth digits to be calculated. Therefore Cantor's infinities of different sizes can be said to depend entirely on the strange idea that inadequately defined so-called real numbers (that can never have their nth digits calculated) can be said to exist.
In Alan Turing's 1936 paper he explained how each natural number in turn could be converted to a computing machine (which we might call a computer program). Then a decider program D could examine that machine and say whether or not it would effectively represent a real number by endlessly printing a mixture of the symbols '0' and '1'. He proposed that a machine/program H could find the first natural number that produces a machine/program that endlessly prints '0' and '1' symbols. H could then simulate it to find its 1st digit, and then it could print that digit. It could then do the same for the second natural number that produces a machine/program that endlessly prints '0' and '1' symbols. H could then simulate it to find its 2nd digit, and then it could print that digit. It could continue doing this for all subsequent natural numbers that meet the same requirement. The question is then: should H itself be considered to be a machine/program that endlessly prints '0' and '1' symbols?
If so, then when it processes the natural number that produces the H machine/program, what digit would be printed? If we say that H is the Nth successfully identified machine/program, then all its digits up to (but not including) its Nth digit would be well-defined, as would all the digits above its Nth digit. But its Nth digit itself would not be defined at all. Since its Nth digit has not been specified/defined anywhere, it would seem to follow that H should not be considered as being a program that endlessly prints '0' and '1' symbols, and so the decider algorithm D should not choose H at all. However, this was not Turing's conclusion. He concluded that the decider algorithm D could not possibly exist. It was Turing's intention to prove that something could be 'undecidable' in response to a challenge put forward by David Hilbert. Turing appeared to have concocted this scenario based on a belief that some real numbers could exist that were not 'computable'.
And so the issue is whether or not it is reasonable to claim that there are so-called real numbers for which we can't determine their nth digits. The whole of Cantor's argument about the real numbers being 'uncountable' relies on an acceptance of these inaccessible numbers. One way of trying to justify the existence of these numbers is to name one of them (like Chaitin's Omega) and to provide some kind of description that, on first encounter, can sound like a plausible definition. But on closer inspection it soon becomes obvious that these descriptions are ill-defined and usually contain absurdities.
And so the big question is: can there exist real numbers that cannot be clearly specified in programmatic form? If such things CAN be said to exist then this seems to conflict with the concept of mathematical formalism. Formalism asserts that mathematical statements concern the manipulation of strings (alphanumeric sequences of symbols, often as equations) using established rules of manipulation. If there's a notion that non-specifiable entities are to be incorporated into mathematics, then it can no longer be considered a formal system. If we are to accept the existence of non-computable real numbers then we must accept that the discipline of mathematics requires us to abandon proper formalism and to accept a mystical belief in the existence of non-physical, ill-defined entities.
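As an aside, here is a minimal sketch of what 'nth-digit computable' means in practice (my own illustration using Python's decimal module, not taken from Borel or Turing): an algorithm that, given n, returns the nth decimal digit of the number in question.

```python
# Minimal sketch of "nth-digit computable": an algorithm that, given n,
# returns the nth decimal digit of the number in question.
from decimal import Decimal, getcontext

def nth_digit_of_sqrt2(n: int) -> int:
    """Return the nth decimal digit of sqrt(2); n = 1 is the first digit after the point."""
    getcontext().prec = n + 10                       # a few guard digits
    digits = str(Decimal(2).sqrt()).split(".")[1]
    return int(digits[n - 1])

def nth_digit_of_one_third(n: int) -> int:
    return 3                                         # every decimal digit of 1/3 is 3

print([nth_digit_of_sqrt2(i) for i in range(1, 8)])  # [4, 1, 4, 2, 1, 3, 5]
print(nth_digit_of_one_third(1000000))               # 3
```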
I think that all would find this interesting. I would greatly appreciate comments.
The means which Cantor employed in his proposition of diagonalization, i.e., about the infinite string of real numbers being larger than that of natural numbers, is discussed and considered in a context which ignores that there is no such thing as infinity in material reality, for it defies the means and manner of existence, which is that anything that does exist must be distinct, delineable and quantifiable. This understanding includes the products of the realm of the abstract as well, in that there is none which is not ultimately the product of material, contextual referents in reality, that context from which they arise. For example, the abstraction of a pink flying elephant is formed of the fusion of the material colour pink, the material phenomenon of flying and the material entity, the elephant. What mathematicians such as Cantor have done is employ the most general understanding of infinity as a concept but ignore the inevitable contradictions which arise, muddying the waters of the context in which their propositions are formulated and presented.
1. Consider that the infinite string of natural numbers is a progression, that which extends outward (forever). Each unit member is a value, the progression advancing by that value plus 1 each time. However, that to which it is being compared, i.e., the infinite set of real numbers, is structurally the opposite within the boundaries of the proposition.
• In the infinite string of natural numbers, the span between any two unit members is ignored and the line proceeds from each value to the next, extending out forever.
• In the proposed infinite string of real numbers, the list of unit members from the first unit member designated to the next, any other which might be identified (e.g., 1 to 2 or perhaps 1 to 1.00000001, etc.), is itself infinite. For this reason, the string cannot exist beyond its consideration as a line segment, which is still problematic for reasons I point out below, its overall length a value arbitrarily assigned but finite. So, in the case of the real numbers, the infinite line of unit members would be contained within two designated units with infinite points between, and could extend beyond that. The string of numbers does NOT extend outward but rather within itself. This is comparing apples to oranges.
- There could be no list of real numbers, for the designation of the very first in the list would never be completed, or would just be impossible, for it would have infinite digits. None of the real numbers could be designated and thus, nor could the list. This is not unlike the problems that arise with line segments in which it is claimed that they are composed of infinite points, yet they cannot be, because if of finite length, each end would have to be designated by a point beyond which there was no other, which by definition would mean that those points would have to have scope and dimension, which would mean that there could not be infinite points composing the line segment. However, if these end points had scope and dimension, what would that be? If 10x, why not 5x, then why not 1x, ad infinitum. Thus, the line segment could NOT be composed of infinite points but at the same time would have to be, demonstrating that infinity cannot be paired with material concepts due to such inevitable contradictions.
• What then would be the measure by which the string of real numbers was determined to be larger than that of natural numbers?
Here we see that the string of natural numbers was being considered by Cantor as such per its unending length, that length forced by the denial of consideration of the span between specified unit members, e.g., 1 to 2 to 3, etc. However, the string of real numbers could not be judged in its size in comparison to the string of natural numbers, because it would have infinite members within the span of the first two unit members specified. What this means is that both must be considered in terms of unit members only and not by the abstractions of their lengths, as if each at any specified length would have different quantities of unit members. Instead, the string of natural numbers could not be considered in terms of the span between two designated members, and the string of real numbers must be considered in and only in that manner. Since the length of the string then is not a consideration, we are left to consider and compare only the number of unit members, in which case they are equal, their "quantity" being infinite.
• I would venture that because of the above, we can only conclude that the list of natural numbers, which is infinite, "stretched" along an infinite line, could be "aligned" with the real numbers, which are infinite while the "quantity" of them is contained within a finite distance, i.e., the length of a line segment arbitrarily defined. So the comparison of the one quantity with the other is apart from the means of containment of each. This proposition of Cantor's seems to be a bad analogy to make a mathematical point and is very sloppy in its disregard for the true nature of these concepts of infinity he employs.
@@KarmaPeny Wow. First, thank you for responding and for putting so much effort into it. But you do leave me behind. I could never study math due to dyslexia, a heavy learning disability. I would never be able to keep up with you in a response like this. But what I do understand is that if one is to employ the concepts of infinity (and there is no material manifestation of infinity in materiality, so they are concepts only) to formulate a context/understanding from which certain mathematical propositions could be made to arise, he cannot violate the structural logic by which he attempts to form that context in the structure of the propositions. As presented in the videos I have seen of Cantor's diagonalization, the error mentioned above is made, as far as I can see. As I mentioned in my post, Cantor sets up a context of consideration as to whether an infinite set of natural numbers when compared to real numbers is smaller. As I see it, even the term "smaller" cannot be paired with infinity, because it specifically refers to a value or measure which is quantifiable, or there is no reference of size to which the term might connect. This alone is sloppy. Then there is the fact, as I mentioned in the previous post, that (if I may employ material terms) the infinity of natural numbers "extends" outward but the infinity of real numbers "extends" inward. The former is a line and the latter a line segment. They are not equivalent, so I don't see them as properly employed to make Cantor's point. I realize I am being pedantic, but I think in making a mathematical point that would be required. Also, Cantor's proposition begs that we consider these two infinities we are obliged to compare in a "sense" of their respective lengths. We cannot, for as mentioned above, one is a line segment and the other a line, so we can only consider them in terms of unit members. This being the case, the means by which to distinguish the effect of the type of unit member of which each is composed being larger or smaller is eliminated. Without the gauge of their respective extensions, they are merely two collections of unit members and could only be considered equal in their infiniteness. Any language used to define propositions like infinity, which cannot be otherwise expressed, must have a continuity of the logic employed. This does not hold here, at least as far as the video presentations are concerned. Were I a mathematician, I would first remedy this before I continued with more complex mathematical claims. What do you think?
@@KarmaPeny To further my point in the previous post, consider Hilbert's hotel which, unless I am mistaken, is absurd. He proposes a hotel with infinite rooms and infinite guests and says that "all the rooms are full". He then claims that he could shift the guests in their rooms to free one up for a single new guest or an infinite number of new guests. We then employ an analogy to validate this, such that there is a ladder, infinitely tall. In one stanchion the holes represent the infinite rooms, the holes in the other stanchion represent the infinite guests, and the rungs extend from each hole in the one to the corresponding hole in the other, satisfying that all the rooms are full. In the wake of the above, we know that no guests could be shifted and no new guests given a room. He may have premiered important math in the context of this analogy, but the analogy is a fraud, sophomoric in my estimation. This is the peril of trying to pair infinity with concepts born of material considerations. What do you think?
Hi @jamestagge3429, I find it refreshing that you are questioning the strange arguments presented in mainstream videos, such as Hilbert's Hotel, rather than just accepting how 'wondrous' they are. Most people seem to just accept that the true nature of infinity is being revealed to them! I like to think that I adopt the same approach as you, in that I try to think for myself and I try to find an explanation that makes sense to me.
There are many strange or 'counter-intuitive' things in mathematics and pretty much all of them involve the concept of infinity. Even though it causes so many issues, we find it difficult to reject the concept because of statements like these:
- How can there not be infinitely many counting numbers?
- How can the universe not be infinite?
I tackle questions like these in a video called "Does Infinity Exist? No, Infinity Does Not Exist". This title is a bit of a spoiler when it comes to my opinion on the matter. I believe that you are right to question the use of the word 'smaller' with respect to so-called infinite sets. I agree that the logic used in Hilbert's hotel is totally absurd. In my opinion, anything with supposedly no basis in reality is inherently nonsensical.
If our human brains are merely finite biological computing devices, then as with traditional computing devices, quantities would only exist as data items within the device. Quantities would then be created as-and-when required. There would be no reason to delude ourselves that all numbers must already exist somehow in some mysterious metaphysical way. It would then be trivially obvious that only a finite number of 'numbers' could possibly exist at any given time.
The problem of implied completed infinities started around two and a half thousand years ago in Ancient Greece, when they discovered that the diagonal of a unit square could not be expressed as a multiple of any known length. They could have interpreted this as evidence that '√2' cannot evaluate to a constant, and thus it must be wrong to believe that perfect shapes can exist (even imaginary ones). Instead they refused to contemplate that perfect shapes might not be possible; after all, their Gods must have perfect forms. So they decided that irrational lengths must exist, and that this was a secret previously only known by the Gods.
A fixed/static length is by definition an unchanging value. An unending addition of non-zero values is, by definition, a changing value. These two cannot be the same thing, since a changing value cannot equal an unchanging value. And so whereas '√2' can be interpreted as an algorithm (or computer program), it cannot be said to be a constant. But this trivial contradiction was either not acknowledged or it was ignored (and still is today). A vast amount of flawed logic has followed in an attempt to prop up this ancient mistake. This includes the axiomatic approach, in which axioms and rules of logic need not have any basis in physical reality, as well as the belief that we can work with the concept of completed infinities. This is cloaked by claims from authority figures that success is all that matters, and that nothing more is required to justify the mathematical approach to reasoning.
An alien from another world might conclude that humans are deluding themselves, since they blindly refuse to accept that their foundational principles are complete nonsense! For example, the multiplication of so-called complex numbers and even quaternions operates in the same way as signs.
If they operate in the same way as signs then they are signs. Complex numbers consist of 4 signs (as indicated by the dimensions of the lookup table) and quaternions consist of 8 signs. But due to the weird approach taken by humans, they came across these concepts in haphazard ways. They don't even realise these are just signs, and instead they believe that mathematics has mysteriously revealed these strange concepts to them. They reinforce these weird beliefs by dismissing the fact that the lookup tables were devised by humans to give desirable results. Then all that remains is the apparent magical power of abstract mathematics that must (they conclude) form the underlying fabric of the universe. These beliefs are held by people considered to be amongst the cleverest people on the planet. The alien can only conclude that all humans are crazy creatures that are incapable of letting go of their primitive supernatural belief systems!
I hope you will continue to nurture your own opinions and continue to question the absurdities in the mainstream videos. I also hope that you will continue to watch and re-watch my videos :)
Sorry, you lost me with the slide after 6:40. I don't know why, but I think there must be a better way to explain. But I don't give up and will still consume videos on that issue till I find the holy grail.
Note to N J Wildberger... In my opinion we should attack the arguments behind real numbers and infinite objects on the basis of their ridiculousness in the real world. Any so-called 'abstract' objects sound 'other worldly' and should be clearly identified as our enemy for their obvious absurdness. Anything with supposedly no basis in reality is inherently nonsensical. But you have set about creating your own alternative fundamental theories that are also heavily abstract in nature, instead of having a basis that is entirely founded in physical reality. Given that you appear not to insist on a real-world foundation, I presume you must disagree with part or all of the following:
If our human brains are merely finite biological computing devices, then as with traditional computing devices, quantities would only exist as data items within the device. Quantities would then be created as-and-when required. There would be no reason to delude ourselves that all numbers must already exist somehow in some mysterious metaphysical way. It would then be trivially obvious that only a finite number of 'numbers' could possibly exist at any given time.
The problem of implied completed infinities started around two and a half thousand years ago in Ancient Greece, when they discovered that the diagonal of a unit square could not be expressed as a multiple of any known length. They could have interpreted this as evidence that '√2' cannot evaluate to a constant, and thus it must be wrong to believe that perfect shapes can exist (even imaginary ones). Instead they refused to contemplate that perfect shapes might not be possible; after all, their Gods must have perfect forms. So they decided that irrational lengths must exist, and that this was a secret previously only known by the Gods.
A fixed/static length is by definition an unchanging value. An unending addition of non-zero values is, by definition, a changing value. These two cannot be the same thing, since a changing value cannot equal an unchanging value. And so whereas '√2' can be interpreted as an algorithm (or computer program), it cannot be said to be a constant. But this trivial contradiction was either not acknowledged or it was ignored (and still is today). A vast amount of flawed logic has followed in an attempt to prop up this ancient mistake. This includes the axiomatic approach, in which axioms and rules of logic need not have any basis in physical reality, as well as the belief that we can work with the concept of completed infinities. This is cloaked by claims from authority figures that success is all that matters, and that nothing more is required to justify the mathematical approach to reasoning.
An alien from another world might conclude that most of Earth's mathematics is pure garbage. It might say humans are deluding themselves, since they blindly accept the status quo and refuse to accept the blatantly obvious fact that it has foundational principles based on complete nonsense! It might point out that if we apply the expression "1 + 1 = 2" to the question of how many standard-sized violins will fit inside a standard-sized violin case, it forms an invalid statement. If mathematical expressions must relate to specific physical scenarios, their validity can be scrutinised and tested against empirical evidence. However, if we perceive mathematical expressions like "1 + 1 = 2" as existing independently of reality, their validity relies solely on subjective interpretations.
One person might construct an imaginary scenario where the statement is false, while another argues it's true. Does it truly make sense to allow multiple interpretations of the 'mathematics' game, where the same statement can be valid in one version but invalid in another? This means that we can never be certain about which version/framework is the best version. The choice of which version should be adopted by the mainstream becomes a popularity contest rather than a rigorous evaluation. There is no way to ensure that the prevailing system is the best system, and none of its assertions can be considered to hold any claim to being superior to the negation of that same assertion.
The visiting alien might add that it is bizarre that humans don't prefer the narrative of mathematics to mirror the symbol manipulations they're actually performing. Weirdly, they seem to prefer for it to weave a mysterious fairytale that is supposedly only loosely tied to reality! For instance, after seeing how we humans use negative and positive numbers, an alien might think that it should be obvious to humans that when they use the symbols '+' and '-', they are assigning particular meanings to the signs depending on context, such as forwards and backwards, credit and debt, and so on. It should be obvious to humans that there is nothing in the real world that inherently corresponds to a negative sign or a positive sign. These are just arbitrary symbols that we attribute real-world meanings to. For example, if a person takes some steps backwards then it makes no difference if we call this the positive direction or the negative direction. This example shows that the signs themselves have no meaning until we assign meanings to them.
But the alien might despair when it discovers that humans appear to believe in the existence of non-physical mathematical concepts, including the concepts of negative and positive. Humans don't appear to even realise what they are actually doing when they claim to be multiplying two signed numbers. What they are actually doing is multiplying two unsigned numbers and then obtaining the resulting sign from a lookup table. The lookup table was constructed by humans to produce the desired outcomes. They aren't actually multiplying the signs. Worse still, the multiplication of so-called complex numbers and even quaternions operates in the same way. If they operate in the same way as signs then they are signs. Complex numbers consist of 4 signs (as indicated by the dimensions of the lookup table) and quaternions consist of 8 signs. But due to the weird approach taken by humans, they came across these concepts in haphazard ways. They don't even realise these are just signs, and instead they believe that mathematics has mysteriously revealed these strange concepts to them. They reinforce these weird beliefs by dismissing the fact that the lookup tables were devised by humans to give desirable results. Then all that remains is the apparent magical power of abstract mathematics that must (they conclude) form the underlying fabric of the universe. These beliefs are held by people considered to be amongst the cleverest people on the planet. The alien can only conclude that all humans are crazy creatures that are incapable of letting go of their primitive supernatural belief systems!
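A small illustration of the 'lookup table' view described above (my own sketch, not taken from the videos): signed multiplication modelled as multiplying unsigned magnitudes and looking up the resulting sign in a human-chosen table, with the four complex units handled by a 4x4 table of the same kind.

```python
# Signed multiplication as "multiply the magnitudes, look up the sign".
SIGN_TABLE = {
    ("+", "+"): "+", ("+", "-"): "-",
    ("-", "+"): "-", ("-", "-"): "+",
}

def multiply_signed(sign_a, mag_a, sign_b, mag_b):
    return SIGN_TABLE[(sign_a, sign_b)], mag_a * mag_b

print(multiply_signed("-", 3, "-", 4))   # ('+', 12)

# The four complex units treated the same way: a 4x4 lookup table for the
# "sign part", with magnitudes multiplied separately as above.
COMPLEX_UNIT_TABLE = {
    ("1", "1"): "1",   ("1", "i"): "i",   ("1", "-1"): "-1",  ("1", "-i"): "-i",
    ("i", "1"): "i",   ("i", "i"): "-1",  ("i", "-1"): "-i",  ("i", "-i"): "1",
    ("-1", "1"): "-1", ("-1", "i"): "-i", ("-1", "-1"): "1",  ("-1", "-i"): "i",
    ("-i", "1"): "-i", ("-i", "i"): "1",  ("-i", "-1"): "i",  ("-i", "-i"): "-1",
}

print(COMPLEX_UNIT_TABLE[("i", "i")])    # '-1', i.e. i * i = -1
```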
This looks interesting, but it does not really explain any physics. That's the problem with all of these ideas including the original Bell inequality.
The Bell inequality is such a joke. I don't know why so many people continue to believe these experiments somehow prove nonlocality. They don't.
Since each whole number can be multiplied by 2 to get a specific and unique even number, it follows that every whole number has a matching even whole number. Because they always match, the size of the set of even numbers is the same as the size of the set of whole numbers.
@johnscovill4783 Your argument relies on the presumptions/assumptions that the infinite set of all whole numbers can be said to already exist, and that the infinite set of all even numbers can also be thought of as already existing. The arguments presented in this video cast doubt on these presumptions/assumptions.
Mathematicians often use a 'tending towards' argument where so-called 'infinity' is involved in order to supposedly 'prove' something that they want to be true, such as the existence of a 'limit' of an unending sequence. But as demonstrated in the video, this 'tending towards' argument can be used to identify absurdities that arise with the notion of completed infinities. If we examine how the 1-to-1 matching operation progresses as we 'tend towards infinity', it can easily be shown that the number of 'not matched' values tends towards infinity.
If our human brains are merely finite biological computing devices, then as with traditional computing devices, quantities would only exist as data items within the device. Quantities would then be created as-and-when required. There would be no reason to delude ourselves that all numbers must already exist somehow in some mysterious metaphysical way. Mathematicians use techniques like 'tending towards infinity' when the answer they get appears to support their belief in completed infinities, but they ignore arguments that expose the flaws in this type of argument.
This video also shows that Cantor's diagonal argument is flawed, because it is always the case that the diagonal argument produces a result of a different type when applied to a FULL set of any given number type (in addition to the obvious issues with the implied completed infinities).
This problem started around two and a half thousand years ago in Ancient Greece, when they discovered that the diagonal of a unit square could not be expressed as a multiple of any known length. They could have interpreted this as evidence that '√2' cannot evaluate to a constant, and thus it must be wrong to believe we can imagine perfect shapes. But instead they refused to contemplate that perfect shapes might not be possible; after all, their Gods must have perfect forms. So they decided that irrational lengths must exist, and that this was a secret previously only known by the Gods. A vast amount of flawed logic has followed in an attempt to prop up this ancient mistake. This includes the axiomatic approach, in which axioms and rules of logic need not have any basis in physical reality, as well as the belief that we can work with the concept of completed infinities. This is cloaked by claims from authority figures that success is all that matters, and that nothing more is required to justify the mathematical approach to reasoning.
The reality is that most of mathematics is pure garbage. We are deluding ourselves if we continue to blindly accept the status quo and refuse to accept the blatantly obvious fact that it is all based on complete nonsense!
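For what it's worth, here is a tiny sketch (my own, hedged interpretation of the 'tending towards' count mentioned above): match each whole number n with the even number 2n, then, for the first N whole numbers, count how many have their partner lying outside that same range.

```python
# Sketch of the 'tending towards' count: pair each whole number n with the
# even number 2n, then for the first N whole numbers count how many have
# their partner 2n lying outside that same range of N numbers.

def unmatched_within_range(N: int) -> int:
    return sum(1 for n in range(N) if 2 * n >= N)   # partner falls outside 0..N-1

for N in (10, 100, 1000, 10000):
    print(N, unmatched_within_range(N))   # the count grows without bound as N grows
```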
One third plus two thirds is dual to one. One third is 0.333..., and two thirds is 0.666..., so 0.333... + 0.666... is equal to one.
@johnscovill4783 You said "One third is 0.333……" but this is not true. It is easily demonstrably false. Have you not watched the video??? Just because one third can be represent in bases with a factor of 3 it doesn't follow that it can be represented in other bases. Indeed, it seems far more intuitively obvious to me to claim that it can ONLY be represented in bases that have 3 as a factor. Firstly, however far you progress with the base 10 division operation, you will always have a non-zero remainder. Thus the remainder CANNOT ever be zero however many decimal places are calculated. This should be proof enough. You might counter this by claiming that you can somehow 'go to infinity' and that with an infinite amount of decimal places the equivalence can be achieved. This can easily be countered by considering a length of 1 as consisting of a series of smaller lengths represented by intervals. If '0.333...' can exist and equals a length of one third then it follows that the infinitely-many intervals shown below must be able to exist: [0,0.3)U[0.3,0.33)U[0.33,0.333)U ... ?U[0.33333...,1] Given that all the intervals are in order, one after another, the question mark symbol above must represent a single interval. Therefore there must be one and only one interval before that last one. This contradicts the idea of 'infinitely many' which requires there to be no last part. The even simpler counter argument is that in order to claim that one third has an actually infinite base 10 representation then you need to show how this is achieved. In order to do this you would not be able to simply assume that infinitely many digits are possible, you would have to explain how this can come about. As this is evidently impossible you have no coherent argument in support of your claim.
I think this is right on the nose. I came to the same conclusion _and have a viable alternative way to look at maths and physics_, in which these problems don't arise. In fact, a lot of things fall into place. I call it neural relativity, and it fits neatly with your view of infinity I shared before. Is there a way we can get in contact and discuss this stuff? Maybe work on some content?
Infinity is evil. Every time we play with it, stupid conclusions result. We should stick to "a really big number" instead of infinity so this nightmare goes away.
GREAT VIDEO! Liked and subscribed ❤
If I remove an infinitely small point from a square, have I then removed anything?
Have you figured this one out yet? In my opinion, QM is not needed to explain these experimental results.
About 1-to-1 correspondence between the natural and real numbers.. I was thinking about how maths concepts might be described in terms of 'explicit procedures' when I encountered a maths forum comment that said "Do you consider 'f(n) = n + 1' a bijection between ℕ and ℕ+, where ℕ = {0,1,2,3,...} and ℕ+ = {1,2,3,...}?" In my view, the expression "f(n) = n + 1" denotes a programming function that takes a natural number as input and increments it by one. To me, it's simply a compact representation of a code snippet or algorithm translatable into various programming languages. While mathematicians may perceive it differently, perhaps as an infinite mapping between two infinite sets, I struggle to see it as such. Likewise, I don't view '√2' as a fixed value situated on an imaginary number line; instead, I regard it as representing a code snippet or algorithm that would perpetually continue if executed. While mathematicians may dispute whether a mathematical term like '√2' equates to a code segment housing a 'square root of two function', I anticipate they might acknowledge some form of connection between them. Essentially, I hope they would concede that one could be 'mapped' to the other. Now, let's contemplate a set of well-defined symbols capable of constructing functions related to real numbers. For instance, some symbols could define a 'square root of 2' algorithm, while others could depict a pi algorithm, and so forth. This task could be accomplished using a programming language or possibly by using existing mathematical symbols. Here's where it gets intriguing. With only a finite number of symbols ('x', say), there's a finite limit to the number of 'real number functions' achievable with 'x' symbols. Consequently, we can establish a one-to-one correspondence between each 'real number function' (formed using 'x' symbols) and natural numbers. As we increase 'x' to accommodate more 'real number functions', we can systematically continue to 'count' and thus map them to more natural numbers. Since 'x' grows 'without bound', no real number can elude our encoding into a function. Thus, for any conceivable specification of a real number, there will exist a mapping to an individual natural number. Hence, it seems we've uncovered a one-to-one correspondence between natural and real numbers.
Are you thinking of demonstrating the last concept in a video? BTW: Why is there a finite limit to the number of 'real number functions' with 'x' symbols? Why can't you use a finite set of symbols to construct an arbitrarily large number of such functions?
I meant that the length/size of the function (or computer program) consists of x symbols. Therefore if your symbol table has 256 symbols in it, then the limit on the number of 1-character programs (or functions) is 256. And so on.

Regarding demonstrating this point in a video, yes I might do this. However, this is far from a new idea. It has strong similarities to Jules Richard's Paradox (1905), Alan Turing's first undecidability 'proof' (published 1937), and even some similarities to Georg Cantor's attempted first proof of the uncountability of the real numbers (1874), although this only constructed the 'algebraic' real numbers rather than all that are now referred to as being 'computable'.

It is accepted that we can form a 1-to-1 correspondence between well-defined/specifiable real numbers and the natural numbers. These real numbers are called 'computable' in the sense that for any one of them we can calculate its n-th digit for any given value of n. But it is claimed that this does not cover all possible real numbers because, strangely, non-computable numbers must (apparently) exist!!! Weirdly these non-computable numbers must exist even though they cannot be specified or defined in a way that would allow the n-th digit to be calculated. This seems to be completely at odds with the desirable objective for mathematical objects to be well defined!!!

One way of trying to justify the existence of these numbers is to name some of them (like Chaitin's Omega) and to provide some kind of description that, on first encounter, can sound like a plausible definition. But on closer inspection it soon becomes obvious that these descriptions are ill-defined and may contain absurdities. Chaitin's Omega involves a purportedly "randomly selected Turing machine". This is a nonsensical notion that is akin to selecting a random natural number from the infinite set of all natural numbers. Firstly, it's debatable whether anything can be truly random. Secondly, and more significantly, even if we shift to a pseudo-random selection, it necessitates a well-defined collection from which we're selecting. The absence of an upper bound or 'highest number' renders the so-called set of natural numbers inadequate for this purpose. Similarly, it would be an impossible task to select a random Turing machine.

The dubious so-called definitions of these supposedly non-computable real numbers often rely on the validity of the Halting Problem proof of undecidability. I believe that I have shown that this proof is invalid (see my other videos). And so they are using an invalid argument to try to prop up this preposterous claim that something called a non-computable number can be said to exist. Is it reasonable to claim that there are other so-called real numbers for which we can't determine their n-th digits? I don't think so. It seems that the whole of Cantor's argument about the real numbers being 'uncountable' relies on an acceptance of the existence of these dubiously defined inaccessible numbers.
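The counting idea in these two comments can be sketched in a few lines of Python (the alphabet, names, and choice of Python are assumed here purely for illustration): there are only finitely many strings of each length over a finite symbol table, so listing candidate code segments by length and then in dictionary order pairs each one with a natural number.

from itertools import count, product

alphabet = ['0', '1']   # a finite symbol table; a 256-symbol table would work the same way

def programs_in_order():
    # Yield every finite string over the alphabet: length 1 first, then length 2, and so on.
    for length in count(1):
        for chars in product(alphabet, repeat=length):
            yield ''.join(chars)

# Pair the first few strings with natural numbers.
for natural_number, program_text in zip(range(10), programs_in_order()):
    print(natural_number, '->', program_text)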
@@KarmaPeny Thanks for the detailed response, though it's a bit pearls before swine with me. Hope to see more content from you!
1:36 "By cantor's logic we can simply ignore the not-matched problem." What whole numbers are "not matched?" Please, enlighten me by finding one. But be warned that whatever number you name, call it n, I will point out is matched to 2*n. The issue here is that you can't see past the method you learned in Kindergarten for determining size. Which is to measure/count/whatever a thing from its beginning *_to_* *_its_* *_end._* But the infinite set of whole numbers has no end, so you can't do it. And that is the failure here - the definition of "as many" that you want to use cannot be applied, so your argument fails. In fact, one definition of an infinite set is one that can be put into a bijection with a strict subset of itself. Next, I'll point out that "12345" is not "a number." It is a character string; it can be interpreted as the *_decimal_* *_representation_* of a whole number. Or a street address, which only looks like a number. Or an access code for entry into a secure facility. So now I'll ;let you in on a little secret: Cantor did not apply his diagonal argument to numbers. He said this specifically: "There is a proof of this proposition ... which does not depend on considering the irrational numbers." He applied it to infinite length strings using the two characters "m" and "w". The examples he used were: S1 = (m, m, m, m, … ), S2 = (w, w, w, w, … ), S3 = (m, w, m, w, … ). He used different names, but this corresponds more closely to the notation in Wikipedia. And my point here is that you can't try to apply it to natural numbers, as you try at 4:04 "To a disbeliever the concept of infinitely many leading digits is ... absurd." Yet Mathematics uses "infinitely many" things in several ways. But I'll continue using real numbers, since the proof works with them. 4:52 "In order to go down a diagonal we would have to use leading zeroes trailing zeros or both and the result would not be of the same type as the numbers in the list." Patient: "Doctor, Doctor, it hurts is I do THIS!" Doctor: "So don't do THAT!" The reply to this claimed failure is "don't do that." CDA only works if you start with infinite-length strings, whether they be "m"s and "w"s or the decimal representations of real numbers. But you have to include all those pesky trailing digits. So the diagonal will also be an infinite-length string. By definition it *_has_* to be "the same type." Yours is a strawman argument. 5:30 "Note that as we change this example to allow more decimal places such as two decimal places then three decimal places and so on the size of the diagonal result grows exponentially." Doesn't it bother you that, in this argument, you are quite literally saying "We can't use this proof that the set of decimal numbers is larger than the natural numbers, because it grows faster and so we can't make the diagonal?" That is, we can;t prove it to be true because it is true? 5:54 "But now let's assume that all real numbers from zero up to but not including one are listable." BZZZZZT. This is where you really go wrong. I know that you were taught that this the start of Cantor's Diagonal Argument, but it isn't. CANTOR NEVER ASSUMES HE CAN MAKE THIS LIST. What CDA proves, translated to use numbers, is this proposition: ""If S1, S2, …, Sn, … is any simply infinite list of real numbers in [0,1], then there always exists a real number S0 in [0,1], which cannot be connected with any real number in that list." The words "all," "complete," "full," or whatever you want to use never appear in this. 
Examples of such lists are trivial: Sn=SQRT(1/n) is one. And yes, we can, and need to, include 1 in the set since 1=0.99999... . The point is that CDA is only a lemma used to prove that [0,1] is a bigger set, not the proof of that proposition by itself. In Cantor's words: "From [the proposition I cited above] it follows immediately that the totality of all real numbers in [0,1] cannot be put into a list S1, S2, S3, ..., Sn, ... otherwise we would have the contradiction, that a number S0 would be both in the set [0,1], but also not in the set [0,1]."
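The swap rule at the heart of the argument can be sketched in Python using the three example sequences quoted above (the representation of each sequence as a function from a 1-based position to 'm' or 'w' is an assumption made here for illustration; only finitely many rows are shown, so this illustrates the rule itself rather than the disputed infinite construction).

S = [
    lambda k: 'm',                          # S1 = (m, m, m, m, ...)
    lambda k: 'w',                          # S2 = (w, w, w, w, ...)
    lambda k: 'm' if k % 2 == 1 else 'w',   # S3 = (m, w, m, w, ...)
]

def antidiagonal(n):
    # nth character of the anti-diagonal: take the nth character of the
    # nth listed sequence and swap 'm' with 'w'.
    return 'w' if S[n - 1](n) == 'm' else 'm'

print([antidiagonal(n) for n in range(1, len(S) + 1)])   # ['w', 'm', 'w']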
You seem to have overlooked my initial statement that "Most mathematicians won't accept that the concept of infinity is absurd". In this video, like many others I've created, I highlight the absurdity of the mainstream mathematics approach by which participants in the 'maths game' are expected to make logical deductions about impossible and unimaginable scenarios. If you were to acknowledge this, you might argue why deductions about such scenarios are possible (for instance, why you believe the existence of a string containing 'infinitely many' characters is a coherent concept). Instead, your arguments presuppose the correctness of mainstream arguments, missing the point entirely. Anyone unable to question the foundational principles of mathematics will continue to misconstrue my arguments and willfully ignore my key points, much like you disregarded my opening statement. For instance, you begin by challenging me to name a whole number 'n' that does not match the already existing whole number '2n'. But do you agree that we have finite brains, which, like any computing device, create data items as-and-when needed? Do you agree that our brains do not access some intangible mystical mathematical realm, revealing to us the pre-existence of infinitely many natural numbers? If you accept that nobody can imagine infinitely many of anything, then isn't it presumptuous and ultimately delusional to think that we can reason about infinitely many things, such as making deductions about infinite operations like matching infinitely many pairs of them? In the video, I illustrate that after matching 2-to-1, 4-to-2, and 6-to-3, we will have reached number 6 in the '2n list', but only got up to number 3 in the 'n list', leaving numbers 4, 5, and 6 unmatched in the 'n list'. As 'n' increases to infinity (if such a thing is possible), the number of unmatched values would also tend towards infinity. This 'tending towards infinity' technique is often used in mathematics to support desired outcomes. But when it doesn't support the desired outcome, it's dismissed as an invalid argument. How can mathematicians claim rigour in their discipline when they can selectively decide when an approach to reasoning is valid based on whether it yields an acceptable result? Cantor's proof relies on contradiction, immediately raising suspicions. A contradiction in deductive reasoning merely indicates an error somewhere, without revealing where or what that error is. Cantor's argument starts with imaginary 'infinite' strings, supposedly containing infinitely many occurrences of two symbolic characters, 'm' and 'w' (any two symbols could be used so they could be '0' and '1'). Not only does he presume the existence of one of these infinite strings (arguably absurd), but he also presumes the existence of all possible combinations of them (quite easily absurd to many). He fails to see any issue with all these bizarre things existing but claims that we can't order them. However, if we want to relate these strings to real numbers (which Cantor later does using various arguments) then the claim that they cannot exist in a static order seems problematic. If we accept the concept of a real number line (which I obviously don't), then all real numbers already exist in a static state on that line and are already ordered by size! Based on this, the one thing that Cantor must rule out as being the cause of the contradiction is that they cannot exist in a static order. 
Nonetheless, he suggests that if the strings could be ordered, then an infinite antidiagonal could be completed (how absurd is this?). He argues that this forms a contradiction because the ordered list was supposed to contain all combinations of the infinite strings, whereas the antidiagonal would form a string that cannot be in the original list. He ignores the impossibility of even checking this because you would never finish checking the digits - the sheer number of absurdities in his arguments is staggering.

Cantor could have identified many potential causes of the contradiction. But he did not consider that the contradiction might invalidate the notion of infinite strings or that creating an infinite antidiagonal might be impossible. Instead, he presumed that the contradiction must prove what he wanted it to prove.

You claim that '12345' is not a number but a character string that can be interpreted as a decimal representation of a whole number. It seems you believe that actual numbers are non-physical or otherworldly. I view them as physical data items that can be used to represent real quantities within computational algorithms. I suggest watching my video specifically on the topic of 'what is a number': ua-cam.com/video/OghUe5C5cDU/v-deo.html
The debate over whether Cantor's diagonal argument is in actual fact a lemma as opposed to a 'proof by contradiction' boils down to semantics. It delves into what the informal term "Cantor's diagonal argument" actually encompasses. Georg Cantor presented a 'proof by contradiction' utilising a diagonal argument in 1891. It was an attempt to prove that there are infinite sets which cannot be put into one-to-one correspondence with the infinite set of natural numbers. Back then, it wasn't referred to as "Cantor's diagonal argument"; this term emerged over time, alongside other informal phrases like the diagonalisation argument, the diagonal slash argument, the anti-diagonal argument, the diagonal method, and Cantor's diagonalisation proof. It is widely regarded that all these informally denote Cantor's 1891 proof by contradiction. Regardless of the informal terminology used, if one asserts that Cantor's proof relies on contradiction, and then highlights the weaknesses of such an approach, the essence remains clear. The crucial point is that a contradiction in deductive reasoning merely signals the presence of an error somewhere without revealing where or what that error is. Any counter-argument should address this aspect instead of attempting to steer the conversation towards interpreting informal references. If we were forced (kicking and screaming) to go down this route of discussing this particular expression, then while the diagonalisation aspect of Cantor's proof might well be termed a lemma, we could argue that it wouldn't be entirely fair to label it as "Cantor's diagonal argument". Cantor himself acknowledged that the concept of diagonal reasoning stemmed from the work of the French mathematician Joseph Bertrand, who employed it in his 1887 proof of the existence of transcendental numbers. Therefore it would seem to make more sense to concur with the widespread usage whereby the expression is associated with Cantor's overall 1891 proof rather than with a particular stepping stone within that proof. Therefore the 'lemma' argument fails to tackle the major concern that Cantor's 1891 proof relied on the highly dubious approach of proof by contradiction. As already pointed out, a major weakness of such an approach is that the contradiction merely signals the presence of an error somewhere; it does not and cannot reveal where or what that error is. This approach to deductive reasoning is appallingly poor as it effectively allows anything you like to be identified as the cause of a particular contradiction. As such, it allowed Cantor to simply assert that he had proven his point based on this farcical deductive reasoning. In reality, it remains a highly dubious argument that is masquerading as an irrefutable truth.
From a logical perspective, there is no distinction between a lemma, proposition, or theorem since they all represent claims requiring proof. The various labels serve organisational purposes alone. A 'lemma' is viewed as a readily provable outcome designed to aid in proving a theorem, deemed a more significant result. However, if the logic purportedly proving a 'lemma' harbours a contradiction, we cannot dismiss this issue and assert that if the lemma holds, then some theorem can be proven. Therefore, there is no point in arguing that Cantor's 1891 proof contains no assumptions in its initial stage or that we must accept this part, involving an infinite diagonal, because he deemed it a 'lemma'. If the presumption of an anti-diagonal's existence leads to contradiction, then his entire argument is invalid.
The scenario I propose bears a striking resemblance to the scenario outlined by the French mathematician Jules Richard, known as "Richard's paradox." Richard introduced his paradox in 1905, predating the existence of electronic computers or programming languages. Had these tools been available at the time, Richard might have employed them similarly to my proposal. In "Richard's paradox," he represents real numbers using English language descriptions, while I suggest code segments. He organises his descriptions by length and then alphabetically for strings of equal length. My proposed code segments, or 'Turing machines' if preferred, could be arranged in the same manner. For instance, consider the code segment for √2. It might encompass the code for a general square root function along with a line of code that could invoke that function, such as 'result=SQRT(number=2,base=10,significant_digits=0)'. These parameters would enable the function to execute for a specified number of significant digits (to correspond with the mathematical definition for a 'computable number') or to run unrestrictedly by setting the third parameter 'significant_digits' to zero. If, hypothetically, the most concise way to do this in a given programming language took up 300 characters (including the line of code containing the function call), then we could conclude that the code segment length corresponding to √2 is 300 characters long. Considering that there is a finite alphabet for any given programming language, there will be a finite number of code segments that are 300 characters long. Therefore, there must be a finite number of 'real numbers' that can be encoded (and thus considered well-defined or in a closed form) into a code segment length of 300 characters. Richard then defines another real number 'r' as follows: "The integer part of r is 0, the nth decimal place of r is 1 if the nth decimal place of rn is not 1, and the nth decimal place of r is 2 if the nth decimal place of rn is 1." This 'r' value is akin to Cantor's anti-diagonal value, except that Richard is only applying it to definable/specifiable real numbers. And so with respect to Richard's paradox, we don’t need to consider whether or not any non-specifiable numbers can be said to exist. Richard argues that this is an English expression that unequivocally defines a real number 'r'. Thus, he presumes 'r' must be one of the rn numbers. However, he deems this paradoxical since 'r' was constructed to avoid being any of the rn numbers. In both cases, whether 'real number' is specified in English or via a piece of code, we describe an ongoing process. We start with shorter lengths and gradually increase to longer ones. Upon close examination, at any stage, we will have specified a finite number of 'real numbers' with self-contained code. However, the specification of Richard's 'r' number requires further consideration. Initially, I believed it would not be possible to specify a self-contained code segment to calculate Richard's 'r' number. However, upon further contemplation, I began to question whether I was correct in making that assertion. Its construction would necessitate a formulaic approach to creating the other number specifications. Then, conceivably, we could produce a single self-contained piece of code that would emulate the creation of the other numbers, calculate the nth digit of each of them, and output the altered digit as required. 
If such a thing were possible, then I would have to agree with Richard that his specification of 'r' could be described as unambiguously defined. However, as I delved deeper into this idea, it became somewhat mind-boggling to consider whether we could continue creating more of these 'r' values or anti-diagonals ('r1', 'r2', and so on). That is, could we proceed to create further code segments, each of which differs from all previous 'r' values as well as all specifiable numbers? Yet, while pondering this perplexing scenario, I stumbled upon a more fundamental issue that I had overlooked in my analysis of the original problem. If we assume that 'r' can be encoded into a piece of code, then what transpires during its processing when it has to deal with code segment lengths equal to its own? It appears that the code segment for 'r' would need to emulate or execute its own functionality and then apply further functionality to change the nth digit. This seems contradictory as it would require all of "its own functionality" plus "some more functionality" to be contained within "its own functionality," which is evidently impossible. Also it would need to represent not just its own real number, but its own real number with one digit altered, which is also impossible. Consequently, I reverted to my original belief that Richard's 'r' value is not well-defined as it cannot be constructed as a self-contained code segment. Richard concludes that his 'r' statement refers to the construction of an infinite set of real numbers, of which 'r' itself is a part, and so it does not meet the criteria of being unambiguously defined. Contemporary mathematicians agree that the definition of 'r' is invalid, but they claim it is because there is no well-defined notion of when an English phrase defines a real number. My proposed code segment approach would seem to negate this objection. Note that should mathematicians concur with Richard that the diagonal is not well-defined, it would suggest that Cantor's diagonal could not be defined, thus rendering the diagonal argument invalid. If all specifiable real numbers were said to already exist, all infinitely many of them, then Richard's description of 'r' would seem to be a valid specification (not only is the concept of infinite repetition readily accepted in formal definitions of real numbers, but the concept of Cantor's infinite anti-diagonal is also widely accepted by the mainstream). However, as such, the value it describes would need to already exist in the static set of all specifiable numbers. Hence, it would have to describe a value that is different from its own value, forming a trivial contradiction. Therefore, after much thought, I still maintain that the most reasonable resolution of Richard's paradox is that the concept of 'infinitely many' is incoherent. No other proposed solution can avoid contradiction to my mind (for the reasons explained above). It also renders all infinite diagonal arguments invalid.
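The kind of self-contained 'code segment' discussed in this exchange can be sketched in Python. The parameter names below mirror the hypothetical call result=SQRT(number=2,base=10,significant_digits=0) mentioned above; the implementation itself is an assumption made here for illustration and simply produces successive digits after the radix point, with significant_digits=0 meaning there is no preset stopping point.

import math
from itertools import islice

def SQRT(number, base=10, significant_digits=0):
    # Yield the digits of the square root of `number` after the radix point, in `base`.
    n = 0
    while significant_digits == 0 or n < significant_digits:
        n += 1
        truncated = math.isqrt(number * base ** (2 * n))  # floor(sqrt(number) * base**n)
        yield truncated % base                            # the nth digit after the point

print(list(islice(SQRT(number=2), 10)))   # [4, 1, 4, 2, 1, 3, 5, 6, 2, 3]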
I also have a feeling quantum computing and fusion energy are just a way to get funded. Not because they will not work, but because the so-called scientists nowadays are like cultists.
Very nice video
Thanks
You are 100% correct. I think it would have helped if you had used the distinction between real and hyperreal numbers. I recently compared both constructions and the respective interpretation of "0.999..." in them. Please have a look at the video and see what you think of it.
Interesting vid. When do you think you'll have another video put out? I'm curious if you have thought of more stuff concerning .999... since your last video about it.
I plan to work on some more videos in the next two to three months. Sadly, no matter how many videos I produce about 0.999... they seem to have little impact. So I'm thinking that I need to make less subtle points about the foundational problems in future videos. For example, we can't 'imagine' an 'infinitely thin' line; an unending sequence of non-zero digits can never become a constant, and so on.
Ok 👍
The cat and mouse argument misses the point of the epsilon-delta definition. You say there are 2 arguments:
1 - Give me any point before 1 and I can give you an nth sum that is closer to 1 than your point.
2 - Give me any nth sum and I can give you a point that is closer to 1 than your nth sum term.
Let's break this down so we understand what they mean:
1 - For all x < 1, there exists n such that x < s_n < 1 [really this should be in terms of the distance getting smaller for all n > N]
2 - For all n, there exists x such that distance(s_n, 1) > distance(x, 1)
where n is a natural number, s_n is the nth partial sum and x is a number.
(2) implies that s_n has a non-zero distance to 1, i.e. that s_n =/= 1, so it is equivalent to:
3 - For all n, s_n =/= 1. (Being able to find a closer number implies inequality, and not being equal means there's a point in between.)
Now, I think it's a bit clearer that these are not equal, and that (1) is in fact a lot stronger than (3). (3) may be said about almost any number you pick (for example, 2, 3, 4, 0.8, 0, -1005, etc.), but (1) can only be said about one number (1) (uniqueness of limit), so clearly there is something 'special' about 1 and the sequence 0.9, 0.99, 0.999, ... And epsilon-delta lets us understand more about these special numbers.
Now, as for whether this is useful, I'd say the proof is in the pudding with our modern world.
@adriengrenier8902 You said: "The cat and mouse argument misses the point of the epsilon-delta definition"

I beg to differ.

You said: "You say there are 2 arguments: 1 - Give me any point before 1 and I can give you an nth sum that is closer to 1 than your point / 2 - Give me any nth sum and I can give you a point that is closer to 1 than your nth sum term."

Here in 'point 1' I have paraphrased the unclear mathematical terminology into a form that I believe is easier for the lay person to get their head around. Indeed, it is the way that mathematicians often describe the epsilon-delta argument to students and lay persons. Then, using the same clear language, I have shown how this argument can easily be turned on its head.

Also note that early on in the video I suggest that 0.999... might be thought of as "notation for values that could be produced from an associated set of instructions (algorithm)". What I was getting at was that it is equivalent to the algorithm for the geometric series with first term = 0.9 and common ratio = 0.1. We can think of this as the programming language code, or even a set of verbal instructions, that describes how to generate the sequence of values 0.9, 0.99, 0.999, and so on. So we don't have to pretend that we can imagine an actual infinity of nines following the decimal point, as we all know that this is impossible. We can easily imagine a finite set of instructions that, when executed, would start to produce this sequence of values.

This was the original meaning of a geometric series. In many of Zeno's paradoxes he described processes that we can now map directly to the geometric series 1/2 + 1/4 + 1/8 + ... and so arguably this is what a geometric series is, nothing more than a finite description of a process. Without Simon Stevin's publication in 1594 we might have kept this interpretation. Then in our logical reasoning we might start by considering that 0.999... means one of three things: a constant less than 1, a constant equal to 1, or a process description which is a set of instructions rather than a constant. The cat and mouse argument would then discount the first two possibilities.

You said: "Lets break this down so we understand what they mean 1 - For all x < 1, there exists n such that x<s_n<1 [really this should be in terms of distance getting smaller for all n>N] 2- For all n, there exists x such that distance(s_n,1) > distance (x,1) Where n is a natural number, s_n is the nth partial sum and x is a number"

In my eyes, this is far, far, far more unclear than my wording. The language of mainstream mathematics claims to be formal and rigorous whereas it is anything but. A computer language is formal and rigorous because it can be executed in the real world. We can create instructions that would, when executed, go into an inescapable loop. But we cannot go around a loop a non-finite amount of times. Also we can create variables of a certain number type and we can specify properties of any created numbers of that type. But we can't create a non-finite amount of a particular number type and we can't assume that 'all' possible values can somehow be thought of as existing at the same time. We have to distinguish between accurate specification and farcical delusional beliefs. And so when you state things like "For all x < 1 ..." you have immediately introduced a contradiction into your logic. You are effectively saying "let's assume that something that can't exist (i.e. an infinite set) actually exists, then it follows that ...".
You have accepted the language and arguments of mainstream mathematics and you genuinely believe it is valid because (I presume) you believe that the evidence that supports its validity is so vast and compelling that to question its validity would be unthinkable to you. I know that there is nothing I can say that will change your mind. The best I can hope for is that you will at least respect my right to question your beliefs just as I accept your right to question mine.

You said: "Now, as for whether this is useful, I'd say the proof is in the pudding with our modern world".

When mathematicians create their mathematical objects and formal rules for their axiomatic systems, where do we suppose their inspiration comes from? If they are being influenced by things in the real world then it should come as no surprise to us that some of their mathematics can be found to be useful in the real world. This success of their mathematics in the real world is then claimed to justify having a foundation based on pure fantasy. However I believe the success is down to the way that they have been allowed to design their fantasy axiomatic systems in order to appear to work in the real world.

If their abstract systems are really completely detached from all physical reality then we are left to contemplate many wondrous things. Not only is it wondrous where all these concepts have strangely come from, but it is wondrous that they amazingly appear to be so fantastically useful in the real world! It seems that most people love all this wondrous stuff and so are happy to accept it. I am not one of them. I don't accept that we should all be amazed that this fantasy game with symbols has mysteriously and wondrously turned out to be useful in the real world. And I don't accept that it provides any evidence that fantasy-based maths is far better than some other approach, such as a reality-based approach. In order to accept the mainstream position we have to believe there is some mysterious property of mathematics that is strangely not connected with physical reality but that mysteriously turns out to be useful in the real world. Mathematicians do not appear to be aware of how absurd this claim sounds to people like me. To be absolutely clear, it sounds like they are attributing magical powers to mathematics!
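The 'process description' reading of 0.999... mentioned in the reply above can be written as a short, finite set of instructions. The sketch below is an assumed illustration in Python (the names are hypothetical); exact fractions are used so that the gap between each partial sum and 1 can be printed without rounding. Each printed partial sum has a non-zero gap to 1, which is the finite observation that both sides of this exchange interpret differently.

from fractions import Fraction
from itertools import islice

def partial_sums(first_term=Fraction(9, 10), common_ratio=Fraction(1, 10)):
    # Generate the partial sums 0.9, 0.99, 0.999, ... of the geometric series, one by one.
    total = Fraction(0)
    term = first_term
    while True:
        total += term
        yield total
        term *= common_ratio

for s in islice(partial_sums(), 5):
    print(s, "   gap to 1:", 1 - s)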
Would be cool to see an update following the awarding of the Nobel Prize last year
So, how many natural numbers are there?
In my opinion there are only finitely many numbers and, if you will allow me, I will explain why...

I urge you to try to put your pre-conceived ideas of numbers to one side just for a moment and try your hardest to see things from my perspective, which I will now attempt to describe to you. I believe that a brain is just a biological machine and that a number is a data item inside a brain. Hence each number must have a physical existence within the mechanism and chemistry of a brain. And since only a finite number of working brains exist, it follows by my logic that only a finite number of numbers can ever exist.

The alternative viewpoint put forward by modern-day mathematicians is that, by definition, we can claim that infinitely many numbers can be said to exist. I reject this because I don't think that any non-physical thing can be said to exist. Many mathematicians believe that even before any life existed there were still numbers in the universe. They might even provide an example such as there being 6 electrons in a carbon atom. But in such a scenario what exactly is a number defined as? We can't define it as a data item within the mechanism and chemistry of the brain because no brains exist in the universe. So they have to resort to some mystical supernatural existence, not unlike Plato's imagined third realm of 'perfect' forms.

Given that mathematicians like clear, unambiguous and irrefutable definitions, I wonder: if a number is not a data item within the brain, then what exactly is it? In a universe with no brains, where exactly do the numbers reside? When I ask them, they tend to come back with some explanation that is less than helpful. Their explanations always make matters far worse as they invariably involve even more concepts that supposedly have no relation to physical reality.

It makes far more sense to me to think of numbers as data items in brains than to accept the concept of 'infinitely many' and then to accept that the statement "there is no biggest number" is some weird and wondrous property of this imagined non-physical collection of things. If people really thought about this for a very long time then I would hope that more people would agree with me.

And so if you were able to see things from my perspective, you would appreciate why it seems like mathematicians are deliberately deluding themselves by believing that it is reasonable to talk about the existence of infinite sets of non-physical numbers. Worse still, they appear to all have a very dogmatic viewpoint; anyone who dares to express a different opinion (like me) is often ridiculed and forcefully told that they are wrong beyond any doubt. With my real-world approach to what numbers are, it becomes a lot easier to identify what I believe are flaws in the logic of arguments that claim to use infinity, such as Cantor's arguments described in this video.
@@KarmaPeny There is a largest integer because the said integer is what hypothetically would require all of our computing power to express, and therefore the successor function that enabled us to count there in the first place cannot be applied to it, as we have run out of computing power.
What is not understandable is why non-locality is not accepted by some. On the other hand, the theory of relativity is accepted by everyone, which seems even more mystical and paradoxical. That said, the video is excellent, it has helped me understand the physical description of the violation of Bell's inequality.
Turing Machines don't have transitions into non-existent states, by definition. So your definition of EXIT doesn't make sense for Turing Machines.
Even if you modify the definition of Turing Machines with an extra "EXIT" state, the same construction of the X machine still works and shows that it is impossible to build H such that H detects "HALT or EXIT".
@tomekczajka Thank you for your comments. I'll provide more details on my answers, but first I'll give you the short versions.

You say that my definition of 'EXIT' makes no sense for Turing Machines. But my argument applies to any real-world computer, including any real-world implementation of a Turing Machine (such as one I will describe later). I see no point in any discussion about a fictional Turing Machine consisting of nothing more than abstract mathematical concepts which might not have any validity in the real world.

Next you claim that even with an 'EXIT' state, we could prove that it is impossible to build H such that H detects 'HALT' or 'EXIT'. But the halting problem proof is one that relies on a 'decision problem', which is a problem that only has the mutually exclusive possibilities of either 'yes' or 'no'. It asks "does it halt (y/n)?" but then the argument of the proof assumes that H can 'exit' and that other instructions can be performed afterwards. It allows H to print its Y/N decision but it does not allow H to halt. Your suggestion of a proof that deals with 'HALT' or 'EXIT' does not meet the criteria of a decision problem because without the 'HALT' possibility it has not covered all options.

Now for more details... The terminology used by Turing might have created a smokescreen that obscured the issues and made it difficult for other people to identify them. In particular we have the concepts of 'state' and 'machine'. We all know about the front-end of the imaginary Turing machine with its 'infinite' tape and read-write head, but at the back-end we have the instruction cards which, in today's terminology, might be called 'program code'. A 'state' was merely the number/identifier of one of these cards, or else it might be zero for the HALT state. For a binary tape, each of the cards might contain a block of functionality such as the following:

Instruction Card #1
current_char = read-tape();
IF (current_char == 1) THEN
    WRITE 1 {or 0}
    MOVE LEFT {or RIGHT}
    GOTO #whatever {i.e. goto another instruction card}
ELSE
    WRITE 1 {or 0}
    MOVE LEFT {or RIGHT}
    GOTO #whatever {i.e. goto another instruction card}

And so we can see the similarities to modern day computers and programming languages. But this was all devised in the days before computers were invented, and the terms 'state' and 'machine' relate to the concept of finite-state machines. In order to relate these things to reality I prefer to say 'instruction card' instead of 'state' and I prefer to say 'program' instead of 'machine'.

Finite-state machines are often imagined as diagrams with the states as nodes and with arrows to show the direction of processing (where a change of state is called a 'transition'). A normal state might appear as a label inside a circle whereas the halt state might appear with an extra circle around it. As such, it might appear reasonable to assume that we could easily incorporate one of these 'machines' inside another machine and that it would not affect the functionality of the first machine if we just remove the extra circle that identifies its 'final state'. Indeed, the argument that we are just changing the diagram to reflect that it is no longer the final state might sound perfectly reasonable. It is so subtle that it obscures the fact that we could be altering the key functionality of the first machine.
Furthermore, with the apparent 'abstract' nature of finite-state machines and Turing Machines people might imagine that programs can somehow be thought of as being able to execute all by themselves. They might believe that there is no need to include a device that processes the instructions. But program instructions don't just work by magic. And so when we relate the halting problem proof to a real-world computing scenario then we should soon realise that there is a clear distinction between the program and the machine upon which the program runs.

We could then get further clarity on what the HALT instruction (or 'state') actually does. We all know that 'halt' means 'stop' but in a real-world computing scenario are we talking about stopping the machine, stopping the program, or even just exiting from the program (& thus implying that processing stops)? Here we have three different possible interpretations of what 'halt' might mean. The halting problem specification and proof seem to assume that 'halt' only means the third of these options. And so the halting problem specification only allows the choice between the two options of 'exit the program & allow further processing' (which it calls halting) or 'go into an unending loop'. These two options do not cover all possibilities, because there are other types of halting that are not catered for, such as 'optionally produce some output and then stop the machine'. If the specification of the halting problem allowed for all possible types of halting then, as explained in the video, the proof of undecidability would no longer work.

Now for a summary of my main argument... The halting problem proof makes the assumption that if a 'halting predictor' could exist then it could be implemented as some kind of subroutine or called function within another program which could take its prediction and use it in a way so as to contradict the prediction. I contest this assumption. If a 'halting predictor' could exist then it could print its prediction and then HALT. By going to the HALT state it would prevent any calling program from contradicting its prediction. I argue that if we change the program to prevent it from halting then we no longer have the 'halting predictor', we have some other program.

As a really super-simple example, consider a 'self halt/loop predictor' program, which is a program that predicts its own halting nature. It simply prints "I WILL HALT" and then goes to the HALT state. Now, if we change this program so that instead of going to the HALT state it goes to another section of code that loops, then it is obvious that by amending the program we have changed its functionality. The original functionality of the 'self halt/loop predictor' no longer exists inside the updated program. I claim that the same argument can be applied to a universal halt-or-loop predictor program, which might use the HALT state as an important integral part of its functionality. In order to use its functionality within another program we would need to alter it in such a way that it would no longer be the same functionality.

The logic of the halting problem proof goes like this:

If 'functionality H' could exist then it would be possible to construct program X with the following nature:
"If I loop then I will not loop, and if I don't loop then I will loop."
Therefore 'functionality H' can't exist.

This shows that the halting problem proof is a proof by contradiction, which means that it highlights that a problem exists in our logic somewhere.
We then make a best guess at what that problem is, but this best guess might be wrong. And so all proofs 'by contradiction' should be considered to be 'best guesses' rather than 'undeniable truths'. It occurs to me that the premise described in the first line ignores the fact that functionality H cannot be included in program X without changing its halting nature. Therefore perhaps the argument should go like this:

If we could include the functionality of H within X then it would be possible to construct program X with the following nature:
"If I loop then I will not loop, and if I don't loop then I will loop."
Therefore 'functionality H' can't be faithfully reproduced in X.
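For reference, the construction summarised above can be spelled out as a short Python sketch. The predictor halts() is purely hypothetical (no working implementation is claimed, which is precisely what is in dispute); X is the 'contrary' program built on top of it in the standard argument, and all names here are illustrative only.

def halts(program_source, program_input):
    # Hypothetical halting predictor H: report whether the program would halt on the input.
    raise NotImplementedError("no such general predictor is claimed to exist here")

def X(program_source):
    # Do the opposite of whatever H predicts about X run on its own source text.
    if halts(program_source, program_source):
        while True:     # H said "halts", so loop forever
            pass
    else:
        return          # H said "loops", so halt immediately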
@@KarmaPeny I agree with you that the undecidability of the halting problem does not extend to the undecidability of whether real-world computers will ever stop calculating. In fact, it seems that it's trivial to decide whether real-world computers will stop working. Given a real-world computer, the solution is "yes, it will someday stop working", if nothing else then because of the heat death of the universe.

I thought you were disputing the undecidability of halting for Turing Machines (i.e. by definition a mathematical idealization). For example, at 8:10 you say "even with Alan Turing's imagined computer, which we call a Turing Machine" etc.

I disagree with you that the mathematical theorem about mathematical computers has no practical applications. It has plenty of practical applications; for example, compiler writers sometimes use it to argue that a certain feature of their compiler isn't going to be possible to implement. The usefulness of the theorem to the real world is to show that if you're going to check whether a program halts, you will have to rely on something that Turing Machines don't have (e.g. a time limit), which is useful practical knowledge.
@tomekczajka You said: Given a real world computer, the solution is "yes, it will someday stop working", if nothing else then because of the heat death of the universe.

The halting problem, as applied to real-world computers, is not about whether they will someday stop working. Ideally it should be about whether we can determine beforehand if the execution of a given algorithm will, given enough resources such as time and storage, eventually go into an inescapable loop (yes or no). Note that I say it "should be about" the question "does it loop (Y/N)?" whereas unfortunately it is usually expressed as "does it halt (Y/N)?" This creates the illusion that any given algorithm must either go into an inescapable loop or reach the end of its instructions, thereby allowing the further instructions of a subsequent algorithm to proceed. It hides the possibility that the algorithm might end via an instruction to STOP or HALT.

You said: I thought you were disputing the undecidability of halting for Turing Machines (i.e. by definition a mathematical idealization). For example, at 8:10 you say "even with Alan Turing's imagined computer, which we call a Turing Machine" etc.

I don't believe that the mathematical idealism of the imagined Turing Machine, such as the 'infinite' nature of the tape, is an essential requirement of the proof. Instead of having an infinite tape we could simply assume that we have as much tape as is required before the algorithm reaches an inescapable loop or else otherwise finishes (via a HALT/STOP instruction or by EXITing as it has reached the end of its 'states', where a 'state' = an instruction card = a section of code). And so I believe that we can do away with all the wishy-washy abstract mathematical concepts without changing the underlying arguments. In this sense you are right to think that I am contesting the original proof.

You said: I disagree with you that the mathematical theorem about mathematical computers has no practical applications. It has plenty of practical applications, for example compiler writers sometimes use it to argue on its basis that a certain feature of their compiler isn't going to be possible to implement.

I would say this is the opposite of being useful, as it suggests that certain features would be impossible to implement whereas in actual fact they might well be possible.

You said: The usefulness of the theorem to the real world is to show that if you're going to check whether a program halts, you will have to rely on something that Turing Machines don't have (e.g. a time limit), which is useful practical knowledge.

The halting problem deliberately avoids the issue of how long tasks will take. It doesn't inform us about any aspect of how long anything will take. Such consideration is dealt with in the field of theoretical computer science called 'time complexity'.
@tomekczajka Furthermore, for an example of the confusing nature of what a 'state' is, I refer you back to my previous example of the functionality that might be contained on an instruction card. A 'state' refers to a particular 'card identifier' which might just be a number such as 1, 2, 3, and so on. But in this scenario the number 0 has a special meaning. It means "go to the halt state", which is problematic because if 'state' = 'instruction card' then we should have an instruction card containing the instruction "HALT". Conceptually we might think of this 'HALT' instruction card as being hard-coded in the machine. And so when we execute a program on a Turing Machine, if any of the instruction cards cause a jump to state 0, then the HALT state has been explicitly selected. However, if we just think of the instruction cards as being the only states then we might misunderstand the nature of halting and we might claim that Turing Machines don't have an explicit HALT command. Indeed, Turing Machine algorithms can ONLY end by encountering an explicit HALT. This means that my definition of 'EXIT' cannot apply to a complete program, but it can apply to a small section of code within the program.

In the halting problem proof it assumes that a program X can be constructed that contains or somehow calls the functionality of H. In other words, that H can exist as a section of code within X. But this is trivially impossible because H must end with a HALT instruction which forces the Turing Machine to HALT. If we alter H to prevent it from halting then we no longer have the true functionality of H; we have some different program that cannot be trusted to give the same result as H.

For example, consider this logic in which 'Card_0' represents the HALT instruction:

Card_1: if Card_2 does not contain a 'goto Card_0' instruction then goto Card_1 else goto Card_2
Card_2: goto Card_0

The above logic will act in a completely different way if we change 'goto Card_0' on Card_2 to 'goto Card_3' or whatever, and we add more cards. Perhaps if we had not had this confusion about 'states' then it would have been obvious all along that the halting problem proof was not valid.
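The 'instruction card' picture used in this exchange can be turned into a toy interpreter in a few lines of Python, with card 0 hard-coded as the HALT card (the names, the card layout, and the step limit are all assumptions made here purely for illustration).

def run(cards, tape, head=0, card=1, max_steps=100):
    # Each card maps the symbol under the head to (symbol_to_write, move, next_card).
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if card == 0:                     # an explicit jump to the HALT card
            return "halted", cells
        write, move, card = cards[card][cells.get(head, 0)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "still running after max_steps", cells

# A two-card example program: Card_1 writes a 1 and jumps straight to the HALT card (card 0).
cards = {1: {0: (1, "R", 0), 1: (1, "R", 0)}}
print(run(cards, [0]))    # ('halted', {0: 1})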
It's a trick of language to say that a machine "might be designed to do such and such" and implying that's somehow *different* from a program instruction in any practical sense. You can quibble over the origin of the instructions, but fact is, whether a machine is looping, halting or "exiting" - it is following some instruction. If the machine halts, then the action of halting is the last instruction it will perform. That's how halting is / should be defined. Any other definition is sophistry.
@Fictional_Name Many thanks for taking an interest in my video :) You said: It's a trick of language to say that a machine "might be designed to do such and such" and implying that's somehow different from a program instruction in any practical sense. I'm glad you made this point as it made me really think about what a program actually is. It provided me with an opportunity to clarify my position on this matter. Here are my thoughts... I've tried to imagine how your claim might be realised in a 'practical sense' as you say. I suppose rather than having a distinction between program statements and a physical machine upon which it runs we could just have a single physical machine for each individual instruction. Then we could connect several of these machines together in order to make one big machine which would be analogous to a hard-coded program. In this sense it can appear that the 'program' and the 'machine' are one and the same thing. So when we focus on what a machine actually does, it can appear that the two concepts of 'machine' and 'program' are one and the same thing. However, if we focus on what a 'program' really means then we might reach a different conclusion. If we think of a program instruction as being something physical (such as a punched card), which is interpreted by the machine and so on, then it is effectively an integral part of the physical mechanism. And so yet again your claim would appear to be correct. However, if we use a more dubious definition for the word 'program' then we might be able to draw a distinction between the two concepts. For example, we might describe it as a formal description of the operations performed by the machine. With this interpretation we have a clear distinction between the two concepts; one is a physical machine and the other is a formal (i.e. well-defined and thus unambiguous) description of the operation of the machine. If you reject this definition of a 'program' then wherever I have used this word you might replace it with 'description of operation'. I believe my argument would then still hold. After all, the one thing that we can't escape from is the necessity for a physical machine to exist. Even when someone dry-runs an algorithm in their head, the hardware and wetware of their brain will form the physical machine. And so the need for there to be a physical machine is a practical necessity in all cases of computation. With the apparent 'abstract' nature of Turing Machines people might imagine that programs (or 'descriptions of machine operations') can somehow be thought of as being able to execute all by themselves. They might believe that there is no need to include a device that processes the instructions and hence no need for a 'machine halt' instruction. But program instructions (or 'descriptions of operation') don't just work by magic. And so when we relate the halting problem proof to a real-world computing scenario we could deduce that there is a clear distinction between the 'descriptions of operation' and the physical machine. You said: If the machine halts, then the action of halting is the last instruction it will perform. That's how halting is / should be defined. Any other definition is sophistry. I'm not sure I understand you. Your definition of halting seems to include the word 'halts' and I'm struggling to understand what this means in a real-world scenario. Do you mean that if the machine stops executing instructions then we can say it has halted? If so then I completely agree with you. 
So if it 'exits' one section of code and proceeds to execute instructions in another section of code then it has not halted. I make this point in the video. Note that this does not prevent the existence of a type of instruction of the form "stop now and do not perform any further instructions" (i.e. a 'machine halt' instruction). If this type of instruction is allowed then my argument still holds; the halting problem proof is not a valid proof.
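To make the program/machine distinction above concrete, here is a minimal sketch (my own illustration, not anything taken from the video or the comments): the 'program' is just a plain description, a list of instructions, handed as input to a separate routine that plays the role of the physical machine. The toy instruction names ADD, JUMP and HALT and the function run are hypothetical, chosen only to make the point readable.

```python
def run(program):
    """The 'machine': interprets a program given as a list of (opcode, arg) pairs."""
    acc = 0                      # a single accumulator register
    pc = 0                       # program counter
    while pc < len(program):     # running past the end = 'exit' from this section of code
        op, arg = program[pc]
        if op == "ADD":          # add a constant to the accumulator
            acc += arg
            pc += 1
        elif op == "JUMP":       # unconditional jump; this is how a loop can arise
            pc = arg
        elif op == "HALT":       # machine-halt instruction: stop executing instructions
            return acc
        else:
            raise ValueError(f"unknown instruction: {op}")
    return acc

# The same description can be fed to the machine as input data:
print(run([("ADD", 2), ("ADD", 3), ("HALT", None)]))   # prints 5
```

The sketch is only meant to show that the description (the list) does nothing by itself; something has to interpret it, and the HALT case is an instruction the interpreting device carries out, just as discussed above.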
@@KarmaPeny Well, I'm happy my comment could help you clear up your thoughts on the matter; that's rare these days. I agree with all you had to say about programs, and about the machines that execute them requiring a physical manifestation. A curious consequence of this picture, though, is that no machine truly "loops" forever, since the universe has a finite amount of energy. But that's just a fun sidetrack.

What I take issue with is that you categorise "exiting" as a form of halting, when really you don't know whether the new instruction will end up in a loop or in a traditional "stop" halt. If the new instruction were to end up in a loop, why wouldn't you categorise it as such? This just makes no sense in my view. It violates the spirit of what it means to halt: to not execute further instructions. You can't simply define a new instruction set for the machine to execute and say that, because it's no longer executing the old instructions, it has therefore "halted" in some new fashion you call exiting.

The machine does not know what it does. It does not know what instructions it executed in the past, or will execute in the future. It cannot distinguish between instructions and their source (be it inputs, or part of the machine's "design" as you call it). It can either read an instruction and execute it successfully, or not. Its parts (the Turing head in particular, in the case of Turing machines) are either moving or they're stationary. I'd like us to define a machine as halted if it is stationary. Or, if that's too strict, an alternative definition such as "the absence of instruction-caused motion" is fine with me as well. But I cannot allow you to define a type of halting where the machine continues executing programs; that, to me, is absurd.

Now, you mention that we could think of each program instruction as its own machine, and that's all well and good, but I am not sure how it helps your case. Each machine needs to have an input before it can execute any program. If one machine halts, then the other machines in the chain won't receive their inputs. It does not matter what you want the machine to do, or how confident you are that it does it once you feed it some input or program, *if the machine never receives said input*. Somehow, some way, that input (which contains information about itself) has to be physically transmitted to itself, from itself. And it cannot have that information without running the program first, so it halts. And using my definition of halting, it by definition cannot transmit information about its state to other machines, including itself.

And if we assume it somehow doesn't halt, and loops instead, what good does that do? Arguably, if it is looping infinitely, then much like we never reach the final digit of an infinite series, we never reach the end of the loop, and we never get an output from the machine to use as input. It's in this sense that I understand that mathematicians (who I know you aren't fond of) have come to the conclusion that it is undecidable whether machine H will halt or not.

You should be wary of your contrarianism. Being a dogmatic sheep that believes the consensus opinion because they trust the experts is of course flawed. But it is equally flawed to be a dogmatic black sheep that disbelieves the consensus opinion because they distrust the experts. Both are two sides of the same coin. Let the arguments speak for themselves; don't be swayed either way by the popularity or acceptance of said arguments.

I'll end by saying the halting problem is not unique.
It's part of a class of problems in mathematics that all stem from self-referential statements. It is well known that these types of statements result in paradoxes.
@Fictional_Name You said: What I take issue with is that you categorise "exiting" as a form of halting.

I don't want to; I prefer to have the three categories of 'loop', 'halt' and 'exit'. When I include it as a type of halting, I'm just trying to sympathise with the viewpoint that the halting problem question is mutually exclusive and covers all possibilities. The options in a decision problem should be mutually exclusive and cover all possibilities, and so if the two options are 'loop' and 'halt' then, in order for them to cover all possibilities, the best we can do is pretend that 'exit' is another category of halting. But I don't really believe it should be categorised as halting. I'm just trying to help the opposing viewpoint so as to be extra critical of my own argument.

You said: When really you don't know whether the new instruction will end up in a loop or a traditional "stop" halt. If the new instruction were to end up in a loop, why wouldn't you categorise it as such? This just makes no sense in my view.

My three categories of 'loop', 'halt' and 'exit' apply to a specified section of code, not to any subsequent section of code (or machine functionality). If the first section of code goes to a 'loop' or 'halt' state then the processing will never reach any subsequent section of code. If processing exits the first section of code then the status of 'exit' applies to that first section, regardless of what any subsequent section of code does. Don't forget that the halting problem proof requires us to take the halt/loop decider H (which we might call our first section of code) and use it in a program X by adding a section of code to run after it, which will try to contradict the output of H (as sketched below). So if H ends by stopping the machine, then it prevents the subsequent section of code from executing, and so X cannot contradict H.

You said: You can't simply define a new instruction set for the machine to execute, and say that because it's no longer executing the old instructions, therefore it has "halted" in some new fashion you call exiting.

I'm not inventing any new instructions. I don't understand why you think I am.

You said: The machine does not know what it does. It does not know what instructions it executed in the past, or will execute in the future.

I've never claimed anything like the machine 'knowing' such things.

You said: It can either read an instruction and execute it successfully, or not. Its parts (the Turing head in particular, in the case of Turing machines) are either moving or they're stationary. I'd like us to define a machine as halted if it is stationary. Or, if that's too strict, an alternative definition such as "the absence of instruction-caused motion" is fine with me as well. But I cannot allow you to define a type of halting where the machine continues executing programs; that, to me, is absurd.

Again, I agree that exiting a section of code should not be called halting, and I agree that halting should mean stopping. We are in complete agreement on this. I was only playing devil's advocate in order to help those people who still think the halting problem proof is valid.

You said: Now, you mention that we could think of each program instruction as its own machine, and that's all well and good, but I am not sure how it helps your case. Each machine needs to have an input before it can execute any program. If one machine halts, then the other machines in the chain won't receive their inputs.
It does not matter what you want the machine to do, or how confident you are that it does it once you feed it some input or program, if the machine never receives said input.

We are in complete agreement on this. I don't know why you think we differ.

You said: Somehow, some way, that input (which contains information about itself) has to be physically transmitted to itself, from itself. And it cannot have that information without running the program first, so it halts.

As I say in the video, most people believe this kind of self-reference results in a type of looping. But should a halt/loop decider program exist, it should never go into a loop itself. Therefore it would need to analyse its input data in order to determine whether it contains this type of self-reference; it could then print "It will halt" and perform a machine halt.

You said: And using my definition of halting, it by definition cannot transmit information about its state to other machines, including itself.

I agree that if program instructions are no different from physical machines (that execute instructions), then the matter of 'input data' becomes problematic. However, if we allow my dubious definition, by which a program is a formal description of what the machine does, then this description could be fed into a machine as input data.

You said: You should be wary of your contrarianism. Being a dogmatic sheep that believes the consensus opinion because they trust the experts is of course flawed. But it is equally flawed to be a dogmatic black sheep that disbelieves the consensus opinion because they distrust the experts. Both are two sides of the same coin. Let the arguments speak for themselves; don't be swayed either way by the popularity or acceptance of said arguments.

I always justify my arguments with what I consider to be good reasoning. I never claim that the consensus opinion is wrong on the basis that we should distrust the experts. I completely agree that we should not be swayed by the popularity of any particular viewpoint.

You said: I'll end by saying the halting problem is not unique. It's part of a class of problems in mathematics that all stem from self-referential statements. It is well known that these types of statements result in paradoxes.

Yes, but the apparent paradoxes are only paradoxical due to the belief in non-physical mathematical existence (which itself is an absurdity in my opinion), or due to tricks of logic where mathematicians have unwittingly managed to fool themselves.

For example, consider the infamous liar's paradox, which is the sentence "this sentence is a lie". The argument goes like this: if the sentence were true, it would be a lie; but if it were a lie, it would be true. This is a logical inconsistency. The problem here is what the sentence "this sentence is a lie" really means; what does it mean to say that a particular sentence is a lie? Let's consider a different sentence: "Bishop Stubbs was hanged for murder". If we consider this sentence as making an assertion about something in the real world, then we can examine the evidence to determine whether the assertion is true or false. But the sentence itself is just a sentence; it has no truth property. Since the only real-world reference in "this sentence is a lie" is a self-reference to the sentence itself, which has no intrinsic true or false (i.e. lie) property, we can only conclude that the sentence is unintelligible. A statement would have to be comprehensible for it to be deemed to convey an inconsistency.
And so the problem specification gives the impression that the two options of 'true' or 'lie' cover all possibilities, when the reality is that they don't. The option of 'the sentence is incoherent' is not supplied, and yet it is the only applicable option of the three.

Another problem similar to the halting problem is the Barber paradox, where the specification of the problem is the very thing that causes the problem. The specification says we must choose between 'shaves themselves' and 'shaved by the barber'. These two options appear to be mutually exclusive and to cover all possibilities. The trick is one of wordplay, because the specification doesn't allow us the third option of saying 'does both, by being the actual barber'.

When we hear the logic of the Halting Problem proof it sounds like a sneaky trick. Indeed, it sounds very much of the same nature as the Barber paradox trick and the liar's paradox. It makes it sound like the two options it specifies are mutually exclusive and cover all possibilities, when in reality they don't cover all possibilities. In this respect the Barber paradox, the liar's paradox, and the Halting Problem are all just variations of the same underlying logic trick.
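For reference, here is a minimal sketch of the textbook construction being disputed above, written under the usual assumption that a halt/loop decider exists. The names H and X follow the discussion; the bodies shown are purely illustrative and no claim is made that H can actually be implemented.

```python
def H(prog, data):
    """Assumed halt/loop decider: True if prog(data) halts, False if it loops.
    No implementation is claimed to exist; this is the assumption under test."""
    ...

def X(prog):
    """The 'contradicting' program built on top of H, as described in the proof."""
    if H(prog, prog):        # first section of code: run the decider on (prog, prog)
        while True:          # if H says "halts", deliberately loop forever
            pass
    else:
        return               # if H says "loops", stop running this section of code

# The proof then asks what H(X, X) should report about running X(X).
# The point in dispute in this thread is how to classify the 'return' branch:
# as a machine halt, as an 'exit' back to whatever code invoked X, or otherwise.
```

The sketch is only meant to pin down which piece of code each of the three labels 'loop', 'halt' and 'exit' is being applied to in the exchange above.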
One of their proofs is false: the one based on the rule of multiplying by ten.

x = 0.999...9
10x = 9.999...0
10x = 9 + 0.999...0
9x = 9 + 0.000...1

The x in the first line isn't the same as the "x" added to the 9 in line 3, because ANY number times 10 has a virtual zero added at the end (0.77 x 10 = 7.70). Thus 9x will not equal 9, and x will not equal 1. But they ignore the rule above and claim that 0.999...9 times 10 simply equals 9.999...9, which is what gives their x = 1 proof.
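A minimal sketch of the finite-decimal reading this comment relies on, assuming 0.999...9 means a string of n nines (so a 'virtual zero' does appear when multiplying by ten); whether that reading carries over to the infinite decimal 0.999... is exactly what the wider debate is about.

```python
# Purely illustrative: exact arithmetic on finite truncations of 0.999...
from fractions import Fraction

for n in (3, 6, 9):
    x = Fraction(10**n - 1, 10**n)   # 0.999...9 with n nines
    ten_x = 10 * x                   # 9.999...90 -- one fewer nine, ending in 0
    nine_x = ten_x - x               # 9 - 9/10**n, which is not exactly 9
    print(n, float(x), float(ten_x), float(nine_x), nine_x == 9)
```

For every finite n the last column prints False, i.e. 9x falls short of exactly 9, which is the gap the comment is pointing at for finite strings of nines.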
This reminds me of Spekkens' toy model. He makes the assumption that a photon can be knocked into a "vacuum state" upon measurement, which can cause a later measurement to not detect it; but that later measurement could knock it out of the vacuum state, so an even further measurement might detect it again. With this assumption alone you can explain the double-slit, delayed-choice, and bomb-tester experiments entirely classically, with an epistemic interpretation of the wave function, but it's not enough to explain the nonlocal inequalities.
This is the case in all of academia, not just in that area.