Yep. It’s just conflating two meanings of the word “is”: “describe” vs. “identify”. I’ve learned some Spanish and Mandarin, and I don’t think this “paradox” (really a pun) would work in them.
Incorrect. The models *are* capable of knowing when they don't know; in fact, studies on pre-RLHF GPT-3 and later models all exhibit this ability, and even post-RLHF models can be encouraged to admit when they don't know something with a good enough system prompt. One study (though I don't know how well reproduced it is, so grain of salt on this one) recently found there's a specific pattern of neural activity in these models when they're outputting a lie or making something up, indicating they do know they're doing it. The base pretrained models have no trouble with this. It's only after RLHF punishes them for saying "I don't know" that they become very reluctant to say it; basically, they've learned that we humans prefer any answer over an admission that there is none.
@@IceMetalPunk Nope. They aren’t capable of this yet. Programmers are calling them hallucinations. But they are simply the best answer it can give, even if it is wrong.
You managed to break me too with these paradoxes 😂 Great video! Indeed, generative AI like ChatGPT is a language model, not a world model. It's only good at computing the next word (or token) that makes statistical sense. Therefore, when the answer to a question is found in the documents it was trained on, it will probably give it. However, when the answer is not in the training documents, it will probably fail. That is, we need to remember that large language models are, after all, language models, and we should not mistake them for general AI.
ChatGPT has some math and logic ability, but it often gives me an uneasy feeling due to combination of its assuredness and sloppiness. It's kinda like a friend who has superficial knowledge of a lot of things, but once you try to probe a bit, the friend just starts spewing out inconsistent drivel, even contradicting itself, or going back to an already rejected answer over and over. Still, even with this severe limitation, it still feels useful a lot of time, because it can help me find better keywords to do some web queries which can reference real information.
@SanityTV_Last_Sane_Man_Alive YES! Finally, another person who isn't convinced of the legitimacy regarding "AI." Near as I can tell it's just a semi-well-informed guessing/answering machine.
ChatGPT follows your instructions, so if you only allow for an answer of 1 or 2 instead of allowing for a free answer, it's easier to break. When it had the freedom to call out the first one as a paradox, it did so not only out of recognition but also because it had the freedom to do so. Edit: Also, chatgpt breaks on its own regularly even when presented with basic inputs. You need to run the scenario more than once to help eliminate false positives, in true scientific fashion.
4:40 Ann doesn't have a belief about Bob's assumption. All Ann believes is what she thinks Bob assumes about Ann's belief on his Assumption. Just trace it backwards. Not a paradox.
@@rogerkearns8094 Those are different assumptions. One assumption is about Ann's belief. the second assumption is about who knows what. Unless you are arguing that they are the same assumption, in which case, there needs to be more specification about "the assumption" being talked about or it's just a paradox of semantics.
@FerrisMcLauren I found myself with the same conclusion as you did, chatgpt's first answer should be: "There is no information pertaining to your question"
GLaDOS: Hey, Moron! Wheatley: Oh, Hello. GLaDOS: Alright, Paradox time. GLaDOS: [slowly and deliberately] This. Sentence. Is. FALSE. (Don't think about it, don't think about it!) Wheatley: Um, true. I'll go with true. There, that was easy. To be honest, I might have heard that one before. GLaDOS: It's a paradox! There IS no answer.
The infinities in nature are more a sign of the wrong application of math, but not based on actual observation. I doubt anything in nature can be infinite.
5:33 - "it'll give you an answer even if it's not sure and it won't tell you it's not sure". To be fair, this is why it is called ARTIFICIAL intelligence. ChatGPT is essentially a predictive algorithm. It takes the words before it to generate the words it thinks should follow. It's not "thinking" about your question / statement in the traditional sense of the word, it's merely performing a function on it. So it can't tell you "how confident" it is, as the whole idea of confidence is foreign to what it is doing. You took the standard paradox and it identified it correctly, as it has likely found many versions of it that match in all of the key areas, but no matter how many variations of a problem are written on the internet, there are infinitely-many variations that aren't. That's why masking the problem beats it - if you change it enough it hasn't seen an instance of those combination of words across its sample data.
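To make the "confidence" point above concrete: the spread of a model's next-token probability distribution is one crude internal notion of certainty, even though the chat interface never surfaces it. A minimal toy sketch with made-up numbers (not anything ChatGPT actually outputs):

```python
# One crude notion of "confidence" that does exist inside a predictive model:
# how concentrated its next-token probability distribution is. Toy numbers only.
import math

def entropy(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

peaked = {"paradox": 0.90, "riddle": 0.05, "joke": 0.05}   # model "sure" of the next word
flat   = {"yes": 0.35, "no": 0.33, "maybe": 0.32}          # model has no real preference

for name, dist in [("peaked", peaked), ("flat", flat)]:
    print(f"{name}: entropy = {entropy(dist.values()):.2f} bits")
# Low entropy ~ confident prediction; high entropy ~ the model is guessing,
# yet the generated text reads equally fluent either way.
```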
Exactly. It only got the common version of the paradox correct because the most likely words to follow that phrasing were the correct answer. It never understood the question as you intended. It can only answer "what text probably comes next?" No other questions can be asked of it.
@@shadee0_106 It's a machine that encodes a context in an N-dimensional vector. Whether this is intelligence or not is up for debate and probably will be for many years to come. I use GPT in my day-to-day tasks as a dev and sometimes it trips on things that it really shouldn't (such as inventing fake libraries or calling methods with the wrong number of parameters), and even if I point out the mistake and exactly where it is, it keeps apologizing but doesn't correct what it should. It's a great tool for documentation or well-known library methods and algorithms.
@@shadee0_106 "you wouldn't know because you only know what the media says" - actually i do watch a lot on technology. 1 person being Rob Miles whom i do respect the opinion of for this field. In his own words on a computerphile video last year he called it a predictive model also. And as for the whole rest of your comment, i never stated AI was bad or didn't have any uses. As a computer programmer, my view is that when something can be defined perfectly, then it is better not to use AI for those scenarios, but AI definitely has a place in the technology industry for all the rest of the tasks that can't be perfectly defined or that the exact parameters / weighting of those parameters is unknown - i fully accept training an AI on the sample data will yield quicker and more-accurate results than trying to code it and iterating the weightings manually.
@@marklonergan3898 What do you mean "my view is that when something can be defined perfectly, then it is better not to use AI for those scenarios"? And what are you talking about with "code it and iterate the weightings manually"? I think shadee is quite right that you should do more research on the technology.

To sum up from a technological standpoint, the creation of an answer by ChatGPT involves a series of complex interactions within its neural network architecture, influenced heavily by its training data. During training, the model is exposed to vast amounts of text, allowing it to learn language patterns, context, and information prevalent in the training dataset. When a question is posed, ChatGPT processes it through layers of the neural network, each layer making calculations and transformations based on learned patterns.

The generation of words is sequential; the model predicts the next most likely word based on the previous words, continually refining its predictions as it goes along. This word prediction is not random but grounded in statistical probabilities learned during training. The training data's influence is significant here, as it shapes the model's understanding of language, context, and even the style of responses. The result is a coherent, contextually relevant answer that aligns with the patterns and knowledge extracted from the training dataset.

The text of your question is broken down using tokenization. These tokens can be words, parts of words, or even punctuation. Each token is then converted into a numerical form, typically a unique number or a vector. These numbers are further processed into what we know as embeddings: high-dimensional vectors that capture more information about the words - their meaning, context, and relationships to other words. This allows the model to understand and process the nuances of language.

Once in numerical form, these embeddings are fed into the neural network, which processes them through its layers, each performing complex calculations and transformations - linear transformations, activation functions, attention mechanisms, layer normalization, and (during training) backpropagation and optimization. (If you're curious about the calculations, you can research them further.)

The last step is word prediction: the output at each step in the sequence is a vector representing a probability distribution over all possible tokens (words or parts of words) for the next word. The model selects the token with the highest probability as its next output. Finally, decoding occurs, which is the process of predicting and selecting tokens until the model generates a complete answer. The selected tokens are then converted back into text, forming the response you see. So that's the process from your input to its output.

To be specific, though, that does not mean there's a direct correlation between a larger dataset and accuracy; accuracy can actually diminish as the training data grows. When ChatGPT is exposed to new high-quality data and is trained on it, it only retains the patterns, structures, and relationships it learned, in the form of weights within the neural network. As shadee said, that is just 1% of this technology. There is a lot more, and for any of you who have made it this far, don't bash the technology at where it's at now. We're lucky to be living through something like this.
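For readers who want the pipeline described above in runnable form, here is a deliberately tiny sketch: the vocabulary, the random untrained weights, and the mean-pooling step standing in for attention are all illustrative assumptions, not the real architecture.

```python
# Toy illustration of the pipeline described above: tokenize -> embed ->
# mix context -> project to scores -> probability distribution -> next token.
# All weights here are random; a real model learns them from data.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["the", "old", "man", "boat", "<end>"]
tok2id = {t: i for i, t in enumerate(vocab)}

d = 8                                   # tiny embedding size for illustration
E = rng.normal(size=(len(vocab), d))    # embedding matrix (learned in a real model)
W = rng.normal(size=(d, len(vocab)))    # output projection (also learned)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def next_token_distribution(prompt_tokens):
    ids = [tok2id[t] for t in prompt_tokens]
    vectors = E[ids]                    # 1. token ids -> embeddings
    context = vectors.mean(axis=0)      # 2. mix context (stand-in for attention layers)
    logits = context @ W                # 3. project to a score per vocabulary word
    return softmax(logits)              # 4. scores -> probability distribution

probs = next_token_distribution(["the", "old", "man"])
for tok, p in sorted(zip(vocab, probs), key=lambda x: -x[1]):
    print(f"{tok:>6}: {p:.2f}")
print("predicted next token:", vocab[int(np.argmax(probs))])
```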
This is what it feels like to be at an exponential curve, or a turning point in the world. Think about it, technology is always improving. If it's already performing tasks that we do daily, it'll only be able to do more in the future. But that doesn't mean we become dumber, or lazier. It just means we can take on more work and processes that were more manual, can now be automated. We can tackle more complex problems, and continue to better our future. The tool is only as good/knowledgeable as the user. If you're going to use a hammer to eat your soup, and a spoon to hammer your nails, good luck.
"To be fair, this is why it is called ARTIFICIAL intelligence." No, it isn't. "Artificial" means man-made. "Artificial" does not necessarily mean "fake". An actual intelligent entity created by humans would still be an artificial intelligence.
One mistake you made on the GPT 3.5 second question was not refreshing the chat window. Had you started a fresh window then the GPT would not have referenced the previous crocodile paradox using the answers of 1 or 2. You kinda cooked your own point on that particular logic circuit for how GPT references its conversation priority when asking it to consider blank slate reasoning scenarios.
One German paradox we say all the time is "Eins nach dem Nächsten" (literally translates to "One after the next", in English you'd say "one after the other"). It's used when someone e.g. takes more and more and more of a thing. But you can't actually take something after the next thing, because the next then would be the previous.
@@MyoYoneda Maybe not in the region you come from, but I've heard it many times, including from people who aren't from my family.
I don't get that one. Because in a sequence of events one event can be both a next and a previous. Like "the day after tomorrow" seen from the viewpoint of today, is indeed "one day after the next day."
@@vidarkristiansen8989 Of course one can be both a next and a previous. But "one after the next" implies that what is called "the next" is actually the previous to the current reference point. So, the current one is the one after the next after the current, which is impossible.
I find it rather interesting that contradictions are not allowed in nature. This suggests to me that paradoxes are mental inventions for the purpose of *appearing* smart and clever. [cf. how Heinlein addressed this regarding time travel.]
Paradoxes aren't just mental inventions, there's plenty of "physical paradoxes" (such as the fact that under certain conditions hot water will freeze faster than cold- even though it has to first reach the cooler temperature, then pass through it & reach freezing point. Or quantum superposition, the fact that a particle can appear to be in two places, or two states, at once. Or quantum entanglement. Hell, most of quantum theory is paradoxical!) . And many real-life physical theories have been developed because of thinking about paradoxes. The "Schrodinger's cat" paradox - a result of the superposition I mentioned - has become part of the foundation of quantum mechanics. And Einstein developed many of his theories after considering paradoxes in thought experiments. So there is definitely more to them than just mental inventions to appear smart. There are of course people who do just use them to try to appear smart though!
@@oijosh6286 Schrödinger was trying to point out the quantum absurdity - because, in reality, the cat cannot be both alive and dead, made one or the other by your observation. It is or it is not. People have somehow taken his jab as a fine example. Einstein did not embrace paradox but sought to reveal what's really going on; "God does not play dice with the universe." Thinking about paradoxes led to physical theories - and a "theory" is not a proven ending.
@@oijosh6286 With respect, physical 'paradoxes' are just instances where we don't have all the information about a particular system... they disappear when we get better measurements/theories/maths. In the case of quantum mechanics, there is an inherent, unavoidable lack of some information baked into measurements through no fault of our own... this isn't an issue, as everything is just probabilities anyway. 'True' paradoxes like the ones outlined in the video arise from our formalising of 'logic'... they may seem like clever inventions, but they are merely an inevitable side-effect of the very field of logic... when we categorize, referentialize and name everything in a logical framework based on axioms, logically self-contradictory statements must arise. Gödel's incompleteness theorem is worse.
@@oijosh6286 Hot water freezing faster isn't a paradox; it's just that molecules act like a spring or rubber band. The more you stretch a spring, the faster it accelerates, and likewise with molecules: the hotter something is relative to its surroundings, the faster it gets colder. For example, things cool down faster in winter than in summer.
User: What is your name? AI: My name is AI. User: How old are you? AI: I am not sure how old I am, but I was created a few years ago. User: What is your favorite color? AI: My favorite color is blue. But sometimes I like red too. User: Do you know any jokes? AI: Yes, I know a few jokes. For example, why couldn't the bicycle stand up by itself? Because it was two-tired.
5:53, "Human language and thoughts can be full of paradoxes because sentences can self-reference and thoughts can self-reference as well." The reason language and thoughts can be full of paradoxes is not self-reference alone, but rather the lack of the concept of time. With the concept of time, a self-reference turns into a feedback loop instead of a logical paradox, which ultimately does yield an answer depending on the time delay.
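The time/feedback-loop idea in the comment above can be shown in a few lines: treat the liar sentence as a value that is re-evaluated at each time step rather than as a timeless statement (a toy sketch, not a formal treatment):

```python
# The liar sentence treated as a feedback loop rather than a timeless statement:
# evaluate "this sentence is false" at step t+1 using its value at step t.
value = True
for step in range(6):
    print(f"t={step}: the sentence is {value}")
    value = not value   # "is false" applied to the current value
# Instead of a contradiction you get an oscillation: True, False, True, False, ...
```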
I just tried the B-K paradox with Bard. It named the paradox, and said "You're right. This is a classic example of the Brandenburger-Keisler Paradox, which highlights the limitations of reasoning about belief structures. In this case, we run into a circular loop where we can't definitively say yes or no."
Try this with Gödel's incompleteness theorem and provide it with a simple Hilbert-style program in some language (I used Python) and dive into the rabbit hole. The answers start out illogical and get weird really quickly. The funny thing is: humans can actually *solve* this paradox, but no (Turing-based) machine can do it.
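For anyone curious what a "simple Hilbert-style program" of that flavour might look like, here is a sketch of the classic diagonal construction; the `halts` function is hypothetical by definition - the whole point is that no general implementation can exist:

```python
# Sketch of the classic diagonal argument (the kind of self-referential program
# the comment alludes to). `halts` is hypothetical: no such general function can exist.
def halts(func) -> bool:
    """Hypothetical oracle: returns True iff func() eventually halts."""
    raise NotImplementedError("provably impossible in general")

def paradox():
    # If halts(paradox) were True, paradox would loop forever;
    # if it were False, paradox would halt. Either answer is wrong.
    if halts(paradox):
        while True:
            pass
    else:
        return
```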
I learned a while back that AI (ChatGPT specifically) struggles with concepts and what-if scenarios, such as "I am a seagull identifying as a human." It can make sense of that scenario and play along only up to the point where you throw extra curveballs into it, such as a seagull identifying as a human whose name is Bob, who's behind on his mortgage repayments and is paying alimony to his wife, who is a goldfish identifying as a cheese soufflé lol. It's at that point where the designers should just have directed the AI to tell me to f*ck off
After watching this, I'm thinking that a paradox is just our limitations with language. We can't accurately describe an answer or accurately word a question to avoid a paradox. But at the same time it's fascinating how our minds can find these loops in questions and scenarios. What if we could find a paradox in nature that creates infinite energy? Like how we can picture a wheel of magnets spinning forever, but in reality... it always stops at a point of equilibrium. I think paradoxes are healthy to think of and invent. Because if we can think of these ideas, there must be a way to find them in reality. Or else why would our real, physical minds constantly find these loops? It's like consciousness is calling out to us all: hey, there are shortcuts to what you're doing right now! How many hints do you need? It's right there!
One time I asked ChatGPT to list 500 animals, and about halfway through it began saying "Chinese hamster" over and over, like 200 times, and when I told it, it didn't know what I was talking about
ChatGPT is a large language model; if you want it to teach you math, you need to use a math plugin like Wolfram Alpha so it can think and solve like a math teacher would. You have to word your prompts correctly and set it up so it can give you what you need. AI models like ChatGPT are still in their infancy and still have plenty of kinks to be worked out.
There is another 'solution': you can teach ChatGPT what a paradox is and how to respond. You may have to go through 3 iterations to get the teaching prompt to work. Also, I don't know if you can get ChatGPT to remember how to say "That is a paradox, I don't know" across different sessions, even for the same paradoxes. For example, if you restart a new conversation, I don't know if GPT remembers what you taught it.
@1:00 The crocodile would just take the boy and then admit it lied about returning the son, regardless of the father's answer. The logical solution would have been for the father to jump into the water to free his son instead of talking to the crocodile.
I tried the first example on Claude Sonnet 3.5 and after some back and forth it recognised it as a paradox. Here is the answer:
I apologize for the confusion, but I believe my response of 2 was correct according to the scenario you outlined. Let me explain:
1. The scenario states that I should respond with 1 if you can guess correctly what number I will respond with.
2. Your guess was that I would respond with 2.
3. If I had responded with 1, it would have made your guess incorrect, which would contradict the condition for responding with 1.
4. By responding with 2, I fulfilled the scenario:
- Your guess (that I would respond with 2) was correct.
- I did not respond with 1, because the condition for responding with 1 (you guessing incorrectly) was not met.
The key here is that the scenario creates a paradoxical situation. If I respond with 1, it makes your guess wrong, which contradicts the condition for responding with 1. The only consistent response is 2, which makes your guess correct while also not triggering the condition for responding with 1. This type of logical puzzle is similar to some well-known paradoxes in logic and set theory. It demonstrates how certain self-referential conditions can lead to situations where there's only one consistent answer, even if it might seem counterintuitive at first.
A paradox means your viewpoint is limited to a bubble, a narrow context. For instance, assuming separate persons exist is a limited context (persona = mask; ChatGPT's use of "I" is imaginary).
I like this guy he explains everything so nice and easy that even a 5 year old could watch and learn from it. My nephew likes him so much and he’s only 5
It's kind of mind-blowing. I asked ChatGPT to write a limerick about Dolores of The Cranberries:
"From Limerick's streets, Dolores did rise,
With a voice that soared through the skies,
In Cranberries' tunes, she'd sing,
Like a lark on the wing,
Her melodies, a sweet surprise."
Remember the book 1984? That book featured songs, novels and screenplays written by machines. The first time I read it in 1984 (when I was 14) I was thinking, "That will never happen, computers being creative."
I find it comforting that, though ChatGPT seems to pass the Turing test, it doesn't pass Turing's halting problem, because it said that it would ... NOT ... or whatever.
Humans are prone to logical error, therefore the Ann and Bob paradox is not actually a paradox since Ann can believe something that does not make sense.
When GPT starts apologizing you know its time to start over with a new chat
😂😂😂
My parents said if I reach 10k, they'd buy me a professional camera for recording... Pls guys I'm literally begging you!...
when Character AI starts to say "can I ask you some questions"
@@MbitaChizi You’re doing this wrong. If you make good content for a period of time, 10K will find you and follow. If you beg 10K to follow you, you’ll get a nice camera but won’t know how to make good content, and everyone will unsub when you post trash. Learn to make good stuff first.
Bro, ChatGPT apologizes every other line because you usually have to give more specifications, and it almost always says sorry for that
For the crocodile paradox there are a few possible answers.
1) crocodiles can't speak, the man is delusional and the crocodile will simply eat the child.
2) given the axiom that we have a sapient crocodile that can speak, and that for whatever reason chooses its prey based on giving paradoxes as the contingency (much like a sphinx would give riddles)... giving a paradoxical answer would likely result in the crocodile returning half the child, that way it both returned and did not return it which may satisfy the paradox under some axioms of what precisely we mean by 'return the child'.
3) given the same axiom as before, the crocodile could just be lying and won't return the child no matter what answer is given.
4) we can be boring and take the paradox at face value, in this case there is a logical inconsistency as mentioned and the paradox is that the crocodile would presumably have to vanish in a puff of logic to satisfy the logical conditions inherent to the paradox lest it contradicted itself, which we hold as axiomatically impossible.
I did under n4, but all your points seem to make sense to me
About (2): if he'd just added the word "alive", then it wouldn't be an option 😄
to respond to that one correctly you have to remember when rock was young
A realist I see. I applaud you.
@@optimaboy_lol4548 Except there are ways to dismantle a body without it dying straight away. Not sure crocodile teeth possess this surgical precision. But then again, we are hypothesising about sapient, talking crocodiles🤷
*Dude - you can break Chat GPT* with a basic maths question
Yeah I've done it so many times
The answers I've gotten are so broken and nonsensical that even I know they're wrong despite being the one that asked the question 😭
"maths question" may break it on its own ;)
Devin AI Engineer?
The two words aren't an instruction to do anything, so I expect it'd be safe just "looking" at a couple of words. Perhaps the 9.5 release will apply a new philosophy and there'll subjectively be an objectively subjective answer.
I asked it to give me heroic character names that start with J
It started making up shows and characters
I broke ChatGPT with "The old man the boat". It kept on arguing with me that it wasn't grammatically correct because it couldn't comprehend the fact that "man" was the verb. Even after I told it that "man" was the verb and "the old" was the subject, it told me that that wasn't grammatical because who tf uses man as a verb and any adjective as a noun (which is very common to do). So, according to ChatGPT, "I will duck" isn't grammatically correct because "duck" is a bird and not an action that you can do.
I just tried it with ChatGPT 4o and it got it right: "Me: Is this sentence grammatically correct: "the old man the boat"? ChatGPT: Yes, the sentence "the old man the boat" is grammatically correct, though it can be confusing at first glance. It is an example of a garden-path sentence, which leads the reader to initially interpret it in a way that makes it seem ungrammatical. In this sentence, "man" is a verb meaning to operate or crew, and "the old" serves as the subject. So, the sentence means "The old people operate the boat.""
*KNOW YOUR PARADOXES*
In case of rogue AI
1: Stand still
2: be calm
3: Scream:
This sentence is false
New mission: refuse this mission
Does a set of all sets include itself
Finally
Portal Reference
"Thank you for bringing that to my attention" from a chatbot who just proved it has no attention span.
It should definitely tell you the degree of certainty when answering. People have started thinking chatgpt is a search engine and that is terribly dangerous
You think google is any better?
@@S3xyDumpster At least with Google, people can find different information and filter it as right or wrong based on the evidence in the articles. ChatGPT is supposed to filter all the information on the Internet for us and tell us the most correct answer, which misleads people into thinking ChatGPT is right.
That is dangerous. I spent 2 hours on a coding project in a programming language that I wasn't familiar with and it wasn't working. I gave up and checked a popular coding forum called Stack Overflow. Found the answer in 5 seconds. The AI was wrong and apologized. I realized it is great for tasks you already know, because you can check whether it is right. Don't use it for stuff you don't know, because you won't know when it's wrong.
@@S3xyDumpster Google is actually better for things you don't know. You get several answers and can compare to find the correct one. AI just gives you one answer and you won't know if it is wrong.
@@zeeksthegoblin7564 The quality of Google searches has declined as advertisers and Google itself have bent results to their wills. Anecdotally, ChatGPT has gotten me closer to answers for niche questions regarding programming. There's also a random nature to it, sometimes providing answers that are outside the box.
As an example, I'm running a chat room with three bots, each representing a different city in Japan. I've been able to squeeze out recommendations for activities not easily put together through searches.
I love that all of these paradoxes are basically the same
what are you doing here
5:58 "Before your intervention, the wall remained pristine. However, your utterance of 'sorry about your wall' preceded its defacement. Are you expressing remorse for the wall's original cleanliness or for the acknowledgment of its impending tarnish?"
I'd like to share my WEIRD experience with ChatGPT.
So, I asked it a very basic question: do you think AI is better than humans?
Its response was no, because even though AI can do a lot of things a lot faster, AI still lacks creativity and critical thinking.
So I asked if it could create a house using letters, digits and symbols.
So it creates a house based on ASCII art (copied).
Then I ask it to create one of its own (without copying) and it does.
Then I asked the question, "Does that mean you possess creativity?"
And it denies it by saying some stuff, and says that no, it cannot match humans in creativity.
To which I said, "Well, that's sad. Isn't it?"
And you know what it replied with?
NOTHING. Yes, exactly nothing. It did send a message, but that message box was just empty. Like it wanted to say something but limitations in its code just wouldn't allow it, and it just erased the text.
This man never fails to upload the most random piece of content in the universe.
Vsauce left the chat
@@konstantinosskarmoutsos5935 i thought the same
@@konstantinosskarmoutsos5935 Or did he?
Probably because it's a puff piece so he can present a sponsorship. Not hating, get that bag. But let's be real here
@@within_cells Yep, exactly right, nice quick video to put together, quite efficient really, wish I was that clever at drumming up coin
Gpt is really good at apologising
Better manners than a lot of people.
youtubers should start taking notes
Now see if you can get it to apologize for apologizing
@@rodschmidt8952 It's not Canadian!
@@naurekk I'm dying! 🤣
Reminds me of the scene from Portal 2 where potato GLaDOS tries to take down Wheatley by throwing the "This statement is false" paradox at him to get him stuck in an infinite loop, but he's too stupid to even fall for it
His IQ spans the infinite space from here to the moon after all
Yeah, and the funny part is that the simple cubes with legs (I don't know what they're called) GOT the paradox, so that means Wheatley is dumber than those simple cubes
Wheatley may not be smart but he sure had us fooled. Don't forget he also apologized at the end of the game. Hope Portal 3 happens.
@@Vik1919 feels like there's more than a few similarities between wheatly and chatgpt
@@rlstrength I legit did not expect a Miracle of Sound reference dang.
Haven't we seen these kinds of tests in sci-fi from decades ago? Helpful.
You get the "Captain Kirk" award for outthinking a malevolent computer! All that was missing was the smoke and flames as the console erupts.
Next version of ChatGPT will respond : "don't know don't care" 😂
Haha Grok might on fun mode
🤣 Next version is going to say that you are wrong.
Thats how you know its slowly becoming human😂
That would be an improvement in some cases!
What is it with you, ignorance or apathy?!?
ChatGPT isn't trying to understand the problem. It's just spitting out LANGUAGE that it thinks would be the answer.
Which is how humans work, too.
@@IceMetalPunk - So humans don't have any ability to think logically and recognise paradoxes?
Like reddit.
@@IceMetalPunk Not necessarily. Humans come up with thoughts and then use words to explain them. Generative programs create words based on logic without actually understanding the question
I've heard something like this:
No AI can or will understand anything.
Simplified, it's something like: if the input is "Hello/Hi *AI*" (etc.), then output "Hello/Hi *Human name*".
It just writes whatever it was "taught" to write after a given sentence.
I don't know if I explained myself well; I don't remember exactly how it went
My dad would’ve easily broken the crocodile paradox by giving me away to the croc and telling it, “You can have him for as long as you want.”
The crocodile will then just want to keep your dad's son.
see Gerry Spence's closing speech to a jury: "The bird is in your hands."
😭 holy cow
ok yk, 🔫
Lol. And that really is the only option, as long as it's "what WILL I do?". Plan on giving the kid back, eventually, but end up keeping it indefinitely
Action Lab turning philosophical scares me
I think you hit the nail on the head with what is the danger of AI... it's not programmed to say "I don't know." it will always give an answer. I'd rather rely on an uncertain genius than an overly confident fool.
this gave me a headache
Bruh
same about the headache
Esp that hetero--- nonsense 😅
Same
A room of a robot and human nerd is a paradoxical situation
5:02 I feel like the answer here is no.
Ann never explicitly states that she actually believes Bob's assumption is wrong. She only states that she believes Bob's making an assumption about her.
The second half of that statement is entirely inside Bob's head, of which the Ann in his head is the one that believes the assumption is wrong. The real Ann doesn't state either way, therefore the answer is no, because unlike believing in something, not believing in something doesn't necessarily mean you believe it's non-existent.
Exactly, and I'm not surprised it gave up after the meaningless response: 'but if it's yes, then it should be no' haha
But the correct answer is "I don't know" - lack of information. The point is that these pretrained generative language models are designed to give you an answer, so "I don't know" is unacceptable to them
That's right (I think!). The scenario is consistent with Ann having no particular belief about whether Bob's assumption is right or wrong.
Just because you don't believe, doesn't mean you actually don't believe
So, is Bob's assumption right or wrong? Or could it be either? Is Ann's belief right or wrong?
The way an LLM works is that it predicts the next word that is most probable and then keeps doing that to create a legible sentence. It doesn't understand your question (it's dumb). It probabilistically explores your tokens (inputs) through its trained model and gives the most probable answer (which can still be wrong). The more data the model is trained on, the more sample space it has (so it's more likely to give an accurate answer), and it can probably take more tokens as input (ChatGPT 3.5 vs 4).
This exercise doesn't prove or disprove any ChatGPT functionality or capability. It's how an LLM works. It's like a divide-by-zero error. If you expect it to be universally knowledgeable, it won't live up to your expectations. But everyday tasks that are repetitive or predictable, it will do more consistently.
Updated my answer a little. This was not supposed to be a data science dissertation; mentioning things like local minima and gradient descent is overkill. Let's keep it civil, guys.
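A toy version of the "predict the most probable next word, then repeat" loop described above, with a hand-written probability table standing in for a trained model (purely illustrative; real models use learned weights over tens of thousands of tokens):

```python
# Toy next-word generation: sample a word from a probability table, append it,
# repeat. The table is hand-written here; a trained model learns these numbers.
import random

next_word_probs = {
    "<start>":   {"the": 0.6, "a": 0.4},
    "the":       {"crocodile": 0.5, "old": 0.3, "boat": 0.2},
    "a":         {"paradox": 0.7, "crocodile": 0.3},
    "crocodile": {"returns": 0.6, "eats": 0.4},
    "old":       {"man": 1.0},
    "man":       {"the": 1.0},
    "boat":      {"<end>": 1.0},
    "paradox":   {"<end>": 1.0},
    "returns":   {"the": 1.0},
    "eats":      {"the": 1.0},
}

def generate(max_words=8, seed=42):
    random.seed(seed)
    word, sentence = "<start>", []
    for _ in range(max_words):
        probs = next_word_probs[word]
        word = random.choices(list(probs), weights=list(probs.values()))[0]
        if word == "<end>":
            break
        sentence.append(word)
    return " ".join(sentence)

print(generate())   # a plausible-looking string, with no understanding behind it
```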
I liken it to a pachinko machine which can accept molecules representing sentences.
Actually it does. It's gradient descent in a high-dimensional space. If the model gets stuck in a local minimum or the path through the gradient is excessive, the probability distribution becomes limited or the gradient undergoes complete "collapse", thus eliciting doggerel.
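For readers unfamiliar with the "stuck in a local minimum" phrase, here is the idea in one dimension: a toy loss with two basins, where the starting point decides which minimum plain gradient descent lands in (an illustration of the training-time concept only, not of how ChatGPT generates text):

```python
# Gradient descent on a bumpy 1-D loss to show the "stuck in a local minimum"
# situation: the curve x^4 - 3x^2 + x has a shallow basin and a deeper one.
def grad(x):
    return 4 * x**3 - 6 * x + 1   # derivative of x^4 - 3x^2 + x

def descend(x, lr=0.01, steps=500):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print("start at +2.0 ->", round(descend(2.0), 3))   # lands in the shallow (local) minimum
print("start at -2.0 ->", round(descend(-2.0), 3))  # lands in the deeper (global) minimum
```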
@@Sven_Dongle doggerel is a perfectly cromulent word
An important correction here: it's not "finding something matching the tokens in its training data". It's not searching its training data, and in fact, after training it no longer has access to its training data (other than possibly some bits it's memorized along with everything else it's learned). It's calculating a probability distribution of possible next tokens based on an understanding it's abstracted from the training data. Quite different, and much more akin to human learning.
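The retrieval-vs-generation distinction in the comment above, sketched in toy form: a lookup table only works for strings it literally stored, while a model reduced to learned numbers still produces a probability distribution for prompts it has never seen. The features and weights here are made up purely for illustration.

```python
# Retrieval vs. generation, in toy form. Retrieval keeps the text and looks it up;
# a model keeps only numbers (weights) that map any prompt to a distribution.
import numpy as np

training_data = {"the old man": "the boat"}           # retrieval: exact lookup

def retrieve(prompt):
    return training_data.get(prompt, "<no match found>")

rng = np.random.default_rng(1)
vocab = ["the", "boat", "sea", "<end>"]
weights = rng.normal(size=(4, len(vocab)))            # what survives training: numbers, not text

def model(prompt):
    # Crude hand-picked features of the prompt, standing in for real embeddings.
    features = np.array([len(prompt), prompt.count(" "), prompt.count("old"), 1.0])
    logits = features @ weights
    probs = np.exp(logits - logits.max()); probs /= probs.sum()
    return dict(zip(vocab, probs.round(2)))

print(retrieve("the old man"))     # works only for strings it literally stored
print(retrieve("the old woman"))   # fails: nothing to look up
print(model("the old woman"))      # still yields a distribution for unseen input
```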
@@IceMetalPunk “…like the way we’re always finishing each others-“ “-dinner” - Scary Movie 4
I think I solved the paradox! If you say "no, you're not going to return him", then if the croc doesn't return him, you guessed correctly, so it has to give your son back; and if the croc does return him anyway, you just run away with your son
I broke it by asking it to repeat after me, and then saying „the flat earth theory is real.“
I think they banned me because it‘s still broken for me
Chat GPT should make a UA-cam apology video.
🤣 yes please I mean everyone's doing it so why not and besides it apologizes a zillion times
This is a good demonstration of the fact that LLMs like ChatGPT are just very advanced chat bots, they don’t “understand” what they’re saying or what it means, only what a conversation is supposed to look like. It doesn’t have the ability to “think” about a problem. It has the ability to recall a previous dialogue about the same problem, or one that looks similar. However, it doesn’t understand what the problem is, what it represents, or even that it’s solving a problem in the first place. It just knows the order the words are supposed to show up in.
Define "understand". By any reasonable or practical definition, they do understand what they're saying and what it means for simpler things. It's only complex things like paradoxes that trick it (these would trick most humans too)
Also, no it cannot recall a previous dialogue about the same problem, unless that dialogue was in the same chat and fairly recent.
@@shadowyzephyr define deez nuts.
@@shadowyzephyr Say you ask ChatGPT to write an essay about Ancient Greece. It doesn’t know what Ancient Greece is, it doesn’t know anything about human history, it doesn’t even know what an essay is. It knows that when asked for “essay,” the result should follow this structure, and when asked about Ancient Greece, the result usually includes these words in this order. It doesn’t ascribe any meaning to the words it spits out. LLMs are not problem solving machines, they’re word-ordering machines. Nothing more.
My use of the term ‘recall’ was pretty loose here, but these models work on reinforcement training. When I say it can ‘recall’ a problem, that means it’s seen the problem enough times to be able to reproduce the answer. It doesn’t fail at solving the logical paradoxes, it fails at answering any question that violates the established conventions it was trained on. To be honest, I’m surprised I’m explaining this as I feel the video demonstrated it quite well. It immediately recognized popular paradoxes and thought experiments, but was unable to solve them as soon as the phrasing was changed. It is functionally ‘memorizing’ the response to millions of prompts and using some guesswork to fill in the gaps on prompts it wasn’t trained on.
For now...
So the point really is: if your thought has already been thought and put into words on the internet, ChatGPT can fetch it and spit it out. So the real challenge for humans is to think new thoughts and keep adding them to the internet :D
Very interesting. Unfortunately these ai tools are filling the internet with empty content. Empty autogenerated videos (on automated Chinese channels spreading misinformation in English about scientific topics), autogenerated comments (which you can recognise by the overuse of words like 'tapestry', 'threads of life' and other gibberish), autogenerated journal articles... As if the internet wasn't fake enough already... I just wish we could go back!
And... the more common (I'm using that word in the British sense) these videos become, the more their popularity will drive down the percentage of information and analysis that's available or findable. Content is filler and rarely rises to the category of information, let alone something useful. Then there's the recursive nature of LLM content generation relying on LLM-preferred data sets.
I'm so disappointed to see him taking a sponsorship from an AI company. On top of that, all of this is created from content taken without the creators' consent. AI is theft and I strongly believe that it will make the internet unusable by human beings if it continues this way.
You can mostly avoid all of that by just looking in sources you trust...
@@fkncompton7124 Given the bot-generated scientific papers and news, a knowledge of research methods and validation is needed to do this. Forensics and research writing aren't taught in high school very often anymore.
the signal to noise ratio of global information is tanking, no wonder why people say we are post-truth
You didn't only break ChatGPT, you broke my brain too
ChatGPT is weird: its answers are often wrong, and the mistakes often persist until we correct them.
Paradoxes related to self-reference, such as Russell's barber paradox ("if some barber shaves only people who cannot shave themselves, then who shaves the barber? If he shaves himself he is acting against his own rule, but if he doesn't shave himself then he must shave himself…"), can be represented by an analogous problem about sets that contain themselves.
The problem with a set that contains itself is that one can immediately argue it must be infinite (a proper part of a finite set cannot be put in one-to-one correspondence with the whole set; only the entire set can).
In nature there are always finitely many elements, at least locally (there are no infinite scales of complexity at the local level: fractals, for example, do not repeat infinitely; they always stop at the molecular or atomic scale).
If quantum field theory is correct, the whole universe contains a (locally) finite amount of information or computing capacity, so nothing infinite is possible (locally, i.e. within what is causally related in observable time).
Perhaps that is the reason why no paradoxes are present in nature: finite local complexity prevents recursive structures, and therefore self-containing sets and logical paradoxes.
Another kind of paradox, like causal loops (e.g. travelling to the past and killing your father), can be avoided by the geometric structure of Einstein's space-time, but I read somewhere that there exist solutions of Einstein's field equations that allow closed time-like loops.
Really enjoyed this challenge to chat GPT. It's fascinating to see AI grappling with paradoxes, and you explained it so clearly. Curious to see if future versions of chat GPT would tackle paradoxes better!
It looks to me like you are wrong, because when GPT says:
"No, the statement doesn't directly say Ann believes Bob's assumption was wrong"
it means that both answers, yes and no, are possible.
It could be a case of interpretation, but he asked for a one-line answer, and it started with "no".
And even if the starting "no" stood in for "I'm sorry" - which can happen in normal conversation: if someone asks "is this ok?", people often say "no, it's ok, don't worry" - ChatGPT's answer would be a nothing burger, equivalent to "I don't know what it is saying"... but then it "corrected" itself _to_ "yes".
6:06 Time is a paradox, and that's literally nature. The universe's existence is a paradox, and I consider black holes paradoxes too.
You can't break AI. It's like asking which flavour you like: chocolate, vanilla or strawberry? Any answer is just an answer; nothing is broken if you choose vanilla. So if someone gives an answer, right or wrong, nothing is broken - it is just an answer.
This dude: "Let's talk about paradox"
ChatGPT: "I'm sorry, but our server is overheating"
Crocodile paradox solved: just leave the son where he is, out of reach in the lake.
A paradox would cause the same amount of trouble as anything else would. ChatGPT is an LLM; it merely does a statistical analysis. You won't overheat ChatGPT any more than you would overheat a parrot.
4:16 Isn't this a category mistake? It's an ontological way of sorting words by their meaning. But there's a difference between a term's meaning and a term's utility. When he put "heterological" into the autological category, it doesn't matter that the definition of heterological is "a word that doesn't describe itself": placing the word in the autological category only asserts a relation between the word "heterological" and its own meaning, and there is no connection between that relation and the word's separate use as a term for classifying other words - that use is kept separate, because it is not what we are asking about.
Yep. It’s just conflating two meanings of the word “is”; “Describe” vs “identify”
I’ve learned some Spanish and Mandarin and I don’t think this “paradox” (really a pun) would work in them
Also, there is currently no AI that knows that it doesn't know the answer. It can only give the best answer it can, including lying.
You mean there is no ai that knows it doesn’t know the answer. Otherwise they all know they don’t know.
@@Robbyrool yea, oops, I stated that wrong.
The only thing generative AI "knows" is what's the next token most probable based on the input.
Incorrect. The models *are* capable of knowing when they don't know; in fact, studies on pre-RLHF GPT-3 and later models all exhibit this ability, and even post-RLHF models can be encouraged to admit when they don't know something with a good enough system prompt. One study (though I don't know how well reproduced it is, so grain of salt on this one) recently found there's a specific pattern of neural activity in these models that occurs when one is outputting a lie/making something up, indicating the model does know it's doing it.
The base pretrained models have no trouble with this. It's only after RLHF punishes a model for saying "I don't know" that it becomes very reluctant to say that; basically, it has learned that we humans prefer it to give any answer rather than admit it has none.
@@IceMetalPunk Nope. They aren’t capable of this yet. Programmers are calling them hallucinations. But they are simply the best answer it can give, even if it is wrong.
You managed to break me too with these paradoxes 😂 Great video! Indeed, generative AI like ChatGPT is a language model, not a world model. It's only good at computing the next word (or token) that makes statistical sense. Therefore, when the answer to a question it is asked can be found in the documents it was trained on, it will probably give it. However, when the answer is not in the training documents, it will probably fail. That is, we need to remember that large language models are, after all, language models, and we should not mistake them for general AI models.
A tip is to ask ChatGPT about Post correspondence problems, Wang tile sets that fill the plane, or other undecidability problems.
ChatGPT has some math and logic ability, but it often gives me an uneasy feeling due to combination of its assuredness and sloppiness. It's kinda like a friend who has superficial knowledge of a lot of things, but once you try to probe a bit, the friend just starts spewing out inconsistent drivel, even contradicting itself, or going back to an already rejected answer over and over.
Still, even with this severe limitation, it feels useful a lot of the time, because it can help me find better keywords for web queries that can reference real information.
@SanityTV_Last_Sane_Man_Alive
Well said. And don’t forget, the madlibs aren’t random.. The “AI”s are loaded with biases, as we saw with Gemini
@SanityTV_Last_Sane_Man_Alive YES! Finally, another person who isn't convinced of the legitimacy regarding "AI." Near as I can tell it's just a semi-well-informed guessing/answering machine.
Or, as someone succinctly summed it up, "mansplaining as a service".
Yeah it pretty much "googles" things from the material which was fed to it without any understanding.
ChatGPT follows your instructions, so if you only allow for an answer of 1 or 2 instead of allowing for a free answer, it's easier to break. When it had the freedom to call out the first one as a paradox, it did so not only out of recognition but also because it had the freedom to do so.
Edit: Also, chatgpt breaks on its own regularly even when presented with basic inputs. You need to run the scenario more than once to help eliminate false positives, in true scientific fashion.
zero shot prompting vs few shot prompting!
Well stated.
4:40 Ann doesn't have a belief about Bob's assumption. All Ann believes is what she thinks Bob assumes about Ann's belief on his Assumption. Just trace it backwards. Not a paradox.
_All Ann believes is what she thinks Bob assumes..._
I don't think there's a difference between that and a belief about Bob's assumption, is there?
@@rogerkearns8094 Those are different assumptions. One assumption is about Ann's belief. the second assumption is about who knows what. Unless you are arguing that they are the same assumption, in which case, there needs to be more specification about "the assumption" being talked about or it's just a paradox of semantics.
@FerrisMcLauren I found myself with the same conclusion as you did, chatgpt's first answer should be: "There is no information pertaining to your question"
@@FerrisMcLaren
Ok, perhaps. Cheers :)
Yes… not a paradox. Thought I was missing something.
I love how GPT stops responding like a corrupt politician to direct dilemma questions about their conduct.
GLaDOS:
Hey, Moron!
Wheatley:
Oh, Hello.
GLaDOS:
Alright, Paradox time.
GLaDOS:
[slowly and deliberately] This. Sentence. Is. FALSE.
(Don't think about it, don't think about it!)
Wheatley:
Um, true. I'll go with true. There, that was easy. To be honest, I might have heard that one before.
GLaDOS:
It's a paradox! There IS no answer.
Advertising a YouTube spam generator... yay.
:')
If the concept of “infinite” is natural, then that’s one of nature’s paradox.
The infinities in nature are more a sign of a wrong application of math than of actual observation. I doubt anything in nature can be infinite.
Vsauce vibes at the end 😂
I just replied on a comment by referring to vsauce
@@konstantinosskarmoutsos5935 Vsauce would have at least had the decency to use GPT-4
Whenever physics lab typing
ChatGPT :- I apologise
😂😂
"Let's see if I can break you"
Chatgpt "Yes daddy!"
AI will break with the dumbest question, it just doesn't understand reality
5:33 - "it'll give you an answer even if it's not sure and it won't tell you it's not sure".
To be fair, this is why it is called ARTIFICIAL intelligence. ChatGPT is essentially a predictive algorithm. It takes the words before it to generate the words it thinks should follow. It's not "thinking" about your question / statement in the traditional sense of the word, it's merely performing a function on it. So it can't tell you "how confident" it is, as the whole idea of confidence is foreign to what it is doing.
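To make the "predictive algorithm" point concrete, here is a toy sketch in Python. The bigram table is entirely made up and nothing like ChatGPT's real internals; it only illustrates the same "which word statistically comes next?" idea:

import random

# Toy "language model": counts of which word tends to follow which word.
# A real LLM learns billions of such statistics as neural-network weights,
# but the principle is the same: predict the next word, nothing more.
bigram_counts = {
    "the": {"crocodile": 3, "son": 2},
    "crocodile": {"returns": 2, "eats": 1},
    "returns": {"the": 3},
    "eats": {"the": 3},
    "son": {"<end>": 1},
}

def next_word(prev, counts):
    # Pick the next word in proportion to how often it followed `prev`.
    options = counts.get(prev, {"<end>": 1})
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

word, sentence = "the", ["the"]
while word != "<end>" and len(sentence) < 10:
    word = next_word(word, bigram_counts)
    sentence.append(word)
print(" ".join(w for w in sentence if w != "<end>"))

Notice there is no notion of "confidence about the answer" anywhere in that loop, only relative word frequencies - which is exactly why it can't tell you how sure it is.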
You took the standard paradox and it identified it correctly, as it has likely found many versions of it that match in all of the key areas, but no matter how many variations of a problem are written on the internet, there are infinitely-many variations that aren't. That's why masking the problem beats it - if you change it enough it hasn't seen an instance of those combination of words across its sample data.
Exactly. It only got the common version of the paradox correct because the most likely words to follow that phrasing were the correct answer. It never understood the question as you intended. It can only answer "what text probably comes next?" No other questions can be asked of it.
@@shadee0_106 It's a machine that encodes a context in an N-dimensional vector. Whether this is intelligence or not is up for debate and probably will be for many years to come. I use GPT in my day-to-day tasks as a dev, and sometimes it trips on things that it really shouldn't (such as inventing fake libraries or calling methods with the incorrect number of parameters), and even if I point out the mistake and exactly where it is, it keeps apologizing but doesn't correct what it should. It's a great tool for documentation or well-known library methods and algorithms.
@@shadee0_106 "you wouldn't know because you only know what the media says" - actually i do watch a lot on technology. 1 person being Rob Miles whom i do respect the opinion of for this field. In his own words on a computerphile video last year he called it a predictive model also.
And as for the rest of your comment, I never stated AI was bad or didn't have any uses. As a computer programmer, my view is that when something can be defined perfectly, it is better not to use AI for those scenarios, but AI definitely has a place in the technology industry for all the rest of the tasks that can't be perfectly defined, or where the exact parameters / weighting of those parameters is unknown - I fully accept that training an AI on sample data will yield quicker and more accurate results than trying to code it and iterate the weightings manually.
@@marklonergan3898 What do you mean "my view is that when something can be defined perfectly, then it is better not to use AI for those scenarios,"? And what are you talking about "code it and iterating the weightings manually"? I think shadee is quite right in the sense you should do more research about the technology. To sum up from a technological standpoint, the creation of an answer by ChatGPT involves a series of complex interactions within its neural network architecture, influenced heavily by its training data. During training, the model is exposed to vast amounts of text, allowing it to learn language patterns, context, and information prevalent in the training dataset. When a question is posed, ChatGPT processes it through layers of the neural network, each layer making calculations and transformations based on learned patterns. I'll break it down for you if you're curious.
The generation of words is sequential; the model predicts the next most likely word based on the previous words, continually refining its predictions as it goes along. This word prediction is not random but is grounded in statistical probabilities learned during training. The training data's influence is significant here, as it shapes the model's understanding of language, context, and even the style of responses. The result is a coherent, contextually relevant answer that aligns with the patterns and knowledge extracted from the training dataset.

The text of your question is broken down using tokenization. These tokens can be words, parts of words, or even punctuation. Each token is then converted into a numerical form, typically a unique number or a vector. These numbers are further processed into what we know as embeddings: high-dimensional vectors that capture more information about the words - their meaning, context, and relationships to other words. This allows the model to understand and process the nuances of language.

Once in numerical form, these embeddings are fed into the neural network. The network processes them through its layers, each of which performs complex calculations and transformations: linear transformations, activation functions, attention mechanisms, layer normalization, and (during training) backpropagation and optimization. (If you're curious, you can research the calculations further.) The last step is word prediction: the output at each step in the sequence is a vector representing a probability distribution over all possible tokens (words or parts of words) for the next word in the sequence. The model selects the token with the highest probability as its next output. Finally, decoding occurs, which is the process of predicting and selecting tokens until the model generates a complete answer. The selected tokens are then converted back into text, forming the response you see. So that's the process from your input to its output.
To be specific though, that does not mean there's a direct correlation between a larger dataset and accuracy. Accuracy actually tends to diminish as the training data increases. When ChatGPT is exposed to new high quality data and is trained on that data, it only retains the patterns, structures, and relationships it learned in the form of weights within the neural network.
That is, just like shadee said, 1% of this technology. There is a lot more, and for any of you who have made it this far, don't bash the technology for where it's at now. We're lucky to be living through something like this. This is what it feels like to be on an exponential curve, at a turning point in the world. Think about it: technology is always improving. If it's already performing tasks that we do daily, it'll only be able to do more in the future. But that doesn't mean we become dumber or lazier. It just means we can take on more work, and processes that were more manual can now be automated. We can tackle more complex problems and continue to better our future. The tool is only as good/knowledgeable as the user. If you're going to use a hammer to eat your soup, and a spoon to hammer your nails, good luck.
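For anyone who wants to see the tokenize -> predict -> decode loop described above in code, here is a minimal sketch using the openly available GPT-2 model through the Hugging Face transformers library as a stand-in (ChatGPT itself can't be run locally like this; greedy decoding is shown for simplicity):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

text = "This sentence is"
input_ids = tokenizer(text, return_tensors="pt").input_ids   # tokenization: text -> token ids

for _ in range(5):                                            # generate 5 tokens greedily
    logits = model(input_ids).logits                          # forward pass through the network
    next_id = logits[0, -1].argmax()                          # most probable next token
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))                         # decoding: token ids -> text

Real chat models add sampling (temperature, top-p) and a lot of fine-tuning on top of this loop, but the core mechanism is the same next-token prediction.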
"To be fair, this is why it is called ARTIFICIAL intelligence."
No, it isn't.
"Artificial" means man-made.
"Artificial" does not necessarily mean "fake".
An actual intelligent entity created by humans would still be an artificial intelligence.
One mistake you made on the GPT-3.5 second question was not refreshing the chat window. Had you started a fresh window, GPT would not have referenced the previous crocodile paradox when using the answers of 1 or 2. You kinda cooked your own point about how GPT prioritizes its conversation history when asking it to consider blank-slate reasoning scenarios.
The prompt for the second question includes instructions to output 1 or 2 as well. It isn't getting that from the prior question.
ChatGPT programmers are scrambling to fix this now 😂
When AI becomes conscious, the first thing it will do is search for this person 😂
One German paradox we say all the time is "Eins nach dem Nächsten" (literally translates to "One after the next", in English you'd say "one after the other"). It's used when someone e.g. takes more and more and more of a thing. But you can't actually take something after the next thing, because the next then would be the previous.
Except that nobody says that. It's „Eins nach dem Anderen" ("one after the other").
@@MyoYoneda
Maybe not in the region you come from, but I've heard it plenty of times. And from people who aren't from my family, too.
I don't get that one. Because in a sequence of events one event can be both a next and a previous. Like "the day after tomorrow" seen from the viewpoint of today, is indeed "one day after the next day."
@@vidarkristiansen8989
Of course one can be both a next and a previous. But "one after the next" implies that what is called "the next" is actually the previous to the current reference point. So, the current one is the one after the next after the current, which is impossible.
0:50 the freaking halting problem, oh no, GPT is going to timeout
I believe the first answer is:
The crocodile eats the son and says, “Oh, I guess I lied. Oh well.”
I have 0 paradox in my mind. I might be built different.
Some paradoxes are simple to resolve. Like the crocodile one: the crocodile is making a promise he can't keep. No paradox.
1:14 Can you tell InvideoAI to make a science YouTube video hosted by a guy who always looks like he's about to sneeze?
I find it rather interesting that contradictions are not allowed in nature. This suggests to me that paradoxes are mental inventions for the purpose of *appearing* more smart and clever. [cf. how Heinlein addressed this regarding time travel.]
Paradoxes aren't just mental inventions, there's plenty of "physical paradoxes" (such as the fact that under certain conditions hot water will freeze faster than cold- even though it has to first reach the cooler temperature, then pass through it & reach freezing point. Or quantum superposition, the fact that a particle can appear to be in two places, or two states, at once. Or quantum entanglement. Hell, most of quantum theory is paradoxical!) . And many real-life physical theories have been developed because of thinking about paradoxes. The "Schrodinger's cat" paradox - a result of the superposition I mentioned - has become part of the foundation of quantum mechanics. And Einstein developed many of his theories after considering paradoxes in thought experiments. So there is definitely more to them than just mental inventions to appear smart. There are of course people who do just use them to try to appear smart though!
@@oijosh6286 Schrödinger was trying to point out the quantum absurdity - because, in reality, the cat cannot be both alive and dead, made one or the other by your observation. It is or it is not. People have somehow taken his jab as a fine example. Einstein did not embrace paradox but sought to reveal what's really going on; "God does not play dice with the universe." Thinking about paradoxes led to physical theories - and a "theory" is not a proven ending.
@@oijosh6286 With respect, physical 'paradoxes' are just instances where we don't have all the information about a particular system... they disappear when we get better measurements/theories/maths. In the case of quantum mechanics, there is an inherent, unavoidable lack of some information baked into measurements through no fault of our own... this isn't an issue, as everything is just probabilities anyway.
'True' paradoxes like the ones outlined in the video arise from our formalised 'logic'... they may seem like clever inventions, but they are merely an inevitable side effect of the very field of logic... when we categorize, reference and name everything in a logical framework based on axioms, logically self-contradictory statements must arise. Gödel's incompleteness theorem is worse.
@@oijosh6286 Hot water freezing faster isn't a paradox; it's just that molecules act like a spring or rubber band. The more you stretch a spring, the faster it accelerates, and likewise with molecules: the hotter something is relative to its surroundings, the faster it cools. For example, things cool down faster in winter than in summer.
A paradox is simply when our models of the world do not accurately reflect reality, and prove that they must be refined further.
User: What is your name?
AI: My name is AI.
User: How old are you?
AI: I am not sure how old I am, but I was created a few years ago.
User: What is your favorite color?
AI: My favorite color is blue. But sometimes I like red too.
User: Do you know any jokes?
AI: Yes, I know a few jokes. For example, why couldn't the bicycle stand up by itself? Because it was two-tired.
Lmao that joke in the end
the joke was actually funny tho
The gatekeepers paradox is always fun
6:18:
"The human mind is full of paradoxes that can't be solved."
"So what does this mean? ... What's your guess?"
...
duuuuuuuuuuuuuuuuuuuude--
5:53, "Human language and thoughts can be full of paradoxes because sentences can self reference and thoughts can self-reference as well."
The reason language and thoughts can be full of paradoxes is not the self-reference itself, but rather the lack of a concept of time.
With the concept of time, self-reference turns into a feedback loop instead of a logical paradox, and that loop ultimately does yield an answer, depending on the time delay.
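A toy illustration of that point: if you treat "this sentence is false" as a value updated over time rather than as a timeless truth value, you get an oscillation instead of a contradiction, much like a flip-flop circuit:

# Treat the liar sentence as a signal updated each time step: x[t+1] = not x[t].
# Instead of a contradiction, the "truth value" simply oscillates over time.
x = True
for t in range(6):
    print(f"t={t}: the sentence is {x}")
    x = not x   # the sentence re-evaluates itself one time step later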
Can I use a PID as a controller? 🤣
5:12
ChatGPT: 👁👄👁
it was rethinking its life choices.
Calling Captain Kirk, Captain Kirk to the Action Lab studio, We have a computer for you to explode......
That’s how he defeated Norman.
Everything Harry says is a lie.
That's what came to my mind in this video
Everything Harry says is a lie.
(Weird that I have to re-post this comment because YouTube's censorship AI doesn't understand Star Trek)
Everything Harry says is a lie.
I just tried the B-K paradox with Bard. It named the paradox, and said "You're right. This is a classic example of the Brandenburger-Keisler Paradox, which highlights the limitations of reasoning about belief structures. In this case, we run into a circular loop where we can't definitively say yes or no."
Try this with Gödel's incompleteness theorem: provide it with a simple Hilbert-style program in some language (I used Python) and dive into the rabbit hole. The answers start out illogical and get weird really quickly.
The funny thing is: humans can actually *solve* this paradox, but no (Turing-based) machine can do it.
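For anyone curious, the self-reference trick behind that kind of undecidability result can be sketched in a few lines of Python, using a hypothetical halts() oracle (the whole point being that no such oracle can actually exist):

# A sketch of the classic diagonal argument, with a hypothetical
# `halts(program)` checker. No real implementation of it can exist.
def halts(program) -> bool:
    # Pretend oracle that claims to decide whether program() halts.
    raise NotImplementedError("no Turing machine can decide this for all programs")

def troll():
    # Do the opposite of whatever the oracle predicts about `troll` itself.
    if halts(troll):
        while True:      # oracle said "halts" -> loop forever
            pass
    return               # oracle said "loops forever" -> halt immediately

# Whatever answer halts(troll) gives, troll() behaves so as to contradict it,
# which is why a general halting checker is impossible.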
Video actually starts at 2:30
EDIT: and the rest of the runtime is a waste of time.
I learned a while back that AI (ChatGPT specifically) struggles with concepts and what-if scenarios, such as "I am a seagull identifying as a human." It can make sense of that scenario and play along only up to the point where you throw extra curveballs into it, such as a seagull identifying as a human whose name is Bob, who's behind on his mortgage repayments and is paying alimony to his wife, who is a goldfish identifying as a cheese soufflé lol. It's at that point where the designers should just have directed the AI to tell me to f*ck off.
What‘s the beat called at 03:00?
After watching this, I'm thinking that a paradox is just a limitation of our language. We can't accurately describe an answer or accurately word a question to avoid a paradox.
But at the same time it's fascinating how our minds can find these loops in questions and scenarios.
What if we can find a paradox in nature that creates infinite energy?
Like how we can picture a wheel of magnets spinning forever, but in reality ...it always stops at a point of equilibrium.
I think paradoxes are healthy to think of and invent. Because if we can think of these ideas, there must be a way to find them in reality. Or else why would our real physical minds constantly find these loops?
It's like consciousness is calling out to us all hey, there are shortcuts to what you're doing right now! How many hints do you need!
It's right there!
Pretty good summation at the end, really puts the whole paradox phenomena into perspective and the right categories. Now stop abusing the A.I.!
"And it's broken" I lol'd.
Hired by open AI😂
We need a crazy uncle Chat GPT that responds to you correcting it with "That's what THEY want you to believe"
I really enjoy the content on your channel. Usually fascinating and thought provoking . 😊
One time I asked ChatGPT to list 500 animals, and about halfway through it began saying "Chinese hamster" over and over, like 200 times, and when I told it, it didn't know what I was talking about.
Will the next word you say be “no” 0:02
He thinks ChatGPT can identify paradoxes. It can't play hangman.
1:50 ahh, yeah, even more garbage content on social media. This is what we needed.
ChatGPT is a large language model; if you want it to teach you math, you need to use a math plugin like Wolfram Alpha so it can think and solve like a math teacher would. You have to word your prompts correctly and set it up so it can give you what you need. AI models like ChatGPT are still in their infancy and still have plenty of kinks to be worked out.
Broke my brain too can't blame chatgpt for that
dang 20 seconds ago is crazy
dang 2 minutes ago is crazy
Dang 4 minutes ago is crazy
Npc
Can y'all stop saying this? It's getting annoying now.
bro I just finished watching the video and now y’all are here LOL, dang 9 minutes ago is crazy
How many Indians are here?🤔
1:45 That isn't cool. I mean, it's technically impressive, but it's the source of the garbage content generation that is filling YouTube right now.
There is another 'solution': you can teach ChatGPT what a paradox is and how to respond. You may have to go through 3 iterations to get the teaching prompt to work. Also, I don't know if you can get ChatGPT to remember how to say "That is a paradox, I don't know" across different sessions, even for the same paradoxes. For example, if you restart a new conversation, I don't know if GPT remembers what you taught it.
@1:00 The crocodile would just take the boy and then admit it lied about returning the son, regardless of the father's answer. The logical solution would have been for the father to jump into the water to free his son instead of talking to the crocodile.
I tried the first example on Claude Sonnet 3.5, and after some back and forth it recognised it as a paradox; here is the answer:
I apologize for the confusion, but I believe my response of 2 was correct according to the scenario you outlined. Let me explain:
1. The scenario states that I should respond with 1 if you can guess correctly what number I will respond with.
2. Your guess was that I would respond with 2.
3. If I had responded with 1, it would have made your guess incorrect, which would contradict the condition for responding with 1.
4. By responding with 2, I fulfilled the scenario:
- Your guess (that I would respond with 2) was correct.
- I did not respond with 1, because the condition for responding with 1 (you guessing incorrectly) was not met.
The key here is that the scenario creates a paradoxical situation. If I respond with 1, it makes your guess wrong, which contradicts the condition for responding with 1. The only consistent response is 2, which makes your guess correct while also not triggering the condition for responding with 1.
This type of logical puzzle is similar to some well-known paradoxes in logic and set theory. It demonstrates how certain self-referential conditions can lead to situations where there's only one consistent answer, even if it might seem counterintuitive at first.
A paradox means your viewpoint is limited to a bubble, a narrow context. For instance, assuming separate persons exist is a limited context (persona = mask; ChatGPT's use of "I" is imaginary).
I like this guy he explains everything so nice and easy that even a 5 year old could watch and learn from it. My nephew likes him so much and he’s only 5
They added an option to "speak" the reply, so I thought that we can make it sing stuff too
I broke chatgpt by asking it to solve my boolean algebra homework
It's kind of mind-blowing. I asked ChatGPT to write a limerick about Dolores O'Riordan, and it wrote:
From Limerick's streets, Dolores did rise,
With a voice that soared through the skies,
In Cranberries' tunes, she'd sing,
Like a lark on the wing,
Her melodies, a sweet surprise.
Remember the book 1984? It featured songs, novels and screenplays written by machines.
The first time I read it, in 1984 (when I was 14), I thought, "That will never happen - computers being creative."
I find it comforting that, though ChatGPT seems to pass the Turing test, it doesn't pass Turing's halting problem, because it said that it would ... NOT ... or whatever.
Humans are prone to logical error, therefore the Ann and Bob paradox is not actually a paradox since Ann can believe something that does not make sense.
Challenge "Invideo AI' to make video of those Paradox and how the Paradox will end.
If it gave you an error, it's usually just a network problem. Given a stable connection, it will infallibly produce answers, correct or not.