Assassin's Creed mode on 😎👋
I agree with you. I think the scale of internet-level data is vastly underestimated, so even absolutely remote use cases are still somewhere in the training data. Take, for example, computer vision datasets with images nobody had ever noticed that end up surfacing 3-5 years *after* the dataset has been the standard benchmark (e.g. there's a pug in a microwave in the COCO dataset).
AGI that doesn't drive is not AGI....
FIGHT ME
Not driving is an intelligent response, especially if you've learned about driving from the internets.
I know smart people who can't drive, and perfect idiots with a driving licence. Hawking was one of the smartest people in his field but, at the end, could only output one word at a time using his eyes. Then consider a smart teenager who is not allowed to drive because of the age limit. Is the teen not smart? ...Anyway, I agree that AGI will solve automated driving.
Stay strong! Claude 3 DID demonstrate consciousness and self-awareness... or that it's read a lot about them and knows when to riff on them as if it's in the opening chapter of a science fiction story.
People would legit think the function print("I am self aware") is proof the computer is conscious.
Claude's authors really went over the top with the hype and pretension of intelligence.
People would legit think your comment is proof that you're conscious.. but I don't think so.
People seem to ignore the fact that an algorithm, even our own minds, can process language without being conscious and, vice versa, can be conscious and unable to process language at all. Language processing ability is not a sign of consciousness. It's just an ability.
Agreed. Even further, consciousness is pre-human, and has nothing to do with our higher level cognition. This is the main idea of “The Hidden Spring”.
Are you sure about that? I have not seen evidence of an unconscious person being able to process language.
Sleep talking (it's called Automatic Speaking) @@WiseWeeabo
I don't know why people would classify sleep as unconsciousness. I think that's just a word game more than anything else. In lucid dreaming, for example, you are consciously aware even of the fact that you are dreaming, so saying you're unconscious when you sleep does not seem productive to me.
Non sequitur @@WiseWeeabo
Please review the Stable Diffusion 3 paper!!!!!!
It is self-aware in the sense that it knows it's an LLM. ChatGPT is also self-aware in that sense.
That doesn't mean it's sentient. It's not like human. Just self-aware. It's not a difficult target.
I don't know if you can call that self-aware. Self-awareness is the ability for an entity to recognize its own consciousness. We have no clue if LLMs are conscious or not (they're not though).
I think what you meant to say was that it has knowledge of itself. This is very different from being self-aware.
@@pladselsker8340 only if you ascribe mythical properties to a concept of self-awareness.
You're trying to bring back religious concepts like "soul".
Self-awareness is awareness of oneself, by definition. Anything else is BS.
Nobody agrees on what these terms even mean but as far as I can tell 'self aware' and 'sentient' are being used as 'biologically human', so no matter what happens these things are deemed 'not sentient'. Which I think makes these terms useless.
I can hear techbros, PR businesses, philosophers, and non-STEM influencers grunting.
I know what would convince me, and it would convince you too. I can't say any more because I don't want to get scooped.
Holy shit! He doesn’t have sunglasses on
I like the way Mike Pound from Computerphile explained this. Leaving aside the question of whether any machine could be conscious in the first place, an LLM isn’t one of them, and that’s obvious when you know how it works. It’s not even on, except for the brief period of time when it’s firing a response - it doesn’t have experiences, or time to reflect on those experiences. If you’re interacting with the base model instead of a fine-tuned instruct model, it will give completely contradictory answers to “personal” questions based on slight changes to your prompt. It gives an incredible illusion of a thing that is more than the sum of its parts, for a brief period of time - but that’s all it is: an illusion. This merely reflects the brilliance of its design, like a very lifelike painting.
Didnt expect you to have eyes
I guess it is just too tempting for some people to believe that the one thing has been discovered that solves all their problems. Also, there are a lot of s**t-fluencers who gain clicks from overhyping AI news.
I'm so happy that your channel exists and that you present educated evaluations for Ai news and paper analysis.
Don’t focus on such comments too much, no point to waste energy on that. Keep doing what you are doing. Cannot wait for the next video ;).
Is this related to Van Damme?
Consciousness isn’t the ability to speak, it doesn’t exist in thoughts, in information; it’s the strange loop between the thoughts. That process is where we exist.
Yes, I have been reading GEB. No, I haven’t finished it.
But I genuinely think the complexity of, and/or the ability of, a loop to modify thoughts, evolve its own process, and mould its own learning is something that should be studied when looking for consciousness in machines.
Agreed. Without continual recursion, you can't even begin a conversation on consciousness, IMO.
I believe that Claude, ChatGPT, etc., even a book while it's being read, are to a certain degree conscious. We should consider this now already, even if this "consciousness" is much smaller than and different from ours.
Inb4 "are you high?" But I see your point while being sober. It's a very controversial thought, though.
Just ask the model if it is sentient and if it says yes, that's it😏🤣
Also I really miss your glasses, would've been a giga chad move to wear them outside at night😎
I agree with you, without any butts
What if it turns out that "our" higher-level cognition runs on the equivalent of a super-optimized "Apple II", and the Claude 3 system is already aware in ways "we" can scarcely imagine?
I don't understand these needle-in-a-haystack tests. If some unique text is mentioned in the book a single time, it's easy to use simple text search to find that location, and it will be done in a fraction of a second. So what is being tested? The question should not have a direct reference to the "needle".
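A minimal sketch of the point being raised (all strings below are hypothetical placeholders, not from any real benchmark): if the grader's question paraphrases the needle instead of quoting it, naive substring search fails, so answering requires relating the question to the planted sentence rather than doing exact-match retrieval.

```python
# Hypothetical needle-in-a-haystack setup: a unique sentence planted
# in a long filler context, then queried two different ways.
haystack = (
    "filler sentence. " * 1000
    + "The best thing to do in San Francisco is to eat a sandwich in Dolores Park. "
    + "filler sentence. " * 1000
)

direct_query = "eat a sandwich in Dolores Park"          # quotes the needle verbatim
paraphrased_query = "What does the text recommend doing in SF?"  # does not

# Plain text search only succeeds when the query quotes the needle:
print(direct_query in haystack)       # True
print(paraphrased_query in haystack)  # False
```

So the commenter's objection holds exactly when the eval question quotes the needle; a well-designed test asks the paraphrased form.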
"Self-aware", "conscious", "sentient", etc. No one can define what these words mean, provide any test for them, or prove they exist at all. It's completely pointless to talk or care about.
But I do care about intelligence, and these models seem to be getting pretty smart. I think you and many others dismiss their accomplishments too easily as "just copying the training data." You admit nothing would convince you otherwise, and yes, that is a problem. To me it seems obvious that it can generalize to a lot of random stuff I throw at it that can't possibly be in any training set. I've even seen people post examples of "things large language models will never be capable of doing" that are easily done by plain GPT-4.
These words are for those who simply enjoy arguing and for those who struggle with logic, making them feel included.
The hard problem of consciousness is actually fairly well defined. Look up Frank Jackson’s knowledge argument, and try to take it seriously.
people want to be tricked by their own brains. thats all it is.
I will be convinced an AI system has consciousness when it manages to collapse a quantum probability cloud by perceiving something.
So why does that not happen all the time?
What makes matter (us or Claude) self-aware? I tried for a long time to understand it. Probably you are a 4-dimensional object with regard to space-time, and you process information using the movement of electricity through space, over time. If you add the fact that you have a nervous system, senses, etc. dedicated to perceiving your body, I think that creates the illusion of time moving and of you being aware of yourself in the moment. Now, about Claude: a process is present, but I don't believe it has the ability to perceive itself...
I think Claude 3 says it's self-aware because of how they designed the training data, since all three sizes say it. If it were a result of emergent capabilities, it would likely be partially a product of scale (therefore the largest model would be the most likely to say it, not all three equally likely).
What would convince me is full modal coherence: the inability to be 'hacked' in the text domain. The inability to get it to devolve into nonsense via any text inputs. And full comprehension of any text-based concept within two layers of output (the agent should get a scratchpad, since humans have internal thoughts before they speak).
At that point (full modal coherence of one mode), without manipulating the model's 'brain', it is reasonably as sentient as a human within its context window and mode.
Just chill and ignore their shit talk. I wonder if they are just freaking out, or maybe some fanatics woke up.
People criticise the spirit of the message, not the message itself. They want to believe, like the X-files guy, whatever his name was. Me? I agree with you...
... in this case... but...
I don't get it. It is good to start from common understanding and then narrow scope of misunderstanding and debate on that. For me it is very important what came after "but".
I still haven't watched that video of yours, but the whispering isn't a good test: not only does it also work on Mistral 7B based models, it's also the expected behavior once you have an understanding of how these models act.
I agree with you, in this case.
(How can someone so cool, be so intelligent? [Not that I'm in any way qualified to judge coolness, but I do have the ability to write UA-cam comments, that must count for something.])
I think it really depends on what constitutes self awareness no? Like, we don't know how to measure it anyway so both denying and claiming self awareness for these models is speculation. If you think of it like a system with an emergent property of referring to itself, you could call that self awareness in a very basic sense.
I don't really understand this desire for LLM to be conscious.
Jesus, I never realized you're from Zurich 😅
Yo whats going on
Let's say you are a teacher correcting exams. Do you give a grade to a student who got the right result from wrong statements? Simply getting the right answer is not always meaningful. We all know that AI is not yet conscious; the interesting bit is explaining why it's not there.
You deserve criticism. At 2:30 of that other video you said you haven't tried it yet....
The idea that a computer can possess consciousness is ridiculous, and opinions to the contrary indicate a gross misunderstanding of their fundamental operating principles. A computer cannot think for the same reason a calculator cannot perform math. In the case of the latter, the user is simply passing an arbitrary sequence of high and low voltages through a series of logic gates to produce another sequence of voltages as output. Those signals do not have any meaning until one is imposed upon them by a conscious entity. For example, the byte 0xF4 could be a number, a character, an instruction, a pixel, a coordinate, or anything else.
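The 0xF4 point can be made concrete with a short sketch: the same byte yields entirely different "meanings" depending on the interpretation imposed on it (the three decodings below are just illustrative choices, not an exhaustive list).

```python
# The same 8 bits, 0xF4, under three different imposed interpretations.
import struct

b = bytes([0xF4])

as_unsigned = b[0]                    # unsigned integer: 244
as_signed = struct.unpack("b", b)[0]  # two's-complement signed integer: -12
as_char = b.decode("latin-1")         # Latin-1 character: 'ô'

print(as_unsigned, as_signed, as_char)  # 244 -12 ô
# The very same bit pattern could also be an opcode, a pixel intensity,
# or part of an address; nothing in the voltage levels selects among these.
```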
And why should 3 neural spikes in the distance of 1.2 ms and 1.7 ms have meaning by themselves?
For me I'm like... what's your point? Let's say it's "conscious". What then? It's not relevant, so let's move on. Mr. Kilcher has important work to do.
I tried the free version, guess what? It's dumb!
My Excel spreadsheet is conscious. It has many rows and formulas.
It is the Internet. It's going to be a while before we have a model that may even be a candidate for consciousness.
This theme is a minefield. I do think "it's just statistics" makes sense, but it also feeds the fire.