On Claude 3


COMMENTS • 64

  • @ultrasound1459
    @ultrasound1459 2 months ago +46

    Assassin's Creed mode on 😎👋

  • @mrdbourke
    @mrdbourke 2 months ago +22

    I agree with you. I think the scale of internet level data is vastly underestimated. So even absolutely remote use cases are still somewhere in the training data. Take for example computer vision datasets with images that people have never seen before that end up surfacing 3-5 years *after* the dataset has been the standard benchmark for years (eg there’s a pug in a microwave in the COCO dataset).

  • @billykotsos4642
    @billykotsos4642 2 months ago +26

    AGI that doesn't drive is not AGI....
    FIGHT ME

    • @2ndfloorsongs
      @2ndfloorsongs 2 months ago +2

      Not driving is an intelligent response, especially if you've learned about driving from the internets.

    • @lucamatteobarbieri2493
      @lucamatteobarbieri2493 1 month ago

      I know smart people who can't drive, and perfect idiots with a driving licence. Hawking was one of the smartest people in his field, but at the end he could only output one word at a time using his eyes. Then consider a smart teenager who is not allowed to drive because of the age limit. Is the teen not smart? ....anyways, I agree that AGI will solve automated driving.

  • @skierpage
    @skierpage 2 months ago +4

    Stay strong! Claude 3 DID demonstrate consciousness and self-awareness... or that it's read a lot about them and knows when to riff on them as if it's in the opening chapter of a science fiction story.

  • @marc-andrepiche1809
    @marc-andrepiche1809 2 months ago +4

    People would legit think the function print("I am self aware") is proof the computer is conscious.
    Claude's authors really went over the top with the hype and pretense of intelligence.

    • @WiseWeeabo
      @WiseWeeabo 1 month ago

      People would legit think your comment is proof that you're conscious.. but I don't think so.

  • @DamianReloaded
    @DamianReloaded 2 months ago +6

    People seem to ignore the fact that an algorithm, even our own minds, can process language without being conscious and, vice versa, can be conscious and unable to process language at all. Language processing ability is not a sign of consciousness. It's just an ability.

    • @_tnk_
      @_tnk_ 2 months ago +2

      Agreed. Even further, consciousness is pre-human, and has nothing to do with our higher level cognition. This is the main idea of “The Hidden Spring”.

    • @WiseWeeabo
      @WiseWeeabo 1 month ago

      Are you sure about that? I have not seen evidence of an unconscious person being able to process language.

    • @DamianReloaded
      @DamianReloaded 1 month ago

      Sleep talking (it's called Automatic Speaking) @@WiseWeeabo

    • @WiseWeeabo
      @WiseWeeabo 1 month ago

      I don't know why people would classify sleep as unconsciousness.. I think that's more of a word game than anything else. In lucid dreaming, for example, you are consciously aware of the fact that you are dreaming, so saying you're unconscious when you sleep does not seem productive to me.

    • @DamianReloaded
      @DamianReloaded 1 month ago

      Non sequitur @@WiseWeeabo

  • @tirthasheshpatel
    @tirthasheshpatel 2 months ago +7

    Please review the Stable Diffusion v3 paper!!!

  • @killers31337
    @killers31337 2 months ago +4

    It is self-aware in the sense that it knows it's an LLM. ChatGPT is also self-aware in that sense.
    That doesn't mean it's sentient. It's not like a human. Just self-aware. It's not a difficult target.

    • @pladselsker8340
      @pladselsker8340 1 month ago

      I don't know if you can call that self-aware. Self-awareness is the ability for an entity to recognize its own consciousness. We have no clue if LLMs are conscious or not (they're not though).

    • @pladselsker8340
      @pladselsker8340 1 month ago

      I think what you meant to say was that it has knowledge of itself. This is very different from being self-aware.

    • @killers31337
      @killers31337 1 month ago

      @@pladselsker8340 only if you ascribe mythical properties to the concept of self-awareness.
      You're trying to bring back religious concepts like "soul".
      Self-awareness is awareness of oneself, by definition. Anything else is BS.

    • @noname-gp6hk
      @noname-gp6hk 21 days ago

      Nobody agrees on what these terms even mean but as far as I can tell 'self aware' and 'sentient' are being used as 'biologically human', so no matter what happens these things are deemed 'not sentient'. Which I think makes these terms useless.

  • @nettowaku1252
    @nettowaku1252 2 months ago +1

    I can hear techbros, PR businesses, philosophers, and non-STEM influencers grunting.

  • @existenceisillusion6528
    @existenceisillusion6528 1 month ago +1

    I know what would convince me, and it would convince you too. I can't say any more because I don't want to get scooped.

  • @BobaQueenPanda
    @BobaQueenPanda 1 month ago

    Holy shit! He doesn’t have sunglasses on

  • @NoogahOogah
    @NoogahOogah 2 months ago

    I like the way Mike Pound from Computerphile explained this. Leaving aside the question of whether any machine could be conscious in the first place, an LLM isn’t one of them, and that’s obvious when you know how it works. It’s not even on, except for the brief period of time when it’s firing a response - it doesn’t have experiences, or time to reflect on those experiences. If you’re interacting with the base model instead of a fine-tuned instruct model, it will give completely contradictory answers to “personal” questions based on slight changes to your prompt. It gives an incredible illusion of a thing that is more than the sum of its parts, for a brief period of time - but that’s all it is: an illusion. This merely reflects the brilliance of its design, like a very lifelike painting.

  • @AssistDesAnzeigenhauptmeisters
    @AssistDesAnzeigenhauptmeisters 2 months ago +1

    Didn't expect you to have eyes

  • @florianhonicke5448
    @florianhonicke5448 2 months ago +1

    I guess it is just too tempting for some people to believe that the one thing has been discovered that solves all their problems. Also, there are a lot of s**t-fluencers who gain clicks from overhyping AI news.
    I'm so happy that your channel exists and that you present educated evaluations of AI news and paper analysis.

  • @krogul222
    @krogul222 2 months ago +3

    Don't focus on such comments too much, there's no point wasting energy on that. Keep doing what you are doing. Can't wait for the next video ;).

  • @tedchirvasiu
    @tedchirvasiu 2 months ago

    Is this related to Van Damme?

  • @Will-kt5jk
    @Will-kt5jk 2 months ago +1

    Consciousness isn't the ability to speak; it doesn't exist in thoughts, in information; it's the strange loop between the thoughts. That process is where we exist.
    Yes, I have been reading GEB. No, I haven't finished it.
    But I genuinely think the complexity of and/or ability for a loop to modify thoughts, evolve its own process, mould its own learning is something that should be studied when looking for consciousness in machines.

    • @axelmarora6743
      @axelmarora6743 3 days ago

      agreed. Without continual recursion, you can't even begin a conversation on consciousness IMO.

  • @awee1234
    @awee1234 2 months ago +4

    I believe that Claude, ChatGPT etc., even a book while it's being read, are to a certain degree conscious. We should consider this already now, even if this "consciousness" is much smaller than and different from ours.

    • @blubberkumpel6740
      @blubberkumpel6740 2 months ago

      Inb4 "are you high?" But I see your point while being sober. It's a very controversial thought though.

  • @plexatic5558
    @plexatic5558 2 months ago

    Just ask the model if it is sentient and if it says yes, that's it😏🤣
    Also I really miss your glasses, would've been a giga chad move to wear them outside at night😎

  • @nichevo
    @nichevo 2 months ago

    I agree with you, without any butts

  • @billyf3346
    @billyf3346 2 months ago

    what if it turns out that "our" higher level cognition runs on the equivalent of a super optimized "apple ii", and the claude 3 system is already aware in ways "we" can scarcely imagine?

  • @Dron008
    @Dron008 2 months ago

    I don't understand these needle-in-a-haystack tests. If some unique text is mentioned in the book a single time, it is easy to use simple text search to find its location, and that can be done in a fraction of a second. So what is being tested? The question should not directly reference the "needle".
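    The commenter's point can be sketched in a few lines: a verbatim needle planted in a long haystack is found by plain substring search, with no model involved at all (the strings below are made up for illustration):

    ```python
    # A verbatim "needle" planted once in a long "haystack" is found instantly
    # by plain substring search -- no language model required.
    haystack = ("Lorem ipsum dolor sit amet. " * 100_000
                + "The secret ingredient is nutmeg. "
                + "Consectetur adipiscing elit. " * 100_000)
    needle = "The secret ingredient is nutmeg."

    position = haystack.find(needle)  # simple linear text search
    print(position != -1)             # the needle is located without any "reasoning"
    ```

    This is why harder variants of the benchmark ask about the needle only indirectly, so that exact-match retrieval alone cannot answer.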

  • @Houshalter
    @Houshalter 2 months ago +1

    "Self-aware", "conscious", "sentient", etc. No one can define what these words mean, provide any test for them, or prove they exist at all. It's completely pointless to talk or care about them.
    But I do care about intelligence, and these models seem to be getting pretty smart. I think you and many others dismiss their accomplishments too easily, as "just copying the training data." You admit nothing would convince you otherwise, and yes, that is a problem. To me it seems obvious that they can generalize to a lot of random stuff I throw at them that can't possibly be in any training set. I've even seen people post examples of "things large language models will never be capable of doing" that are easily done by just GPT-4.

    • @ceilingfun2182
      @ceilingfun2182 2 months ago

      These words are for those who simply enjoy arguing and for those who struggle with logic, making them feel included.

    • @NoogahOogah
      @NoogahOogah 2 months ago

      The hard problem of consciousness is actually fairly well defined. Look up Frank Jackson’s knowledge argument, and try to take it seriously.

  • @agenticmark
    @agenticmark 2 months ago

    People want to be tricked by their own brains. That's all it is.

  • @-E42-
    @-E42- 2 months ago +1

    I will be convinced an AI system has consciousness when it manages to collapse a quantum probability cloud by perceiving something.

  • @cristianandrei5462
    @cristianandrei5462 2 months ago

    What makes matter (us or Claude) self-aware? I tried for a long time to understand it. Probably you are a 4-dimensional object with regard to space-time, and you process information by using the movement of electricity in space, through time. If you add the fact that you have a nervous system, senses, etc. dedicated to perceiving your body, I think that creates the illusion of time moving and of you being aware of yourself in the moment. Now, about Claude: a process is present, but I don't believe it has the ability to perceive itself...

  • @fo.c.horton
    @fo.c.horton 2 months ago

    I think Claude 3 says it's self-aware due to how they designed the training data, as all three sizes say it. If it were a result of emergent capabilities, it would likely be partially a product of scale (therefore the largest model would be more likely to say it, not all three equally likely).
    What would convince me is full modal coherence: the inability to be 'hacked' in the text domain. The inability to get it to devolve into nonsense via any text inputs. And full comprehension of any text-based concept within two layers of output (the agent should get a scratchpad, since humans have internal thoughts before they speak).
    At that point (full modal coherence of one mode), without manipulating the model's 'brain', it is reasonably as sentient as a human within its context window and mode.

  • @kamil6236
    @kamil6236 2 months ago

    Just chill and ignore their shit talk. I wonder if they are just freaking out, or maybe some fanatics woke up.

  • @raybrandt
    @raybrandt 2 months ago

    People criticise the spirit of the message, not the message itself. They want to believe, like the X-files guy, whatever his name was. Me? I agree with you...
    ... in this case... but...

  • @arturtomasz575
    @arturtomasz575 1 month ago

    I don't get it. It is good to start from a common understanding and then narrow the scope of misunderstanding and debate that. For me it is very important what came after the "but".

  • @jondo7680
    @jondo7680 2 months ago

    I still haven't watched that video of yours, but the whispering isn't a good test: not only does it also work on Mistral 7B based models, but it's also the expected behavior once you have an understanding of how these models act.

  • @2ndfloorsongs
    @2ndfloorsongs 2 months ago

    I agree with you, in this case.
    (How can someone so cool be so intelligent? [Not that I'm in any way qualified to judge coolness, but I do have the ability to write YouTube comments, and that must count for something.])

  • @noot2981
    @noot2981 2 months ago

    I think it really depends on what constitutes self awareness no? Like, we don't know how to measure it anyway so both denying and claiming self awareness for these models is speculation. If you think of it like a system with an emergent property of referring to itself, you could call that self awareness in a very basic sense.

  • @matejcigale8840
    @matejcigale8840 2 months ago

    I don't really understand this desire for LLM to be conscious.

  • @Zimi485
    @Zimi485 1 month ago

    Jesus, I never realized you're from Zurich 😅

  • @ln_exp1
    @ln_exp1 2 months ago

    Yo, what's going on

  • @PrParadoxy
    @PrParadoxy 2 months ago

    Let's say you are a teacher correcting exams. Do you give a grade to a student who got the right result from wrong statements? Simply getting the right answer is not always meaningful. We all know that AI is yet to be conscious; the interesting bit is explaining why it's not there.

  • @l.halawani
    @l.halawani 2 months ago

    You deserve criticism. At 2:30 of that other video you said you hadn't tried it yet....

  • @allseeingeye93
    @allseeingeye93 2 months ago +1

    The idea that a computer can possess consciousness is ridiculous and opinions to the contrary indicate a gross misunderstanding of their fundamental operating principles. A computer cannot think for the same reason a calculator cannot perform math. In the case of the latter, the user is simply passing an arbitrary sequence of high and low voltage through a series of logic gates to produce another sequence of voltages as output. Those signals do not have any meaning until one is imposed upon them by a conscious entity. For example, the binary value 0xF4 could be a number, a character, an instruction, a pixel, a coordinate, or anything else.

    • @awee1234
      @awee1234 2 months ago +1

      And why should 3 neural spikes spaced 1.2 ms and 1.7 ms apart have meaning by themselves?
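    The 0xF4 point above can be made concrete in a few lines: the same byte yields different values depending on the interpretation imposed on it (a minimal illustration; the three interpretations chosen here are arbitrary):

    ```python
    import struct

    b = bytes([0xF4])  # one byte: a pattern of high/low voltages, no inherent meaning

    as_unsigned = b[0]                    # 244 as an unsigned 8-bit integer
    as_signed = struct.unpack("b", b)[0]  # -12 as a signed 8-bit integer
    as_char = b.decode("latin-1")         # 'ô' as a Latin-1 character

    print(as_unsigned, as_signed, as_char)  # 244 -12 ô
    ```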

  • @cerealpeer
    @cerealpeer 2 months ago

    For me I'm like... what's your point? Let's say it's "conscious". What then? It's not relevant, so let's move on. Mr. Kilcher has important work to do.

  • @AngouSana69
    @AngouSana69 2 months ago

    I tried the free version, guess what? It's dumb!

  • @EdNarculus
    @EdNarculus 2 months ago +1

    My Excel spreadsheet is conscious. It has many rows and formulas.

  • @HappyMathDad
    @HappyMathDad 2 months ago

    It is the Internet. It's going to be a while before we have a model that may even be a candidate for consciousness.
    This theme is a minefield. I do think the "it's just statistics" take makes sense, but it also feeds the fire.