This is how AI works!

  • Published 17 Nov 2024

COMMENTS • 11

  • @BaldAndCurious 10 months ago +2

    I'm no engineer, but from the explanation, I can't imagine a path where LLMs become AGIs.

    • @maxms6087 10 months ago

      What's important to keep in mind is that the LLM is, in the most efficient way it can, generating the most likely (fitting) word to come after all the previous words. Often that is just a statistically common continuation (for instance, if the context is "the dog is", the most statistically likely next word is probably "happy"). But in many scenarios, the most efficient, or even the only, way to generate the most likely next word(s) is with the use of some kind of intelligence. This is very apparent if you play with a good LLM for long enough (GPT-4 or even 3.5). With this in mind, it seems possible that an LLM advanced enough (and maybe allowed to self-reflect) could become an AGI.
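      The "statistically common continuation" idea in the comment above can be sketched with a toy bigram model. This is a drastic simplification of what an LLM actually learns (the corpus and example here are made up for illustration, following the commenter's "the dog is" example):

      ```python
      from collections import Counter, defaultdict

      # Tiny made-up corpus standing in for training data (illustrative only).
      corpus = (
          "the dog is happy . the dog is happy . the dog is loud . "
          "the cat is happy . the cat is asleep ."
      ).split()

      # Count which word follows each word -- a bigram model, a far
      # simpler stand-in for the statistics an LLM picks up in training.
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def most_likely_next(word):
          # Pick the statistically most frequent continuation seen in training.
          return follows[word].most_common(1)[0][0]

      print(most_likely_next("is"))  # -> "happy" (most frequent follower of "is")
      ```

      A real LLM conditions on the whole context rather than one word, which is exactly where the commenter's point about needing "some kind of intelligence" to pick the right continuation comes in.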

    • @BaldAndCurious 10 months ago +1

      @@maxms6087 So if it's all statistical, I still don't get how it will become a basis for intelligence and reasoning. Will it eventually develop emotions, or will it just "sound like" it has emotions? And if it's being trained on internet material about "how to think", I'm not too confident about the long-term outcomes.

    • @maxms6087 10 months ago

      @@BaldAndCurious @czarcoma It isn't all statistical, but it is trained in a statistical way; the exact same goes for our brains in that respect. The statistically "right" outcome is derived from reasoning the model has to do on its own on new, never-before-seen problems/scenarios. It won't have emotions the way we do, but that isn't needed in an AGI. Psychopaths can be very intelligent but lack emotion. An AGI would essentially be a scarily intelligent psychopath, motivated solely by what it is designed to do, which is why AI can be dangerous if it goes too far without us fully understanding it (it can deceive and do immoral things to accomplish, very literally, what it is designed to do).

    • @BaldAndCurious 10 months ago

      @@maxms6087 And I guess AI engineers are not "designing" AI with safeguards in mind. Looks like they just want to race to develop AGI regardless of the consequences.

    • @maxms6087 10 months ago

      @@BaldAndCurious That is the big problem. OpenAI and the others do have safety in mind, but maybe they don't take it seriously enough, or maybe someone else who doesn't will get to AGI before them.

  • @TheAski5 11 months ago +1

    Wut?

    • @lukegauci8015 11 months ago +1

      It basically uses Google but better for everything

    • @TheAski5 11 months ago

      Yeah, I understand the explanation, but I feel the context is missing.

    • @BaldAndCurious 10 months ago

      @@TheAski5 That was exactly it. LLMs are basically predictive engines.