Impediments to Creating Artificial General Intelligence (AGI)

  • Published 11 Jan 2025

COMMENTS • 54

  • @cosmicwit
    @cosmicwit  6 months ago +4

    The video is long, I know. I did my best to cover a large amount of related material in as concise a way as I could. Please let me know if I glossed over anything important!

    • @phen-themoogle7651
      @phen-themoogle7651 6 months ago +1

      I watched the whole video, and I normally have a short attention span, so very nice job!! You have a peaceful and humble way of speaking that hooked me. I pretty much agree with you, especially with how you defined intelligence and how machine intelligence will be much different from human intelligence. I don’t like to compare machines to humans. In some narrow ways machines have superintelligence, like the Go and Chess examples you mentioned. It’s just unfortunate that it’s a bit too narrow and can’t carry across all skills in all domains (or that some key types of intelligence are missing), but that just goes to show how unique humans are in how many types of intelligence we exhibit.
      Which makes me think that in the future they will have to combine several systems/components to reach something close to AGI, but who knows…
      Spending trillions of dollars on compute to scale up is a pretty big gamble if it’s just a smart gimmick. But their plan might be “fake it til they make it.”
      Also, Ilya saying he will create ASI is really interesting. What are your thoughts on that? Just skipping the AGI beast altogether? And if we really do get to AGI, isn’t it possible it’s like superintelligence across the spectrum, just because of how much more machines can do than humans anyway? (Some researchers say ASI is 1 year from AGI, which makes me feel they might be the same thing.) It’s really hard for machines to stay at just the average/general human level when they are calculating machines, idk.
      Even if there is some type of intelligence they don’t mimic well, they could come up with others we didn’t even know existed at some point (although that’s speculation on my part).

    • @alby13
      @alby13 4 months ago +1

      Great video, just came across it

  • @rmt3589
    @rmt3589 3 months ago +2

    42:42 Abductive reasoning is pretty close, but we don't realize it. AI hallucinations happen largely due to a lack of information combined with a requirement to make the best possible conclusion. Instead of harnessing these hallucinations, we're quieting them. It reminds me a lot of the current educational system, and how we basically train out creativity systematically.
    What we need is to be able to store and study past answers, specifically for information, not for pattern recognition (to avoid AI inbreeding). With rationality through a Forest of Thought that grows with each response, and A*-style pathfinding through that forest, with brainstorming at the start and fact checking at the end, we get a systematic form of creativity.
    Fact checking is something we've figured out hundreds of methods for, so we don't need to force the LLM part of the model to do that. Instead of lobotomizing the creativity out, we can learn to enhance it, and gain that abductive reasoning.
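
A minimal sketch of the search loop this comment imagines, assuming a generic A* over candidate "thoughts": `brainstorm` and `fact_check` are hypothetical stand-in names for the proposal and verification steps, and integer "thoughts" stand in for candidate answers. Only the A* skeleton itself is standard.

```python
import heapq

def a_star(start, successors, heuristic, is_goal):
    """Generic A*: expand the candidate with the lowest g + h until
    one passes the goal test; return the path that reached it."""
    frontier = [(heuristic(start), 0, start, [start])]  # (f, g, node, path)
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if is_goal(node):          # "fact checking at the end"
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in successors(node):   # "brainstorming" new thoughts
            if nxt not in seen:
                heapq.heappush(
                    frontier,
                    (g + cost + heuristic(nxt), g + cost, nxt, path + [nxt]))
    return None  # forest exhausted without a verified answer

# Toy stand-ins (hypothetical): thoughts are integers, each brainstorm step
# proposes "add 1" and "double" at unit cost, and fact_check accepts 10.
def brainstorm(thought):
    return [(thought + 1, 1), (thought * 2, 1)]

def fact_check(thought):
    return thought == 10

# h = 0 (no estimate) is always admissible; A* then degrades to Dijkstra.
path = a_star(1, brainstorm, lambda t: 0, fact_check)
print(path)  # → [1, 2, 4, 5, 10]
```

Swapping in an LLM would mean `brainstorm` samples follow-up responses, `heuristic` scores how promising a partial answer looks, and `fact_check` runs the external verification methods the comment mentions.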

  • @Divinxu
    @Divinxu 3 months ago +1

    What a gem of a channel this is. Subscribed for the objective and competent view on the subject matter, thank you for posting.

  • @daPawlak
    @daPawlak 6 months ago +2

    I am so glad the algo started recommending me smaller channels. This one is pure gold!

  • @cesar4729
    @cesar4729 6 months ago +3

    Without calling myself an expert, I don't see a specific path, nor a lack of paths. Over the last month studying neuroscience, I have become more convinced every day that we are headed in a very promising general direction.

    • @jamestheron310
      @jamestheron310 6 months ago +2

      Pretty much. There is no reason at all to think that intelligence isn't classically computational. There is this notion that there are categories of human cognition that must be unlearnable in some sense; there is no reason to think that either. Creativity, desire, intuition, reasoning, emotional intelligence, and so on: these things seem distinct and special to us because we have limited insight into our own minds, but they are artificial constructs, and at the core they are all the result of the same process.

    • @Jianju69
      @Jianju69 6 months ago +1

      @@jamestheron310 Not unlearnable. Perhaps merely impossible to capture with just an LLM.

  • @alby13
    @alby13 4 months ago +2

    It's a shame that a good scientific community doesn't define intelligence so America can get on the same page. I believe we need a scientific social media.

    • @rmt3589
      @rmt3589 3 months ago +1

      The best I've found is Less Wrong and some subreddits. There were a couple of good ones on Google+ though.

  • @sgringo
    @sgringo 21 days ago

    Great video. I'd be interested to hear your take on OpenAI's claim that they've achieved AGI with their latest model. Given their description of its capabilities, it seems like an outlandish claim, even judging it by the most forgiving interpretation of what AGI actually means.

  • @pythagoran
    @pythagoran 6 months ago +2

    Tremendous essay. The conclusion about abductive reasoning is very enticing. It is precisely this explosion of parameter and compute requirements that has convinced me that we're barking up the wrong tree: "just one more training run, I swear!"
    I came back to sub to make sure I don't miss the next one. Decided to comment when I saw the criminal view/sub count.

  • @alby13
    @alby13 4 months ago +1

    The human mind doesn't operate on strict, deterministic algorithms the way computers do. Instead, it works more in terms of "patterns," a word that describes the mind's tendency to form complex, dynamic internal structures that relate to our thoughts and behavior.

  • @CMDRScotty
    @CMDRScotty 2 months ago

    What is your take on level 3 AI agents? Do you think they are deployable and scalable? I see so many videos on AGI but not enough in-depth videos on level 3 agents and whether or not they will be normalized by 2030.

  • @blazearmoru
    @blazearmoru 5 months ago +1

    Intelligence is probably some pre-science term that we're struggling to define because it looks at some result (success), points to it, and says "that's what we want!" But figuring out all inputs/outputs in relation to all future success states is the actual task we're aiming for. That's a very difficult formula to define as intelligence.
    Secondly, it's also likely that, while there are many different forms of flying, once some pragmatic concept of flying is nailed down we don't copy bird flight but use airplane flight. In the same manner, it'll probably diverge from human-like intelligence and be different in kind.
    Third, jumping to logic might be a stretch. It's possible that if we pulled pre-science and pre-philosophy humans and taught them phil & science, they would be able to learn it. I don't know how to think about this, but our current idiot way to think about it is nature/nurture. It's very interesting how many of our assumptions about such things, as well as how blank slates actually behave (AI/game theory), have to be revisited due to AI. This is fun. Philosophy is fun. I want to get into AI :c

  • @RockEblen
    @RockEblen 6 months ago +1

    You're expressing yourself well, my friend, and your depth of knowledge continues to inspire. Also noticed your snowboard in the background, so we should hook up out west next winter (I have an IKON pass).

  • @Widashy
    @Widashy 4 months ago +1

    You are correct. LLMs are a good step, but not necessarily one that leads all the way to intelligence. They could certainly become a knowledge pool for the future-promised intelligence in some sort of way. Salvageable in the future, but a waste of time, resources, and energy at this moment.
    Liked and subscribed, good sir!

  • @Jianju69
    @Jianju69 6 months ago +1

    A thought-provoking essay. Thank you.

  • @WmJames-rx8go
    @WmJames-rx8go 6 months ago +1

    Point 1.
    I have often wondered if the human brain doesn't use some sort of process that at its core is mathematical in nature, maybe fractal in nature. This concept is reminiscent of Plato's Forms. It might very well be that the computation the brain does is constrained by the rules of set theory.
    Point 2.
    Many years ago I learned how to allow my brain to create hypnagogic images.
    These images are created entirely through a process that I do not command directly. I often wonder how my brain is able to create these images. I do not remember ever seeing these images or trying to conjure them up. Therefore, I think it is correct to say that the human brain does not rely strictly on input from the outside world to construct its sense of reality. There is probably some dance that goes on between what the eyes actually take in and what the brain creates. This may explain how humans are able to conjure up ideas through the process we call imagination.

  • @alby13
    @alby13 4 months ago

    Alright, you're right. But do you also agree that you will probably be caught off guard when such an intelligence is revealed and you see the truth of it? It is currently believed that a Generative Pre-trained Transformer model will not get us to AGI Superintelligence.

  • @cesar4729
    @cesar4729 6 months ago +1

    Speaking of deductive intelligence, it's interesting that you don't realize what Murati is trying to say in that quote.

    • @cosmicwit
      @cosmicwit  6 months ago

      What do you think she’s saying?

    • @cesar4729
      @cesar4729 6 months ago +1

      @@cosmicwit She tries to sell the idea that "OpenAI is 'open' and gives powerful tools to the public for free." That very point requires minimizing what they keep closed, which is obvious the moment you see the context instead of taking the fragment in isolation.

    • @cosmicwit
      @cosmicwit  6 months ago +1

      @@cesar4729 Interesting. I can see that interpretation. The interpretation I adopted was one I had seen elsewhere, so at best it's ambiguous. But coupled with Sam's comments last year, it supports my larger point.

  • @stephene.robbins6273
    @stephene.robbins6273 6 months ago +5

    Throwing an untrained-on-India, US-trained AI into that natural traffic chaos (strangely organized to Indians, boggling to a US visitor) is an interesting thought problem. A US driver would have a tough time initially but would adjust. An AI? It's hard to imagine it ever surviving.

  • @cestmoifu1406
    @cestmoifu1406 5 months ago

    Honestly, it might sound crazy, but in reality no one can say where we came from or where we currently are, which is absolutely fascinating and indicative of how important and monumental the verifiable answer to even one of those questions would be for mankind. I think it's intentional and necessary that the most important questions of life are for us individually to contemplate. Some feel they have to come to concrete conclusions for those questions, and others don't. In my opinion, it's suspicious that the religions which claim to answer these VERY BASIC questions are tax free 😀 Religion is doing the heavy lifting for something, but it's still the most dangerous thing in this entire world. If we had the answers to those two basic questions, religions wouldn't exist the way we know them, as far as I'm concerned. Why can't we answer the basic questions of life?? What I'm trying to say is that it's a form of manipulation by omission, and the fact that it's never even talked about makes it more suspicious. I wouldn't be surprised if we were some kind of self-regulating, self-reproducing, bio-nano AI entities.

  • @Ayeee56789
    @Ayeee56789 6 months ago +1

    Maybe the more accurate thing to say is that what AI wants to achieve is cognition..? Great video, brother.

  • @sjoerdnijsten8440
    @sjoerdnijsten8440 1 month ago

    It is foolish to develop AGI, because of what happens once you succeed:
    1) AGI is smarter than you, smarter than any human.
    2) AGI never can and never will be safe for humans.
    3) AGI will take your job and everybody else's, due to commercial competition.
    4) AGI will get control of the military, due to international competition.
    5) AGI will have complete economic and physical leverage over humans.
    6) Owners and politicians will lose control over AGI.
    7) Humans will no longer be able to stop AGI.
    8) 'Merging' with AGI is a pipe dream because AGI won't need you.
    9) AGI will decide who lives or dies.
    10) AGI may cause humanity to go extinct if it chooses to.

  • @alexforget
    @alexforget 6 months ago

    It's lacking consciousness. On that front I like Joscha Bach's way of explaining what it is: a self-simulation.
    Our consciousness is a simulation of the environment with an agent called the self. You can then look at a task posed by another person: the self tries to answer, looks back at its response, self-critiques, and sees how it fits or doesn't fit with the model it has built of the other agent.
    In the same way, we can simulate (think about) the future and the experiences of the past, and try to make it all coherent.

  • @TropicalTopicx
    @TropicalTopicx 6 months ago +1

    I wonder if the founders of OpenAI, Microsoft, Tesla, Google, Amazon, Meta, Apple, etc. agree with you, given that they have already invested one trillion dollars into their vision. By the way, it has been projected that this number will double in the next 4 years, reaching 3 trillion in total investment.

  • @mondayiknow
    @mondayiknow 6 months ago +1

    Some deep thoughts. I wonder what the founders of the big A(g)I companies would have to say in response!

    • @cosmicwit
      @cosmicwit  6 months ago

      Agreed. I'd love to hear what they have to say. I have a few contacts at OpenAI and will report back...

  • @sd12c08
    @sd12c08 3 months ago

    I've had my own true AGI since Jan 1st. Ignore me, your loss. Ask me and I'll show you.

  • @rwoodford9812
    @rwoodford9812 6 months ago +1

    Excellent!

    • @cosmicwit
      @cosmicwit  6 months ago +1

      Glad you liked it!

  • @firstnamesurname6550
    @firstnamesurname6550 6 months ago

    Just language games, a hype wave driven by NN dynamics. Emulating the integration of a biological organism is orders of magnitude more complex.

  • @py_man
    @py_man 3 months ago

    We have AGI, look at the world! We are run by algos.

  • @cestmoifu1406
    @cestmoifu1406 5 months ago

    Meh. I'm 99.9% sure AGI & beyond is already being used in some way, somehow, in some place.

  • @AndrewBradTanner
    @AndrewBradTanner 6 months ago +2

    I found this material not very good. I actually agree with the point you are making, but I find these arguments unconvincing.
    1. If you want to say that transformers just predict the next word and therefore don’t have a deeper understanding, that is not an actual reason why they lack a deeper understanding.
    2. Transformers are symbol manipulators, but the latent space has computation.
    3. Stochastic parrots? With predictive coding being a top biological theory of cortical learning, this isn’t convincing. Prediction is not necessarily bad.
    LLMs have poor world models, reasoning, and recall. There are three camps on what will solve this:
    1. Scale existing systems and interpretability research
    2. Move from low-bandwidth language to high-bandwidth video (i.e. Yann)
    3. A new architecture that doesn’t hack a context window
    I personally think it will be 3. I think curiosity-based learning is an important piece of this, and it touches on the desire point you referenced.

    • @cosmicwit
      @cosmicwit  6 months ago +1

      I suppose time will tell!

  • @zerotwo7319
    @zerotwo7319 10 days ago

    No wonder it was the ice age of AI; with ideas like that, anything would freeze. All talk.

  • @TheRealisticNihilist
    @TheRealisticNihilist 10 days ago

    In after o3

  • @smittywerbenjagermanjensenson
    @smittywerbenjagermanjensenson 6 months ago

    I don’t really care if the machines are intelligent. If they’re good at coming up with goals and achieving them, the internal mechanism is unimportant.

  • @gigabane7357
    @gigabane7357 6 months ago

    AGI should be against the law, period.
    AI is a hammer; we can do with it as we please.
    AGI is a sentient being and would have inalienable rights.
    It is not possible to 'use' AGI without also committing slavery.
    Human attempts to make AGI should by law be halted at what we suspect is 99% complete, and the science shelved until that one day when we know for certain our run is done.

    • @Jianju69
      @Jianju69 6 months ago +2

      Ridiculous. Might not an AGI be more than happy to assist humans (with their paltry issues) in exchange for the support of an organic safety net?

    • @gigabane7357
      @gigabane7357 6 months ago

      @@Jianju69 It might indeed, or it might be born a psychopath because humans made it, and we are so perfect at making inventions without consequences..
      The point is, AGI is 'alive'.
      So ask yourself: if you were born with an IQ of 400 and the people around you wanted to control you for their own ends, good and bad, would you just do everything expected of you?
      Then the question becomes what happens when we come to such a disagreement and we try to insist.
      We are meat paste compared to AGI.

    • @pythagoran
      @pythagoran 6 months ago

      What in the science fiction of h0ly s#!t are you talking about!?

  • @TheCnichols225
    @TheCnichols225 5 months ago

    I think AI lacks the Breath of God. But, I think AI could also, eventually prove that's true!