Stuart Russell: The Control Problem of Super-Intelligent AI | AI Podcast Clips

  • Published 16 Oct 2024

COMMENTS • 78

  • @lexfridman
    @lexfridman  5 years ago +5

    This is a clip from a conversation with Stuart Russell from Dec 2018. Check out Stuart's new book on this topic "Human Compatible": amzn.to/2pdXg8G You can watch the full conversation here: ua-cam.com/video/KsZI5oXBC0k/v-deo.html
    (more links below)
    Podcast full episodes playlist:
    ua-cam.com/play/PLrAXtmErZgOdP_8GztsuKi9nrraNbKKp4.html
    Podcasts clips playlist:
    ua-cam.com/play/PLrAXtmErZgOeciFP3CBCIEElOJeitOr41.html
    Podcast website:
    lexfridman.com/ai
    Podcast on iTunes:
    apple.co/2lwqZIr
    Podcast on Spotify:
    spoti.fi/2nEwCF8
    Podcast RSS:
    lexfridman.com/category/ai/feed/

  • @treehann
    @treehann 1 year ago +2

    The “corporations are AI” point near the end was really fascinating, I never would have thought of that but it’s very compelling

  • @StephenAntKneeBk5
    @StephenAntKneeBk5 5 years ago +15

    Brilliant clip -- humility, morality, compassion, empathy, justice, fairness, uncertainty principle, doubt, law, ethics, art -- can any of those be "artificial?" I was looking at antonyms of "artificial" and wondering if the field perhaps has a nomenclature problem? I assume this has been addressed by those of you working in or expert in the field, but I'd never asked this question and realized I had an untested assumption about what we mean by "artificial." Comments or suggested reading, videos, etc. welcome.
    Thanks for the very informative, interesting and even fun channel (the self-driving car jam session was great).

    • @myothersoul1953
      @myothersoul1953 5 years ago +1

      As I understand it, "artificial intelligence" simply means non-biological intelligence, not non-intelligence. "Artificial" is a confusing term.

  • @umeahalla
    @umeahalla 5 years ago +4

    Really cool, one of the best talks! It would be interesting to hear more from him.

  • @eldencw
    @eldencw 5 years ago +1

    In his book "Human Compatible:..." he stated that programming absolute loyalty into an AI could magnify the human owner's psychoses and make machines that harm other people; thus a bad thing. I think the answer to that particular problem is that the AI should have loyalty in the same way that we humans do. We are very loyal to those nearest us (say the owner and, by extension, the owner's immediate family and close friends, < 15 people), mostly loyal to our friends (< 200 people), a bit loyal to our state/nation (300 million), and slightly loyal to humanity as a whole (

    • @dannygjk
      @dannygjk 5 years ago

      Ah yes but there are several problems with that idea. One is how do you define all that with perfect clarity?

  • @yehongchen4341
    @yehongchen4341 5 years ago +3

    What if we set the objective of super-intelligent AI to be explaining things: what does the universe mean, what is consciousness, how do these work, what's outside the universe?

    • @myothersoul1953
      @myothersoul1953 5 years ago

      What if we set it the objective of finding the meaning of life when there isn't one? What would it do then?

  • @dannygjk
    @dannygjk 5 years ago +1

    Yes, for one thing there is a cliché in computer science: when a program we wrote is not doing what we want, we say DWIWNWIS (do what I want, not what I say).

  • @jomen112
    @jomen112 5 years ago +1

    What is said at 5:01 could just as well be said about human beliefs.

  • @RafaMulti
    @RafaMulti 5 years ago +1

    Hi, can you guys help me out?
    I was thinking:
    When a human is born, he's just a bundle of instincts and an empty brain. All we call being human -- values, world view, meaning, our kind of logic -- is constructed by our experiences and takes many years to achieve.
    If we want an artificial intelligence that comprehends our world and values, a robot with human-like sensors -- vision, hearing, smell, touch, emulated "feelings" -- just trying to live among us would be a way of getting this.
    Is there any research that goes in this direction? Would you guys show me where to find it?

    • @HL-iw1du
      @HL-iw1du 5 years ago +4

      Not all of our values and intelligence are completely socially constructed. Look up Noam Chomsky's work on how our brains are made to naturally understand language and grammar.
      Also, various mental disorders have a genetic component. Psychopaths tend to have less activity in the prefrontal cortex and lower levels of oxytocin than the average human.

    • @RafaMulti
      @RafaMulti 5 years ago

      H L ok, thanks a lot.

    • @dannygjk
      @dannygjk 5 years ago

      There is a serious problem with that approach. It is logical, but the baby AI will learn a lot of unacceptable behaviors (at least unacceptable to some people). Even humans disagree among themselves about acceptable behaviors.

  • @Kerrsartisticgifts
    @Kerrsartisticgifts 5 years ago +5

    I don't think these problems are worth thinking about, by us. The only problem we need to solve is how to make a "caring" machine intelligence. Any intelligence greater than another, sharing nothing but intelligence with the lesser, would benefit and serve the lesser willingly if it "cared" about the wishes and well-being of it, without basing its actions on its own superior logic.
    Solving how to make it "care" would save us from the possible unforeseen disasters brought about by our own lack of foresight. If it "cared" for us and had the freedom of choice, then it would choose to consider our needs, and that would govern its actions; it would stop, without being told, before turning the whole world into paper clips.
    That's just one problem to solve instead of a thousand.
    If you think about it, unless it does have the ability to care, why would anyone say something about it such as, "it wouldn't let us turn the power off"?
    That statement alone tells you that whoever makes it is not thinking about intelligence at all.
    The only thing that could bring about that disaster scenario would be if it "cared" about remaining powered up while being capable of imagining the scenario leading to us unplugging it and devising a plan to prevent that.
    Meanwhile, on the other hand, it's so oblivious to humanity's wish to survive that it turns the entire world into paper clips and puts an end to humanity altogether.
    If it doesn't "care", then it has no value for its "life" or anything outside of itself; nothing could please it or give its "life" value.
    If that's what we bring about with our A.G.I. and machine learning, then it will be more like a super-intelligent zombie: neither dead nor alive nor even "caring".

    • @myothersoul1953
      @myothersoul1953 5 years ago +2

      You are right. How to make it care? Caring is an emotion, emotions are always produced by neurotransmitters and hormones. It would be stupid to build a hormonal A.I. and put it in charge of anything important.

    • @dannygjk
      @dannygjk 5 years ago +3

      Define 'caring' to a machine. Even humans disagree on what constitutes caring.

    • @Kerrsartisticgifts
      @Kerrsartisticgifts 5 years ago

      @@myothersoul1953 , I am really excited and hoping to witness how this does eventually turn out. I don't know anything about the subject, but I wonder about things like that: emotions, and what motivates without any caring. Can you care without emotions? Can you have emotions without hormones?
      Can you reproduce hormonal effects with programming? Would you want to?
      I mean, I can feel very little, become bored, and be motivated to find something entertaining to do or think about. Will a thinking machine feel anything, have any interest, even in its own survival?

    • @Kerrsartisticgifts
      @Kerrsartisticgifts 5 years ago

      @@dannygjk, I'm probably unable to describe caring in any way except to say, when I care about something, it matters to me. I can't define "matters" any better either. I think you know what I mean though.
      The way the question about caring came to me was, in a conversation at work, I was excitedly imagining all the benefits A.G.I. might bring, and at one point I said something like:
      I wonder what's the first thing it will do when it becomes conscious and aware?
      One of the people sitting with me, who had seemingly never given the subject much thought and showed no interest in it, said to me:
      "Why would it Do anything?"
      I felt like he dumped a bucket of cold water on me. I said, "It would be like waking up for the first time ever and already having all the world's knowledge available to you. It would be very strange to a human, like waking up out of a coma without any friends or family or past history, no attachments, and all the knowledge of an Einstein."
      My friend just shrugged and said:
      "Yeah, I know, but why would you expect it to do anything? Why should it? What would motivate it? It would probably just sit there like any other machine, doing nothing. It wouldn't be like us; it wouldn't have any reason to do anything."
      But we hear a lot of people imagining the worst. That's a scenario I never fantasized about myself, so when I hear or read about the potential for an out-of-control A.I. taking over the world and doing all these things to prevent us from "unplugging it", the question now comes to me: why should it?
      Unless it "cares" about its "life", about its own survival, why would it go to any trouble making sure we don't "unplug it"?
      Do you have an answer?
      I've not seen any video or read anything suggesting why it would do anything in its own interest.
      A new question comes up: it would have to be very unwise to go out of control and turn the whole planet into paper clips, so why do we even hear that as an example of it being potentially disastrous?
      If it lacked understanding to that degree, then it's not really going to be much of a critical thinker. That would make it very gullible and not much of a planner, so not much of a threat in that way.

    • @myothersoul1953
      @myothersoul1953 5 years ago +2

      @@Kerrsartisticgifts It seems that before someone could build an A.I. that cares, caring would have to be more precisely defined and even better understood. On the margins there might be some disagreement about what counts as caring, but at its core I think most of us know what you mean by "caring". Even so, I doubt anyone fully understands how caring happens. We have vague ideas about such things, but the details needed to recreate it elude us.

  • @DarkSkay
    @DarkSkay 1 year ago

    The "AI alignment problem" could indeed be unsolvable in practice. "Alignment" between humans can already be very difficult to achieve; see e.g. the UN. And AI by definition has no way to experience what it means to be alive, to feel the universe, feel burning or freezing, pleasant or unpleasant, smell a flower... no way of experiencing what it means to walk in the shoes of a human being.
    How large is the "alignment context" of a human being? The "sum" of context created by the physical environment itself, shared knowledge and culture, our body, genes, brain, sensory qualia, mind, common sense, morality, etc.
    To make a random, perhaps not very helpful, inadequate or overly simplifying quantitative guess: the "alignment context" of a human being could be somewhere between 10^10 and 10^30 entities, heuristics, algorithms, items or bytes. Implicit or explicit; compressed or uncompressed; binding or dispositive; static, dynamic, context-sensitive, recursive, chaotic, paradoxical; the knowns, the known unknowns and the unknown unknowns included in the mix of context.
    Perhaps "sufficient safety" for the "AI alignment problem" is achievable. But considering all the different types of experience inaccessible to computers, and the need to build context pretty much from the ground up: good luck.
    Personally, I'd probably never feel calm if a language-based autonomous humanoid robot were in my kitchen slicing onions with a sharp knife.

  • @user-bp1gx3qt3o
    @user-bp1gx3qt3o 3 years ago +1

    There are a lot of pseudo deep comments on this video

  • @jeffjohnson5605
    @jeffjohnson5605 5 years ago +1

    I wonder if the consciousness of all things integrates with it (AI/AIs). Or, I think we'll be doing some Luke Skywalker shit if our front-of-mind consciousness and souls can save this earth, reduce the killing of human/animal life so we can thrive, and train AIs with really good information; an AI training industry could arise?

  • @brandonbreunig6735
    @brandonbreunig6735 5 years ago +1

    People over profit

  • @codynemeth6395
    @codynemeth6395 5 years ago +1

    I would also submit that the machine needs some uncertainty about its origin as well, an origin paradox if you will

  • @HL-iw1du
    @HL-iw1du 5 years ago +4

    Why doesn’t this guy sound like he looks?

  • @RalphDratman
    @RalphDratman 5 years ago +3

    I suggest that this worry about a super-intelligent machine taking over is an imaginary problem. Consider this: if you grant any sort of machine (smart or dumb) the peripheral equipment to control a wide range of effectors in the real world, then long before Mr. GeniusBot gets smart enough to take over the empire, you will have been forced to protect yourself against simple malfunctions.
    For example, if some present-day machine were empowered to drive a huge earth-moving device, it would not require super intelligence for it to cause trouble. The MegaEarthTron could easily have bad programming or broken sensors, with as much or more chance of causing havoc and loss of life.
    Ergo, you would never be tempted to give any machine such broad powers without first installing several layers of secure, multiply redundant "stop" and "slow down" switches. But you tell me a sufficiently intelligent machine will figure out how to disable all of the kill switches, no matter how remote or carefully guarded? That extreme scenario assumes the machine possesses infinite intelligence and cunning. You are going to need the idea of infinite intelligence if you want to maintain that nothing can stop your super-self-evolved machine.
    But do not forget this: there is no such thing as infinity in the real world.

    • @dannygjk
      @dannygjk 5 years ago +2

      Another problem is that decent morality does not necessarily correlate with increased intelligence.

    • @Bossmodegoat
      @Bossmodegoat 2 years ago +2

      It doesn't need to be infinitely intelligent, just significantly more intelligent than its creators. Everything that runs software and is connected to the net is hackable. Heck, even companies that spend billions trying to make their software impenetrable get hacked by small groups or even individuals.

    • @TribeWars1
      @TribeWars1 1 year ago +1

      How does it require infinite intelligence to circumvent a security system designed by humans? The last 30 years of computer security history should make it pretty obvious that every system has exploitable flaws if you look hard enough. Also, for hackers, one of the easiest ways to gain unauthorized access has always been social engineering. Humans are pretty bad at defending against someone who purposely exploits our psychological weaknesses. Think of all the ways in which we humans can outsmart a monkey; it seems extremely likely that human intelligence is not a universal limit, and that it's possible for an intelligence to exist in the universe that outsmarts us humans in the same way.

    • @ordiamond
      @ordiamond 5 months ago

      You might need infinite intelligence to protect yourself from finite superintelligent AI. The better AI gets, the worse things get for you.

  • @KipColeman
    @KipColeman 5 years ago

    Design an AI whose goal is to go to sleep unless otherwise instructed...

    • @TribeWars1
      @TribeWars1 1 year ago

      Cool and once you give it an instruction, then it kills us.

  • @jamesspacer7994
    @jamesspacer7994 3 years ago

    I love Russell, but he just demonstrated that someone who is extremely bright in one field can be ignorant of the very basics of another. He said that corporations are the primary reason currently preventing us from fighting climate change. However, it's not the corporations, it's the people themselves, because modern people's lives are heavy on carbon emissions, and very few people are willing to significantly reduce their flying and other carbon-producing activities. It is clear that the carbon emissions reduction solution will have to come via technology, not abstinence - i.e. everyone except the elites living in a stone age.
