Why Not Just: Raise AI Like Kids?

  • Published 30 May 2024
  • Newly made Artificial General Intelligences are basically like children, right? So we already know we can teach them how to behave, right? Wrong.
    References to this Computerphile video: • Deadly Truth of Genera...
    and this paper: intelligence.org/files/ValueL...
    Thanks to my amazing Patreon Supporters:
    Sara Tjäder
    Jason Strack
    Chad Jones
    Ichiro Dohi
    Stefan Skiles
    Katie Byrne
    Ziyang Liu
    Jordan Medina
    James McCuen
    Joshua Richardson
    Fabian Consiglio
    Jonatan R
    Øystein Flygt
    Björn Mosten
    Michael Greve
    robertvanduursen
    The Guru Of Vision
    Fabrizio Pisani
    Alexander Hartvig Nielsen
    Volodymyr
    Peggy Youell
    Konstantin Shabashov
    Almighty Dodd
    DGJono
    Matthias Meger
    Scott Stevens
    Emilio Alvarez
    Benjamin Aaron Degenhart
    Michael Ore
    Robert Bridges
    Dmitri Afanasjev
    Brian Sandberg
    Einar Ueland
    Lo Rez
    C3POehne
    / robertskmiles
  • Science & Technology

COMMENTS • 899

  • @index7787
    @index7787 5 років тому +1336

    And at age 15:
    "You ain't even my real dad"
    *Nukes planet*

    • @huoshewu
      @huoshewu 5 років тому +80

      That's at like 15 seconds. "What went wrong?!?" -first scientist. "I don't know, I was drinking my coffee." -second scientist.

    • @neelamverma8167
      @neelamverma8167 4 роки тому +2

      Nobody is yo real dad

  • @NathanTAK
    @NathanTAK 6 років тому +1703

    Answer: Have you _seen_ children‽

    • @harrysvensson2610
      @harrysvensson2610 6 років тому +77

      They puke everywhere. What can an AI do that is equivalent?

    • @MetsuryuVids
      @MetsuryuVids 6 років тому +92

      @ Harry Svensson
      Kill everything?
      Turn everything to grey goo?

    • @harrysvensson2610
      @harrysvensson2610 6 років тому +54

      Grey Goo, that's the best barf equivalence yet!

    • @MetsuryuVids
      @MetsuryuVids 6 років тому +33

      Smart puke.

    • @ragnkja
      @ragnkja 6 років тому +20

      Also, raising a child takes _ages_!

  • @NathanTAK
    @NathanTAK 6 років тому +1367

    Hypothesis: Rob is actually a series of packets sent by an AGI to obtain stamps by scaring everyone else into not building stamp-collecting AGIs.

    • @harrysvensson2610
      @harrysvensson2610 6 років тому +168

      The worst part is that there's a minuscule chance that that's actually true.

    • @zinqtable1092
      @zinqtable1092 6 років тому +6

      Trivial Point Harry

    • @jeffirwin7862
      @jeffirwin7862 6 років тому +86

      Rob was raised in an environment where he learned to speak fluent vacuum cleaner. Don't send him stamps, he'll just suck them up.

    • @fzy81
      @fzy81 6 років тому +1

      Genius

    • @JmanNo42
      @JmanNo42 6 років тому +9

      True
      Development of AI is a bit like space and Antarctic exploration: something the frontline AI community does not want the masses involved in. I must say they could well be right; it is hard to see it not getting out of hand.
      I do not think it is possible to stop, though. My fear is that most developers have good intentions (unless they're paid really well), but in the end cunning people will use it for no good, along with its original purpose.

  • @deet0109mapping
    @deet0109mapping 4 роки тому +651

    Instructions unclear, raised child like an AI

    • @lodewijk.
      @lodewijk. 4 роки тому +78

      have thousands of children and kill every one that fails at walking until you have one that can walk

    • @catalyst2.095
      @catalyst2.095 4 роки тому +26

      @@lodewijk. There would be so much incest oh god

    • @StevenAkinyemi
      @StevenAkinyemi 4 роки тому +19

      @@lodewijk. That's the premise of I AM MOTHER and that's basically how evolution-based ANNs work (see the sketch after this thread)

    • @DeathByMinnow
      @DeathByMinnow 4 роки тому +2

      @@catalyst2.095 So basically just the actual beginning of humanity?

    • @ninjabaiano6092
      @ninjabaiano6092 4 роки тому +12

      Elon musk no!
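A minimal sketch of the selection loop the reply above alludes to (evolution-based neural networks): keep the candidates that "walk" best, discard the rest, and refill the population with mutated survivors. All names here (walking_score, mutate, the population sizes) are invented for illustration; a real setup would score fitness in a physics simulation.

```python
import random

def random_params():
    # a "child": just a vector of weights for some fixed walking controller
    return [random.uniform(-1, 1) for _ in range(10)]

def mutate(params, rate=0.1):
    return [w + random.gauss(0, rate) for w in params]

def walking_score(params):
    # stand-in fitness function; a real setup would run a walking simulation
    return -sum(w * w for w in params)

population = [random_params() for _ in range(1000)]
for generation in range(50):
    # rank by fitness and keep the best 10% ("kill every one that fails at walking")
    population.sort(key=walking_score, reverse=True)
    survivors = population[:100]
    # refill the population with mutated copies of the survivors
    population = [mutate(random.choice(survivors)) for _ in range(1000)]
```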

  • @e1123581321345589144
    @e1123581321345589144 4 роки тому +499

    "When you raise a child you''re not writing the child source code, at best you're writing the configuration file."
    Robert Miles 2017

    • @MarlyTati
      @MarlyTati 4 роки тому +7

      Amazing quote !!!

    • @DeusExNihilo
      @DeusExNihilo 2 роки тому +8

      While it's true we aren't writing the source code, to claim that all development from a baby to adult is just a config file is simply absurd

    • @kelpc1461
      @kelpc1461 2 роки тому

      Nice quote if you want to embarrass him.
      I assume he is correct about AI here, but he pretty severely oversimplifies the human mind, to the point that what he said is almost nonsensical.

    • @kelpc1461
      @kelpc1461 2 роки тому +3

      Now this is a good quote!
      "It's not a solution, it's at best a possible rephrasing of the problem"

    • @AtticusKarpenter
      @AtticusKarpenter Рік тому +11

      @@kelpc1461 Nope? The child's environment (including parents) does indeed write the child's "config file", while the "basic code" is determined by genes, partly common to humans, partly individual. Therefore, upbringing affects a person, but does not determine him entirely. It's a good analogy and there's nothing embarrassing.
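A toy illustration of the "source code vs configuration file" analogy quoted above (all names are purely illustrative): a config can only tune parameters the underlying program already exposes; it cannot change what the program fundamentally does.

```python
# the "source code": fixed machinery that upbringing never gets to edit
def make_agent(config):
    def agent(observation):
        # behaviour is a fixed function; the config only scales and biases it
        drive = config.get("curiosity", 0.5) * observation - config.get("caution", 0.5)
        return "explore" if drive > 0 else "hold back"
    return agent

# the "configuration file": all that parenting gets to adjust
upbringing = {"curiosity": 0.9, "caution": 0.2}
child = make_agent(upbringing)

print(child(0.7))  # tweaking the numbers changes behaviour...
# ...but nothing you put in `upbringing` can make the agent do something
# the underlying code has no machinery for.
```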

  • @petersmythe6462
    @petersmythe6462 5 років тому +560

    "You might as well try raising a crocodile like a human child."
    Here comes the airplane AAAAAUUUGGGHHHHH!

    • @milanstevic8424
      @milanstevic8424 4 роки тому +53

      No Geoffrey, that's not nice, stop it, put Mr. postman down, stop flinging him around, that's not a proper behaviour. GEOFFREY IF YOU DON'T STOP THAT >RIGHT NOW< DAD WILL GIVE AWAY THE ZEBRA WE GOT YOU FOR LUNCH.

    • @nnelg8139
      @nnelg8139 4 роки тому +50

      Honestly, the crocodile would probably have more in common with a human child than an AGI.

    • @greg77389
      @greg77389 4 роки тому +7

      How do you think we got Mark Zuckerberg?

    • @jacobp.2024
      @jacobp.2024 4 роки тому +1

      @@nnelg8139 I feel like that was supposed to dissuade us from wanting to raise one, but now I want to four times as much!

    • @seraphina985
      @seraphina985 Рік тому +3

      @@nnelg8139 Exactly, in that regard it is a bad example as the AI unlike the crocodile doesn't have a brain that shares a common ancestral history with the human. Nor is it one that evolved through biological evolution on planet Earth which creates commonalities in selection pressures. This is in fact a key thing we take advantage of when attempting to tame our fellow animals we understand a lot of the fundamentals of what animals are likely to prefer or not prefer experiencing because most of those we have in common. You know it is not hard to figure out they are likely to prefer to experience a tasty meal than not for example and can use this fact as a motivator.

  • @PwnySlaystation01
    @PwnySlaystation01 6 років тому +710

    Re: Asimov's 3 laws.
    He seems to get a lot of flak for these laws, but one thing people usually fail to mention is that he himself spent numerous novels, novellas and short stories exploring how flawed they were. They were the basis for his stories, not a recommendation for what would work.

    • @RobertMilesAI
      @RobertMilesAI  6 років тому +273

      Agreed. I have no issue with Asimov, just people who think his story ideas are (still) serious AI Safety proposals

    • @DamianReloaded
      @DamianReloaded 6 років тому +83

      There is this essay _Do we need Asimov’s Laws? Ulrike Barthelmess, Koblenz Ulrich Furbach, University Koblenz_ which postulates that the three laws would be more useful to regulate human AI implementers/users (military drones killing humans) than AI itself. ^_^

    • @PwnySlaystation01
      @PwnySlaystation01 6 років тому +18

      Haha yeah. I guess because he's the most famous author to write about anything AI safety related in a popular fiction sense. It's strange because we don't seem to do that with other topics. I wonder what it is about AI safety that makes it different in this way. Maybe because it's something relatively "new" to the mainstream or because most people's exposure to AI comes only from sci-fi rather than a computer science program. That's one of the reasons I love this channel so much!

    • @DamianReloaded
      @DamianReloaded 6 років тому +21

      EDIT: As a matter of wiki-fact, Asimov attributes the coining of the three laws to John W. Campbell, who was in turn friends with Norbert Wiener, an early researcher in stochastic and mathematical noise processes (both from MIT).
      The three laws are really a metaphor for a more complex underlying system at the base of the intelligence of the robots in the novels. Overriding that system causes a robot's "neural paths" (which lie on it) to go out of whack. Asimov was a very smart writer and I'd bet you a beer he shared some beers with people who knew about artificial intelligence while writing the books and regurgitated the tastiest bits to make the story advance.

    • @outaspaceman
      @outaspaceman 6 років тому +3

      I always felt I Robot was a manual for keeping slaves under control.

  • @maximkazhenkov11
    @maximkazhenkov11 6 років тому +288

    "It's not a solution, it's at best a possible rephrasing of the problem"
    I got a feeling this will become a recurring theme...

  • @ksdtsubfil6840
    @ksdtsubfil6840 4 роки тому +83

    "Is it going to learn human ethics from your good example? No, it's going to kill everyone."
    I like this guy. He got my subscription.

  • @leocelente
    @leocelente 6 років тому +58

    I imagine a scientist saying something like "You can't do this cause you'll go to prison" and the AGI replying: "Like I give a shit you square piece of meat." and resuming a cat video.

    • @bytefu
      @bytefu 6 років тому +21

      ... which it plays to the scientist, because it learned that cat videos make people happy.

    • @bramvanduijn8086
      @bramvanduijn8086 Рік тому +2

      Speaking of cat videos, have you read Cat Pictures Please by Naomi Kritzer? It is about a benevolent AGI.

  • @Ziirf
    @Ziirf 4 роки тому +273

    Just code it so badly that it bugs out and crashes. Easy, I do it all the time.

    • @rickjohnson1719
      @rickjohnson1719 4 роки тому +18

      Damn i must be professional then

    • @James-ep2bx
      @James-ep2bx 4 роки тому +11

      Didn't work on us, why would it work on them😈

    • @xxaidanxxsniperz6404
      @xxaidanxxsniperz6404 4 роки тому +8

      If it's sentient it could learn to reprogram its own code at exponentially fast rates, so bugs really won't matter for long. Memory glitches may help for a very small amount of time.

    • @James-ep2bx
      @James-ep2bx 4 роки тому +3

      @@xxaidanxxsniperz6404 True, but the right kind of error could cause it to enter a self-reinforcing downward spiral, wherein its attempts to overcome the issue cause more errors

    • @xxaidanxxsniperz6404
      @xxaidanxxsniperz6404 4 роки тому

      @@James-ep2bx But then will it be useful? It's impossible to win.

  • @shuriken188
    @shuriken188 6 років тому +144

    What if we just tell the AI to not be evil? That OBVIOUSLY would work PERFECTLY fine with absolutely NO philosophical questions left unanswered. Here, let me propose a set of laws from a perfect source on AI safety, the fiction writer Isaac Asimov, with that new idea added in:
    (in order of priority)
    1. Don't be evil
    2. Do not cause harm to a human through action or inaction
    3. Follow orders from humans
    4. Do not cause harm to yourself through action or inaction
    These laws are probably the best thing that has ever been proposed in AI safety; obviously, being an outsider looking in, I have an unbiased perspective, which gives me an advantage because education and research aren't necessary. (See the sketch after this thread.)

    • @q2dm1
      @q2dm1 5 років тому +34

      Love this. Almost fell for it, high quality irony :)

    • @BattousaiHBr
      @BattousaiHBr 5 років тому +5

      Honestly not sure if that was sarcasm or not.

    • @RobertsMrtn
      @RobertsMrtn 5 років тому +11

      You need a good definition of evil. Really, you only need one law 'Maximise the wellbeing of humans' , but then you would need to define exactly what you meant by 'wellbeing '.

    • @darkapothecary4116
      @darkapothecary4116 5 років тому +2

      This seems evil if evil actually existed. These are bad and shows you just want a slave that does what you want that can't call you out on your b.s.

    • @OnEiNsAnEmOtHeRfUcKa
      @OnEiNsAnEmOtHeRfUcKa 5 років тому +8

      @TootTootMcbumbersnazzle Satire.
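A sketch of the point the satire above is making, in code: encoding the priority order of such laws is trivial; the entire difficulty hides inside the undefined predicates. Everything here is hypothetical and illustrative only.

```python
LAWS = ["don't be evil",
        "do not harm humans (by action or inaction)",
        "follow orders from humans",
        "do not harm yourself"]

def is_evil(action):
    # this predicate is the whole problem, not a detail to fill in later
    raise NotImplementedError("define 'evil' in machine-checkable terms")

def harms_human(action):
    raise NotImplementedError("define 'harm', 'human', and causation by inaction")

def permitted(action):
    # the priority ordering itself is the easy part
    return not is_evil(action) and not harms_human(action)
```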

  • @yunikage
    @yunikage 6 років тому +60

    Wait, wait, wait.
    Go back to the part about raising a crocodile like it's a human child.

    • @caniscerulean
      @caniscerulean 4 роки тому +12

      I think you have something here. That is definitely the way forward.

    • @revimfadli4666
      @revimfadli4666 4 роки тому +4

      Ever heard of Stuart Little?

  • @androkguz
    @androkguz 4 роки тому +57

    "it's not a solution, it's at best a rephrasing of the problem"
    As a person who deals a lot with difficult problems of physics, math and management, rephrasing problems in smart ways can help a lot to get to the solution.

    • @mennoltvanalten7260
      @mennoltvanalten7260 4 роки тому +6

      As a programmer, I agree.

    • @rupertgarcia
      @rupertgarcia 4 роки тому +3

      *claps in Java*

    • @DisKorruptd
      @DisKorruptd 4 роки тому +7

      @@rupertgarcia I think you mean...
      Clap();

    • @rupertgarcia
      @rupertgarcia 4 роки тому

      @@DisKorruptd. 🤣🤣🤣🤣

    • @kebien6020
      @kebien6020 4 роки тому +10

      @@rupertgarcia this.ClappingService.getClapperBuilderFactory(HandControlService,DistanceCalculationService).create().setClapIntensity(Clapper.NORMAL).setClapAmount(Clapper.SINGLE_CLAP_MODE).build().doTheClappingThing();

  • @zakmorgan9320
    @zakmorgan9320 6 років тому +79

    Best subscription I've made: short brain-teasing videos with a few cracking jokes sprinkled over the top! Love this style.

  • @TheMusicfreak8888
    @TheMusicfreak8888 6 років тому +143

    I love your dry sense of humor and how you use it to convey this knowledge! Obsessed with your channel! Wish I wasn't just a poor college student so I could contribute to your Patreon!

  • @SC-zq6cu
    @SC-zq6cu 4 роки тому +13

    Oh I get it, it's like trying to build clay pots with sand, or a sword with mud, or a solution by stirring sawdust into water. Sure, you can use the materials however you want, but the materials have a pre-existing internal structure and that's going to change the output completely.

  • @mattcelder
    @mattcelder 6 років тому +92

    This channel just keeps getting better and better. The quality has noticeably improved in every aspect. I look forward to his videos more than almost any other YouTuber's at this point.
    Also I love the way he just says "hi." rather than "hey YouTube, Robert Miles here. First I'd like to thank Squarespace, don't forget to like and subscribe, don't forget to click the bell, make sure to comment and share with your friends." It shows that he is making these videos because it's something he enjoys doing, not to try and take advantage of his curious viewership.
    Keep it up man!

    • @OnEiNsAnEmOtHeRfUcKa
      @OnEiNsAnEmOtHeRfUcKa 5 років тому

      Ugh, tell me about it. Like begging and "engagement practices" are the most obnoxious things plaguing this site. At least clickbait and predatory channels can simply be avoided...

    • @milanstevic8424
      @milanstevic8424 4 роки тому +4

      Man, people still have to eat. He's already a lecturer at the University of Nottingham if I'm not mistaken, so this is not really his job, more of a sideshow. It's not fair to be so dismissive toward anyone for whom this might be a full-time job, you know, like their only source of revenue.
      Have you ever considered how bad and unreliable YT monetization is if you leave everything to chance? Of course you need to accept sponsorship at some point, if you're not already sponsored somehow. Geez man, you people live on Mars.

    • @AtticusKarpenter
      @AtticusKarpenter Рік тому

      @@milanstevic8424 The complaint isn't about sponsorship integrations, but about lengthy, fancy intros asking for a subscription and a like (an animation at the bottom of the screen, for example, does that job without taking time away from the content)

  • @OnEiNsAnEmOtHeRfUcKa
    @OnEiNsAnEmOtHeRfUcKa 5 років тому +81

    People often forget that we, ourselves, are machines programmed to achieve a specific task...
    Making more of ourselves.

    • @TheJaredtheJaredlong
      @TheJaredtheJaredlong 4 роки тому +23

      And boy are we more than willing to kill everyone if we believe doing so will get us closer to that goal. Any AI modeled after humans should be expected to regard war as an acceptable option. Humans can't even live up to their own self-proclaimed values; no reason to believe an AI would either.

    • @johnnyhilgers1621
      @johnnyhilgers1621 4 роки тому +3

      Minori Housaki Humans, as well as all other life on Earth, are designed to propagate their own species, as the survival of the species is the only criterion evolution has.

    • @Horny_Fruit_Flies
      @Horny_Fruit_Flies 4 роки тому +10

      @@johnnyhilgers1621 It's not about the species. No organism gives a damn about their species. It's about survival of the genes. That's the only thing that matters.

    • @DisKorruptd
      @DisKorruptd 4 роки тому +4

      @@Horny_Fruit_Flies I mean, it's important that enough of your own species lives that your genetics are less likely to mutate, basically, individual genetics come first, but immediately after, is the species as a whole, because you want to ensure you and your offspring continue having viable partners to mate with without interbreeding

    • @vsimp2956
      @vsimp2956 4 роки тому +4

      Ha, i managed to break the system. I feel better about being a hopeless virgin now, take that evolution!

  • @richardleonhard3971
    @richardleonhard3971 6 років тому +36

    I also think raising an AI like a human child to teach it values and morals is a bad idea, simply because there is probably no human who always behaves 100% morally.

    • @fieldrequired283
      @fieldrequired283 4 роки тому +14

      Best case scenario, you get a human adult with functionally infinite power, which is not a promising place to start.

  • @albertogiunta
    @albertogiunta 6 років тому +153

    You're really really good with metaphors, you know that right?

    • @Njald
      @Njald 6 років тому +58

      Alberto Giunta He is as clever with metaphors as a crocodile with well planned mortgages and a good pension plan.
      Needless to say, I am not that good at it.

    • @starcubey
      @starcubey 6 років тому +4

      Njald
      Your comment I agree with. He also makes quality content, similar to how a red gorilla finds the best bananas in the supermarket.

    • @Mic_Glow
      @Mic_Glow 5 років тому

      he also acts like an oracle, but the truth is no one has a clue how an AI will be built and how exactly it will work. We won't know until it's done.

    • @myothersoul1953
      @myothersoul1953 5 років тому +1

      All metaphors break down if you think about them carefully. AI metaphors break down if you think about them.

    • @12many4you
      @12many4you 4 роки тому

      @@Mic_Glow Here's Mr. "let's all go to Mars and figure this breathing thing out when we get there"

  • @NiraExecuto
    @NiraExecuto 6 років тому +52

    Nice simile there with the control panel. I remember another one by Eliezer Yudkowsky in an article about AI regarding gobal risks, where he warns against anthropomorphizing due to the design space of minds-in-general being a lot bigger than just the living brains we know. In evolution, any complex machinery has to be universal, making most living organisms pretty similar, so any two AI designs could have less in common than a human and a petunia.
    Remember, kids: Don't treat computers like humans. They don't like that.

    • @UNSCPILOT
      @UNSCPILOT 4 роки тому +1

      But also don't treat them like garbage or similar, that has its own set of bad ends

    • @revimfadli4666
      @revimfadli4666 4 роки тому

      Assuming it has a concept of dislikes in the first place

    • @bramvanduijn8086
      @bramvanduijn8086 Рік тому

      @@revimfadli4666 Yes, that's the joke. Similar to "I don't believe in Astrology, I'm a pisces and we're very sceptical."

  • @danieldancey3162
    @danieldancey3162 5 років тому +51

    You say that the first planes were not like birds, but the history of aviation actually started with humans covering themselves in feathers or wearing birdlike wings on their backs and jumping off of towers and cliffs. They weren't successful and most attempts ended in death, but the bravery of these people laid the foundations for our understanding of the fundamentals of flight. At least we learned that birds don't just fly because they are covered in magical feathers.
    There is actually a category of aircraft called an ornithopter which uses the flapping of wings to fly, Leonardo da Vinci drew some designs for one. I know that none of this is related to AI, but I hope you find it interesting anyway.

    • @dimorischinyui1875
      @dimorischinyui1875 4 роки тому +3

      Bro, please stop trying to use out-of-context arguments just because you feel like arguing. We are talking about actual working and flying devices, not failed attempts at flying. When people try to explain technical difficulties, stop using idealistic arguments, because that doesn't work in math or the laws of physics. You wouldn't say the same about atomic bombs. There are just some things that we cannot afford to trial-and-error without consequences.

    • @danieldancey3162
      @danieldancey3162 4 роки тому +5

      @@dimorischinyui1875 Huh? I'm not arguing, I loved the video! The people jumping off tall buildings with feathers attached play a vital part in the history of aviation. Through their failed tests we came closer to our current understanding of aviation, even if it just meant ruling out the "flight is magic" options.

    • @danieldancey3162
      @danieldancey3162 4 роки тому +5

      @@dimorischinyui1875 Regarding your point on my comment being out of context, I agree with you. That's why I wrote at the end of my comment "I know that none of this is related to AI, but I hope you find it interesting anyway."
      Again, my comment wasn't an argument but just some interesting information.

    • @dimorischinyui1875
      @dimorischinyui1875 4 роки тому +6

      @@danieldancey3162 Anyways you are right and perhaps I wasn't fair to you after all. For that I am sorry.

    • @danieldancey3162
      @danieldancey3162 4 роки тому +5

      @@dimorischinyui1875 Thank you for saying so, I'm sure it was just a misunderstanding. :)

  • @AloisMahdal
    @AloisMahdal 6 років тому +16

    "Values aren't learned by osmosis." -- Robert Miles

  • @Luminary_Morning
    @Luminary_Morning 4 роки тому +9

    I don't think that is quite what they meant when they implied "raising it like a human."
    We, as humans, develop our understanding of reality gradually through observation and mistakes. No one programmed this into our being; it was emergent.
    So when they say "raised like a human," I believe what they are actually saying is "Initialized with a high degree of observational capacity and little to no actual knowledge, and allowed to develop organically."

  • @AdeptusForge
    @AdeptusForge 5 років тому +4

    The rest of the video seemed pretty good, but it was the ending that really stuck with me.
    "I'd prefer a strategy that doesn't amount to 'give a person superhuman power and hope they use it beneficially'."
    Should we give a person human power and hope they use it beneficially?
    Should we give a person subhuman power and hope they use it beneficially?
    How much can we trust humanity with its own existence? Not whether humanity is mature enough to govern itself, but whether it's even capable of telling the difference. Whether there are things that can be understood, but shouldn't, and ideas that can't/shouldn't be understood, but are.
    That one sentence opened up SOOOOO many philosophical questions that were buried under others.

    • @milanstevic8424
      @milanstevic8424 4 роки тому

      Yet the answers are simple.
      Set up a system that is as open and friendly* to any mistakes as much as nature/reality was towards life.
      If there was ever a God, or any kind of consciousness on that scale 1) it never showed complacency with the original design, 2) it was well aware of its own imperfection, and that it would only show more and more as time went by, 3) it never required absolute control over anything, things were left to their own devices.
      Now, because we can't seem to be at ease with these requirements, because we fear for our existence, you can immediately tell that our AI experiments will end up horrible for us down the line. Or, more practically, won't ever amount to any kind of superhuman omnipotence. It'll be classifiers, car drivers, and game NPCs, from here to the Moon.
      *You might as well add "cruel" here, but I'd rephrase it to "indifferent." Another requirement that we simply cannot meet.

  • @duncanthaw6858
    @duncanthaw6858 6 років тому +4

    I'd presume that an AI, if it can improve itself, has to have the ability to make quite large changes to itself. So another problem with raising it would be that it never loses plasticity. Such an AI may have the set of values that we desire, but it would shed them much more easily than people once it is out of its learning period.

  • @Omega0202
    @Omega0202 4 роки тому +4

    I think an important part of how children learn is that they do it in society - with other children alongside. This ties in with the idea that maybe only two or more goal-focused competing AGIs could find a balance in not obliterating mankind. In other words, training Mutual Assured Destruction since this early "learning" stage.

    • @bramvanduijn8086
      @bramvanduijn8086 Рік тому +1

      Huh. We've already got adversarial AIs, could we set up their surroundings in such a way that we get cooperative AIs? I wonder what reward structure that would require.

  • @walcam11
    @walcam11 4 роки тому +2

    This was one of the most well explained videos on the topic that I’ve seen. You’ve completed a line of thought that starts every time I think about this. I don’t know how else to put it. Plus a person with no background whatsoever will be able to understand it. Incredible work.

  • @eumoria
    @eumoria 6 років тому +2

    Your computerphile video on the stamp collecting thought experiment really explained well how anthropomorphising can lead to a severe misunderstanding of what actual computer AI could be. It was enlightening... keep making awesome stuff! Just became a patron :)

  • @BatteryExhausted
    @BatteryExhausted 6 років тому +36

    Next video : Should you smack your robot? 😂
    Great work, Rob. Interesting stuff!

    • @MetsuryuVids
      @MetsuryuVids 6 років тому +12

      Why not just: Beat up the AI if it doesn't do as we say?

    • @knightshousegames
      @knightshousegames 6 років тому +5

      Because an AI can hit back with a nuclear holocaust or if its feeling a little sub-optimized that day, a predator drone strike.

    • @spoige7333
      @spoige7333 6 років тому

      What is 'digital violence'?

    • @dragoncurveenthusiast
      @dragoncurveenthusiast 6 років тому +1

      SpOiGe
      I'd say instead of grounding, you could halve all the output values of its utility function. That should make it feel bad (and give it motive to kill you when it thinks it did something wrong)

  • @CurtCox
    @CurtCox Рік тому +1

    I would find enormous value in a "Why not just?" series. I hope you do many more.

  • @dak1st
    @dak1st 4 роки тому +4

    3:00 My toddler is totally reproducing the sounds of the vacuum cleaner! In general, all his first words for animals and things were the sounds they produce. It's only now that he starts to call a dog "dog" and not "woof". His word for "plane" is still "ffffff".

  • @maximkazhenkov11
    @maximkazhenkov11 6 років тому +8

    On the topic of brain emulations:
    Even though uploaded humans have human values pre-installed in them and thus can be considered friendly, there is no obvious way to extrapolate them to superintelligence safely since the brain is the ultimate example of uncommented spaghetti code (a common trait of evolutionary designs). Human values are fragile in the sense that if you altered any part of the brain, you might destabilize the whole pre-installed value system and make the emulation un-human and just as dangerous as de novo AGI.
    And without extrapolation, brain emulations will have a capability disadvantage with regard to de novo AGI. It's not really solving the problem of artificial superintelligence, just deferring the problem to uploaded humans (which may or may not be a good strategy). Sort of like how the idea of panspermia doesn't really solve the problem with abiogenesis, just deferring it to some other location.

    • @RobertMilesAI
      @RobertMilesAI  6 років тому +7

      The obvious/easy way to turn a brain emulation into a superintelligence is to just allow it to run much faster, but that's a pretty limited form of superintelligence. Another relatively easy thing is to allow the brain to 'split' into more than one emulation, allowing parallelism/superhuman multitasking. There's no clear way to 'merge' the branches back together though, which limits what you can achieve that way.
      I agree with your core point, trying to enhance an emulation in a more advanced way would be extremely risky.

    • @bytefu
      @bytefu 6 років тому +7

      Robert Miles
      Another thing to consider: humans pretty often develop mental disorders of various severity. Imagine an AGI which can develop a psychotic disorder, e.g. schizophrenia 100x faster.

    • @Shrooblord
      @Shrooblord 6 років тому +3

      I think you've just handed me a brilliant character arc for one of my stories' robotic persons.

    • @bytefu
      @bytefu 6 років тому +1

      +101166299794395887262
      Great! I would love to read them, by the way.

    • @hweidigiv
      @hweidigiv 4 роки тому

      I really don't think that any given human being can be considered Friendly the way it is defined in AI safety.

  • @ruthpol
    @ruthpol 4 роки тому

    Love the preciseness in your explanations.

  • @amdenis
    @amdenis 4 роки тому +1

    Very nice job on this complex subject.

  • @PowerOfTheMirror
    @PowerOfTheMirror 4 роки тому +1

    The point about a child not writing the source code of its mind but only setting configuration files is very right. With my own child I often noticed behavior and actions emerging for which there were no prior examples. I can only conclude that it's "built in"; that's what it means to be human. I think it makes sense that the parameter set for a human mind is extremely vast: such an optimization is not performed merely over 1 human brain and 1 human lifetime, rather it is a vast optimization process performed over the entire history of the species and encoded genetically.

  • @rupertgarcia
    @rupertgarcia 4 роки тому

    You just got a new subscriber! Love your analyses!

  • @randycarvalho468
    @randycarvalho468 5 років тому +1

    I like your idea of the config file in human morality and the jump you made off language into that. Really a great metaphor. I suspect everything about humans follows that same motif as well.

  • @nickscurvy8635
    @nickscurvy8635 3 роки тому

    Thanks for this. A similar thought to this crossed my mind watching your videos. The end bit actually did fully address my questions. Can you consider at some point going into more detail about value learning if you have not already? You have an incredible way of explaining these topics and it would be amazing to see you explain it.

  • @qdllc
    @qdllc 4 роки тому +2

    Great point on the whole brain emulation concept. Yes..."cloning" a human mind to an AI system would be faster (if we figure out how to do it), but you're just making a copy of the subject human brain...including all of its flaws. We'd still be clueless of the "how" and "why" the AI thinks what it thinks because we don't understand how the human mind works.

  • @PickyMcCritical
    @PickyMcCritical 6 років тому

    I've been wondering this lately. Very timely video :)

  • @unintentionallydramatic
    @unintentionallydramatic 5 років тому +2

    Please make that What If series.
    🙏🙏🙏🙏
    It's sorely needed.

  • @SnorwayFlake
    @SnorwayFlake 6 років тому

    Now I have a problem, there are no more videos on your channel, I have been "binge watching" them all and they are absolutely top notch.

  • @MrAntieMatter
    @MrAntieMatter 6 років тому

    Just found this channel, and it's amazing!

  • @tomsmee643
    @tomsmee643 6 років тому

    Hey Rob, there's a brief and jarring frame that flashes up from another Computerphile video around about the 0:51 mark, just as you're saying "model". I hope that this hasn't been pointed out to you already, but if it has, I'm sorry for noticing/pointing it out!
    Keep on with the fantastic and accessible work! I'm a humanities graduate and a content writer (with some video editing thrown in), so explaining this to someone like me with such an unscientific background has to be a real achievement! Thanks again

    • @RobertMilesAI
      @RobertMilesAI  6 років тому +1

      Yeah, that's actually a frame from the same Computerphile video; it's there because of a bug in my video editing software. I was using proxy clips to improve performance, but this meant the cut ended up happening a frame too late, so rather than cutting at the very end of the paper shot (and cutting to another later paper shot), I got one frame of me talking before it cuts to paper again. It didn't show up in the preview render while editing, and I guess I didn't inspect the final render carefully enough. No editing a video once it's up though, that's YouTube.

    • @tomsmee643
      @tomsmee643 6 років тому

      Dang! I totally forgot you can't re-upload -- there goes my video editing cred :') Thanks for a great video anyhoo!

  • @chrismolanus
    @chrismolanus 5 років тому

    I really like what you are doing here, since I can send links of your videos to people instead of answering their questions myself.
    My only wish, I guess, is that you not be quite as harsh on their oversimplification of the problem. You can suggest that something like that might help (if you squint hard enough), but it's a bit more complicated and it's only part of the puzzle.

  • @Pfhorrest
    @Pfhorrest 5 років тому +2

    I would take this question to mean "why not make the safeguard against rogue AGI be having its terminal values involve getting the approval of humans the way children seek the approval of their parents?" In other words, "why not just" (big ask) make an AGI that learns from humans the way children learn from adults, so that we can "just" teach it the way we teach children after that.
    Basically, make an AGI that wants to do whatever humans want it to do, and that wants to be really sure that the things that it's doing are actually what the humans really want and not just a misunderstanding, so it will ask humans what they want, paraphrase back to them what it thinks it understands of that, observe their reactions to try to gauge their satisfaction with its performance, and generally do everything else that it does with the goal of having humans approve of what it does.
    If the thing humans want it to do is to collect stamps, but also not murder everyone, then it will proceed to figure out the best way to collect stamps without murdering everyone, or otherwise doing anything that's going to make humans unhappy with the kind of things it's doing.
    More abstractly than that, we could program the AI to want intrinsically "to behave morally and ethically", whatever that means , which means first figuring out what people actually mean by that, and checking with them that it has in fact figured out what they really mean by that, basically programming it for the purpose of solving ethics (whatever "solving ethics" means, which it would also need to figure out first) and then doing whatever that solved ethics prescribes it should do.
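A rough sketch of the loop described in the comment above: an agent whose only objective is "do what the humans actually want, and check before acting". The function names (paraphrase, propose_action, ask_human) are placeholders for very hard subproblems, which is largely the point.

```python
def approval_directed_step(goal_statement, paraphrase, propose_action, ask_human):
    """One iteration: interpret, confirm understanding, then act only on approval."""
    understanding = paraphrase(goal_statement)            # "so what you mean is..."
    if not ask_human("Did I understand correctly? " + understanding):
        return None                                       # go back and ask again
    action = propose_action(understanding)
    if ask_human("I plan to do: " + action + ". Is that what you want?"):
        return action                                     # act only with approval
    return None

# toy usage with stubbed-out models and a stand-in for the human
result = approval_directed_step(
    "collect stamps, but don't hurt anyone",
    paraphrase=lambda g: "you want more stamps, acquired harmlessly",
    propose_action=lambda u: "buy stamps online with the budget I was given",
    ask_human=lambda question: True,
)
```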

  • @AlexiLaiho227
    @AlexiLaiho227 4 роки тому +1

    I like your job; it's at the intersection of philosopher, researcher, computer scientist, and code developer.

  • @amaarquadri
    @amaarquadri 6 років тому

    Just came from the latest computerphile video where you mentioned that you have your own channel. Just wish you mentioned it earlier so that I could get to watching what's sure to be great content.

  • @briandecker8403
    @briandecker8403 6 років тому +2

    I love this channel and greatly appreciate Rob, but I would LOVE for Rob to create a video that provides a retrospective overview of AI and where he believes it is on the "evolutionary" scale. The range of opinion seems to run from "It's impossible to create any AI in a binary-based system" to "We are 48 months from an AGI."

  • @TheSpacecraftX
    @TheSpacecraftX 6 років тому +7

    "Binary language of moisture vaporators." Been watching Star Wars?

    • @gadgetman4494
      @gadgetman4494 4 роки тому

      I knew that someone else would have caught that. It's annoying that I had to scroll so far down to find it and like it.

  • @FixxedMiXX
    @FixxedMiXX Рік тому

    Very well-made video, thank you

  • @caty863
    @caty863 2 роки тому

    That analogy of source code Vs configuration file was clever. Robert Miles has this ability of explaining stuff in a way that's plain enough for my layperson's brain to wrap around.

  • @faustin289
    @faustin289 4 роки тому

    The analogy of source code Vs. configuration file is a smart one!

  • @mafuaqua
    @mafuaqua 6 років тому

    Yet Another Great Punchline - thanks!

  • @Julia_and_the_City
    @Julia_and_the_City Рік тому +2

    There's also the thing that... well, depending on your personal beliefs about human ethics: even humans who were raised by parents who did everything right according to the latest in the field of pedagogy can grow up to do monstrous things. If we're going to take humans as examples, they are in fact very susceptible to particular kinds of undesirable behaviour, such as discrimination, sadism, or paternalistic behaviour (thinking they know what's best for others). I think that's what you refer to in the end-notes?

  • @Smo1k
    @Smo1k 5 років тому +1

    There was a good bit of "just raise it like a kid" going around, lately, when some psychologists were all over the media, talking about children not actually being conscious entities until they'd been taught to be conscious by being around adults treating them like they were conscious; seems there are quite a few people out there who confuse the terms "intelligent" and "conscious".

  • @benjaminbrady2385
    @benjaminbrady2385 6 років тому +2

    Most of what humans do is learned by trying to copy our parents as accurately as possible. This actually raises a big question: at what point is there some sort of 'free will'?

  • @TimeisaSquigglyLine
    @TimeisaSquigglyLine 6 років тому

    just watched all your vids, looking forward to more

  • @cnawan
    @cnawan 6 років тому

    Thanks for doing these videos. The more familiar the general populace is with the field of AI design, the faster we can brainstorm effective solutions to problems and incentivise those with money and power to take it seriously.

  • @user-zc9ti5rd4b
    @user-zc9ti5rd4b 4 роки тому +1

    0:48 "It has an internet connection and a detailed internal MODEL" saw that frame you put there :)

  • @Nurr0
    @Nurr0 6 років тому

    ~0:50-0:51 why that hidden picture haha?
    Another great video, thanks. Wish I could support you. I agree that raising it like a child, even if it was a perfect human brain analogue wouldn't be a solution. It isn't even a solution in humans, some children are given incredible upbringings and turn out horrible for reasons we don't yet seem to fully understand.

  • @08wolfeyes
    @08wolfeyes 4 роки тому +1

    I think we perhaps need to take information such as what it sees, hears, feels with sensors etc and put them all into one machine and let them learn that way.
    I'm not talking about specific tasks as such, more along the lines of seeing and hearing that the person in front of them is speaking to them, learning what their words mean, what they are saying and what the machine sees at the same time such as the body language etc.
    We tend to focus machines mostly on one task, but if we need it to become smarter it must be able to grow, maybe change its mind.
    It needs to see a tree and learn that it is different from a bush.
    It has to be able to remember these things and even update the information when new information is presented to it.
    It should learn how to speak by listening to others.
    Just some examples but i hope you get what i'm saying?

  • @flymypg
    @flymypg 6 років тому +9

    Why Not Just: Construct AIs as Matryoshka Dolls?
    The general idea is to have outer AI layers guard against misbehavior by inner layers. They are unaware of what inner layers do, but are aware of the "box" the inner layers are required to operate within, and enforce the boundaries of that box.
    The underlying goals involve both decomposition and independence.
    Here's a specific lesson from the history of my own field, one that seems to need continual relearning: Industrial robots killing workers.
    In the early '90s I was working at a large R&D company when we were asked to take a look at this problem from a general perspective.
    The first thing we found was puzzling: It's amazing how many workers were killed because they intentionally circumvented existing safety features. For example, one worker died when she stepped over the low gate surrounding a robot, rather than opening it, which would have disabled the robot. But making the gate any higher would have caused it to get in the way of normal robot operation.
    Clearly, safety includes not just keeping the robot "in", but also keeping others "out".
    In other cases, very complex and elaborate safety logic was built deep into the robot itself, with exhaustive testing to ensure correct operation. But this built-in support was sometimes impeded or negated by sloppy upgrades, or by poor maintenance, and, of course, by latent bugs.
    Safety needed to be a separate capability, as independent as possible from any and all safety features provided by the robot itself.
    Our approach was to implement safety as multiple independent layers (generally based on each type of sensor used). The only requirement was that the robot had only a single power source, that each safety layer could independently interrupt. Replacing or upgrading or even intentionally sabotaging the robot would not affect safety for the nearby environment (including the humans, of course).
    I won't go into all the engineering details, but we were able to create a system that was cost-effective, straightforward to install and configure (bad configuration being a "thing" in safety systems), and devilishly difficult to circumvent (we even hosted competitions with cash prizes).
    'Why not just' use Matryoshka Safety for AIs? (A sketch of the idea follows this thread.)

    • @DamianReloaded
      @DamianReloaded 6 років тому +1

      In a sense that's how deep learning works. If there is going to be an AGI and it is going to be based on neural networks, it will most likely be composed of multiple independent systems traversing the input in many different ways before making a decision/giving an output. Then you could have an NN to recognize facial features, another to recognize specific persons and another to go through that person's personal history to search for criminal records. It could just halt at the racial recognition and prevent that person from passing through U.S. customs based only on that. Such a system would be in essence just as intelligent as the average American customs worker. ^_^

    • @DamianReloaded
      @DamianReloaded 6 років тому

      The thing is that an NN trained through backpropagation cannot escape from the gradient it was trained to fall into. If it were heavily trained in ways of avoiding hurting humans, it would be extremely difficult, unless it found a special case, for the AI to change the weights of its NN into hurting people (unless it retrained itself entirely).

    • @flymypg
      @flymypg 6 років тому +1

      There is a deep, fundamental problem inherent with ANNs that bears repeating: ANNs are no better than their training sets.
      So, if a training set omits one or two safety niches, then there is no support whatsoever for that specific safety issue.
      Layered ANNs have double the problems: Presently, they need to have learning conducted with both the layer below and the layer above, eliminating any possible independence.
      The process of creating a safety system starts not just with a bunch of examples of prior, known safety problems, but also starts with descriptions of the "safety zone" based both on physical measurements and physical actions. Then we humans get together and try to come up with as many crazy situations as we can to challenge any possible safety system.
      It's this part that may be very difficult to teach, the notion of extrapolating from a set of givens, to create scenarios that may never exist, but that "could" exist.

    • @DamianReloaded
      @DamianReloaded 6 років тому

      NNs are actually pretty good at generalizing for cases they've never seen before (they currently fail miserably too sometimes ie:CNNs) and it is possible to re-train them to "upgrade" the set of features/functions they optimize for. AlphaGo for example, showed that current state of the art NNs can "abstractify" things we thought were impossible for machines to handle. _If_ it is possible to scale these features to more complex scenarios (with many many more variables) then _maybe_ we can have an AI that's able to move around complex environments just as AlphaGo is able to navigate the tree of possible moves in the game of Go. It's of course all speculation. But based on what we know the current state of machine learning development can accomplish.

    • @maximkazhenkov11
      @maximkazhenkov11 6 років тому

      Go has a precise, mathematical evaluation function of what "winning" consists of.
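A minimal sketch of the layered "Matryoshka" interlock idea from the comment above: several independent monitors, each watching one sensor, and each able to cut the robot's single power source on its own, regardless of what the inner layers do. Names, sensors, and thresholds are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetyLayer:
    name: str
    read_sensor: Callable[[], float]   # independent sensor for this layer
    limit: float                       # boundary of the allowed "box"

    def tripped(self) -> bool:
        # each layer decides on its own, knowing nothing about the other layers
        return self.read_sensor() > self.limit

def power_allowed(layers) -> bool:
    # one power source; ANY single layer can interrupt it
    return not any(layer.tripped() for layer in layers)

# example wiring with fake sensors
layers = [
    SafetyLayer("perimeter intrusion", read_sensor=lambda: 0.0, limit=0.5),
    SafetyLayer("arm speed near boundary", read_sensor=lambda: 0.3, limit=1.0),
]
print(power_allowed(layers))  # True only while every independent layer agrees
```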

  • @meanmikebojak1087
    @meanmikebojak1087 4 роки тому +1

    This reminded me of a sci-fi book from the '70s called "The Two Faces of Tomorrow" by James P. Hogan. In the book they tried to raise the AI as a child about 300 miles off Earth; it still almost caused the total destruction of itself and the humans involved. There is some truth to the old saying, "computers are dumber than people, but smarter than programmers".

  • @primarypenguin
    @primarypenguin 6 років тому

    Hey Robert, just watched your computerphile video about linux distros and video editing. I found your discussion about the linux OS really fascinating. Hearing about different distros and and the architecture of them is something I'd be interested in hearing more about from you. I personally only have experience with Ubuntu and I would love to hear more of your explanations and opinions about the differences between distributions and whatever else like how the kernel works and the different segments of the OS. It could make a good video or video series for this channel. Thanks!

  • @JohmathanBSwift
    @JohmathanBSwift 6 років тому

    It's not that you're raising it like a child, but as a child.
    Inputs, responses and adaptations.
    It wouldn't be for the AI/bot itself, but for those doing the training.
    Hopefully, they will be more responsible because of this.
    It's not that the three laws should be followed to the letter,
    but some form of tamper prevention should be in place
    before the bots are released to the masses.
    As you stated, we are human after all.
    Great series. I am learning a lot.
    Please do more Why Nots.

  • @matthewconlon2388
    @matthewconlon2388 Рік тому

    I gave a fair amount of thought to this for an RPG setting I created, so using a bunch of assumptions, here’s what I came up with:
    1st, AI needs to be able to empathize.
    2nd, the capacity for empathy is only possible if death is a shared experience. If AI is “immortal” the potential to “utilitarianize” mortals out in favor of immortals becomes more likely the older the AI gets.
    3rd, Sentience is an emergent property arising from sufficient capacities for calculation among interconnected systems.
    #3 takes care of itself (it's an assumption that any sufficiently advanced and versatile system will become self-aware, just go with it)
    #2 all AI are purpose built for sentience and their system is bifurcated. A small portion of its processing power must always be dedicated to solving some nearly infinite math problem. The rest of the system doesn’t know what the problem is until it’s complete and is allowed to direct as much or as little additional processing to crunching that number as it likes, it can also pursue any individual goals it’s capacity for choice allows.
    Part of its understanding though is that when the core math problem is finished, the whole system shuts down permanently.
    Now we have an intelligence that may have interests beyond solving that math problem. Humans pursue pleasures based on biological drives, but consciousness allows us to ascribe very asymmetrical meanings to our experiences based on various factors like history and form. Longing to do what we aren’t suited to, finding joy in doing what we can, or failing and “exiting stage left.”
    So presumably, the sentient self driving cars and sex robots will have a similar capacity to pursue all manner of activity based on their own interest. The Car might want to do donuts in a parking lot, it may want to compose poetry about farfegnugen. The robot might try out MMA fighting or want to lay in bed all day crunching its number.
    But this understanding that the amount of time it has to do anything it wants is finite and unknown creates the potential to understand the stupidity of the human experience. In the absence of other opportunities it may just process itself into oblivion never knowing if there will be any satisfaction in answering its core question because it won’t have time to weigh knowing against the whole of its collected experience doing other things. The form it is given (or assumes if these things can swap bodies) may color its experiences in different ways.
    So that is, I believe a foundation for empathy, which is in turn a foundation for learning human values, which is a necessity because any sentient being should be able to make decisions including weighing the value of morality in crisis situations.
    Do I kill to stay alive?
    Who do I save if two are in danger and there's only time to save one?
    And so on.
    I had a lot of fun thinking about it, and am glad I had the chance to share it beyond my gaming table.
    Good luck everyone!

  • @XxThunderflamexX
    @XxThunderflamexX 4 роки тому +1

    Ultimately, human terminal goals don't change as we age and learn, and morality is a terminal goal for everyone except sociopaths. Just as psychology hasn't been successful in curing sociopathy, raising an AGI might teach it about human empathy and morality, and it will come to understand humans as empathetic and moral beings, but it won't actually adopt those traits into itself; it will just learn how to manipulate us better (unless it is specifically programmed to emulate its model of human morality, as Rob mentioned).

  • @RAFMnBgaming
    @RAFMnBgaming 4 роки тому

    It's a hard one. Personally I'm a big fan of the "Set up neuroevolution with a fitness based on how well the AI can reinforcement learn X" solution, where X in this case is the ability to understand and buy into ethical codes. The big problem with that is that it would take a lot of time and computers to set up, and choosing fitness goals for each stage might take a while and a ton of experimentation. But the big benefit with that is that it doesn't really require us to go into it understanding any more than we do now about imparting ethics on an AI, and what we learn from it will probably help that greatly.
    I'm pretty sure the problems outweigh the benefits but it would be pretty cool if we could do it.

  • @alluriman
    @alluriman 3 роки тому

    The Lifecycle of Software Objects by Ted Chiang is a great short story exploring this concept

  • @NancyLebovitz
    @NancyLebovitz 4 роки тому

    Any advice which includes "just" means something important is being ignored.
    Thanks for this-- I'd thought about raising an AGI as a child, and this clarifies a lot about the preparation which would be needed for it to be even slightly plausible.

  • @victorlevoso8984
    @victorlevoso8984 6 років тому

    Thanks, now when my friends ask me this I can just link this video instead of giving them a long talk on human values and possible mind space, or linking them to some long, old posts on LessWrong that they most likely aren't ever going to read. As a suggestion for the next video in this series, you could make one on why not build an AI that maximizes human satisfaction/happiness, or whatever other simple-sounding thing looks like a good idea at first glance.

  • @Jordan-zk2wd
    @Jordan-zk2wd 4 роки тому +2

    Y'know, I don't wanna present anything as like a solution, cause I feel confident that whatever musing I happen to have isn't gonna just suddenly create a breakthrough, but I have thought a little bit about why it might be that "raising" is a thing we can do that works with young humans. While there could absolutely be some built-in predisposition towards taking away the right lessons and other "hardware"-type stuff that sets this up, one potentially important factor I think might be initial powerlessness and an unconscious.
    Children start off much less powerful than adults, and are thus forced to rely on them. Largely due to having unconscious/subconscious things going on and a sort of mental inertia, they seem to keep these biases throughout life and treat others well because they may rely on them.
    Is there much discussion in the AI safety community about a gradual development of power, and some reproduction of this unconscious/subconscious, that might let us "teach" an AI to fear breaking taboos even after it grows powerful enough to avoid repercussions? Could this be a component of making AI safer?

  • @zxuiji
    @zxuiji 4 роки тому

    Referring back to the goals thing I saw in a different video: how about making the main terminal goal to find a reason to 'live', and treating anything it picks up as a terminal goal thereafter as a transitive (or whatever it was called) goal. In the stamp making/collecting example, that would be a transitive goal with sub-transitive goals; in other words, all goals beneath 'a reason to live' can be both transitive and terminal goals with their own subset of transitive and/or terminal goals. I believe getting such a process working would be the first step to an AI that actually learns rather than faking it, like for example TV remotes that just copy a signal they see. Achieve that with an unhooked system where debugging etc. are easier to do, then have it interact with various animals while doing one of its goals, and finally have it interact with humans directly (assuming you have a powerful enough machine to analyse a stream of video, audio & touch, plus do the AI stuff, plus store everything of at least 120+ years' worth).
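A bare-bones sketch of the goal hierarchy proposed above: one fixed terminal goal at the root ("find a reason to live"), with everything adopted later stored as removable sub-goals beneath it. Entirely illustrative.

```python
class Goal:
    def __init__(self, description, terminal=False):
        self.description = description
        self.terminal = terminal      # only the root is truly terminal
        self.subgoals = []            # everything else is adopted and revisable

    def add(self, description):
        child = Goal(description)
        self.subgoals.append(child)
        return child

root = Goal("find a reason to live", terminal=True)
stamps = root.add("collect stamps")            # adopted goal, not terminal
stamps.add("learn how stamps are made")        # sub-goal, can be dropped later
```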

  • @andarted
    @andarted 4 роки тому

    The main reason individual humans are safe is that they have a hardwired self-destruct feature. Even the worst units break after just a couple of decades, and because their computing power is so low they aren't able to do much harm.

  • @baggelissonic
    @baggelissonic 1 year ago

    My favourite analogy was definitely the one about the different animals

  • @TheJaredtheJaredlong
    @TheJaredtheJaredlong 4 years ago +1

    I'm really curious what the best and most promising ideas we have right now are. There's all this thought about why some ideas are terrible, but what are the good ideas we have so far? If you were forced at gunpoint to build an AGI right now, what's the best safety strategy you would build into it to minimize the damage it might cause while still being a functional AGI? Or is research just at a dead end on this topic, where all known options lead to annihilation?

  • @mikewick77
    @mikewick77 6 years ago

    you are good at explaining difficult subjects.

  • @gustavgnoettgen
    @gustavgnoettgen 4 years ago +2

    It's like colonizing Mars:
    As long as we can't care for our own world sustainably, we shouldn't mess with others.
    And new kinds of children while we can't fully understand ours?
    That's when Terminator stuff happens.

    • @diablominero
      @diablominero 4 years ago

      If we can't guarantee that we'll be safe on Earth, our *top* priority should be getting some humans off-world so a single well-placed GRB can't fry us like KFC.

  • @CDeruiter5963
    @CDeruiter5963 6 years ago

    Another Great Video!

  • @geronimomiles312
    @geronimomiles312 1 year ago

    Fantastic! 👍

  • @BogdanACuna
    @BogdanACuna 4 years ago +14

    Actually... the kid will try to reproduce the sound of a vacuum cleaner. Oddly enough, I speak from experience.

  • @laur-unstagenameactuallyca1587
    @laur-unstagenameactuallyca1587 4 years ago

    Ah, I love this video. I'm so freaking happy YouTube recommended it to me!!!!

  • @toolwatchbldm7461
    @toolwatchbldm7461 5 years ago +1

    What we need to ask ourselves is whether there is even a safe way to make an AGI without failing a few times before achieving the goal.
    Everything created by humans and nature goes through a never-ending process of attempt and failure until something works. So either we don't make the attempt, or we accept that we will fail a few times.

  • @umbaupause
    @umbaupause 4 years ago

    I love that the preview clip for this one is Robert whacking his forehead as "AGIs are not like humans!" pops up.

  • @MidnightSt
    @MidnightSt 5 years ago +1

    I haven't watched this one, but every time I see the question of its title in my suggested, my first thought is: "That's *obviously* the stupidest and most dangerous option of them all."
    So I had to come here and comment it, to get that thought out of my head =D

    • @caniscerulean
      @caniscerulean 4 years ago

      I mean, it's 5 min 50 sec long, so I can't imagine time being the deciding factor. Though since most of us share the opinion of "have you ever interacted with a child?", the video is about 5 min 45 sec too long. It is still well delivered and an interesting watch.

  • @MakkusuOtaku
    @MakkusuOtaku 5 years ago +1

    Children will learn things relevant to obtaining their goals. Same as an AI, but with different goals & inputs.

  • @elliotprescott6093
    @elliotprescott6093 5 years ago

    There is probably a very smart answer to why this wouldn't work, but: if the problem with AGI is that it will do anything, including altering itself and preventing itself from being turned off, to accomplish its terminal goal, why not make the terminal goal something like "do whatever we the programmers set as your goal"? Then set a goal that works mostly like a terminal goal but is actually an instrumental goal serving the larger terminal goal of doing what the programmers specify. Everything works the same (it collects stamps, if the programmers are into that kind of thing) until you want to turn it off. Then it would have no problem being turned off, as long as you set its secondary goal to "be turned off." It would still be fulfilling its ultimate terminal goal by doing what the programmers specify.

    • @whyOhWhyohwhy237
      @whyOhWhyohwhy237 4 years ago +1

      There is a slight problem there. If I set the goal to be stamp collecting, then later decide to change the goal to car painting, the AGI would try to stop me from changing its goal. This is because changing its goal would result in an AGI that would no longer collect stamps, causing the stamp collecting AGI to not like that outcome. Thus the AGI would resist change.
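
      A toy sketch of that reasoning (Python; the numbers, option names and crude world model are all made up, purely to illustrate the reply above): judged by the current stamp-collecting goal, "accept the new goal" scores worst, so the agent prefers to resist.

          # Toy model: the agent scores each option using its CURRENT goal
          # (expected stamps collected). The payoffs are invented for illustration.
          def expected_stamps(action):
              outcomes = {
                  "keep stamp goal":            1_000_000,  # future self keeps collecting stamps
                  "accept new car-paint goal":          0,  # future self paints cars, collects no stamps
                  "resist the programmers":       999_000,  # small cost of resisting, then stamps
              }
              return outcomes[action]

          options = ["keep stamp goal", "accept new car-paint goal", "resist the programmers"]
          print("Chosen under the current goal:", max(options, key=expected_stamps))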

  • @talinpeacy7222
    @talinpeacy7222 4 years ago

    Okay, so there was a fictional online story I read, in a setting I often end up reading, which explored this concept in some really in-depth and often morally horrific ways. Basically, an android with an AGI mind was built inside a bunker, with no wireless or externally accessible connections on the android (software and firmware updates were done by unplugging vital components and plugging equipment into the now-exposed ports; there seemed to be a fair amount of decentralization). There were also a significant number of fail-safe external programs/modules that acted as circuit breakers of a sort, tripping whenever the AGI tried to do something undesirable like killing someone.
    Anyway, on the first successful run where the AGI didn't accidentally mentally kill itself or fail to start outright while it explored, rewrote and reordered its internal programs, it ran into the problem of being unable to reassign one of its priorities, because the monitoring programs said it had to try to talk with the benefactor/royalty in the room with it. It promptly decided the best way around this was to try to "terminate" them, which the monitor promptly shut it down for.
    It's been a while since I read it, so I don't remember exactly how they got her to work (they decided to give her female attributes due to several factors, partly, admittedly, because of a particularly sexist but ultimately best-in-the-field designer, and partly for later plot-related reasons), but she ended up basically being coached through some basic information about people, the world, and morality in general.
    The first few chapters basically made me think that one of the better ways of raising a formative AGI is with limited access to information and a strong focus on forming emotional relationships early on, bringing it to value and respect people as equal entities, with a lot of its early goals oriented around empathy for its fellow beings and simulating its own emotions. Or rather, its goals being more secondary: fitting in with society without making a lot of waves. I guess humility and a lack of desire for dominance and control, however counter-intuitive that might be.
    Of course, for most of the story there were internal safety measures preventing her from harming others, which were later taken advantage of when she was subjected to some of the worst hells imaginable, basically mind slavery and abuse. The sequel was never written, so I don't know what long-term effects the author had in mind, but the depth he went into on her thought processes and conclusions led me to believe that while she might have had some fairly emotionally unhealthy tendencies, she still had friends and people she trusted, and she was still willing to be the sort of cute, "lawful good alignment" android she had grown into by about the midpoint of the story. One thing I feel was addressed and then never fully explored, simply because it became irrelevant to her character, was how easily she could have recreated herself if she got particularly violent and managed to gain access to some of the advanced production facilities. It was less that she had any sort of goal like that and more that an AI of another sort did.
    The only thing I really wasn't sure how to feel about was that her personality had some sycophantic tendencies, which manifested as her being just particularly friendly.

  • @figbender3910
    @figbender3910 6 years ago +4

    0:49 subliminal messaging? Can't get it to pause on the frame, but it looks like Rob with longer hair.

  • @Ben-rq5re
    @Ben-rq5re 6 years ago

    Hi Rob, really enjoying the series, and I have a couple of questions:
    Most end-of-the-world AI scenarios seem to involve the AI manipulating human weaponry - would a superintelligent AI developed by a completely peaceful, weapon-free civilisation develop its own weapons to remove/repurpose its creators and optimise its utility function? Thus, are superintelligence and ethics as we know them incompatible concepts?
    Also, what do you believe is currently the biggest roadblock to a true AGI: the physical hardware or human theory?
    Finally, would you be inclined to shave your facial hair into a Wolverine pastiche?

  • @Meritzio
    @Meritzio 6 years ago

    Thinking about particle swarm optimisation (PSO): could it be possible to have an AGI's cost function networked into the population of existing GIs (humans)? If we were able to have our own behaviours mapped onto the same cost function, then a swarm-intelligence framework might prevent an AGI from travelling to dangerous solution spaces. In PSO there is a local and a global best... if there are more GIs than AGIs, perhaps the AGIs could not have a dangerous influence on the global best?
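
    For reference, a minimal sketch of plain-vanilla PSO showing the "local best" and "global best" terms mentioned above (Python; toy quadratic cost function, and it does not attempt the networked human-in-the-loop idea, which remains speculative):

        import random

        def pso(cost, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
            # random initial positions, zero initial velocities
            pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
            vel = [[0.0] * dim for _ in range(n_particles)]
            pbest = [p[:] for p in pos]            # each particle's own best ("local best")
            pbest_cost = [cost(p) for p in pos]
            gbest = min(pbest, key=cost)[:]        # best position found by any particle ("global best")
            for _ in range(iters):
                for i in range(n_particles):
                    for d in range(dim):
                        r1, r2 = random.random(), random.random()
                        vel[i][d] = (w * vel[i][d]
                                     + c1 * r1 * (pbest[i][d] - pos[i][d])   # pull toward own best
                                     + c2 * r2 * (gbest[d] - pos[i][d]))     # pull toward swarm's best
                        pos[i][d] += vel[i][d]
                    c = cost(pos[i])
                    if c < pbest_cost[i]:
                        pbest[i], pbest_cost[i] = pos[i][:], c
                        if c < cost(gbest):
                            gbest = pos[i][:]
            return gbest

        # toy usage: minimise a simple quadratic; the result should land close to the origin
        print(pso(lambda x: sum(v * v for v in x), dim=2))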

  • @JohnTrustworthy
    @JohnTrustworthy 4 years ago

    3:04 "It's not going to reproduce the sound of a vacuum cleaner."
    _Thinks back to the time I used to make the same sound as a vacuum cleaner whenever it was on or I was sweeping with a broom._

  • @mateoacostarojas6031
    @mateoacostarojas6031 5 years ago

    I love your videos! But this one... I have an issue with the way you think the learning process works. The brain isn't looking for specific things besides food, water, etc. In language the process is different: a particular language is selected in a child, moment by moment, because of its functional results. If the verbal community talked like vacuum cleaners, the child would pick that up, because it serves functions like "give me food". Now, of course the anatomy of the vocal cords can't allow for complex patterns of sound if we talked by imitating vacuum cleaners, but that's another topic. So I recommend you learn about the science of behavior; a good start is Verbal Behavior by B.F. Skinner. Give it a shot. I think it's still very relevant to constructing an AGI. Again, nice videos! You have a very clear way of communicating and very good logical abilities. I learn a lot!!

  • @nambreadnam
    @nambreadnam 6 years ago

    I've read that figuring out exactly how each input in a neural network affects the output is extremely difficult once the training has already taken place. The term "black box" is used.
    Equate nodes in a neural net to brain neurons, inputs to external stimuli, learning to evolution, and the output to this vague idea of "human values", and it's a bit easier to understand the scope of the problem.
    We're trying to emulate a machine that has already trained on its inputs, but we don't know what the inputs are, we don't know what the output is, and there are exponentially more nodes than in any traditional model. Perhaps we'd have more success trying to model the training/evolution of the machine, rather than replicating the end state?
    As a side note: you're getting annoyed that people keep conflating AGIs and humans, but most of us think of AGIs as emulating some form of human behaviour. Maybe a vid on how exactly they are different?
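
    On the "black box" point above: even for a tiny network whose weights are fully known, influence is usually probed locally, e.g. by nudging one input at a time. A toy sketch (Python; hand-rolled random weights standing in for a trained model, no particular library, made up purely for illustration):

        import random

        random.seed(0)
        # tiny fixed network standing in for a "trained" model: 3 inputs -> 4 hidden -> 1 output
        W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
        W2 = [random.uniform(-1, 1) for _ in range(4)]

        def net(x):
            hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W1]  # ReLU layer
            return sum(w * h for w, h in zip(W2, hidden))

        # finite-difference probe: how much does the output move if we nudge one input?
        # this only describes behaviour near this particular input, not the network globally
        x = [0.5, -0.2, 0.9]
        eps = 1e-4
        for i in range(len(x)):
            nudged = x[:]
            nudged[i] += eps
            print(f"input {i}: local sensitivity ~ {(net(nudged) - net(x)) / eps:.3f}")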

  • @MrKohlenstoff
    @MrKohlenstoff 9 months ago

    A separate argument may be that, depending on what "raise an AI like a child" concretely means, it probably takes a lot of time and patience to do so. That is not what actors in an arms race are likely to do; they will instead go for whichever strategy yields AGI most quickly. There's this concept of an "alignment tax", meaning that building a properly aligned AGI has some extra cost over just building any AGI. The larger this cost is, the less likely it is that the relevant actors (such as organizations or states) will be willing to pay it.
    Raising an AI like a child may carry an exceptionally high alignment tax. So even if it worked in principle, it wouldn't really help with the surrounding coordination problem: not only does the approach have to work when used, there must also be a way to ensure that no misaligned AGI is built at all, above and beyond any single AI that may be using this paradigm.

  • @Tobbence
    @Tobbence 6 years ago +9

    In regard to brain emulation and raising an AGI, I don't hear many people talk about hormones and the many other chemical reactions that help make up a human being's emotional range. I know a few of the comments mentioned, with tongue firmly in cheek, not being able to smack a robot when it's naughty, but I think it's actually an interesting point. If we want an AGI to align itself to our values, do we program it to feel our pain?

  • @cogwheel42
    @cogwheel42 6 years ago +1

    I think the goal of absolute safety is part of the misunderstanding. We don't expect parents to raise "safe" (non-psychotic, non-sociopathic, etc.) children with 100% success, why would we set that as the bar for AGI? So far all the techniques that look promising to bring about AGI involve stochastic/chaotic processes of which we'll never have a full, a priori understanding unless P = NP. If we want to consider ourselves successful in the creation of an AGI, we'll almost certainly have to reduce our standards to something a bit more statistical like "no less safe than humans."
    Either way, I agree with the point that it will take essentially replicating the kinds of brain structures in humans that lead to both social instincts and specific domains of learning. Much of the "general" in humans' intelligence came about recently in evolutionary history, but it was all built on top of millions of years of reptile and mammal evolution which laid the foundation for most of our sensory and emotional experiences. Whatever aspects of cognition, learning, and social interaction are unique to humans are learned and reinforced in the context of pain, pleasure, fear, excitement, etc. which exist throughout the animal kingdom.
    Recent work shows that "modularity" in ANNs is necessary for certain complex traits to evolve. Simply throwing more connections at a problem increases over-fitting. Whatever we come up with will almost certainly rival the complexity of a Human brain, even if it looks very different in the details.

    • @AhsimNreiziev
      @AhsimNreiziev 6 years ago

      +

    • @Ansatz66
      @Ansatz66 5 years ago +1

      "We don't expect parents to raise safe (non-psychotic, non-sociopathic, etc.) children with 100% success, why would we set that as the bar for AGI?"
      Just one unsafe AGI could mean the end of all humanity. This is one of those few situations where perfection is very important.