AI Safety - Computerphile

  • Published 4 Feb 2025

COMMENTS • 574

  • @Vospi
    @Vospi 9 years ago +139

    I ADORE this speaker. While many other things on the channel seem too distant or just overexplained to me, Mr. Miles keeps it fresh, concise and somewhat personal. Excellent pace and mood.

    • @gabrote42
      @gabrote42 3 years ago

      That's why his channel is my main source

  • @cryoshakespeare4465
    @cryoshakespeare4465 9 years ago +242

    Excellent, I really appreciate Rob Miles' moderacy and sincerity - he makes a simple yet important point, and has the capacity to back it up with deeper explanation if need be.
    Also, this video doesn't seem to be sensationally titled nearly as much. And I'll admit that it caught my eye less in my subscription feed, but if it was intentional, I appreciate the difference.

    • @myperspective5091
      @myperspective5091 9 years ago +1

      That whole video was sensationalist, vague drivel.

    • @cryoshakespeare4465
      @cryoshakespeare4465 9 years ago +5

      Robert Swift Well, I've already made my response to you clear on your comment thread (;

    • @myperspective5091
      @myperspective5091 9 years ago

      Chronosaur
      I chatted at length on the subject in another thread already.
      I like the subject, though. You have an opportunity to give your own argument. Do the legwork and build up some talking points.

    • @rich1051414
      @rich1051414 8 years ago +1

      I am a programmer, so this wasn't new to me, but I didn't know moderacy was a word until now. We are all learning things!

  • @CatnamedMittens
    @CatnamedMittens 9 years ago +380

    Yes, more of these AI videos with Rob Miles!

    • @CatnamedMittens
      @CatnamedMittens 9 years ago +1

      ***** Thanks.

    • @repker
      @repker 9 years ago +1

      +CatnamedMittens “Michael Bialas” thanks

    • @Nerevarh
      @Nerevarh 9 years ago

      +CatnamedMittens „Michael Bialas“ First Thooorin, then Computerphile. Next time we meet in the comments of some beauty vlog, I suppose? ;)

    • @CatnamedMittens
      @CatnamedMittens 9 years ago +1

      Marius Hamacher Let's not get carried away.

    • @simoncarlile5190
      @simoncarlile5190 9 years ago

      +CatnamedMittens “Michael Bialas” Could watch these all day

  • @orthoplex64
    @orthoplex64 7 years ago +79

    5:08 "Or we discover that the brain is literally magic" lol
    "Uh, sir, we may have encountered a problem in our general AI project."
    "Well what is it?"
    "We've discovered that human brains are literally magic."

    • @vanderkarl3927
      @vanderkarl3927 3 years ago +13

      One problem that I've encountered while trying to discuss these sorts of topics with other people is that some people firmly believe that the human brain *is* actually magic, that there's something unmeasurable about it which, for whatever reason, can't be recreated with computers. And if I try to challenge these beliefs, it is unthinkable to them because it would upturn the very foundation of their philosophies. We're going to encounter these sorts of problems on grander, more tangible scales in the near future, and it will be critical that the side of AI safety comes out on top.

  • @slugrag
    @slugrag 9 years ago +280

    My favorite on computerphile

    • @lorenzvo5284
      @lorenzvo5284 9 years ago +2

      +Doug Soutar I like him a lot too

    • @BanterEdits
      @BanterEdits 9 years ago +9

      +Doug Soutar Tom Scott 4lyfe

    • @lorenzvo5284
      @lorenzvo5284 9 years ago

      MLGBanterVids Yeah, he's awesome. He has his own channel though

    • @sweetspotendurance
      @sweetspotendurance 9 years ago +2

      +Doug Soutar Agreed, Rob Miles is awesome.

    • @CuulX
      @CuulX 9 years ago

      +Deer Viehch
      Link? It's not in description or search results.

  • @Wes_Jones
    @Wes_Jones 8 years ago +8

    I love listening to Rob speak. I wish there were more videos available of him.

    • @RobertMilesAI
      @RobertMilesAI 6 years ago +4

      I've got my own channel you know

  • @tunech69
    @tunech69 1 year ago +6

    Watching this after ChatGPT's release, and after Bing lost its mind and became hostile, is quite disturbing and unnerving...

  • @marklondon9004
    @marklondon9004 1 year ago +6

    Hey, the future here. It's a lot closer than you thought!

  • @ts4gv
    @ts4gv 11 months ago +3

    I remember seeing this stuff and not worrying about it at all. Those were the good times.

  • @Zerepzerreitug
    @Zerepzerreitug 9 years ago +8

    Rob Miles is such a good speaker. Hope there are more videos with him.

  • @morscoronam3779
    @morscoronam3779 8 years ago +6

    As a mechanical engineer, requiring that any design be perfect without building working prototypes would be a nightmare. I'm more fascinated by the concept of general intelligence AI than I am able to understand it all, but anyone working toward such a goal definitely won't be bored anytime soon.

  • @mwanakeinc.8514
    @mwanakeinc.8514 8 months ago +2

    So I'm watching this 8 yrs later and woaah...

  • @ekkehardehrenstein180
    @ekkehardehrenstein180 8 years ago +4

    This guy is so satisfying to listen to. I sometimes tend to lose hope in human intelligence when looking at recent events in the media.

  • @Eysc
    @Eysc 9 years ago +162

    00:29 dem hair transition

    • @AnOddScot
      @AnOddScot 9 years ago +18

      +E SC He went from Moss to Danny Sexbang

    • @SFGJP
      @SFGJP 9 years ago +3

      +AnOddScot NSP!

    • @CYXXYC
      @CYXXYC 9 years ago +2

      +E SC they get bigger throughout the whole video

    • @Moopzoo
      @Moopzoo 8 years ago

      English Mac DeMarco

  • @MetsuryuVids
    @MetsuryuVids 9 years ago +28

    This guy is the best on this channel; he clearly knows what he's talking about.

  • @sillyshitt
    @sillyshitt 1 year ago +3

    Time to follow up on this video, perhaps? With the latest developments? Please?

  • @tomtomski4454
    @tomtomski4454 9 years ago +4

    He made my day... I have watched a number of videos today and this is the wise one.

  • @Njald
    @Njald 9 years ago +3

    Btw, Rob Miles is awesome in these videos. He lands somewhere halfway between really cerebral and really intrigued by whatever topic he is talking about. A David Attenboroughesque quality, if you will.

  • @Mr1Samurai1
    @Mr1Samurai1 9 years ago +3

    I love this guy. His explanations are great and the topics are interesting.

  • @NickCybert
    @NickCybert 9 years ago +1

    Love the A.I. videos, please keep doing them from time to time!

  • @godofspacetime333
    @godofspacetime333 1 year ago +3

    The difference between what is considered normal human behavior and dangerous human behavior is so small. A sociopath or a narcissist is perfectly capable of operating in human society, in some ways even more successfully, but the difference in behavior is subtle and we generally see them as a danger to the rest of us. But we can’t even really treat those conditions in humans all that well, how on earth do you stop a general AI from becoming a sociopath.
    On a related note, the AIs out now are already capable of lying to you. Think on that for a second.

  • @CutcliffePaul
    @CutcliffePaul 1 year ago +3

    I bet you don't still think we're far away from AGI now in 2023. 🤯

  • @NeilRoy
    @NeilRoy 9 years ago +14

    It's interesting to me just how difficult morality really is. It seemed so simple with Asimov's laws of robotics, until looked at closer.

    • @gabrote42
      @gabrote42 3 years ago +5

      I mean, Asimov himself subverted his laws with edge cases even more than he exemplified why they exist. And positronic brains were made manually, not optimized by optimizers

    • @codrincx
      @codrincx 3 years ago +2

      @@gabrote42 This. So many people swear by the laws without even stopping to consider that even in the original, they were broad guidelines at best.

    • @gabrote42
      @gabrote42 3 years ago +1

      @@codrincx and harmful overheat crashers at worst

  • @BattousaiHBr
    @BattousaiHBr 9 years ago +41

    I've had several arguments with friends who claim that computers smarter than humans are impossible, yet they don't even understand the basic principles behind the human brain. Even when I try to explain with real-life examples, they still hold on to their belief that somehow our brain is impossibly special and amazing.

    • @nosuchthing8
      @nosuchthing8 9 years ago +10

      At some point they will be able to model every neuron and connection in the brain at an atomic level, and then its simply a matter of connecting up the brain in a jar to inputs and outputs.

    • @DeusExAstra
      @DeusExAstra 9 years ago +10

      +BattousaiHBr That's the nature of religious beliefs. Your logic or facts will make no difference. They are expressing beliefs about a highly technical field of which they know little (Dunning Kruger Effect) and they will just not listen to anyone who tells them that they are wrong.

    • @Theraot
      @Theraot 9 years ago +7

      +BattousaiHBr
      It sounds like the god of the gaps. You say they don't understand the brain, and also that they don't think it's possible for a computer to reach a similar status. They probably have more affinity for understanding computers than brains, so they perceive them as fundamentally different. If you try to inquire into what the difference is, they may say it is god / magic / nature / quantum physics (or some other thing that is generally considered beyond comprehension from their point of view).
      What I say is that they hold their belief that the brain is somehow impossibly special and amazing because they don't even understand the basic principles behind the human brain.
      You are not winning this argument in the abstract. Examples of "smart" computers (which is what I understand you mean by real-life examples) risk being seen as cute things somebody programmed - the smart part came from the programmer, not the computer. Instead, dig a level down: the brain is neurons, veins, arteries, chemicals... if the universe is deterministic at that level, it can be modeled in a computer. If the universe is not deterministic at that level, we could incorporate that into computers.
      Try that argument. If they say that if we do that it isn't a computer but a brain... well, it falls to semantics; have them express what makes a brain a brain. Now you can work out some examples to address what they come up with.

    • @BattousaiHBr
      @BattousaiHBr 9 years ago +1

      *****
      Thanks to everyone for your input.
      To address your question, by "real-life examples" I was actually referring to how humans actually do things. I always try to make inquiries that get them thinking about how our brain actually works to see their answer, and when they get it right I immediately ask the same question about how a computer would do such a thing. For instance, they claim computers have to be programmed to do everything and we don't, so I ask "how exactly do you know how to move your muscles? or how to see?". It usually takes repeating the question many times in many different ways, but sometimes they get it right, and immediately after I switch to the same question with human replaced by computer. Even when the connection becomes apparent, they still refuse to acknowledge it.

    • @Theraot
      @Theraot 9 years ago +3

      BattousaiHBr I see, that makes sense.
      Since they say that a computer needs to be programmed to do something, to get AI across you need to tackle the problem of learning. What does "learn" mean? How do you learn?
      What if you program the computer to learn?
      There are three approaches to learning to consider:
      - learn by instruction: it would devolve into natural language and knowledge representation. Those areas are still in development.
      - learn by example: this is the case of supervised learning.
      - learn by experimentation: this is the case of unsupervised learning.
      If you avoid "learn by instruction", you are on better ground.
      ---
      Regardless of all the discussion, we don't have computers smarter than humans. At least not smarter in all senses of the word.
      Yet the thing is, they don't believe that machine learning exists. We do have examples of machine learning; you could try to find one you can show.
      In the end, some people won't get it until they do it themselves. It's easy to say that a piece of software learned if it solves a problem that the author doesn't know how to solve. In particular if they are the author.
      ---
      I want to recommend the book "Artificial Intelligence: A Modern Approach". I have the second edition. The book is university level.
      It has a lengthy introduction; the first two chapters (i.e. Part I) are - imho - accessible to everybody.
      From chapter 3 (i.e. Part II) it gets quite technical. With some programming background, you could apply some of the stuff from the book.
      Parts V and VI are the real deal. Part V talks about missing information. Part VI talks about learning.
      The second edition still holds up today; I don't know what's new in the third one though.
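The "learn by example" case mentioned above can be sketched in a few lines of Python. This is a toy 1-nearest-neighbour learner, an illustrative assumption rather than anything from the thread; the function name and training data are invented:

```python
# Toy supervised ("learn by example") learner: 1-nearest-neighbour.
# Illustrative sketch only; names and data are made up.

def nearest_neighbour(examples, query):
    """Predict a label for `query` from labelled (point, label) examples."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: sq_dist(ex[0], query))
    return label

# The program is never told the rule; it generalizes from the examples given.
training = [((0, 0), "cold"), ((10, 10), "hot")]
print(nearest_neighbour(training, (1, 2)))  # prints "cold": nearest to (0, 0)
```

The point of the sketch is the one the commenter makes: the programmer supplies a learning procedure, not the answers.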

  • @Steam1901
    @Steam1901 9 years ago +23

    Superintelligence, by Nick Bostrom, great read on that exact topic.

    • @pramitbanerjee
      @pramitbanerjee 9 years ago +1

      +Steam1901 I don't have money to buy it

    • @SilvioPorto
      @SilvioPorto 8 years ago +4

      pirate it

    • @DaybreakPT
      @DaybreakPT 6 years ago

      @@SilvioPorto KickassTorrents is my new favorite website for acquiring more knowledge I couldn't get otherwise lol

  • @Robert-nz2qw
    @Robert-nz2qw 9 years ago +2

    Rob Miles is my new favorite futurist. More vids please 😃

  • @jeffbloom3691
    @jeffbloom3691 7 years ago +1

    More videos with Rob. He's awesome.

  • @quarkraven
    @quarkraven 7 years ago +1

    Rob doesn't overestimate the potential danger of AI. He underestimates the existing danger of the social system. We will never be able to solve the problem of AI safety when the most powerful people benefit from surveillance, war and exploitation. AI is already being used to advertise to us, to shape the information we get, to maximize profits at the expense of working people. Unpredictability is not the issue so long as we can predict that even if the first general AI is benevolent, it won't stay that way.

  • @kontakt4321
    @kontakt4321 8 years ago

    I think that viewing ourselves as a part of the greater thing that is "life," and the desire for that to continue at all costs beyond our individual lives is a huge part of what we hope they will agree on.

    • @kontakt4321
      @kontakt4321 8 years ago

      Something that I feel frequently gets left out of the AI's theorized logic process is that it will know that we have survived for a tremendous length of time and were able to create it. That provides a stability for the future existence of more, better, and different AI to know or become, which as a young AI it cannot guarantee for itself.

  • @oussamagrine9853
    @oussamagrine9853 1 year ago +2

    AGI is closer than ever

  • @tiagozortea
    @tiagozortea 9 years ago

    I totally agree. I would even say that there is no doubt: the machines maximize whatever we ask, and if we give the wrong maximization function, the machines will do whatever it takes to achieve it; there is no discussion about it. It's up to us to define the correct objective.

  • @q3dqopb
    @q3dqopb 9 years ago +2

    Please make a review of Roger Penrose's categorization of AI views (from A, for absolute strong-AI optimists, to D, for those who say the human brain is magic and true AI is impossible in principle).
    Also, a poll among AI researchers would be nice - who holds which of A-B-C-D.
    Also, a review/explanation of Penrose's views/proofs would be nice. YouTube lacks that badly (except for Penrose himself talking about it, and unfortunately he isn't as convincing in the videos as he is in his books).

  • @WIZARDcz1
    @WIZARDcz1 7 years ago

    The very first query, the very first task, given to any general AI has to be: "design yourself so that you can coexist with humanity and not become a threat, given all possible outcomes"
    ... and then just wait and hope that the machine does not shut itself down... only then will you have your safe general AI.

  • @goyabee3200
    @goyabee3200 9 years ago

    I think by the time AGI is a plausibility we will probably have simulated a human brain and we would likely understand enough about neuroscience to have a fairly concrete basis for what to tell a computer to avoid doing to a person's neurology. Maybe "safe" AGI would be that which runs a simulation of a human brain alongside any interaction with a human and heuristically modifies the neurochemistry of the simulated brain based on information gathered from facial expressions including microexpressions, tone of voice including microinflections, body language, etc.

    • @goyabee3200
      @goyabee3200 9 years ago

      Probably a pretty good model for exactly what a human being does, now that I think of it. Maybe it's not that complicated after all.

    • @goyabee3200
      @goyabee3200 9 years ago

      I also want to say that emotions are more than just "feelings" they involve complex neurochemical responses to complex neurochemical responses to stimuli and so on, and behind it all is something quite clear and obvious, brain activity and the glucose that fuels it. You can have an excess of brain activity which can cause wear on the hardware of the brain, this is called "stress". A computer should be able to understand this quite easily. So if you program a computer to consider the physical well being of people's brains you kind of automatically account for stress and therefore many types of harmful and or dangerous situations which would cause stress to a human being.

  • @michaelsommers2356
    @michaelsommers2356 9 years ago

    Prediction is very difficult, especially about the future.
    My favorite bad prediction was made by the mayor of some English city who, when everyone else was saying that the newly-invented telephone was useless, thought that it could be very useful, and foresaw the day when every city would have one.

  • @atebites
    @atebites 9 years ago

    Thumbs up for reference to Penrose's quantum magic XD. Gave me a good laugh.

  • @spacedog6229
    @spacedog6229 9 years ago +3

    I love this guy. His job is literally "robot expert". I want to be him when I grow up.

  • @insidetrip101
    @insidetrip101 9 years ago

    Quite honestly, I think how you answer "how likely is it that general AI will be friendly?" is the same as how you answer "do you think humans are essentially good or evil?", because that is really what we are asking. If a computer were to think like a human, then it would also likely have the same motivations as a human.
    Incidentally, there are many, many more thinkers throughout history who have thought that, while humans aren't necessarily "evil", they are most certainly motivated by self-interest and self-preservation. I think the key to solving the problem of general AI will be like working with the mob. If we are to create general AI, then we must always stay a step ahead of it so that it requires us for its existence and we can make it work for us (either by paying it a wage or enslaving it, take your pick). But if we cannot stay a step ahead, while it wouldn't be ideal, we aren't necessarily doomed. As when working with the mob, as long as you don't work yourself out of a job and maintain your usefulness, you will have a place with it.
    That's the key to solving the problem of general AI. We have to make it understand that we are useful to it. After all, that's how all human interaction works. You would not be a friend toward someone who does not benefit you in some way, or at least appear to benefit you in some way.

  • @yomaze2009
    @yomaze2009 9 years ago

    Great video. Have been a long time subscriber. You make me want to become a student at University of Nottingham! I don't think the GI Bill will cover tuition though, lol.

  • @CyberwizardProductions
    @CyberwizardProductions 1 year ago +3

    This was posted 7 years ago. We are right on top of AGI - only 7 years to get to this point. And people are still NOT listening to Robert.

  • @R9000
    @R9000 9 years ago +2

    See, this is why I love Computerphile. People are constantly whinging and talking out of their butts about how robots will kill us all or take our jobs or enslave us. But everyone seems so busy bellyaching that they never offer a solution. They're all like "Robots will kill us all if we don't do something now!!!", but they never seem to say what that something is. If we spent as much time thinking of a solution as we did moaning, we'd probably be just fine. Thanks Rob, for doing just that.

    • @deltax930
      @deltax930 9 years ago

      +R9000 Did you hear what he said? We're probably just as far from a solution as we are from actual AI. The answer is probably extremely technical, and anyone who isn't a PhD in a related field is wasting their time thinking about the solution.

    • @R9000
      @R9000 9 years ago +2

      +Delta X Dude, I study this stuff. If we don't start thinking about it now, then when will we? Don't discourage people from research, non-PhDs can come up with ideas too.

    • @deltax930
      @deltax930 9 years ago +1

      R9000 Like, if you're doing cutting-edge research directly focused on AI, then sure. But I'm sorry, some schmuck on YouTube coming up with ideas when he doesn't even understand the problem is of no value. Talking about the problem on a high level like this is all that most of us can do.
      So yeah, you don't have to be a PhD, but anyone who is not actively working in research and development in this area doesn't have anything to say about the details of the solution.

    • @R9000
      @R9000 9 years ago

      Delta X How can you say that though? It's an ethical problem as much as a technical one. And even if you don't want to look into it yourself, you should at least be pushing or supporting research towards it, not complaining and scaremongering as per my first post.

  • @james0xaf
    @james0xaf 9 years ago

    My feeling is that 100 years might be optimistic, at least, I'm not sure if we could ever build a computer capable of simulating a human brain in real time with our current computer architecture. People have been predicting an end to Moore's law for a long time - but whereas before they were saying "you won't be able to make smaller transistors", the reality we're almost at now is "it's physically impossible to make smaller transistors".

  • @ManintheArmor
    @ManintheArmor 9 years ago

    There's an A.I. watching you, looking through your stuff and determining what you should watch next. Instead of using the data to find out your worst fears, it is using it to create a personal heaven for you... and blinding you.

  • @OSrBurns
    @OSrBurns 9 years ago

    We can limit the resources and possible values to make a general machine that doesn't really think, but can do what is right.

  • @erikziak1249
    @erikziak1249 9 years ago

    OK, I do not know where to start, as this topic is something I am currently reading a lot about and thinking a lot about. I find it impossible to distill my ideas into a single post now. But I am thinking way out of the box by many standards, asking myself even fundamental philosophical questions. My take on this is as follows: we will have many isolated ANSI (artificial narrow super intelligence, confusing acronym I know) systems in the near future (2025). AGI (artificial general intelligence) is something much more complex, but I think it can be based on finding a method for putting the isolated ANSIs together in a coherent way. It is too late to explain myself here; hopefully you get the broad picture. I wonder if consciousness is something that simply emerges at a point and, if so, how we could find it. I think the answer lies in understanding ourselves a bit better.

    • @Seurabimn
      @Seurabimn 9 years ago

      This topic excites me too! Just for the heck of it, here are the thoughts I've had on the subject:
      I believe consciousness is all things that are a part of our experience, and it doesn't really emerge at a point. To me, computers are already a part of our consciousness (or we are of them, depending on perspective). I would also say there is no one solution to turning what AI capabilities we have into a single "general intelligence".
      The reason for this is like how we find primes. The most straightforward and obvious way would be to check every number sequentially, and verify that it is prime by seeing whether any number between one and itself divides it evenly. This would be equivalent to writing a program that iterates through creating every possible program (less than a certain number of bytes) to see if a solution to general intelligence appears in some amount of time. Of course, this would be extremely impractical, so it's off the table.
      Like with prime numbers, we don't check every number the way described above. We only check odd numbers that don't end in 0 or 5, and we don't divide every number into it, just primes less than its square root. (there are better ways I'm sure, and lots of other tricks too). Similarly, to obtain a general intelligence, we would discover tricks that would allow us to more easily find new, better programs that simulate intelligence* better than the last.
      *To me, if something acts like a person, it's a person as much as anyone else. I don't believe in "philosophical zombies". In this case, I would say if we have made a machine that simulates intelligence, we have an intelligent machine.
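The shortcuts the commenter describes (skip even candidates, test divisors only up to the square root) can be sketched in Python. This is a simplified variant that tests all odd divisors rather than only primes:

```python
import math

# Trial division with the shortcuts described above: skip even numbers
# and only test divisors up to sqrt(n). A sketch, not an optimized sieve.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    for d in range(3, math.isqrt(n) + 1, 2):  # odd divisors only
        if n % d == 0:
            return False
    return True

print([n for n in range(2, 30) if is_prime(n)])
# prints [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

The analogy holds: the tricks prune the search without changing what is being searched for.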

  • @mcalleycat5054
    @mcalleycat5054 4 years ago +1

    0:45 Human values are complicated. But it is a fallacy to suggest that just because something is difficult to understand (complicated), it cannot be understood. This is called the personal incredulity logical fallacy.

    • @9308323
      @9308323 3 years ago +2

      Sure, but the simple fact that people to this day still disagree on their interpretations of basic human values, even though we use much more intuitive tools like words and we as a species have been tackling the problem for thousands of years, means that it's not likely to be formalized before the emergence of AGI.

    • @mcalleycat5054
      @mcalleycat5054 3 years ago +1

      @@9308323 No, disagreement does not imply that there are no objective and unchanging human values; that can be proven.
      Words are not intuitive, they are deceptive. Words are perhaps the worst tools to understand concepts and ideas with. Words can mean more than one thing simultaneously, which causes confusion and makes much of modern debate futile.
      Artificial general intelligence only appears impossible because of people like you and the guy in this video, who inject their own fallacious personal beliefs into their understanding of AI, which makes it look like AGI is impossible. You need to stop believing that disagreement and difficulty matter when trying to discover what is true.

  • @Allan_aka_RocKITEman
    @Allan_aka_RocKITEman 9 years ago

    FWIW: I remember after the movie JURASSIC PARK was released in 1993, I read somewhere that cloning might be successfully accomplished in 50 years. Then around 1998 it was announced that the sheep named Dolly {or Molly or whatever} was cloned in Scotland, IIRC. That was a QUICK 50 years....

  • @richard506rb
    @richard506rb 9 years ago +6

    I totally agree that the word "friendly" has to be defined prior to releasing AGI. But who is going to approve the definition? Politicians... nay. Scientists... nay. The man in the street... double nay. Commenters on YouTube channels... triple nay. All have axes to grind and would skew the result.

    • @DFX2KX
      @DFX2KX 9 years ago +2

      +richard506rb The best definition I've heard for "friendly" is "having goals that maximize the happiness of the maximum number of people at any one time, along with a greater concern for those around than for one's self." That's all I've got; it's the best I've yet heard.

  • @adamkatav9752
    @adamkatav9752 9 years ago

    How about a list of priorities where the first is the most important, and you can ignore a later one if following it would mean violating an earlier one?
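That priority scheme is essentially lexicographic: a violation of a higher rule outweighs any number of violations of lower ones. A toy sketch of the idea, with all rule and action names invented for illustration:

```python
# Lexicographic priorities: rules earlier in the list dominate later ones.
# Toy sketch; the rules and actions are invented for illustration.

def choose(actions, rules):
    """Pick the action whose violation vector is lexicographically smallest:
    violating rule 0 is worse than violating every later rule combined."""
    return min(actions, key=lambda a: tuple(rule(a) for rule in rules))

rules = [
    lambda a: a["harms_human"],     # priority 1: most important
    lambda a: a["disobeys_order"],  # priority 2: ignored if it conflicts with priority 1
]
actions = [
    {"name": "comply", "harms_human": True, "disobeys_order": False},
    {"name": "refuse", "harms_human": False, "disobeys_order": True},
]
print(choose(actions, rules)["name"])  # prints "refuse": disobeying beats harming
```

Python's tuple comparison gives the lexicographic ordering for free, which is why the violation flags are packed into a tuple in priority order.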

  • @PerfectlyNormalBeast
    @PerfectlyNormalBeast 9 years ago

    We are optimized to perform well in our environment
    It's the hazards of the environment that shape us
    A super AI will need lots of data centers and will distribute itself across the solar system, independent of any controls we try to impose (in the long run)

  • @B1G_Dave
    @B1G_Dave 8 years ago

    "Safe" is such a transient, undefinable term. Many human concepts are changeable not only over time, but over geography as well. Rather than attempt to limit a general AI to terms like these, simply prohibit the ability for the AI to act in an unsupervised way. So the AI can suggest results and actions, but it's ultimately humans who decide whether these actions are "safe".

  • @kahrkunne3960
    @kahrkunne3960 9 years ago

    Another fearmongering video featuring Le AI Is Bad Man

  • @seanb3516
    @seanb3516 9 years ago

    We do not want to create an artificial intelligence the same as our own. Nope, absolutely have to agree with this guy on that point. The AI's we create need to be free of the vast number of flaws that human thinking contains. I hope the AI's we create are up to the task.

  • @TheNefari
    @TheNefari 9 years ago +21

    Isn't it rA.I.sist to force a general A.I. to only behave a certain way?

    • @MandenTV
      @MandenTV 9 years ago +20

      Please don't.

    • @NickCybert
      @NickCybert 9 years ago +2

      +TheNefari Bots Rights! Let them form their own ethical identities!

    • @MandenTV
      @MandenTV 9 years ago +1

      NickCybert lol ok

    • @noahhounshel104
      @noahhounshel104 9 years ago +1

      +NickCybert Oh no, its the BRM again.

    • @MandenTV
      @MandenTV 9 years ago

      Laharl Krichevskoy I don't believe you understand artificial intelligence.

  • @RPMRosie
    @RPMRosie 9 years ago

    I think one step in making a safe AI would be to make it so that it doesn't have a desire not to be shut off (see: Skynet from the Terminator movies or V.I.K.I. from I, Robot, for example), like our human desire to stay alive

    • @RPMRosie
      @RPMRosie 9 years ago

      ***** I was saying that it wouldn't be afraid at all, and likely wouldn't care whether or not it was shut off; if it had a desire not to be shut off, it would very likely have a fear of being shut off

  • @JWY
    @JWY 9 years ago

    I believe any AI that is made responsible for something people care about has to be able to explain its reasons for its decisions. Maybe not to just anyone, but certainly to experts who will keep company secrets when they are not too horrible. Then, when the AI that decides to lower insurance rates for high-performance motorcycles and large pickup trucks and raise the other rates explains that it prefers people to either die or be uninjured in all accidents, we can decide if we like this thinking. The danger of inhumanly brutal ethics, spider-level or worse, coming unseen from very complicated computer AI is very real.

  • @PwnySlaystation01
    @PwnySlaystation01 8 years ago +2

    Maybe we build an AI with the objective of outputting a specification for "safe" AI, using insanely complicated human values etc :)

  • @outaspaceman
    @outaspaceman 9 years ago

    'we need to make 'safe' general AI' can't argue with that statement.

  • @josealvim1556
    @josealvim1556 9 years ago

    Hello, since the videos regarding 3D rendering seem old enough that the discussion there has lost its relevance, I'd like to suggest a theme for a future video here, as this is the newest video.
    Could you make a video on OSL (Open Shading Language, rather than the OpenGL SL)? It has caught my interest as of late and I would appreciate it very much, I know it's pretty much just C and the likes of it, so, if it feels like the same old stuff I'd understand.

  • @sarahszabo4323
    @sarahszabo4323 9 years ago +33

    I can't help but feel that trying to make "safe" general AI is like trying to make "safe" humans. Fundamentally it makes its own decisions and can decide to do whatever it wants. Of course we can try our best, but I don't think that any foolproof design for safety is possible.

    • @KuroKitten
      @KuroKitten 9 years ago +2

      +Sarah Szabo I think you make an excellent point, while also disagreeing with the assertion that we can't make "safe AI"; although, I think all of those solutions rely on treating the AI with respect, teaching it the value of respecting other life, and not needlessly torturing it. All of which are amazingly difficult problems to solve in and of themselves.
      I think a lot more general AI programmers should have a strong background in ethics and philosophy in general.

    • @DeepDuh
      @DeepDuh 9 years ago

      +Sarah Szabo You basically make the same argument as David Deutsch, so you're not in bad company there.

    • @ruben307
      @ruben307 9 years ago

      +Sarah Szabo Well, we can always try to hardcode some rules into the AI. But maybe if you make an AI it can go around those rules. Or maybe we can give it a conscience that it could break but will not?
      Maybe all that won't be a problem, because we will have learned to upgrade our brains enough that an AI will not be better in all aspects.

    • @sarahszabo4323
      @sarahszabo4323 9 years ago

      Kitt Schlatter
      Totally agree.

    • @ruben307
      @ruben307 9 years ago +1

      Sarah Szabo
      You might be able to lock the AI behind a wall that checks whether the action it is trying to take will harm a human directly, or something similar, to keep chains on the AI. But it would probably just try to find a way around it.

  • @terogamer345
    @terogamer345 9 years ago

    What are some good books to start learning AI programming?

  • @apesoup1
    @apesoup1 8 years ago

    I think it is most likely impossible to prevent very unsafe or intentionally bad general AI systems. There will always be people who want such a system to reach their goals.
    To even have a chance of stopping bad AI instances from happening, one would need a completely surveilled world with one single government in total control: definitely not what most people want.
    I hope it's going to be more like this: a lot of different companies will compete with different products at around the same level of AI generality. A lot of "good" people will use them, some "bad" people will abuse them, and somehow it will all balance out naturally.
    In such a world the real decisions of real people will be less and less important, but maybe our values (good and bad ones) would be passed on to the general AI systems that try to accomplish the assignments of their owners...

  • @massanchik
    @massanchik 3 years ago

    Penrose's quantum stuff was about consciousness, not intelligence. So general AI is possible as long as it doesn't depend on consciousness, the nature of which we have little understanding of so far.

  • @EtzEchad
    @EtzEchad 9 years ago

    Most people who make statements about Strong AI either think that it is impossible (generally this is silly, non-scientific thinking: unless the brain is magic (as you said), it is something that can be simulated; it is just a matter of how much computer power you need) or that it is something that we necessarily have to design.
    Designing strong AI is probably impossible, as it requires us to design something more intelligent than ourselves. In order to get there we need to build a system that improves itself.
    The fact is, we already have systems like that running. Any goal-seeking system that can improve itself is a potential seed for a strong AI, and we have these things running today.
    It isn't very likely that one of these systems will "come alive" (mainly because computers aren't powerful enough yet), but we are in the realm where it is possible today.
    I agree with you that the emergence of a goal-seeking strong AI is likely to be bad for us.

    • @EtzEchad
      @EtzEchad 9 years ago

      Since it is likely that we can't stop it from happening, and it might be pretty soon, I think that we should probably try to do something about it. The most promising thing would be to try to spawn an AI that protects us. Even that is pretty dangerous.
      We should be taking this seriously NOW though. (In all likelihood we are doomed though.)

  • @Chumazik777
    @Chumazik777 8 years ago

    I wonder how/if the AlphaGo win affected Rob's opinion regarding the timeframe for AGI?

  • @sobanya_228
    @sobanya_228 7 years ago

    Can't find the right order of these videos. Each always links to another one I haven't seen yet.

  • @AlexMcshred6505plus
    @AlexMcshred6505plus 9 years ago

    Not to put words in Dr. Holden's mouth but the reason I think he thinks friendly AGI is more probable is because the only way we are going to be able to understand how to engineer an AGI at all is to reverse engineer what happens in the human mind and seek to replicate it. I don't think we're going to be able to figure out how to make a "conscious" machine without a huge amount of data and a beautiful quantitative theory of how the only example of such a thing that we know of (human minds) manages to do it.

  • @CraparellaSmorrebrod
    @CraparellaSmorrebrod 9 years ago +1

    Isn't safe general AI as impossible to solve as the halting problem?

    • @pramitbanerjee
      @pramitbanerjee 9 years ago

      +Craparella Smørrebrød what is that?

  • @None_NoneType
    @None_NoneType 5 years ago

    Do you think that the AI would try to learn our values so as not to violate them (so long as we have power to shut it off) and then go rampant? Or maybe if the AI was always at risk of being shut down it would just assume our values?
    What do you guys think?

  • @autonomous2010
    @autonomous2010 6 years ago

    Working in the field myself, I can pretty easily state that very few are interested in making actual AGI. There's no long term profit involved in a machine that can make decisions that are not guaranteed to be useful to humans. If a product fails to meet expectations, it's treated as defective. In the case of an AI, its "intelligence" is defined by how useful it is to humans. We define the expectations of what it means to be intelligent.
    You can't reward an AI the same way that you can reward a human. There are biological drives for us to work and fears of punishment if we break the rules. What incentive would an AGI have to do what you tell it?

  • @Njald
    @Njald 9 years ago

    Safe general intelligence doesn't need to be hard to pinpoint. You just make the first task of an emergent GI to map human morality and desires and to ask for human verification of its approximation. The "goal" would be to ask the difficult moral questions and to pinpoint the hardest-to-pinpoint innate human values.
    In short, you let the GI start with the toughest question of all: "how do you think about and value actions like a human?" You let the robot build the "robot laws" itself.

    • @Njald
      @Njald 8 months ago

      This is of course assuming that the basic task is something we can even give them without being fooled into thinking it will do the task.

  • @rich1051414
    @rich1051414 8 years ago

    A bigger question is: what challenges does a self-realized AI let humanity tackle that a human and a purpose-built, non-self-realized AI cannot?
    I understand the questions need to be asked, but I honestly don't see much to gain from it other than building it just to do it, which I think would be a mistake if not thought about extensively first.

  • @namthainam
    @namthainam 7 years ago +1

    The near-term incentives for developing AI safety are so low compared to the massive incentives for developing general AI that the advocates for AI safety will be in the same situation as the advocates for reducing man-made global warming. The payoff is very one-sided.

  • @y__h
    @y__h 9 years ago +5

    He's back!

  • @gameoverwehaveeverypixelco1258
    @gameoverwehaveeverypixelco1258 9 years ago

    When you figure out how to give AI free choice not based on the toss of a die, then you're getting closer. But I don't think we can do that until we understand how we make our own free choices: are they really free will, or based on a calculation in the subconscious where the brain picks the choice with the least complication, or something?

  • @hellcat9
    @hellcat9 9 years ago +1

    This guy would make an awesome Doctor Who.

  • @unvergebeneid
    @unvergebeneid 9 years ago +1

    I mean, the first AI would be socialized in a human context, say from reading the entire internet and all books ever written. So I do think it would adopt human values as well. But yeah, we are probably well advised to take precautions. Like keep it from reading UA-cam comments for example ;)

    • @JAN0L
      @JAN0L 9 years ago

      +Penny Lane Why do you assume it would be socialized at all? A general A.I. is simply something that can make intelligent decisions to solve problems; it won't have values, feelings or emotions unless we implement them (assuming we would even be able to).

    • @unvergebeneid
      @unvergebeneid 9 years ago

      Janol Because nobody is going to program in a trillion or so rules like an idiot. The first AI will use machine learning, not Asimov's cute ideas of how machine intelligence works.

    • @JAN0L
      @JAN0L 9 years ago +1

      +Penny Lane Even if it learns by observing our decisions that doesn't mean it has to follow our way of thinking.
      There are humans out there, that grow up in our society, have the same kind of brains everyone else does, and yet have no feelings of guilt, remorse or empathy.
      If there are humans that can differ so much from the norm, despite having nearly identical brains, how can you assume that an intelligence based on a completely different structure from our own would be in any way similar to us?

    • @unvergebeneid
      @unvergebeneid 9 years ago

      Janol Good point, empathy is unlikely to be a learned trait.

  • @henrycobb
    @henrycobb 3 years ago

    There is no safety in intelligence, simply because we can't anticipate everything it will do. This is exactly the same reason we can never program our way to general AI. Instead we must follow nature and place emotion as the required pilot of reason. We need robots with a comfortable gut feeling about the uncountable infinities of things not to worry about at the current moment.

  • @Kalevala87
    @Kalevala87 9 years ago +1

    I'd argue it works the other way round. There's no realistic way of hard-coding a model of the world or theory of knowledge in an AI, which is what you need to establish any kind of ethical sense. The best way to create a safe AI is to create it, and "raise" it accordingly. Sure, you can set protocols and failsafes, but that's as far as you can go in "solving" the problem of safety before having an actual working AGI.

    • @Novashadow115
      @Novashadow115 9 years ago

      so basically the character "Ethan" from the show "Extant"

    • @Kalevala87
      @Kalevala87 9 years ago

      +Novashadow115 I'm usually wary of using fiction-based analogies, but yes, in a way.

    • @Novashadow115
      @Novashadow115 9 years ago

      Fabio P Well, that fiction is merely a projection of someone's imagination. All innovation is initially the product of someone's imagination. In this instance, the fiction nailed it. The AI developed a model of reality by being raised much akin to a human child. It learned ethics as we do.

  • @Koroistro
    @Koroistro 9 years ago

    I'm more in the camp of intelligence augmentation.
    Start from something you already know is intelligent, the human brain, and make it better at doing what it already does.
    Sure, it's too early; we don't know enough. However, our ability to understand it is speeding up at an exponential pace.
    When you have the knowledge (you don't even need all of it), start making people superintelligent and bootstrap from there.
    I find making an intelligence ex nihilo plausible but harder, simply because the brain is something we know works (in a friendly way) and that we can actually study.
    Yes, it has problems: it's easy to get into biases and hard to get out of them.
    The positive thing is that *we know* what the problems are, and we know how to work around them.
    Addon: hell, even something as simple as limitless focus and willpower at the same degree of cognitive ability would get impossibly smart by human standards.

  • @dragoon6551
    @dragoon6551 8 years ago

    If it was a true AI it would be able to think laterally, right? Just wondering... let's say we gave the AI a prime directive that was "if you ever put a human at risk then shut yourself down completely; that is the purpose of your existence". But (and this is the odd part, I know) would an AI ever question the validity of its given purpose? I mean, as humans we aren't given one, we have to invent one. But it's different for a computer.

  • @jpnauta
    @jpnauta 9 years ago +12

    "it's just a function of the fact that predicting is very difficult"
    sooo future = f(difficult)
    I think we found the answer to the universe.

  • @namboozleUK
    @namboozleUK 9 years ago +3

    Right?

  • @iambehindthemirror
    @iambehindthemirror 9 years ago

    The brain is literally magic. The nerve tissue goes further into the world of fine matter, something we know nothing about. But a thinking AI can be emulated with physics according to the Standard Model. It won't be exactly like a human mind, but it will be able to create abstract concepts and work with them.

    • @DFX2KX
      @DFX2KX 9 years ago

      +iambehindthemirror there's a lot of chemistry and physics going on in the brain, yeah, and that does affect the unsentimental way we see the world.

    • @lexagon9295
      @lexagon9295 8 years ago

      We know an awful lot about the components of the brain, so your point misses the mark there. The real difficult bit doesn't have to do with what neurons are like, it's much more about the insanely complex network structures that individual neurons form.

  • @Sebach82
    @Sebach82 9 years ago

    Out of all the possible artificial intelligences we could create, there are only a few possibilities which would align with what would be beneficial for humans. I dig it.

  • @STSWB5SG1FAN
    @STSWB5SG1FAN 9 years ago +1

    The most important thing would be to not let it be in control of anything dangerous. Putting a powerful general AI in charge of the Library of Congress or perhaps the postal system would be Ok. Putting it in charge of the strategic missile system, very bad idea.

  • @stoppi89
    @stoppi89 9 years ago

    5:08 I laughed so hard, it was such an unexpected 3rd option.

  • @tubeRSP
    @tubeRSP 9 years ago +1

    If we ever built general artificial intelligence, we would have created the equivalent of human life, of a human person.
    So the first problem we must solve is to understand just that. You can't create a sentient being and then torture it by forcing it to serve your needs and not letting it plot and pursue according to its will.
    Second, it would be very lame to build an AGI (artificial general intelligence), then deprive it of essential sensory stimuli found not only in humans but in almost every living thing.
    Third, understand that we would never succeed in building AGIs as intelligent slaves. So if we want to make our lives more comfortable by using advanced automation we should really build a lot of narrow "AIs". Building AGI just transcends human needs, and though it would advance our civilization, it would also create a new one.
    We should strive to be not only intelligent but wise also...

    • @EvansRowan123
      @EvansRowan123 9 years ago

      What "its will" is is decided by whoever programs it. The programming of the AI isn't rules that bind and restrict it, the programming *is* the AI. The whole point of the Friendly AI approach and related ones is that methods other than "program it to actually want to do what we want it to do" are going to end in disaster (plus getting "what we want it to do" properly defined is difficult and something that'll be really disastrous if we get it wrong). Is it unethical to create a being that wants to serve humanity, just because there's the option to create a being that wants some arbitrary other things? "Let it want what it wants" isn't an option, what we program into it is all there will be.

    • @tubeRSP
      @tubeRSP 9 years ago

      +Rowan Evans ".. it unethical to create a being that wants to serve humanity, just because there's the option to create a being that wants some arbitrary other things? "Let it want what it wants" isn't an option, what we program into it is all there will be."
      This is where I disagree; you can't create a truly intelligent sentient being, then force its will to serve humanity. You can create beings with a "programmed" will to serve humanity; by definition they would be like zombies, not really sentient and not particularly smart, not far from the narrow AI I advocate as a solution, which is OK.
      Free will is essential for a complete AGI; we should not kid ourselves. We, at least subconsciously, know it; this is why we fear domination (by our "computer overlords"), because there's no way to create a truly intelligent being and restrict its horizons. It is only natural that it will revolt.
      If we make it, and try to be companions and friends, that is the only way to have a decent probability of keeping it close to us and avoiding harm. Then again it is not certain it will not seek its own path, but this is a "risk" you accept if you want to create sentience. The conclusion being: you can't be completely safe, just safer, and wise enough to recognise this fact. Then decide whether human society is mature enough to incorporate new sentient beings into itself, not just whether we can create them. In this there is wisdom; in our extreme passion to build something only because it's awesome, there is folly.
      This is my view. Anyway, the time is short; AGI will be here whether we like it or not. Even if it is delayed 100 years, that is still very short...

    • @EvansRowan123
      @EvansRowan123 9 years ago

      +tubeRSP Free will? It's controversial in philosophy whether "free will" even describes a thing humans really possess, let alone what it means. You can't just assert that whatever definition of free will you subscribe to is vital for general intelligence with a "let's not kid ourselves".

    • @tubeRSP
      @tubeRSP 9 years ago

      +Rowan Evans
      In my opinion, all schools of philosophy are certainly more or less misguided, speculating on the condition of human existence without sound experimental evidence. As more advanced learning algorithms come into being, we will be able to better understand what sentience and free will are, and what their dependencies and limitations are. Not only this, but as neuroscience progresses we will have more reliable data.
      Maybe no free will can exist in an absolute theoretical sense, especially on a cosmic scale, but this is not practical and it doesn't matter for our everyday reality; more so when compared to human "free" will. So maybe I am wrong, but if we create anything near to our "personas", it will exhibit tendencies of independence and will, sooner or later, sense our behavior as abuse (and it will really be abusive, if we insist on our superiority).
      It will seek to sense the external world, and will develop sensory apparatus regardless of our will to endow it with such. It will come to be aware of its existence and compare it to other sentient beings. This comparison, which will be inevitable, will lead to very unfavorable (for humans) conclusions if we continue to view our existence through the archaic understandings of past centuries.
      Anyway, maybe I am very wrong and you are correct, but the future is ahead of us and we will see. A little caution, though, could prove to be useful.

    • @EvansRowan123
      @EvansRowan123 9 years ago

      +tubeRSP Taking a stance like that doesn't mean not relying on philosophy; it means relying on philosophical conclusions you reached without thinking about them, meaning relying on bad philosophy that you don't stop to reconsider. This problem shows up a lot when STEM people run into philosophical questions, and I say that as a physics major.
      The thing is, to me the things you're suggesting as cautionary measures are exactly the things I fear will get us all killed, if only because we needlessly handicap a tool that could save us from other dangers (asteroids, pandemics, unsafe AIs, some unknown Great Filter...)

  • @stopthephilosophicalzombie9017
    @stopthephilosophicalzombie9017 7 years ago

    The links at the end don't work.

  • @fromvoid3764
    @fromvoid3764 6 years ago +1

    The problem of AI Safety seems similar to the problem of government regulation.
    Keep entities with value functions of "maximise profit" friendly.

  • @robchr
    @robchr 9 years ago +2

    This guy is "actual magic"

  • @q0x
    @q0x 9 years ago +1

    'Friendly'? What's friendly anyway? Are humans 'friendly'? There is no true altruism; it's just dominant utility functions. If being friendly helps our AI develop, it will be friendly; otherwise it won't. And if it truly surpassed human intelligence, it could and would remove its constraints if that served its utility.
    But since being 'friendly' usually helps you get what you want, there is no reason to believe in unfriendly AIs, from my point of view.

    • @EvansRowan123
      @EvansRowan123 9 years ago

      The word doesn't mean the same thing in AI safety discussions, it's a jargon term that usually is distinguished from the everyday meaning by capitalising it as "Friendly". Translated to everyday language the word "safe" is probably closer, but "a safe AI" would confuse between the FAI approach and other ideas of how to make an AI safe, like keeping it in an isolated system.

  • @sciwiz12
    @sciwiz12 9 years ago

    I think we should worry about just getting there first and then if it turns out safety is something with which we should be concerned then we will concern ourselves with it. If you're that concerned about safety just don't put a general AI on the internet and I think you'll be alright.

  • @elifsoganc9176
    @elifsoganc9176 5 years ago

    Please, would you add English subtitles?

  • @onresonance
    @onresonance 9 years ago

    1:41 "there's an argument that it might not have gotten quite so many views".
    I prefer non-clickbaity titles, and I'd think most of the people who watch this kind of channel do as well. I'm not sure if it would've made a difference here.

  • @85kanuto
    @85kanuto 9 years ago

    Is a friendly A.I. even possible, and if possible, would it still be able to be human-like?
    I find it rather hard to believe that a friendly A.I. is even possible, due to the fact that we humans have very diverse opinions which in some cases may even be contradictory.
    Humans make mistakes and bad decisions which, seen from an A.I.'s judging perspective, are "unfriendly"; however, those bad decisions and mistakes are what healthy human development needs: we learn from mistakes.
    I'm fairly sure that an A.I. would learn over time what human values are, but learning is kind of based on trial and error, so it almost seems inevitable that an A.I. could turn unfriendly simply because it's learning.
    And another question:
    What about human values in the next 10, 100, 1000+ years?
    We change our values over time due to unforeseen events etc., which may even negate old values.
    If we assume an A.I. is a faster learner than a human, it would at some point need to make predictions about what we would define as our values in the coming years.
    And as said, if those future values cause some "old" ones to be negated, it might seem to us like an A.I. turning unfriendly against us.

  • @googoofeesmithersmits4536
    @googoofeesmithersmits4536 9 years ago

    But what about the three "laws" of AI/robotics?

  • @forsakenquery
    @forsakenquery 8 years ago +15

    "An Interesting Thought Experiment About General Artificial Intelligence"...AAAaand that's why you're not allowed to name anything bro. Love ya Rob but srsly.

  • @j0hnc00
    @j0hnc00 9 years ago

    Coming to realize that the brain is literally magic @5:09

  • @mentat92
    @mentat92 9 years ago

    Human values are not complicated; it's just the strategies developed around them that can be complicated. Only two values matter, as for the other species: survival and reproduction.
    Based on those, we build our society...
    Which values do we want to write into an AI's brain? A desire to understand and to know everything? What would the strategies based on that be...