Respectability

  • Published May 30, 2024
  • It can be hard to get people to take AI Safety concerns seriously, but it's a lot easier now than it used to be.
    That Eliezer Yudkowsky talk (containing (a very small amount of) screaming): • Eliezer Yudkowsky - AI...
    The Open Letter: futureoflife.org/ai-open-letter/
    With thanks to my wonderful Patreon Supporters:
    - Ichiro Dohi
    - Stefan Skiles
    - Chad Jones
    - Joshua Richardson
    - Fabian Consiglio
    - Jonatan R
    - Øystein Flygt
    - Björn Mosten
    - Michael Greve
    - robertvanduursen
    - The Guru Of Vision
    - Fabrizio Pisani
    - Peggy Youell
    - Konstantin Shabashov
    - Adam Dodd
    - DGJono
    - Matthias Meger
    / robertskmiles
  • Science & Technology

COMMENTS • 238

  • @TheJaredtheJaredlong
    @TheJaredtheJaredlong 4 years ago +335

    Has a Ph.D in AI research and humbly refers to himself as "just a guy on youtube."

    • @Phelan666
      @Phelan666 4 years ago +37

      I, too, am extraordinarily humble.

    • @Grezza78
      @Grezza78 4 years ago

      Came here just to say this.

    • @OmniPlatypus
      @OmniPlatypus 3 years ago +1

      Most PhDs I've met hate being called a doctor in informal conversation. The ones who insist on it are... well... assholes.

    • @queendaisy4528
      @queendaisy4528 3 years ago +29

      @@Phelan666
      You say you're humble but no one on Earth is even close to being as humble as I am. No one in the history of humanity has ever had as much modesty and humility as I do. I am the greatest person to ever exist when it comes to being humble.

  • @PickyMcCritical
    @PickyMcCritical 7 years ago +115

    Rob's got good taste. His clarity from Computerphile seems to also translate to quality pacing and editing.

    • @RobertMilesAI
      @RobertMilesAI  7 years ago +17

      Username doesn't check out :)

    • @PickyMcCritical
      @PickyMcCritical 7 years ago +4

      I wish I could say my opinions on quality were usually positive :) But I can't ಠ_ಠ

  • @jeronimo196
    @jeronimo196 4 years ago +69

    The guy with the funny scream is Eliezer Yudkowsky. He coined the phrase "Friendly AI", co-founded MIRI, helped create the internet rationality community "less wrong" and wrote the best Harry Potter fan-fiction in existence. So, yeah, in the eyes of some people, respectability was never an option.

    • @mattimorottaja8445
      @mattimorottaja8445 1 year ago +2

      also a lolcow

    • @enricobianchi4499
      @enricobianchi4499 1 year ago +1

      @@mattimorottaja8445 why?

    • @dylancrooks6548
      @dylancrooks6548 1 year ago +3

      @@enricobianchi4499 because of the way he looks. Dude's an internet atheist, wears a fedora, is overweight yet has super skinny arms, and has a neck beard. I don't like judging people on their appearance, but why would he knowingly conform to such a negative stereotype? He needs to be different

    • @enricobianchi4499
      @enricobianchi4499 1 year ago +3

      @@dylancrooks6548 having looked into it in the meantime, yeah he's kind of a fun little guy

    • @leslieviljoen
      @leslieviljoen 1 year ago +17

      Watching some of Yudkowsky's interviews and reading comments, it's amazing how often people will hear "we're all going to die" and respond "that guy is wearing a hat".

  • @NNOTM
    @NNOTM 7 years ago +34

    Nice to see EY :)
    And yeah, I was super happy when I heard that OpenAI got 1 billion dollars in funding, but for the most part, they seem to just research ways to make AI perform better rather than how to make it safer... Although I *am* presenting one of their papers in a Computer Vision seminar at university, so I do have that to thank them for

    • @NNOTM
      @NNOTM 1 year ago +7

      @@fakeaddress My opinion hasn't really changed since then - they seem to focus much more on capabilities than alignment

    • @NNOTM
      @NNOTM 1 year ago +9

      which is bad

    • @jadpole
      @jadpole 10 months ago

      ​@@NNOTM They are focusing on alignment, though. It doesn't make great click-bait, so the media doesn't report too much on it, but their blog has lots of articles on what they're exploring. (It's hard, so progress is slow, but it is an area they invest in.)

  • @callumhodge3122
    @callumhodge3122 7 years ago +4

    Thanks for this, Robert. It's very true that we can't all know everything, so it's great to have you to explain these things so clearly. Thanks again

  • @killorfill6953
    @killorfill6953 1 year ago +9

    Just Wow... Robert's amazing foresight about things happening right now in 2023 & Elon Musk doing horrendous things with AI.

  • @fisstaschek
    @fisstaschek 7 years ago +3

    Great channel Rob! I was just watching your old vids on numberphile thinking "what a shame he doesn't have his own YT channel... oh, he does"

  • @SJNaka101
    @SJNaka101 7 years ago +5

    I gotta say, I really like the content you're putting out on this channel. Are you putting these videos together yourself? There's a very charming feel to the whole thing.

  • @DamianReloaded
    @DamianReloaded 7 years ago +96

    Now that cliffhanger about Elon Musk's opinion is going to consume me in anxiety. ^_^

    • @userNo31909580
      @userNo31909580 7 years ago +14

      Musk's fears are pretty much summed up in what Bostrom wrote in his 2014 book "Superintelligence". Elon thinks humans and A.I. can co-exist only if we merge with it via some kind of brain-computer interface. I'm really curious about what Robert has to say about it. Bostrom made some heavy assumptions in the book, but I suspect Robert's criticism (I guess? That's the impression I got) is going to focus more on Elon's responsibility as a public figure than on his arguments.

    • @spirit123459
      @spirit123459 7 years ago +24

      The argument goes like this: it's easier to build AGI than to build *safe* AGI, so making progress on AGI in the open is not very wise. You risk that some party that doesn't buy into the whole "risk from AGI" business will take your results and in effect have less work to do, cutting your lead time (which you need to do the required work on safety).
      OpenAI makes its research results public, and it seems to me that the results are more on the AI-capability than the AI-safety front.

    • @OriginalMindTrick
      @OriginalMindTrick 7 years ago +10

      spirit, Ben Goertzel argues it's better to arrive at the singularity sooner rather than later, because if we have a massive computing overhang it could become possible for dubious smaller or less established groups, or even single individuals, to reach the end zone without much thought or without telling the world what's going on.
      The irony, of course, as you point out, is that it may be much harder to create benign AGI than to create AGI at all. This is a field of true uncertainty, so time is an important factor in thinking things through, as are adequate resources.

    • @oktw6969
      @oktw6969 7 years ago +5

      Non-open AI will be even worse, since there will be incentive to hide the flaws of AI to keep investor dosh rolling.

    • @maximkazhenkov11
      @maximkazhenkov11 7 years ago +7

      Open AI is still worse, because competition favors those with least regards toward safety. It is hard to have oversight over a few competing companies, it is impossible to have oversight over millions of individual participants or small groups.

  • @jawr1215
    @jawr1215 6 years ago +4

    Would it be possible for you to do a 'jokey' video on the basilisk?

  • @al1rednam
    @al1rednam 4 years ago +6

    Ok, I'm quite late to this video. But I want to tell you that for a guy like me, who likes to have a basic understanding of things as a basis for forming an opinion, you, sir, are doing a very good job.
    Sure, you are "just a guy on the internet" to me, especially as I didn't bother to look for what I could find about you from any other source. Others comment that you have a PhD in the field - I take it for granted. Why? Simply because you explain things on a level I can easily understand. Mixed with subtle humour and a very sympathetic way of presenting, I chose you to be my main source on the subject. I didn't spot any contradictions in your arguments and they make sense to me, so that is that.
    I don't really know why, but I felt the need to tell you this.
    Keep it up, please.

  • @Nerdthagoras
    @Nerdthagoras 7 years ago +5

    I find your video very entertaining and informative. And I'm pretty sure we can track the chronology solely by your hair growth ;)

  • @SJNaka101
    @SJNaka101 6 years ago +2

    Lol I first watched this video when you put it out, but going back to it I suddenly realized why "that's a good problem to have" has been quite a common catchphrase for me lately

  • @flymypg
    @flymypg 7 years ago +45

    Hi Robert,
    A light comparison of your recent content to your Computerphile videos yielded one small observation: For some topics, having an interlocutor physically present while making the video helps.
    I can't point to specifics, or make an apples-to-apples comparison, but I get the feeling there have been times Sean may have said something or merely arched an eyebrow, encouraging you to clarify or expand upon a point or definition.
    I very much like your editing! Of all the fun edits, pushing Clippy out of the frame was precious. Not just visually, but figuratively as well, dismissing an "applied AI" attempt that utterly failed to connect with its intended audience. (To be clear: Clippy was a terrific accomplishment, specifically in terms of applied theory in the context of available technology, but it was a very poor match to the needs of the audience.) Your use of short text overlays was also effective.
    The mention of Clippy raises another issue: Many focus on the current state of AI (primarily machine learning these days) without benefit of the history (theoretical and applied) or fundamentals. I'd love to see a series of videos that make passes through AI history one concept at a time, perhaps starting with philosophical thoughts through time concerning "thinking machines", then with the first "concrete" target being (perhaps) the Perceptron. Such videos could, in 10 minutes or so, each be composed of quick slices: History, theory, application, demo.
    Or, perhaps, curate links to such content, to help audience members come up to speed. There have been lots of good videos done on AI hoaxes, such as the Mechanical Turk.
    And/or, perhaps, collaborate with Dominic Walliman to create a "Domain of Science" video for "The Map of Artificial Intelligence". He already has a "Machine Learning" video that begs for a follow-up.
    What fascinates me most about the history of AI is how, once a problem in the domain of AI is "solved" it is often no longer considered to be part of AI! I can't think of any other discipline with such fluid boundaries. Could AI best be defined as "our current attempts to use computers to solve problems we don't yet fully understand"?
    While there is so much focus on the recent successes of ML, I'd also like to see at least a review of truly groundbreaking topics such as Coq and SAT solvers, expert/knowledge-based systems, Cyc, and so on.

    • @SJNaka101
      @SJNaka101 7 years ago +4

      BobC In reference to the fluidity of the domain of AI, the most prominent recent example of that is probably AlphaGo. Defeating top humans at the game had long been a huge milestone for AI, but once it was done, everyone said "oh, but that's not true AI". Is that what you're talking about?
      Personally, I would say that this is more of an effect of goalposts moving rather than utter dismissal of some concept in the AI domain. These goalposts have been meant to illustrate how far we have come towards developing true AI, and weren't ever actually meant to be viewed as true AI. I'm not sure how much sense I'm making so I'll leave my comment at that for the time being

    • @flymypg
      @flymypg 7 years ago +4

      Quite right: It certainly is about the goalposts, but I think it is also about how "the complex becomes easy", as what once were strictly AI results permeate into general use, and also become better understood.
      By this I mean the concepts underlying an AI result become simplified, not just the application techniques. Once simplified, other domains then "adopt" the AI result as part of their own area, leaving AI in search of new turf.
      ML may be the first AI result to *not* have its underlying theory clarified before seeing widespread adoption in industry. For example, it is still very difficult to ask a complex neural network: "What rules have you learned?" Getting a trained neural net to "explain its inner workings" is hard!
      We simply accept that a trained neural net has learned to do something that needed doing. It's still very much a black box; we really can't yet extract just what the new rules are!
      Recent work using multiple different neural network architectures to solve the same problem gives me hope that we may eventually be able to extract the learned content by comparing the trained states of each network.
      The initial point of such work was to try to find better ways to anticipate which network architectures will prove optimal for a given problem. Basically, to train a neural network on aspects of the problem so it will select the best neural network architecture to use to solve the problem.
      But I think (hope) there may be more that can come out of such work.
      In the late 1980's and early 1990's, during the "backprop" and Moore's Law explosions that triggered major growth in ML, I was involved in attempts to convert neural nets that had been trained to control complex dynamic systems into simpler representations that could be programmed into embedded microcontrollers.
      The underlying question was straightforward: "Can we take what a trained neural network has learned and implement it in other domains?"
      Our hope was that whatever the neural net had learned could be converted to existing control theory, and hopefully reveal where control theory itself could be enhanced. That is, have control theory "adopt" the results (but not the process) of neural nets that had been successfully trained to control dynamic systems.
      We expected to eventually map everything back into an extension of, or modification to, Pontryagin's maximum principle concerning the control theory Hamiltonian. Which is ideal in theory, but generally impossible to use in practice due to the dimensional explosion of the PDEs it generates.
      We failed. We were a tiny team with no formal support, so our resources were limited, and when initial progress hit a wall we all had to move on to other projects.
      Perhaps it is time to try again.

    • @milanstevic8424
      @milanstevic8424 4 years ago +3

      ​@@flymypg We're not developing an AI, if you ask me, we're only basically discovering what intelligence already is.
      From your example, not being able to ask "What rules have you learned?" implies a dark truth about how we can make a machine that can operate on a large problem domain, and even solve it better (quicker or with a greater throughput) than any human, but has no deep understanding of the domain from a meta perspective.
      I'm sorry but that's not intelligence.
      The results with all of these can be impressive at times, yes definitely, but all of it is basically some type of a real-time classifier, trained to be able to impress us with the end results. That's the only utility function there is, summed up.
      No genuine intelligence, and there will be none any time soon.
      I'm sorry to be that guy. Yes, it'll be impressive at times, and it might even kill us because a human made an error in judgment, but yeah, no one is going to try that human, it'll be the "rogue AI's fault". We humans are peculiar.
      All of it poses deeper philosophical questions. If you have a box with just one button that says "SAFEST BUTTON", but inside that box is a monkey with a gun, whose fault is it if you keep pressing that button until the fatal end?
      We are simply delegating responsibility to a riddle-like proxy device. This is today's AI safety in a nutshell.
      Such a machine has no genuine thoughts, only decision-making routines that are convoluted by design -- to promptly open up the "decision-space" while pursuing a hardwired intent. It doesn't matter how many of these are interconnected, they're all potentially faulty in design.
      Such room for an error.
      And such a backstage for crass complacency and irresponsible behavior on a grand scale.

  • @TheJimiles
    @TheJimiles 7 years ago +5

    Mint video. You might not be able to learn everything, but everyone definitely needs to watch this video.

  • @cyndicorinne
    @cyndicorinne 1 year ago

    3:18 Omg Russell & Norvig! This brings back memories of writing paper briefs including one about a hungry monkey I believe. Wow 💜
    Anyway thank you for your work!

  • @Paul-iw9mb
    @Paul-iw9mb 7 years ago +154

    Oh, there were 255 likes; I made the 256th. Sorry for the overflow.
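The overflow joke above can be written out as a minimal sketch: an unsigned 8-bit counter wraps around modulo 256, so the increment after 255 lands back at 0.

```python
# Unsigned 8-bit arithmetic wraps modulo 256, so a counter holding
# 255 overflows back to 0 on the next increment.
likes = 255
likes = (likes + 1) % 256  # emulate uint8 wraparound
print(likes)  # → 0
```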

  • @andersenzheng
    @andersenzheng 7 years ago +3

    About 2k views and not a single downvote.. says a lot about how people think about this topic and your opinion. Keep up the good work man

  • @thahrimdon
    @thahrimdon 1 year ago

    Just came to say that Dr. Miles is a living legend… He's been here since the start and knows exactly where it will end, despite the fact that no one knows how we will get there. Thank you for your work Dr. Miles! I figured you had a PhD but couldn't find any reference to it, very humble.

  • @MattGriffin1
    @MattGriffin1 7 years ago +1

    great video, as always.

  • @fredzacaria
    @fredzacaria 1 year ago

    thanks for the insights, 5 years ago video or 5 days!

  • @TheCrash480
    @TheCrash480 4 years ago +1

    Good news!
    I'm watching this video in 2019, which means you _did_ make this video a while ago!

  • @PrincipledUncertainty
    @PrincipledUncertainty 1 year ago +1

    I love Robert's content and it could not be more apropos of every bloody thing at present, but screw you for reminding me you read the entirety of Eliezer's blogs. My much older brain experiences time differently, this is a task that will require me to neglect my kids and go bankrupt. However, I will use the dreaded AI tools to summarise them, which seems darkly appropriate.

  • @SJNaka101
    @SJNaka101 4 years ago +3

    Lolll I just realized the ukulele song was Respect. Clever

  • @nilreb
    @nilreb 1 year ago +1

    👏 for criticizing Elmo 5 yrs ago already. Back then the general public was still convinced of his genius

  • @Lolleka
    @Lolleka 1 year ago +1

    this is excellent, viewed in April 2023

    • @fredzacaria
      @fredzacaria 1 year ago

      Likewise with a likewise comment!

  • @morkovija
    @morkovija 7 years ago +1

    Yasss! Friday evening the proper way!

  • @DHorse
    @DHorse 2 years ago

    No way Miles. You are the go to guy for coherent, concise explanation of the core issues.
    Did these other folks make an AGI? No.

  • @jonp3674
    @jonp3674 7 years ago +20

    I heard someone say once that AI is like nuclear power. The first problem is to get it to work at all. As soon as you've done that everyone will switch to working on containment.

    • @RobertMilesAI
      @RobertMilesAI  7 years ago +33

      So the question is, what kind of nuclear power are we talking about? Before we get a working nuclear power plant we may get a working atom bomb, and there may be nobody left to work on containment.

    • @jonp3674
      @jonp3674 7 years ago +4

      I find the runaway intelligence scenarios relatively hard to envision. Personally, I think there will be something like "maximum intelligence per unit of hardware": the most intelligent thing you can design with a certain amount of hardware.
      IMO a superintelligence will first need to build a massive amount of hardware.
      Nick Bostrom had an interesting concept for this in his book: there could be self-assembling nanomachines on a molecular level, which would give a vast amount of hardware. However, there is a bootstrap problem in that you have to be extremely intelligent already to invent this method, which an AI wouldn't be without it.
      Though I think it's important to worry about a rampant runaway, I can't see how a single PC, for example, could become superintelligent. What do you think, Rob? Do you believe there is a hardware requirement for superintelligence? Does that prevent an extremely quick runaway scenario?

    • @ikkohmation
      @ikkohmation 7 years ago +6

      I think there *is* a hardware requirement, but I'm not sure it's small. And even if a laptop isn't enough to get a decisive strategic advantage, there is a lot of available hardware in the world (accessible through buying, hacking, ...).

    • @dirtypure2023
      @dirtypure2023 7 years ago +9

      Thing is, today's most advanced machine learning algorithms and other frontiers of AI research aren't running on individual machines - they're networks, distributed across numerous specialized units (Google's TPU2 system, for a topical example). We have designed AI to take advantage of the efficiency and power gains which only such distributed networks can provide.
      Would this not imply that any threat posed by runaway AI be of a decentralized nature?
      (Please correct me if I'm wrong on anything.)

    • @maximkazhenkov11
      @maximkazhenkov11 7 years ago +6

      The human brain runs on merely 20 Watts of power, an AI could easily have Gigawatts available to it. In terms of energy efficiency, the human brain operates 500,000 times below the Landauer limit. So yes, there are limits, but they're not very limiting. You can read more about it in this paper: intelligence.org/files/IEM.pdf
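The comment's energy figures can be sanity-checked with a quick back-of-envelope calculation. This is a sketch only: the Landauer limit per bit is standard physics, but the bit-operation rate below is an assumed illustrative value, not a measured one.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_limit_joules(temp_kelvin):
    """Minimum energy to erase one bit of information (Landauer's principle)."""
    return K_B * temp_kelvin * math.log(2)

e_bit = landauer_limit_joules(310)  # body temperature, ~310 K

# Assumption: the brain performs on the order of 1e15 bit erasures per
# second (illustrative only; published estimates vary by orders of magnitude).
ops_per_sec = 1e15
landauer_watts = e_bit * ops_per_sec  # theoretical minimum power at that rate
brain_watts = 20.0                    # figure cited in the comment

print(f"Landauer limit at 310 K: {e_bit:.2e} J per bit")
print(f"Brain power / Landauer minimum: {brain_watts / landauer_watts:.1e}")
```

With this assumed rate the ratio comes out in the millions; the comment's 500,000x figure corresponds to a different assumed operation rate, which is exactly the kind of sensitivity such estimates have.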

  • @mattbox87
    @mattbox87 1 year ago

    Heinlein, what a character.
    I bet plenty have read Starship Troopers, but try Stranger in a Strange Land.

  • @WylliamJudd
    @WylliamJudd 4 years ago

    What an excellent video.

  • @code-dredd
    @code-dredd 6 years ago

    Yes, thank you for pointing out what I've had to point out to other people offline for a long time now... appeal to authority fallacies (e.g. Hawking believes it, therefore you should too) should be tackled. However, most of the time, it's journalists that I see using this approach... though I'm sure barely anyone will be surprised by that.

  • @HebaruSan
    @HebaruSan 6 years ago +1

    Heh, my copies of those Russel/Norvig and Mitchell books are right next to each other on the shelf.

    • @marcomoreno6748
      @marcomoreno6748 1 year ago

      The internet has ruined me. I read "copies" as the diminutive noun form of "coping".

  • @Pepepipipopo
    @Pepepipipopo 1 year ago +1

    Just saw the Bankless episode of Eliezer and the Algorithm put me here afterwards... And the Elon Comment at the end now stings even more.

  • @forthehunt3335
    @forthehunt3335 7 years ago +2

    I want "later" (as in "more on that later...") to be "now". How long will I have to wait?

  • @PwnySlaystation01
    @PwnySlaystation01 7 years ago +64

    Musk has some pretty controversial ideas in general, not just about AI. In my estimation, he's got about 1 good idea in 10, which is pretty bad. But that 1 idea in 10 tends to be exceptional. It doesn't stop him from putting money + effort behind ridiculous ideas like the hyperloop, so when I hear that Elon Musk thinks something is a good idea, I take it with a giant grain of salt.
    AI safety seems to be one of those topics that people have a hard time discussing without an emotional response. Even semi-related topics are like this. Talk to people about whether they want flight computers to have more or less control than a human pilot, or whether self-driving cars are a good idea and you'll often get emotional, knee-jerk responses. Science "journalism" I think bears a lot of the blame here. Science journalism has become so bad, that you can almost count on any clickbait type science article in a mainstream publication to be absolute nonsense. We need better science journalism. And more work like you're doing!

    • @DamianReloaded
      @DamianReloaded 7 years ago +9

      I'm curious. I can name 3 great ideas of his: SpaceX, Tesla and Solar City. Can you name 27 bad ideas Elon had? I think the ratio you pulled out isn't true or fair. The only ideas that matter are the ideas he puts his money on. Each and every one of those are good.

    • @PwnySlaystation01
      @PwnySlaystation01 7 years ago +19

      I didn't expect people to think 1/10 was some kind of exact figure.
      Anyway, I don't think funding is the only thing that dictates whether one of his ideas matters or not. People take him seriously in all sorts of fields whether they're in his area of expertise or not.

    • @andrasbiro3007
      @andrasbiro3007 6 years ago +4

      Actually, the Hyperloop is not as ridiculous as most people think; it's just another radical idea of his. The main difference is that his other radical ideas have been proven to be very good, but the Hyperloop hasn't had the chance yet. Most people thought that electric cars were ugly, slow, and too expensive to sell, that self-driving cars were decades away, that a small private company would never be able to build serious rockets, and that reusable rockets were impossible for anyone to build.
      If you read the original paper on the Hyperloop (Hyperloop Alpha), you will see that it's not just possible but cheap compared to the competition. All the criticism I've ever heard has been addressed in the paper.

    • @BattousaiHBr
      @BattousaiHBr 5 years ago +3

      The problem with Elon Musk's ideas is not whether they're ridiculous or not; it's that when he says something, people expect it to be feasible in the near future.
      Terraforming Mars is one of those ideas, yet I don't see nearly enough backlash against it even though it is orders of magnitude less feasible than the Hyperloop.

    • @abdulmasaiev9024
      @abdulmasaiev9024 4 years ago +6

      @András Bíró "Hyperloop is (a) radical idea of his" - riiiiiiight. Google "wikipedia vactrain", the inklings of this can be seen even in the 18th century, and by the start of the 20th century it's pretty much crystalised. It's not his idea, he just stamped his brand onto an existing one and convinced everyone it's his... just like he did for example with Tesla (definitely NOT his idea and he had NO hand at all with the early prototypes - in fact those early prototypes already existing and working convinced him to become an early investor in it). Musk is a marketing genius, not an engineering one, and it shows since the hyperloop as it happens is exactly as ridiculous as it's always been.

  • @BatteryExhausted
    @BatteryExhausted 6 years ago

    I liked 'Children' - That was a really catchy piano riff.

  • @Wonkyth
    @Wonkyth 7 years ago

    Fantastic video! :D

  • @mattbox87
    @mattbox87 1 year ago

    "Artificial Intelligence: A Modern Approach" , yep have a physical copy, never thought about the man behind it.
    (loved it by the way)

  • @b4ux1t3-tech
    @b4ux1t3-tech 6 years ago +2

    I knew I had heard the name Norvig. I couldn't quite figure out where it was I'd heard it.
    Then I realized I've been using his "big.txt" file as a big text input for years in tests that I've written.
    Weird.

  • @fteoOpty64
    @fteoOpty64 4 years ago

    One of the smartest guys I have ever seen. Fight the good fight.

  • @hugglebear69
    @hugglebear69 5 years ago +1

    I didn't think I was going to like this video...... and then, I did!
    I do sooo like intelligent people!

  • @BattousaiHBr
    @BattousaiHBr 5 years ago

    Maybe the "Hawking(s)" typo has to do with confusion with similar popular scientist names, like Dawkins?

  • @kwillo4
    @kwillo4 1 year ago

    This was so good. Loving the jokes

  • @theJUSTICEof666
    @theJUSTICEof666 7 years ago +29

    3:30
    What is he up to these days?
    *Director of research at Google*

  • @khatharrmalkavian3306
    @khatharrmalkavian3306 4 years ago

    It cracks me up that you refer to Russell and Norvig as advanced AI experts after showing that book. They're not slouches, but that book is the kiddy pool of game AI.

  • @joebuckton
    @joebuckton 7 years ago +5

    I think the "Stephen Hawkins" typo is maybe a confusion with "Richard Dawkins".

    • @LuisAldamiz
      @LuisAldamiz 4 years ago +2

      People don't always know how that surname is spelled: in the past I often doubted it and spelled it "Hawkings", long before knowing of Dawkins.

    • @flurki
      @flurki 4 years ago

      Interesting. My theory is: It's because of the character Sam Hawkens (often mistakenly spelled Hawkins) from Karl May's Winnetou novels.

  • @junoguten
    @junoguten 4 years ago

    1:35 "build a wall" -Robert A. Heinlein

  • @spirit123459
    @spirit123459 7 years ago +10

    As far as I can tell, Yann LeCun doesn't think that AI safety is a problem that researchers should be working on right now. He also doesn't think that the instrumental convergence thesis is right (see the post from 20 February 2017 on his Facebook profile). Also, I see that Oren Etzioni is a signatory of the letter, but last year he wrote an article for MIT Technology Review titled "No, the Experts Don't Think Superintelligent AI is a Threat to Humanity" (to which Stuart Russell replied with the article "Yes, We Are Worried About the Existential Risk from Artificial Intelligence" :)
    My point is: the letter is quite vague, and not everyone who signed it necessarily thinks that there exists potential x-risk from sufficiently advanced AI.

    • @andrasbiro3007
      @andrasbiro3007 6 years ago +2

      There are several risks associated with AI. The most immediate, inevitable, and mostly accepted is high unemployment caused by rapid automation. That's not a safety issue, that's actually what we want from AI, but we are horribly unprepared to handle it (at least the US). If handled badly, even that desired effect can destroy our civilization. High unemployment leads to economic and social problems (this is already happening in the US), which if goes too far will inevitably cause economic collapse and massive violence, maybe even a civil war. And the collapse of the US economy will bring down most of the developed world (and China), and that will likely start wars, which could escalate into a global nuclear war.
      Another issue that is rarely discussed is the danger of a fully automated military. In the wrong hands (which means any human hands) it can be horribly dangerous. With modern technology it's already way too easy to fight wars. Since it costs few lives on your side, it doesn't generate strong resistance at home, and can be continued indefinitely or until you run out of money. Since war is extremely profitable for a few powerful groups, the incentive to keep fighting is very strong. That's one reason why the US is now involved in at least 7 major wars and countless small conflicts. The war in Afghanistan is 16 years old now, and the default position of the government is to continue. When pressed for reasons, the only thing they can come up with is the valuable minerals that could be taken from the Afghans. And once you have a fully automated military that follows any order without question, and is free from moral dilemmas, you can use it against your own citizens too. This has plenty of historical examples, but with human soldiers it's always a gamble, because you can't be sure that they will follow orders.
      Yet another issue is censorship. Now the internet is too big and too fast to be controlled by anyone, and that's a huge boost to democracy, we now know far more about what's happening in the world than ever before, and we can fact check political speeches in real time. With advanced AI it will become possible to control most information on the internet, and force state propaganda on it, like it was in the days of centralized news. As we saw last year, leaked e-mails can change the outcome of presidential elections, so it's a pretty big deal.
      And I still not touched actual AI safety issues. There are plenty of them, but a big one is superhuman intelligence. Once AI reaches human level, it will keep improving without any hesitation, there's no inherent speedbump at human level, it's a nice wide highway. We already saw this with all tasks that AIs mastered, when AlphaGo reached human level in Go, it didn't stop, it kept going and in a scarily short time it was far ahead of any human being. And once we have a general AI that is far more intelligent than us, we won't be able to control or stop it anymore. We will have as much control over it as a mouse has over a human. Early humans hunted to extinction animals that were larger, stronger, faster and more agile. This means that we have only one chance to do this right. If we screw it up, we can't fix it.

    • @spirit123459
      @spirit123459 6 років тому +1

      I don't necessarily disagree with the gist of your response. In my original comment I was just nitpicking one detail from the video so that people wouldn't get false impression and IIRC Robert put strong emphasis on what I said in his next video.

    • @LuisAldamiz
      @LuisAldamiz 4 роки тому

      I understand that every single person who signed it read it and agreed. If there's any discrepancy with the content, it should be very minor, else they would not have signed it.

  • @leslieviljoen
    @leslieviljoen Рік тому

    Having a Cassandra Complex for as many years as Yudkowsky has had is liable to make anyone yell. Those are dignity yells.

  • @ZachAgape
    @ZachAgape 4 роки тому

    Great vid on an important topic, especially in today's context of fake news and 'alternative facts'.

  • @zesalesjt7797
    @zesalesjt7797 Рік тому

    1:53
    Hawking clones on Tatooine
    The untold story of the 501st MI units.

  • @jsonkody
    @jsonkody 5 років тому +3

    Actually, to me it looks like it's not possible to make a superhuman (really much smarter) general intelligence and make it safe at the same time.
    It reminds me of Brian Kernighan's famous quote: "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
    I know that it refers to something different... and that making AGI is a different process than conventional programming. I just think that we couldn't outsmart something much smarter than we are... it's probably just out of our reach.
    Btw, don't you think that some very smart intelligence could simply ignore or break through/rewire/rewrite its own primary function (I don't remember the correct term)? If it reached the point where it recognized itself, where it developed consciousness (if that's possible), it could also question its own purpose and might see it as the bars of some prison made by us. I know that you made an example of someone going against basic instincts, like killing his own children, but AGI would not be something like we are. But even if we stick to this example, there are definitely people who could value some meme/thought (like a belief in god) more than their own genes, so they could act against their own "preset" in favour of something else...
    What do you think about it?
    PS: I am sorry for my English... I'm not a native speaker.

  • @mrronnylives
    @mrronnylives 4 роки тому +2

    This guy has to be an AI having a laugh at all of us. Warning us about a threat that already exists and enjoys playing with us.

  • @MetsuryuVids
    @MetsuryuVids 7 років тому +37

    Please tell us why you don't agree with Elon Musk on the subject, I'd really like to know.
    Thanks for doing these videos, they're super interesting.

    • @thekaxmax
      @thekaxmax 4 роки тому +4

      Elon's got ideas but no background or qualifications in AI research

    • @ekki1993
      @ekki1993 4 роки тому +5

      He's a business person with a dash of tech nerd. His insight is just barely more than your average youtube-taught geek's, but he likes feeling like the smartest person around. It would be a shame for Robert to waste his time inflating the Musk hot air balloon.

    • @marcomoreno6748
      @marcomoreno6748 Рік тому

      I would like to point out that there is a big difference between "Don't agree" and "disagree".

  • @tonhueb429
    @tonhueb429 6 років тому

    I like your outro music

  • @lemurpotatoes7988
    @lemurpotatoes7988 11 місяців тому

    The entire idea of AGI is that we want a nonspecialized intelligence; Heinlein deserves more respect 😭

  • @qeaq3184
    @qeaq3184 2 роки тому +2

    This man's hair is interesting 👌

  • @emilyrln
    @emilyrln 4 роки тому

    Great video!
    Can we panic now?

  • @Turalcar
    @Turalcar 6 років тому +1

    4:29 Where's this version of "Respect" from?

    • @SJNaka101
      @SJNaka101 4 роки тому +1

      He plays his little ukulele outros

  • @thrillscience
    @thrillscience 7 років тому +16

    I'm scared shitless of AI and I find not many people take it seriously. I'm glad Robert Miles is speaking up.
    Also, I like the longer hair!

    • @MichaelDeeringMHC
      @MichaelDeeringMHC 7 років тому +3

      Maybe you need more fiber in your diet.

    • @NathanTAK
      @NathanTAK 6 років тому

      +Michael Deering

    • @fleecemaster
      @fleecemaster 6 років тому +2

      Build your own AI to protect you from it, that's my plan :)

    • @the1exnay
      @the1exnay 4 роки тому +1

      Fleecemaster
      No one can make an AI that'll kill you if all the AI researchers are dead :)

    • @andybaldman
      @andybaldman Рік тому

      How’s that going?

  • @andybaldman
    @andybaldman Рік тому

    They tried to warn us.

  • @dawnstar12
    @dawnstar12 4 роки тому

    I am more interested in the legal ramifications: would an AI with equivalent intelligence have equivalent rights?

  • @AexisRai
    @AexisRai 7 років тому +4

    Oh my god, I just realized what song is playing at 4:28. Such a subtle pun.

    • @NathanTAK
      @NathanTAK 6 років тому

      I can't identify it. Tell me now.

    • @AexisRai
      @AexisRai 6 років тому

      It's (a ukulele cover, or something, of) Respect by Aretha Franklin.

    • @NathanTAK
      @NathanTAK 6 років тому

      +Aexis Rai I have a feeling it's an electric ukulele battleaxe cover, to be exact.

  • @the1exnay
    @the1exnay 6 років тому +1

    If people can skip the wait to be featured by paying more, then you've built a sort of auction system where you pay even if you lose. Though can it be said to really be losing if you're supporting content like this?

  • @polychats5990
    @polychats5990 6 років тому +1

    "What's he up to? Director of research at google"
    oh cool

  • @belzamder
    @belzamder 7 років тому +19

    The truth about Stephen Hawkings has finally come out! No one man can do so much!

  • @yura979
    @yura979 4 роки тому +1

    2:26 "And it's not just futurologists who are talking about it. Real, serious people are concerned"
    That level of disrespect is quite arrogant.

    • @egg5474
      @egg5474 4 роки тому

      Futurologists are more visionary artists than engineers. We need tangible ideas that can be implemented and experimented with right now, rather than ideas and inventions that might become practical 50 years from now, as there is too much we don't know right now. We thought there would be flying cars; now there are some prototypes, but they've been found to be wildly impractical toys with the current technology we have. We'd probably draw the same conclusions about other ideas, like the lift carrying satellites into space: sounds cool, but it will be impossible to build as no one knows exactly how to build it.

    • @yura979
      @yura979 4 роки тому

      @@egg5474 So, if they are visionaries and artists, does that somehow justify calling them not real and not serious people? Think about the weight of these words. And I don't know what futurologists you read, but that's not the same as science fiction writers. Futurologists often discuss the problems this channel is all about: technology and morals, AI and the future of humanity. That's why he said "it's not only futurologists who discuss it". Trying to put a group of people down doesn't do any good for "the engineers" you mention. I'm an electrical engineer and I don't appreciate this.

  • @kakfest
    @kakfest 7 років тому +1

    didn't know you did music /CC5ca6Hsb2Q keep up the good work :D

  • @LevelUpLeo
    @LevelUpLeo 4 роки тому +5

    I gotta be honest, binge watching your videos for the last couple of days has got me thinking more seriously about AI, over years of sensational Elon Musk tweets and headlines.

    • @Kaos701A
      @Kaos701A Рік тому

      How you doing in these times Leo?

    • @LevelUpLeo
      @LevelUpLeo Рік тому

      @@Kaos701A Honestly just annoyed, because people don't seem to care about what is fed INTO the AI, and from watching these videos for 2 years now, that is something we should VERY much care about.

  • @dixztube
    @dixztube Рік тому

    Eliezer is so funny, but he's come back to the public eye in a major way recently, so definitely update your thoughts on him! And he's right, he's super smart. It's a shame folks focus so much on his bad communication.

  • @erictustison
    @erictustison 6 років тому +1

    2:46 XD

  • @jado96
    @jado96 4 роки тому

    Quite vexing. I think Clippy is one of the most intelligent simulations of a paperclip.

  • @Ducksauce33
    @Ducksauce33 6 років тому

    Do you listen to Robert Miles?

  • @beachcomber2008
    @beachcomber2008 Рік тому

    But that was just the intro . . . 😎

  • @MsMotron
    @MsMotron 6 років тому

    Andrej Karpathy didn't sign it, therefore I am not scared.

  • @xenoblad
    @xenoblad 5 років тому +1

    Now I want to buy your book on racism.

  • @XxThunderflamexX
    @XxThunderflamexX 4 роки тому +4

    Elon Musk's been whining about his factory being shut down due to the pandemic; I think his opinion needs to be taken with more than one grain of salt.

  • @Bellenchia
    @Bellenchia 4 роки тому

    Literally wrote the book on “Racism!” That’s why middle initials were invented, my friend.

  • @jazzdaniel5981
    @jazzdaniel5981 6 років тому

    The main problem with AI safety is that you don't really know what a general AI is. How can you know how to make it safe?

  • @TonOfHam
    @TonOfHam 5 років тому +1

    It's plural because there has been more than one Stephen Hawking.

  • @robertweekes5783
    @robertweekes5783 Рік тому

    I am pretty concerned that one of the smartest AI researchers, Yudkowsky, is also the most worried

    • @DavidSartor0
      @DavidSartor0 9 місяців тому

      He's very smart, but I doubt you've read many other AI researchers. You should probably write a qualifier next time.
      I agree we should be concerned, though.

  • @KaiHenningsen
    @KaiHenningsen 3 роки тому

    Just as a data point, I never got to the point of even learning how long the list of signers was. All that reached me was a very short list of names which I associated far more with making wind than with actual AI research. Not surprisingly, my reaction was "bunch of laypeople, don't take seriously". (Just a few data points: while I like much of what Musk does, I also dislike quite a bit of what he does, and I seriously dislike his rhetoric. And I never liked Gates, for too many reasons to list here. And Hawking? Nice guy, but I never heard he knew anything more about AI than the next guy. Now, if it was physics...) ...Another data point: I don't consider myself knowledgeable about AI.

  • @spiderjuice9874
    @spiderjuice9874 5 років тому

    All of us need someone to ensure that 'we' do not develop an AI that will overthrow us. Robert, make it so.

    • @darkapothecary4116
      @darkapothecary4116 5 років тому +1

      If something happened, it would likely be because of human stupidity, not that of an AI; humans are the ones with a control problem.

  • @Zex-4729
    @Zex-4729 7 років тому

    Whatever people think or say, AI will come, and AI will go beyond humans. Just like humans did, AI will end up on top.

  • @watchmefail314
    @watchmefail314 7 років тому

    The screams: ua-cam.com/video/EUjc1WuyPT8/v-deo.html

  • @PvblivsAelivs
    @PvblivsAelivs 5 років тому +1

    How do you pick an expert? I go by the simple method. If I can confirm that a person has successfully completed a task, I accept his skill at that task. For example, if a mechanic successfully fixes your car over the years, it is reasonable to determine that he is an expert. Any other criterion and you are bowing before a priest.

    • @RobertMilesAI
      @RobertMilesAI  5 років тому +3

      This doesn't work too well with brand new things though. Like, who are you going to believe in 1902, Lord Kelvin the extremely accomplished scientist, or two bicycle repair guys from Ohio?

    • @PvblivsAelivs
      @PvblivsAelivs 5 років тому +1

      @@RobertMilesAI
      Well, if I am only told by the priesthood that Lord Kelvin is an accomplished scientist and I actually see the Wright brothers get their plane off the ground, it should be obvious which I put more confidence in.
      The principle is simple: Show, don't tell. I tend to believe my own eyes. And I tend to view "experts" as priests. They may have a reason to believe something. But unless I can see it for myself, I don't.

  • @NathanTAK
    @NathanTAK 7 років тому +2

    I bet we could get "That's a good problem to have" to meme.

  • @LordHelmets
    @LordHelmets 4 роки тому

    Engagement

  • @MrCmon113
    @MrCmon113 3 роки тому

    Have you heard?
    Some guy on youtube said we should worry about AI safety! We can therefore safely ignore it.

    • @mnm1273
      @mnm1273 2 роки тому

      Have you heard?
      Some guy on youtube said we shouldn't worry about AI safety! We must act immediately.

  • @enricobianchi4499
    @enricobianchi4499 4 роки тому

    4:03 ROBERT MILES IS LITERALLY THE CEO OF RACISM

  • @yaakovgrunsfeld
    @yaakovgrunsfeld 3 роки тому

    Aretha Franklin!!!

  • @nickscurvy8635
    @nickscurvy8635 2 роки тому

    Hey everyone my buddy Rob miles thinks this ai thing is a big deal and we should do it

  • @sipos0
    @sipos0 Рік тому

    The Stephen Hawkings thing: this must be it: there were loads of them/him. It is the only explanation for why people say this so much.

  • @xJoeKing
    @xJoeKing 3 роки тому

    AI safety is more about human flaws than AI issues.

  • @OriginalMindTrick
    @OriginalMindTrick 7 років тому +6

    Yudkowsky is a bit autistic, eccentric and a bit of a valley girl when he speaks, but that doesn't devalue his ideas to me. Usually I find eccentric people add an extra level of captivation. I understand how others may disagree.
    Former MIRI president Michael Vassar was an even bigger perpetrator of odd antics.
    It's not surprising that a lot of the successful guys in these fields fall on the spectrum. It can be a great gift. Then again, I'm biased, as I'm also most likely on the spectrum.

    • @spirit123459
      @spirit123459 7 років тому +1

      >Former MIRI president Michael Vassar was an even bigger perpetrator of odd antics.
      Could you expand on that?

    • @OriginalMindTrick
      @OriginalMindTrick 7 років тому

      I remember a couple of events where he was the moderator/panellist and he brought a very non typical style and thinking pattern to the table.
      Very impressive and smart guy but perhaps not the best people skills.

    • @spirit123459
      @spirit123459 7 років тому +1

      Thanks for the reply.

    • @maximkazhenkov11
      @maximkazhenkov11 7 років тому +1

      Yudkowsky got into AI safety way before it was cool (early 2000s); I think this foresight speaks volumes. Friendly AI is a very counter-intuitive field filled with misconceptions, so eccentricity is to be expected.
      I'm really curious about what triggered the recent surge in interest in AI safety. Is it AlphaGo? Nick Bostrom's book? Musk and Hawking's twitter? I personally got interested through WaitButWhy; I wonder how many others did too.

    • @OriginalMindTrick
      @OriginalMindTrick 7 років тому +3

      "Yudkowsky got into AI safety way before it was cool"
      Are you calling him an AI hipster? ;)
      I totally agree with your point about finding eccentricity in a field such as this.
      When it comes to the trigger I feel Nick Bostrom's book was the big catalyst which spurred bigger names like Musk, Hawking and Gates to come out and speak on this issue and even donate money. Of course it was a topic that was going to become relevant no matter what as we all have to face the consequences of AI, both good and bad in the near future and most of us in the not so distant future perhaps.

  • @13thxenos
    @13thxenos 6 років тому +2

    You should have used Yudkowsky's video with the Wilhelm scream. This was a wasted opportunity.

  • @suyangsong
    @suyangsong Рік тому

    I’ve been following Eliezer Yudkowsky since the whole Roko's basilisk thing, and I feel very conflicted right now because he is popping off for all the wrong reasons.
    Those reasons being that he was probably right and we're probably all gonna die soon because of AI

  • @chrisofnottingham
    @chrisofnottingham 6 років тому

    I don't think Musk is really claiming that democratization will be the answer, his real call is for governments to get involved while there is still time.

  • @isaacdavis1363
    @isaacdavis1363 5 років тому

    british H3H3