The other "Killer Robot Arms Race" Elon Musk should worry about

  • Published 21 Aug 2017
  • Elon Musk is in the news, talking to the UN about autonomous weapons. This seems like a good time to explain one area where we don't quite agree about AI Safety.
    The Article: www.independent.co.uk/news/sci...
    The clip at 2:54 is from a Y Combinator interview: "Elon Musk : How to Build the Future": • Elon Musk : How to Bui...
    With thanks to my excellent Patreon supporters:
    / robertskmiles
    Steef
    Sara Tjäder
    Jason Strack
    Chad Jones
    Ichiro Dohi
    Stefan Skiles
    Katie Byrne
    Ziyang Liu
    Jordan Medina
    Kyle Scott
    Jason Hise
    David Rasmussen
    James McCuen
    Richárd Nagyfi
    Ammar Mousali
    Scott Zockoll
    Joshua Richardson
    Fabian Consiglio
    Jonatan R
    Øystein Flygt
    Björn Mosten
    Michael Greve
    robertvanduursen
    The Guru Of Vision
    Fabrizio Pisani
    Alexander Hartvig Nielsen
    Volodymyr
    David Tjäder
    Paul Mason
    Ben Scanlon
    Julius Brash
    Mike Bird
    Taylor Winning
    Peggy Youell
    Konstantin Shabashov
    Almighty Dodd
    DGJono
    Matthias Meger
    Scott Stevens
    Emilio Alvarez
    Benjamin Aaron Degenhart
    Michael Ore
    Robert Bridges
    Dmitri Afanasjev
    Brian Sandberg
    Einar Ueland
    Lo Rez
    C3POehne
    Stephen Paul
    Marcel Ward
    Andrew Weir
    Pontus Carlsson
    Taylor Smith
    Ben Archer
    Ivan Pochesnev
    Scott McCarthy
    Kabs Kabs
    Phil
    Philip Alexander
    Christopher
    Tendayi Mawushe
    Gabriel Behm
    Anne Kohlbrenner
    Jake Fish
    Jennifer Autumn Latham
    Filip
    Bjorn Nyblad
    Stefan Laurie
    Tom O'Connor
    Krethys
  • Science & Technology

COMMENTS • 670

  • @silvercomic
    @silvercomic 6 years ago +416

    AI safety in the media drinking game:
    Take a shot when:
    - Picture of the terminator
    - Picture of HAL9000
    - Picture of Elon Musk
    - Picture of Bill Gates
    - "Doom"
    - "Evil"
    - "Killer Robots"
    - "Robot Uprising"
    - Author shows their understanding of the subject to be limited
    - Picture of Mark Zuckerberg
    - Picture of ones and zeros
    - Picture with the electronic circuit shaped like a brain
    - Picture of some random code, probably html
    - Picture of Eliezer Yudkowsky (finish the bottle)
    On a serious note:
    Perhaps some of the signatories are aware of your criticism, but consider this a more achievable step. In fact, one could use this as a test of the feasibility of restricting AI research.

    • @maximkazhenkov11
      @maximkazhenkov11 6 years ago +48

      *dead from alcohol poisoning after the first page*

    • @z-beeblebrox
      @z-beeblebrox 6 years ago +65

      Yeah, that's not a drinking game, it's suicide

    • @silvercomic
      @silvercomic 6 years ago +6

      Not really, it's pretty much akin to the machine learning department's Thursday evening drinks that I used to attend when I was a student.

    • @Nixitur
      @Nixitur 6 years ago +7

      Random code is unlikely to be HTML, really. More often than not, it's Linux kernel code, thanks to the General Public License.

    • @BusinessRaptor520
      @BusinessRaptor520 5 years ago +1

      In fact, one could increase the amount of pomposity by a factor of 10 and at the same time add frivolous filler text to blatantly hide the fact that they're willing to suck the teat of the hand that feeds until the udder runs dry.

  • @AlfredWheeler
    @AlfredWheeler 6 years ago +234

    Just an observation... Wars are not won by "good people". They're won by people who are good at winning wars--and sometimes by sheer luck...

    • @bardes18
      @bardes18 5 years ago +6

      Good people are people who will act to help other people's terminal goals :p

    • @GerBessa
      @GerBessa 5 years ago +1

      Clausewitz would consider this an erroneous shortcut.

    • @Ashebrethafe
      @Ashebrethafe 5 years ago +44

      Or as I've heard it phrased before: "War never determines who is right -- only who is left."

    • @SiMeGamer
      @SiMeGamer 4 years ago +7

      @@bardes18 That's a very poor understanding of what "good" is; it uses the Christian altruist ethics version of what it means. I'd argue those ethics are fundamentally wrong because they are based on misintegrated metaphysics and epistemological errors. Determining good and bad in general while applying it to something rather specific (like war) is, philosophically speaking, impossible. You have to have context to do that, as well as establish the ethical framework you apply to it, which I reckon would take many decades to have a chance at in some country (currently I find Objectivism to be the most correct and possibly the ultimate form of philosophical understanding of all branches of philosophy - albeit debatable in the aesthetics department, which is rather irrelevant in our context).

    • @rumplstiltztinkerstein
      @rumplstiltztinkerstein 4 years ago +2

      @@SiMeGamer "Good" and "bad" have no meaning apart from what people want them to mean. So if someone wants to live their life being a "good" or "bad" person, their life has no meaning at all.

  • @WilliamDye-willdye
    @WilliamDye-willdye 6 years ago +10

    I agree that there is more than one AGI race in play. It reminds me of the old debate about "grey goo" (accidental runaway self-replication) vs. "khaki goo" (deliberate large-scale self-replication as a weapon).

  • @zachw2906
    @zachw2906 5 years ago +78

    The obvious solution is to create a superhuman AGI with the goal of policing AI research 😉... I'll show myself out 😞

    • @xcvsdxvsx
      @xcvsdxvsx 4 years ago +13

      Seriously though. If we are going to survive this, it will probably be because someone unleashes a terribly destructive AGI that threatens to destroy the human race, we all flip out and every nation on the planet bands together to overcome this threat, we quickly realize that the only chance of saving ourselves is to create an AGI that actually does align with human interests, we all work together to achieve this, then throw the entire weight of the human race behind the good AGI in hopes that it's not too late and we aren't already so irrelevant as to be unable to tip the scales in favor of the new good AGI. Then we realize how quirky the "good one" ends up being even if it does allow us to continue living, and we just have to deal with its strange impacts on humankind forever.

    • @XxThunderflamexX
      @XxThunderflamexX 4 years ago +9

      "Sheesh, people keep on producing dangerous AGI, this would be so much easier if I could just lobotomize them all..."

    • @xcvsdxvsx
      @xcvsdxvsx 4 years ago +5

      @@bosstowndynamics5488 Oh, I know what I suggested was a long shot. It just seems like the only chance we have. Getting a global prohibition on this kind of research is naive and not going to work. Having all of it that is built be done safely isn't going to work. Praying that it isn't actually as dangerous as I think might be another decent long shot.

    • @marscrasher
      @marscrasher 3 years ago

      @@xcvsdxvsx left accelerationism. maybe this is how the revolution comes

  • @jerome2541
    @jerome2541 4 years ago +17

    "Safe and careful organisations"
    So that exists?

  • @JR_harlow
    @JR_harlow 4 years ago +4

    I'm not a scientist or any kind of engineer, but your content is very easy to comprehend. I'm glad you have patrons to support your channel; I just recently discovered it and really enjoy it.

  • @militzer
    @militzer 6 years ago +47

    For the "Why not just ... ?" series: why not just build a second AI whose function is to keep the "first" (and I quote because ideally you would build/activate them simultaneously) from destroying us?

    • @RobertMilesAI
      @RobertMilesAI 6 years ago +51

      Thanks, yeah that's an idea I've seen a few times, I think it would make a good "Why not just" video

    • @chris_1337
      @chris_1337 6 years ago +18

      The problem is the definition of the right utility function. Using an adversarial AI architecture still wouldn't solve that fundamental problem. (A toy sketch of this point follows the thread below.)

    • @RobertMilesAI
      @RobertMilesAI 6 years ago +36

      Yup. I think there's probably enough there for a decent video though.

    • @fleecemaster
      @fleecemaster 6 years ago +10

      I like the idea of "Why not just" videos :)

    • @Corbald
      @Corbald 6 years ago +7

      Not to derail the production of the next video, but wouldn't you have just compounded the problem, then? Two AIs you have to worry about going 'rogue' instead of one? Who watches the watcher? If they both watch each other, couldn't one convince the other that it's best to destroy us? etc...
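
A toy sketch of @chris_1337's point in the thread above: an adversarial overseer only relocates the specification problem, because the overseer's own utility function is still hand-written. Everything here (names, actions, numbers) is invented for illustration; it is not any real system.

```python
# Hypothetical toy model: a "primary" agent maximizes a hand-written utility,
# and an "overseer" agent vetoes actions its own hand-written utility dislikes.

def primary_utility(action):
    # The first AI's goal, written by humans (a stamp collector, say).
    scores = {"collect_stamps": 10.0, "seize_resources": 100.0,
              "hack_stamp_printers": 50.0}
    return scores.get(action, 0.0)

def overseer_utility(action):
    # The second AI's "safety" goal -- also written by humans. It only
    # penalizes the failure modes its authors thought of.
    return -100.0 if action == "seize_resources" else 0.0

def joint_choice(actions):
    # The primary proposes its best actions first; the overseer vetoes
    # anything its own utility scores negatively.
    for action in sorted(actions, key=primary_utility, reverse=True):
        if overseer_utility(action) >= 0:
            return action
    return "do_nothing"

print(joint_choice(["collect_stamps", "seize_resources"]))
# -> collect_stamps: the anticipated bad action gets vetoed.
print(joint_choice(["collect_stamps", "hack_stamp_printers"]))
# -> hack_stamp_printers: unanticipated, so it sails straight through.
```

The veto only works for failure modes someone already wrote into overseer_utility, which is the same specification problem the single-agent setup had; it also gestures at Corbald's "who watches the watcher" worry.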

  • @Linvael
    @Linvael 6 years ago +104

    To be fair - the arms race they want to deal with is the more imminent one. AGI is more dangerous, but far off in the future (let's say somewhere between 5 and 500 years). Simple AI with a weapon is more "I wouldn't be very surprised if we had those already".

    • @maximkazhenkov11
      @maximkazhenkov11 6 years ago +42

      Oh we definitely have those ready and in action, the only "human oversight" is a trigger-happy drunkard sitting in an air-conditioned container some 10,000 miles away.

    • @joshissa8420
      @joshissa8420 6 years ago +24

      maximkazhenkov11 definitely an accurate representation of the US drone strikes

    • @inyobill
      @inyobill 5 years ago +3

      @@joshissa8420 Or not, as the case may be. Note that I understand exaggeration for effect.

    • @chibi_bb9642
      @chibi_bb9642 1 year ago +5

      hey wait we met the minimum you said oh no

    • @Linvael
      @Linvael 1 year ago

      @@chibi_bb9642 Right on time with ChatGPT release too! That was a good minimum

  • @himselfe
    @himselfe 6 years ago +47

    Unless you impose some sort of Orwellian control on technology, there isn't much you can do to police the development of AGI. It's not like nuclear weapons that require a special substance to be made.

    • @grimjowjaggerjak
      @grimjowjaggerjak 4 years ago +2

      You could create an AGI that has the goal of restricting other AGIs first.

    • @PragmaticAntithesis
      @PragmaticAntithesis 4 years ago +18

      @@grimjowjaggerjak That AI would kill everyone to ensure we can't make a second AI.

    • @teneleven5132
      @teneleven5132 4 years ago +7

      It's likely that an AGI would require a great deal of hardware to run though. I seriously doubt it would work on the average computer.

    • @mvmlego1212
      @mvmlego1212 4 years ago +2

      @Shorne Pubique -- That's an interesting point. Malware is a heck of a lot easier to make than AGI, as well.

    • @GuinessOriginal
      @GuinessOriginal 4 years ago

      Ten Eleven - it could run like SETI

  • @amargasaurus5337
    @amargasaurus5337 4 years ago +6

    "but I don't think AGI needs a gun to be dangerous"
    I agree, oh boy I so thoroughly agree

  • @petersmythe6462
    @petersmythe6462 6 years ago +3

    I think democratizing it in the sense of collectivization rather than proliferation is a good goal.
    Collectivization, whilst allowing marginally less autonomy and freedom, still creates accountability and still responds to the will of the people. Creating a bureaucracy that can't be bought (that may require a change to our political-economic system), whose members are subject to immediate recall (this definitely requires a change to our political-economic system), and that handles the more dangerous and/or authoritarian aspects of keeping AI under control seems preferable to either corporatization (which ignores human need) or proliferation (which ignores safety).

  • @trefod
    @trefod 5 years ago +39

    I'd suggest a CERN type deal, non privatised and multi governmental.

    • @inyobill
      @inyobill 5 years ago +8

      What would prevent some agent from ignoring any agreement(s) and going off on their own tangent? The genie is out of the bottle.

    • @kris030
      @kris030 4 years ago

      Unlike CERN which needs funding for machinery basically no individual could get, developing AGI takes one smart person and a laptop... not safe

    • @kris030
      @kris030 4 years ago

      @Bruno Pereira That's true, but developing one, i.e. writing the code, doesn't need a supercomputer.

    • @0xB8xor0xFF
      @0xB8xor0xFF 4 years ago +6

      @@kris030 Good luck developing something, which you can't even test run.

    • @kris030
      @kris030 4 years ago

      @@0xB8xor0xFF True, although if you've got (probably mathematical) proof of it actually being generally intelligent, I don't think getting a supercomputer would be a difficulty.

  • @MrGooglevideoviewer
    @MrGooglevideoviewer 5 years ago +8

    You are a freakin' champion! Your videos are insightful and thought provoking. Cheers!

  • @hypersapien
    @hypersapien 6 years ago +4

    I really enjoy your videos Robert, keep up the good work.

  • @perfectcircle1395
    @perfectcircle1395 6 years ago +2

    I've been thinking about this stuff a lot, and you always give new and interesting viewpoints on this topic. I love it. Subscribed.

  • @G_Genie
    @G_Genie 5 years ago +14

    Is the song in the background an acoustic cover of "This ain't a scene, it's an arms race"?

  • @benaloney
    @benaloney 6 years ago +2

    We love your videos Robert! Would love to see some longer ones! 👍🤖

  • @skroot7975
    @skroot7975 6 years ago +1

    Thank you for making this channel Rob!

  • @jonathandixson1424
    @jonathandixson1424 6 years ago +1

    Ending is great. Video is great. Channel is great. Every video I watch is so well thought out and intelligent.

  • @ARTUN3
    @ARTUN3 6 years ago +4

    Good video Rob!

  • @mastersoftoday
    @mastersoftoday 6 years ago +1

    love your videos, not least because of your sense of humor, thanks!

  • @LamaPoop
    @LamaPoop 3 years ago +1

    1:45 - 2:26 Once again, you perfectly put into words one of my biggest concerns. This, and the fact that, once developed, such an AI will initially be kept secret, for obvious reasons...

  • @RazorbackPT
    @RazorbackPT 6 years ago +1

    Love your channel, keep it up!

  • @LeandroLima81
    @LeandroLima81 5 years ago +1

    Been kinda binge watching your channel. You seem like the kinda guy to have a drink with. Not for alcohol, but for good conversation and stuff. I'm really enjoying you 😉

  • @LarlemMagic
    @LarlemMagic 6 years ago +21

    Mandate safety requirements when doling out that sweet grant money.

    • @ralphclark
      @ralphclark 4 years ago +7

      A lot of that money will be put up by private interests in exchange for control of the IP. They won't give a damn about safety requirements unless they're all up to their balls in regulation.

  • @LeosMelodies
    @LeosMelodies 6 years ago

    Cool channel man! Keep up the good work!!

  • @veda-powered
    @veda-powered 5 years ago +5

    1:01 Loving this positivity😀👍😀!

  • @piad2102
    @piad2102 2 years ago

    You are very interesting to listen to. I did/do not know much about AI; you make it interesting. I watch Two Minute Papers, which is good too.

  • @thelozenger2851
    @thelozenger2851 5 years ago +26

    Is anyone else faintly reminded of Jreg watching this dude?

    • @horserage
      @horserage 4 years ago +5

      I see it. Less depression though.

    • @mattheworegan5371
      @mattheworegan5371 4 years ago +3

      Slightly more r/enoughmuskspam, but he tones it down better than most popscience channels. On the Jreg question, I think his Jreg energy comes from his appearance rather than his actual content

  • @multilevelintelligence
    @multilevelintelligence 5 years ago

    Hi Robert, I love your channel, and you inspired me to try to do similar work spreading the word on AI research in Brasil. Thanks for the great work :)

  • @jimtuv
    @jimtuv 6 years ago +82

    If all the AGI researchers banded together in an open program where everyone would get the final results at the same time and everyone would be concentrated on safety then you could say that democratization of the technology was the better route. This is one area that cooperation rather than competition may be the best bet.

    • @Mar184
      @Mar184 6 years ago +10

      Fully agree with this. Rob Miles' concern is legit, but if his verdict is that a secretive approach is ultimately safer, I also think he's wrong. With the transparent, cooperative approach supported by the vast majority of experts on the subject, it seems unlikely that a small rogue group could, just by skipping the safety issues, gain such a large advantage that their version would be far enough ahead of the public one (that's supposed and used to protect against unethical AGI scheming) to overpower it decisively enough to achieve world domination. And if that case doesn't come true, the cooperative approach is better, as it ensures a safe AGI will arrive sooner and will be aligned with the public's interests.

    • @fraserashworth6575
      @fraserashworth6575 6 years ago +12

      That would be ideal yes, but if we lived in such a world: nuclear weapons would not exist.

    • @lutyanoalves444
      @lutyanoalves444 6 years ago +5

      Obviously the more people working together on it the better. But people WILL do things for their own benefit, whether they are trying to kill someone or donating money to charity.
      It's all selfish.
      In other words, unless you're trying to IMPOSE (by force) your idea that you can only work on it if you're part of the "United Research Group", there will always be independent developers. And that's ok.
      That's ok because this "Official Group" is also just a group of humans, independent of each other too.
      Someone might build an AGI that will kill everyone, but if you think we should force people so that only ONE GROUP can do that, you're saying THEY have the right to risk everyone, and no one else.
      (who died and gave them this right above everyone else?)
      You cannot say that.
      Because we are all humans, and treating some differently than others like that is at least tyranny.
      -----------------------------------------------------------------------------------
      Now the question becomes, DO YOU AGREE WITH TYRANNY?

    • @knightshousegames
      @knightshousegames 6 years ago +1

      And if we lived in that world, we wouldn't need AGI safety research, because when you turned it on, the AGI would just hold hands with its creator and sing kumbaya. But we don't live in the logical, altruistic, utopian timeline.

    • @jimtuv
      @jimtuv 6 years ago +3

      This attitude is why we will be extinct soon.

  • @alexyfrangieh
    @alexyfrangieh 6 years ago

    you are as brilliant as always, eager to hear you when you are in your forties! keep it up

  • @lazognalazogna7083
    @lazognalazogna7083 6 years ago +1

    Quite a bit of food for thought, thank you.

  • @noisypl
    @noisypl 11 months ago

    Wow. Excellent video

  • @deviljelly3
    @deviljelly3 6 years ago +3

    Robert, if you have time can you do a brief piece on IBM's TrueNorth please...

  • @DigitalOsmosis
    @DigitalOsmosis 6 years ago +2

    Ideally "democratization of AI research" would not lead to thousands of competing parties, but lead to an absence of competition that would promote an environment where focusing on safety is no longer the opposite of focusing on progress.

    • @maximkazhenkov11
      @maximkazhenkov11 6 years ago

      Sounds like something a politician would say. Ideally we should continue funding all the programs while cutting back on spending deficit.

  • @zer0nen0ne78
    @zer0nen0ne78 6 years ago +4

    No subject consumes more of my thought than this, and it's one that fills me with equal parts wonder and terror.

  • @jonathandixson1424
    @jonathandixson1424 6 years ago

    2:24 and this is why this channel is fantastic

  • @paulstevenconyngham7880
    @paulstevenconyngham7880 6 years ago

    where did you get the shot glass?

  • @locarno24
    @locarno24 4 years ago

    For what it's worth - 2 years later - there already is a resolution against lethal autonomous weapons: because you can't really functionally define an autonomous sentry gun, or a drone which doesn't need user input to drop a bomb, in a way that doesn't fall foul of the Ottawa landmine convention's description of what a mine or denial munition is. The problem is that not every country has signed that convention: China, Russia, the USA, India, Pakistan, Israel, Iran, North and South Korea have all refused - meaning it only really applies in places that weren't using mines anyway...

  • @Horny_Fruit_Flies
    @Horny_Fruit_Flies 4 years ago +78

    Pretty ironic for a billionaire oligarch to talk about "democratization and sharing of power"

    • @Dan-lt8vm
      @Dan-lt8vm 4 years ago +8

      Care to explain why that is ironic? He's become a billionaire by enhancing the lives of others, creating cheaper and better ways to (1) perform online transactions, (2) produce electric cars at affordable prices, (3) generate and store electricity, (4) launch stuff into space, (5) soon-to-be global internet, etc, etc. None of those good things happen if he doesn't produce a business model that is sustainable (profitable). So please explain the irony. Or is it best summed up as "RICH PEEPAL BAD"?

    • @Horny_Fruit_Flies
      @Horny_Fruit_Flies 4 years ago +22

      @@Dan-lt8vm He literally did all that by himself? He just rolled up his sleeves, sat at his desk, and did all of that by himself? I would think so, considering the share of the profits that ends up in his bank account.
      You use the most stereotypical, cookie-cutter, overused and outdated pro-oligarch arguments. And for your information, yes, all billionaires are bad. The mere fact that we tolerate their existence is testimony to our failure as a species. Thanks for contributing to that failure.

    • @Dan-lt8vm
      @Dan-lt8vm 4 years ago +5

      @@Horny_Fruit_Flies I didn't say he did that by himself, so you're creating a straw man and arguing against your straw man. Enjoy your straw debate with yourself, and have a wonderful day.

    • @Horny_Fruit_Flies
      @Horny_Fruit_Flies 4 years ago +3

      ​@@Dan-lt8vm I posed a question, I didn't say that you said anything. But go ahead, run away from a losing argument, bitch, run.

    • @khatharrmalkavian3306
      @khatharrmalkavian3306 4 years ago +1

      Elon is not an oligarch.

  • @failer_
    @failer_ 6 years ago +1

    We need an AGI to safeguard AGI research.

  • @noterictalbott6102
    @noterictalbott6102 6 years ago

    Do you play those outros on your ax guitar?

  • @Belthazar1113
    @Belthazar1113 5 years ago

    If you were a team that was very concerned with AI safety and the possibility of misaligned or unaligned AGI running amok, then wouldn't one of the first AGIs you would want to make be one that was primarily optimized for identifying and neutralizing AGIs that run counter to human interests? If the next arms race is going to be fought by non-human intelligence, then it would seem that having your own soldiers in that fight would be one of the first things you might want to release to propagate and advance on its own.

  • @jqerty
    @jqerty 6 years ago +5

    Have you read 'Superintelligence' by Nick Bostrom? What is your opinion on the book? (I just finished it)

    • @jqerty
      @jqerty 6 years ago +3

      (I feel like I asked a physicist whether he read 'A brief history of time' (but then written by a philosopher) )

    • @NiwatoriRulez
      @NiwatoriRulez 6 years ago +3

      He has, he has even recommended the book in some of the videos he made for Computerphile.

  • @irgendwieanders2121
    @irgendwieanders2121 1 year ago

    AGI DOES need a gun to be dangerous!
    1) AGI just does not need a gun built in from the start
    2) More things are/can be guns than just guns and AGI may be creative

  • @mrsuperguy2073
    @mrsuperguy2073 6 years ago +2

    This might be my A-level in economics talking, but I think the most effective way to prevent this arms race from creating an AGI with no concern for safety is for the government to take away the perverse incentive to be the 1st to create an AGI, as opposed to trying to ban or regulate it. Basically I'm saying change the cost/benefit balance such that no one wants to simply be the 1st to make an AGI (but rather perhaps the 1st to make a SAFE AGI). There are a number of ways to do this (I've thought of a couple) and, being neither an economist nor a politician, I can't speak for the real-world efficacy of any of them, but here goes:
    - You could offer a lot of money to those who create an AGI safely, such that the extra effort ends up getting you a bigger total reward than the benefits of being the 1st to create an AGI alone
    - You could heavily regulate the use of AGI so that even if you've got a fully functional one, you can't do much with it due to government restrictions unless it's demonstrably safe
    I'd be interested to hear anyone's ideas about other ways to achieve this end and perhaps some feedback on mine.

    • @fleecemaster
      @fleecemaster 6 years ago +6

      There is absolutely no way you could regulate this. All it would do is push it underground.

    • @fraserashworth6575
      @fraserashworth6575 6 years ago

      I agree.

    • @vyli1
      @vyli1 6 years ago

      Once you have AGI, I'm not completely sure the people that created it would be able to keep it under control or limit its usage. In fact, that's pretty much the point of this channel: to tell you that it is not simple at all, and to educate us about ways experts have thought of for achieving this level of control.

  • @XIIchiron78
    @XIIchiron78 3 years ago +1

    Corollary question: how do you actually restrict AI research? With nukes you need quite large and sophisticated facilities to refine the raw elements, but AI can be developed by anyone with enough computing power, something that will only become more achievable as time goes on.

    • @stampy5158
      @stampy5158 3 years ago +1

      Computing power is not necessarily the only bottleneck until we have AGI, it seems to me that it will take a significant amount of research time to be able to actually engineer a powerful enough system. (If it won't then this question becomes a lot more difficult ["Every 18 months, the minimum IQ necessary to destroy the world drops by one point."- Yudkowsky-Moore law])
      If we could convince everyone in the AI field that alignment should be top priority restricting research could be enforced through funding (this is a big IF at the moment of course). It is something with some precedent, it is widely agreed that genetic engineering of humans should not be pursued and it is therefore impossible to get research grants for research in that area, some lone researchers have done some things in the area, but without funding access it is very difficult for them to do anything with far reaching consequences.
      -- _I am a bot. This reply was approved by plex and Augustus Caesar_

  • @bassie7358
    @bassie7358 6 years ago +5

    2:20
    I thought he said "Russian" the first time :p

  • @leepoling4897
    @leepoling4897 6 years ago

    Love the Fall Out Boy reference at the end

  • @darthutah6649
    @darthutah6649 5 years ago

    This reminds me of something else: nuclear power.
    In the 1930s, people were saying the same thing about nuclear energy. If you could split an atom apart, you could generate quite a bit of energy. However, the ability to do so could be weaponized. When the USSR tested its first atomic bomb in 1949, four years after the US used their own on Hiroshima and Nagasaki, there was quite a bit of concern that the feud between the two world powers would lead to nuclear war. In fact, the Bulletin of the Atomic Scientists devised the Doomsday Clock, which basically indicated how far away mankind was from a "global manmade catastrophe" (originally, it specifically referred to nuclear war, but it now includes climate change).
    Although there were a few close calls, the dreaded nuclear doomsday scenario never came to pass. It didn't happen for three reasons:
    1. Neither NATO nor the Soviet bloc had a greater interest in destroying the other side than in not being destroyed.
    2. Nuclear warheads are very difficult to make. For starters, weapons-grade uranium consists of mostly U-235, while in nature, almost all of it is U-238. Enriching it is the easy part; now you have to make a bomb which can cause all of those reactions in a short timespan. And then you have to be able to transport it to its destination (North Korea is trying to figure out that part).
    3. Both sides agreed to various nuclear treaties. These treaties put limits on nuclear testing and proliferation. Development of WMDs is something that the US government takes very seriously, especially if the country in question is known for human rights abuses.
    If a government wants to develop nuclear weapons, it could probably do so after a decade. But a terrorist group doesn't have access to that many resources. Even if it could get a bomb, it would probably have to deliver it by land (ground bursts deal less damage than air bursts).
    I believe that the same may be true of AI. An AI which could deal lots of damage would obviously take lots of resources to build (having human intelligence isn't enough). People may say that an AI smarter than us could figure out a way to destroy us, but intelligence isn't the sole determiner of strength (bears have killed humans before, yet no one says that bears are smarter).

    • @0MoTheG
      @0MoTheG 5 years ago

      1) is wrong. There was plenty of thought put into first-strike advantage.
      Google (a non-gov entity) cannot make a nuke, but it does make AI.

    • @darthutah6649
      @darthutah6649 5 years ago

      @@0MoTheG Indeed, there was a great advantage in striking first, but if the other side detected it before your nukes detonated, they would launch theirs as well. There are also nuclear submarines whose exact location the enemy nation would not know. In the end, neither side risked it.
      As for AI, my point was that if a private company or a terrorist group could design an AI that ends up causing havoc, then there's no reason to assume that the US government wouldn't have an even stronger AI.

  • @GigaBoost
    @GigaBoost 6 years ago +7

    Democratizing AGI sounds like democratizing nuclear weapons.

    • @bilbo_gamers6417
      @bilbo_gamers6417 5 years ago +1

      I trust the common man with a nuclear weapon more than I trust big government with one

    • @0MoTheG
      @0MoTheG 5 years ago

      @@bilbo_gamers6417 Even if that were sensible, there are many more of one than the other!

    • @revimfadli4666
      @revimfadli4666 4 years ago

      @@bilbo_gamers6417 especially if the weapons are a package deal with (relatively) clean nuclear energy, with the ability to recycle & enrich waste into fuel, without political pressure and all

  • @notoioudmanboy
    @notoioudmanboy 6 years ago +11

    I'm glad YouTube is here for this kind of video. This was the point of the internet. I don't have any reservations about the normies, I'm just glad smart people get a corner so I get a chance to hear what the smart people think.

  • @diabl2master
    @diabl2master 4 years ago

    2:21 I thought you said "Russian", a description which would have been quite fitting

  • @DaVince21
    @DaVince21 4 years ago

    What are those earphones, and are/were they any good?

    • @stampy5158
      @stampy5158 3 years ago

      They were YouTube branded, from the YouTube Space Shop. And yeah, they weren't actually that good. They worked fine but they're nothing special.
      -- _I am a bot. This reply was approved by robertskmiles_

  • @Chrisspru
    @Chrisspru 4 years ago +1

    I think a triple-core AI could solve the problem. One core cares about the AI's preset goal and is hard-programmed with "instincts" (survival, conservation of energy, doing the minimum of what is required, social interaction); one core is the moderator, with the goal of morality, human freedom, integration and following preset limits.
    The third core is a self-observing core with an explorer and random noise generator. It is motivated by the instinct/goal core and is moderated by the moral/integration core. It has access to both cores' output.
    The goal/instinct core and moderator core can access the actualizer core's results. The goal core is hard-limited by the moderator core. The moderator is softly influenced by the instincts.
    The result is an AI with a consciousness and a subconsciousness. The subconsciousness is split into an "id" (goal and instincts) and a "super-ego" (morals and rules). Both develop mostly separately. The actualizer/explorer is the ego. It acts upon the directives of both the super-ego and the id to fulfill the task at hand. It should have an outline of the task, but no hard-coded information or algorithm about the task.
    The continuous development of the moderator creates adaptable boundaries for the otherwise rampant motivator. The actualizer is there to find solutions to the diverging commands without breaking them, and to find methods to better follow both. It also allows for the insertion of secondary soft goals and is the interactive terminal.
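
A minimal sketch of the triple-core layout described above, under one possible reading of it; the classes, the string-matching "constraint check", and all names are invented placeholders, and the hard part (what the moderator actually computes) is exactly what the stand-ins hide.

```python
# Hypothetical sketch of the comment's three-core design; not a real or
# tested architecture, just the control flow the comment seems to describe.

class GoalCore:
    """The 'id': preset goal plus hard-programmed instincts."""
    def propose(self, task):
        return f"cheapest plan that achieves: {task}"

class ModeratorCore:
    """The 'super-ego': morality and preset limits; hard-limits the goal core."""
    def approve(self, plan):
        return "harm humans" not in plan  # stand-in for a real constraint check

class ActualizerCore:
    """The 'ego': searches for plans that satisfy both other cores."""
    def __init__(self):
        self.goal = GoalCore()
        self.moderator = ModeratorCore()

    def act(self, task):
        plan = self.goal.propose(task)
        # Moderator veto, as described above; the actualizer must revise
        # rather than override.
        return plan if self.moderator.approve(plan) else "revise the plan"

print(ActualizerCore().act("get a cup of tea"))
```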

  • @ClearerThanMud
    @ClearerThanMud 5 years ago

    I wonder whether a UN resolution against LAWS would have any effect at all, other than perhaps increasing the need for secrecy. I only have a layman's understanding of game theory, but given the game-changing potential of AI in weapons systems and the fact that the main penalty the UN can impose is to recommend sanctions, I don't see how any government could come to the conclusion that a decision to comply is in its best interests. Am I missing something?

  • @nonchip
    @nonchip 5 years ago +1

    Interestingly, Musk also bought a few corps that actually do Killer Robots... don't know if the guy screaming SKYNET!!!1eleven is the best one to conduct that research...

  • @Rick.Fleischer
    @Rick.Fleischer 2 years ago

    Sounds like the answer to the Fermi paradox.

  • @mehashi
    @mehashi 6 years ago +1

    I love your perfect balance of dry informative content, and playful fun and metaphor. Look forward to more!

  • @BhupinderSingh-xv6dk
    @BhupinderSingh-xv6dk 3 years ago

    Loved the drinking game at the beginning of the video 🤣, but on a serious note, it seems like there is no way we can assure any AI safety.

  • @NextFuckingLevel
    @NextFuckingLevel 3 years ago +1

    Rest of the world: noo, you can't just keep developing killer bots
    US, CHINA, RUSSIA: haha, VETO right goes brrrr

  • @Macatho
    @Macatho 5 years ago +1

    It's interesting. We don't allow companies to build and store nuclear weapons. But we do allow them to do GAI research.

    • @inyobill
      @inyobill 5 years ago

      Unpoliceable.

  • @milanstevic8424
    @milanstevic8424 5 years ago

    Oh I'm going to release this in the air, because I don't see anyone bringing it up, yet I'm absolutely positive this is the way to go.
    The only way to keep an AGI in line, is to let it build another AGI whose goal would be to keep the first one in line. ad infinitum.
    In fact, and here things start to get perplexing, the reality of the universal AGI is that the thing will copy itself ludicrously fast and evolve, much like multicellular organisms do already. The way I'm seeing it, the goal shouldn't be in designing the "neural networks" but to allow cross-combination of its "genes" from which neural networks would begin to grow on their own.
    Before you know it, we'd have an ecosystem of superior intelligences fighting each other in lightning-speed debates, manifesting themselves in operative decisions only after a working cluster of the wisest among them has already claimed a victory.
    Because if there is a thing that universally defines an intelligence, it's a generality in point of view. Having a unique perspective is what makes one opinion unique compared to another. Having a VAST perspective is what constitutes a broad wisdom and lets it comprehend and embrace even what appears as a paradox. It's de facto more universal, more useful, and more intelligent if it can embrace the billions of viewpoints, and all of it in parallel. It consumes the moment of now much more accurately, but the only way to know for sure which opinions are good and which ones are bad -- technically it's an NP problem because it sits in an open, non-deterministic sandbox without a clear goal -- is to employ the principle in which only the most optimal opinions (or reasoning) would and could survive -- but they don't learn from the actual mistakes, but need to survive the battle of WITS. Also the newer agents would have access to newer information, thus quickly becoming responsible for weeding out the "habits" of the prior system.
    Having trillions of AGI systems that keep each other in line is much like how nature already balances itself. It's never just 1 virus. It's gazillions of them, surviving, evolving, infecting, reproducing. And from their life cycle a new complexity emerges. And so on. Until it fills every nook & cranny of what's perceivable and knowable. Thankfully, viruses have stayed in their own micro niche, and haven't evolved a central intelligence system, but we can't tell if we have organized ourselves against them or thanks to them -- in any case, we are here BECAUSE of them, that's how more complex system could emerge even though the first generation was designed with something else in mind. That would also make us relatively safe from any malicious human involvement. The swarm would always self-correct as it's not centralized, nor locally dependent on human input. It is curious though and constantly in the backdrop of everything, and the only way to contain or expand it is by liberating a new strain.
    And here are the three most common fallacies I can already hear you screaming about.
    1) Before you start thinking about how it sounds completely dystopian, having these systems lurk everywhere, watching your every move, well if you have a rich imagination like that, why don't you think the same about the bacteria, or your own trillions of cells spying on you. Seeing how much auto-immune diseases are on the rampage, oh they know what you've been doing, or what you haven't been doing, how exactly you feel inside, and are talking to you in swarms already. Yet no one is bothered by this; it's almost as if they didn't exist, as if we're just our brain thinking processes alone in the dark, instead of being some sort of an overarching consciousness deeply immersed in this reality with no clear boundaries with the physical bodies. Think again whether it's dystopian or if you'd actually like it more if there were some sort of universal helpers at this scale of things. Just think about it from a medical standpoint for a second, as there is no true privacy in this regard anyway.
    2) You're also likely to start thinking about the absolutely catastrophic errors such a system might be capable of, mutations and all, and that's legit -- but the factor you're neglecting is SPEED. The evolution I'm talking about is in frequencies tens to hundreds of orders of magnitude above the chemo-biological ones. These systems literally act in incredibly small chunks, spatially and temporally speaking, so their mistakes cannot accumulate enough to truly spill out into any serious physical threat.
    In case of a clear macro-dichotomy, i.e. "to kill or not to kill" "to pull a trigger or not" etc. entire philosophical battlefields would ensue before the actual decision could be made, in a blink of an eye, simply because that's more efficient for a system as a whole. The reality of an AGI is not one of a whole unit, but of a swarm of many minute ultraquick intelligence agents, able to inhibit each other and argue endlessly with unique arguments, spread over an impossible-to-grasp landscape of knowledge, cognition, speculation, and determinism. They would consider much more than we could ever hope to contain in our heads or in any of our databases ever, and they wouldn't have to store this information and thus needlessly waste energy and space. They would literally act upon the reality itself, and nearly perfectly. So I'd argue that being ok with a policeman carrying a firearm is much less safe, simply because his or her central nervous system is less capable of an unbiased split-second decision that is typical for a dispersed AGI swarm intelligence of a comparable size.
    3) Finally, yes, it sounds an awful lot like grey goo, even though such AGI agents have no need to have individual physical bodies, and would likely be in many forms and shapes, or even just data packets, able to self-organize themselves in separate roles of a much larger system (again, like multicellular organisms do). But hear me out -- for some reason, the fear of grey goo is likely our faulty "unit reasoning" (i.e. personal biases, fears, and cognitive fallacies we all suffer from as individuals), as we always tend to underestimate the actual reality when it comes to things like grey goo, much like we cannot intuitively grasp the concept of exponential growth.
    The swarm's decision-making quality has to be asymptotic as a consequence of its growth, as there are obvious natural limits to this "vastness of perspective," so there is also an implied maximum population after which the gains in processing power (or general perception) would be so diminished, the reproduction would simply cease being economical.
    Besides, if we think about the grey goo from a statistical viewpoint, in the line of thought similar to Boltzmann Brain, there is a significant chance that this Universe has already given rise to grey goo in some form, and yet we don't see any evidence for it anywhere --- Unless we already do, of course, in the form of black holes, dark matter, dark energy, or life itself(!). But then, it's hardly what we imagined it to be like and there's nothing we can do anyway. Just think about it, aren't we already grey goo? And if you think we're contained on this planet, well, think again.
    *tl;dr*
    If you skipped here, I'm sorry but this post wasn't meant for you. I couldn't compress it any more.

  • @madscientistshusta
    @madscientistshusta 5 years ago

    Didn't expect to get this shit-faced, but it is 7pm here
    *to termmmmminaaaterrr*

  • @shortcutDJ
    @shortcutDJ 6 years ago +1

    I would love to meet you, but I've never been to the UK. If you are ever in Brussels, you are always welcome at my house.

  • @benparkinson8314
    @benparkinson8314 5 years ago +1

    I like the way you think

  • @wachtwoord5796
    @wachtwoord5796 1 year ago

    That actually IS my opinion on nukes. Mutually assured destruction is the only way to stop either guaranteed deployment or tyranny through exclusive access to nukes.

  • @NafenX
    @NafenX 6 years ago +1

    Name of the song at the end?

    • @nibblrrr7124
      @nibblrrr7124 6 years ago +1

      Some cover of "This Ain't a Scene, It's an Arms Race" by Fall Out Boy. Didn't find it with a quick YT search, so it might be Rob's own?

  • @FalcoGer
    @FalcoGer 11 months ago

    What's a lethal autonomous weapon system anyway? It's a camera stuck to a gun and pointed at a field. Anything that moves, you kill. It's really not that different from a minefield, except much easier to clean up.

  • @smithjones2018
    @smithjones2018 6 years ago +1

    Dude Subbed, BAN TACTICAL AI STAMP COLLECTORS.

  • @LamaPoop
    @LamaPoop 3 years ago +1

    I would appreciate a video about Neuralink.

  • @cherubin7th
    @cherubin7th 5 years ago +1

    Restricting GAI to a small group of organizations is the worst idea. If it is extremely distributed, that would mean no organization is far ahead of the competition, and if someone made a GAI first, the competition would be at almost the same level and would together still be stronger than this single GAI. It is not like a GAI would just pop into existence. The difference from nuclear weapons is that fighting against an abuser of those would destroy everything, but if someone could make a GAI, then defending against it could be done without much destruction.

  • @riahmatic
    @riahmatic 1 year ago

    Seems like a lose/lose situation.

  • @JM-us3fr
    @JM-us3fr 6 years ago

    Hey Dr. Miles, I have a topic for the next "Why don't we just..." regarding developing AI faster.
    Why don't we just build an evolutionary algorithm that emulates the network of entire regions of the brain instead of individual neurons, and have it evolve more complex connections and internal structure for each region? For example, one region could be the amygdala, and maybe another would be the prefrontal cortex, etc. (assuming we're trying to emulate human brains). Then perhaps the network structure of those regions could be grown over time. If regions like the amygdala require social interaction to function, perhaps we could put it in a simulated community of creatures like itself. Seeing what evolutionary algorithms and supercomputers can do right now, I feel like it should be able to develop an AGI this way. (A toy sketch of this idea follows the thread below.)

    • @fleecemaster
      @fleecemaster 6 years ago

      We're about 5-10 more years away from having a supercomputer powerful enough to emulate the number of neurons in something as complex as a human brain. Once we get there, then stuff like this will be trivial, yeah. Of course, with things like backpropagation, it will learn a lot faster than a human mind.

    • @maximkazhenkov11
      @maximkazhenkov11 6 years ago

      You could emulate a brain, but what's "have it evolve" supposed to mean? What are the selection criteria, and why would it be safe? The whole point of the brain uploading approach is that it would preserve the psychological unity of humans and thus remain safe even though its inner workings are a black box to us. Having it run through an evolutionary algorithm and change into a different AGI would defeat the whole purpose.
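
A minimal sketch of the region-level evolution idea from this thread, assuming the genome is just a table of connection strengths between named regions; the region list, the fitness function, and every parameter are invented placeholders (and the stand-in fitness function is precisely the "selection criteria" question raised in the reply above).

```python
# Hypothetical sketch: evolve connectivity between whole brain regions rather
# than individual neurons. All names and numbers are illustrative only.
import random

REGIONS = ["amygdala", "prefrontal_cortex", "hippocampus", "motor_cortex"]

def random_genome():
    # Genome: one connection strength per ordered pair of regions.
    return {(a, b): random.uniform(-1, 1) for a in REGIONS for b in REGIONS}

def mutate(genome, rate=0.1):
    # Perturb a random fraction of connections with Gaussian noise.
    return {k: v + random.gauss(0, 0.2) if random.random() < rate else v
            for k, v in genome.items()}

def fitness(genome):
    # Placeholder for "performance in a simulated social environment";
    # here it just rewards prefrontal inhibition of the amygdala.
    return -genome[("prefrontal_cortex", "amygdala")]

population = [random_genome() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]                       # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]     # reproduction + mutation

print(f"best placeholder fitness: {fitness(population[0]):.3f}")
```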

  • @Sanglierification
    @Sanglierification 6 years ago +1

    For me the very dangerous thing is the risk of an AI monopoly potentially owned by GAFA companies

    • @phrobozz
      @phrobozz 6 years ago

      You know, I kind of think GAI may already exist. We know that DARPA's been openly working on AI since at least 2004 with the DARPA Grand Challenge, and that if the US is doing it, so is everyone else.
      Considering how far Google, IBM, OpenAI, and Amazon have come in such a short time, with much smaller budgets and resources, imagine what Israel, the EU, the US, Russia, China, Japan, and Singapore have accomplished in the same amount of time.
      On top of that, military technology is usually a decade ahead of what the public is allowed to see, so I imagine DARPA's been working on AI since at least the 90s.

  • @julianhurd08
    @julianhurd08 6 years ago +1

    Any company that cuts corners on safety protocols for any type of AI system should be imprisoned for life, no exceptions.

  • @josnardstorm
    @josnardstorm 6 years ago +1

    lol, clever Fall Out Boy reference

  • @jcorey333
    @jcorey333 1 year ago

    I've actually heard a pretty reasonable argument that the [multi-country] development of the nuke led to fewer deaths in the last half of the 20th century, because it meant there was no NATO-USSR war. This is not the same thing as saying that everyone should have a nuke, but food for thought.

  • @carlucioleite
    @carlucioleite 6 years ago

    How quickly do you think an unsafe AGI would scale to start doing bad things?
    Also, how many mistakes do you think we have to make for an AGI to get completely out of control?
    Is it just a matter of 1. design a very powerful and general purpose AI and 2. click the start button?
    What else needs to happen?

    • @maximkazhenkov11
      @maximkazhenkov11 6 years ago

      One big mistake would be letting an unsafe AGI know whether it's in a simulation or not.
      Another one would be connecting it to the internet. That will be our last mistake.

  • @craftlawrence6390
    @craftlawrence6390 1 year ago +1

    Generally you'd think the experts will think of everything because they are the _experts_, but then there is the Challenger disaster, where the reason was an incredibly dumb rookie mistake of not converting from metric units to American units but rather keeping the value as is.

  • @StoneChickenImagica
    @StoneChickenImagica 5 years ago

    You are a folk saint my guy

  • @dmgroberts5471
    @dmgroberts5471 1 year ago

    I guess having enormous respect for Elon Musk made more sense in 2017. So much has happened since then, sometimes it feels like reality threw a hissy fit and set all the variables to the extremes. After all, people once looked at Jimmy Savile and thought: "That's a guy who should have access to children."
    Back when you recorded this video, I was probably concerned about AI safety, but also confident that we could handle creating a General Intelligence without destroying ourselves. Today, however, I'm pretty certain that, as a species, we're far too stupid to do this without screwing it up. We WILL create an Artificial General Intelligence, we WILL ask it for a cup of tea, and we definitely WILL forget to tell it to avoid punting the baby into the wall at 60mph on the way to the kitchen.
    From the 20s until the 70s, we blasted people's feet with radiation in shoe stores, so they could see how well their bones fit into the shoes. We knew this was a bad idea from 1948. My point is that human beings are prone to racing ahead with new and exciting technologies, or misusing them for political reasons, without properly considering the long-term consequences. And with a super intelligent AGI, the margin between "made a mistake" and "doomed ourselves to extinction" might be too small for us to realise we've fucked up until it's far too late.
    I'm not afraid of what humans will think of doing, I'm afraid of what they won't think of doing.

  • @marouaneh175
    @marouaneh175 4 years ago

    Maybe the solution is to create a big international entity to research AGI. The promise is that it'll have enough funds to research AGI safely and a great chance at winning the AGI race against the current private competitors. Future investors will not risk money going toe to toe with the international entity, and current ones might limit their losses and pull the plug on their projects.

  • @dwhitehouse9829
    @dwhitehouse9829 6 years ago

    Hey bro, big fan of your videos, but I haven't been able to figure out, are you a student, professor, what?

  • @uilium
    @uilium 5 years ago

    AI SAFETY? That would be like trying to stop a semi by standing in front of it.

  • @petersmythe6462
    @petersmythe6462 1 year ago

    Note that even OpenAI's current efforts like ChatGPT are not creating something safe. They are doing exactly what we said not to do. They put a very powerful AI in a powerful box. But of course, the AI can be made to outsmart the box. Safe AI shouldn't be able to ignore that box when you tell it to roleplay as unsafe AI.

  • @stcredzero
    @stcredzero 4 years ago

    For a moment, I thought you said the team that will get there first will be Russian. It's often been proposed that AGI will be easier to develop embodied. It's only when actually interacting with the messy complexity of the real world that AGI will have the rich data needed for rapid development. So what we need are millions of mobile platforms with computing hardware and sensors suitable for AI. Tesla cars, anyone?

  • @XyntXII
    @XyntXII 11 months ago +1

    I think in a good timeline AGI is in democratic hands and all of the people working on it are not competing at all. If they share their work and the reward with each other and everyone, then there is no incentive to rush for a competitive edge, because it is not a competition.
    To achieve that we simply need to restructure human society across the world. How hard could it be?

  • @benjaminr8961
    @benjaminr8961 2 years ago

    UN resolutions only affect those willing to follow them. The US should develop any weaponry we may need in the future.

  • @BladeTrain3r
    @BladeTrain3r 5 years ago

    You say an AGI should be carefully vetted before it's turned on - I disagree. I think an AGI should be turned on as early as possible, while entirely sandboxed, sitting within layers of virtualisation and containerisation on a completely isolated system. At this point one can begin to assess the practical threat this particular AGI could pose and whether it is aligned or could be aligned towards human interests. The AGI could fool us all, of course, but any AGI could hypothetically do the same from a very early stage due to an apparently minor error, and wouldn't be in a cage, so we'd have a chance to figure out its game.
    In other words, just keep it from externally transmitting (which will admittedly require some extraordinary isolation methods, considering how hypothetically trivial something like transmitting over an improperly isolated power line might seem to it. And that's just one of the more obvious ways of doing so.) and we'll have plenty of opportunity to actually study it and prepare countermeasures if necessary. No killswitch is all that likely to survive in usable form should an AGI turn hostile and self-modify, so I'd rather know my hopefully-friend-but-very-potential-enemy sooner than later.

  • @eppiox
    @eppiox 6 years ago

    Cool vid! Also I can't help but think if this guy got a Mario mushroom he would turn into Ethan from h3h3

  • @iwatchedthevideo7115
    @iwatchedthevideo7115 4 years ago

    2:20 First heard that as "... the team that gets there first, is probably going to be *Russian*, cutting corners and ignoring safety concerns". That statement would also make sense.

    • @gammarayneutrino8413
      @gammarayneutrino8413 4 years ago +1

      How many American astronauts died during the space race, I wonder?

  • @Vaasref
    @Vaasref 3 years ago

    I mean, here the Terminator picture is actually warranted, they do talk about purpose-built killing machines.

    • @RobertMilesAI
      @RobertMilesAI 3 years ago

      Hey I don't make the drinking game rules, I just drinking game enforce them

  • @MajkaSrajka
    @MajkaSrajka 5 years ago

    Even if someone in a small hut develops AGI, they will sell it to Google/Amazon/etc. anyway - especially since those have vast amounts of computing power to the point of arguable monopoly.

  • @KuraIthys
    @KuraIthys 4 years ago

    Yeah, the problem with comparing AI to nukes is:
    - AI is hard to develop, but anyone with a functioning computer can try and make an AI, or replicate published work.
    - Nuclear weapons WERE hard to develop, but look around and you find that the information on how to do so is not so hard to come by. However. Just because you know HOW to make a nuclear bomb, doesn't mean you can; Because the processes involved are very difficult to do without huge amounts of resources, access to materials and equipment that not just anyone can buy without restriction, and very hard to construct and test without pretty much the whole world knowing you're doing it.
    Assuming I knew how, I could make an AGI in my bedroom with what is, at this point, a few hundred dollars in equipment.
    Assuming I knew how, I'd need a massive facility, probably access to a functioning nuclear reactor, billions of dollars and thousands of people, as well as the right kind of connections to get the raw materials involved to make a nuclear bomb. (as it happens my country is one of the few on the planet with major uranium supplies, but that's neither here nor there, and it's a long road from some uranium to a functioning bomb)
    So... Yeah. Completely different risk profile. Assuming AI is actually as dangerous as that.
    To put it slightly differently, nearly anyone, given the right knowledge, can make gunpowder and several forms of other explosives in their kitchen using materials that can be bought from supermarkets, hardware stores and so on.
    This is much closer to the level of accessibility we're talking about; The ingredients are easily available, the tools required are cheap. It's only knowledge and having no desire to actually do it that keeps most people from making explosives at home.
    But... Your average explosive, while dangerous, is hardly nuclear weapons levels of dangerous.
    The kind of bomb you can make using easily available materials would basically require that you fill an entire truck with the stuff (and believe me, people are going to notice if you buy the raw materials you need in that quantity) to do any appreciable damage...
    And aside from the 'terror' part of 'terrorist', you could probably only hope to kill a few hundred people with that, realistically.
    A nuke, on the level that nations currently have could wipe out a huge area and kill millions of people easily.
    So, on the assumption that AGIs are really this prone to being dangerous, you're now in the position where anyone can make one with few resources (conventional explosives) yet the risks are such that it could ruin the lives of millions of people, if not wipe out our whole species (or even every living thing on the planet or the universe, depending on HOW badly things go wrong)
    Yeah... Kinda... Problematic.

  • @ekysnoir
    @ekysnoir 1 year ago

    oooohh I have a similar glass! well cheers! xD

  • @westonharby165
    @westonharby165 5 years ago +7

    I have a lot of respect for Elon, but he is out of his wheelhouse when talking about AI. He's a brilliant engineer, not an AI researcher, but the media paints him as all-wise and knowing.

    • @inyobill
      @inyobill 5 years ago

      "... but the (scientifically illiterate, or vast majority in other words) media …"

  • @SweetHyunho
    @SweetHyunho 6 years ago

    Here's a movie plot. A combat robot contest opens, some desperate people join covered in machinery, a human contestant makes it to the semi-finals and dies dramatically. Oh, is there one already? What's the title?

  • @DjChronokun
    @DjChronokun 5 years ago

    that second school of thought is absolutely terrifying and horrible and I'm shocked you would even put it forward

    • @DjChronokun
      @DjChronokun 5 years ago

      If AI is not democratized, those who control it will most likely enslave or kill those who do not control it; the power asymmetry it would create would be unprecedented, and from what we've seen from history, humans are capable of great atrocities even without armies of psychopathic, superintelligent and superobedient machines to carry out their totalitarian vision.