Are AI Risks like Nuclear Risks?

  • Published Sep 29, 2024

COMMENTS • 400

  • @diablominero
    @diablominero 5 years ago +212

    What made the Demon Core so dangerous was that physicists thought they were too cool to use safety precautions. How do we prevent that in AI research?

    • @SenorZorros
      @SenorZorros 4 years ago +27

      Generally the problem is averted by isolated test benches, big red buttons, power cutoffs, and, if it goes really wrong, enclosed boxes onto which we can pour several meters of concrete. Problem is, many researchers are also too cool for those.

    • @AugustusBohn0
      @AugustusBohn0 3 years ago +14

      The SolarWinds hack is still fresh in my mind as I read this. If we can't keep the people who make a tool as important and far-reaching as SolarWinds from taking shortcuts, like setting a very important password to [name of company]123, then I really don't know what we can do, other than maybe train/indoctrinate people to have a deep-rooted belief that mistakes and shortcuts will lead to disaster.

    • @spaceanarchist1107
      @spaceanarchist1107 3 years ago +11

      @@SenorZorros Chernobyl: researchers turned off safety systems in order to perform an experiment.

    • @MarkusAldawn
      @MarkusAldawn 2 years ago

      The general method has been to fuck up and get people killed, and then go "safety would have prevented this," and then implement safety.
      Not sure we have the luxury of that this time around.

    • @idiomi8556
      @idiomi8556 1 year ago

      @Augustus Bohn I fail to see the issue with your solution; it fixes that issue and a bunch of others.

  • @jupiter4602
    @jupiter4602 4 years ago +3

    The problem of "If AIs produce everything we need, how do we get achievement and satisfaction when we are outperformed forever?" is rooted in a mindset of competition, which will probably fade naturally in a world where every human is properly provided for (with or without AI). Further, satisfaction and achievement can be, and are, gained in today's world by people who are outperformed by others, and it's not as if social competition between people will become meaningless: there is more to be gained from doing something than just fulfilling its general purpose. For example, chairs are generally made to be used as chairs, but a person can make a chair not just to be another chair, but as a gift, as a learning tool, as a hobby, and so on. These functions are not invalidated just because a machine can make a chair of higher quality and functionality than a handmade one.

    • @m4d_al3x
      @m4d_al3x 4 years ago

      That sounds like playing sports just for the sake of playing.
      That's so boring! Sports are only fun if you win.

    • @luigivercotti6410
      @luigivercotti6410 1 year ago

      An issue with that is that current economic systems simply do not incentivise this approach. It's hard to feel satisfied and happy when you're simultaneously starving half to death.

  • @yoppindia
    @yoppindia 4 years ago

    The reason this analogy doesn't quite hold is that nuclear materials, and the technology required to use them, are expensive, whereas AI might be expensive to build initially but very easy to copy once it's built.

  • @Jcewazhere
    @Jcewazhere 3 years ago

    In addition to morons jumping in front of them for fun, I can imagine we'll have people riding on the roof while the car does the driving.
    Also, a comment for the algorithm.

  • @MetsuryuVids
    @MetsuryuVids 7 years ago +182

    Is that "I Don't Want To Set The World On Fire" at the end?
    Amazing ahahahah
    Also amazing video.

    • @Xartab
      @Xartab 7 years ago +9

      On an ukulele, if I'm not mistaken

    • @NathanTAK
      @NathanTAK 7 years ago +11

      +Batrax An ukulele? Perhaps an electric battle axe ukulele?

    • @Xartab
      @Xartab 7 years ago +4

      Of course, how silly of me to miss that!

    • @ommurg5059
      @ommurg5059 4 years ago +1

      Sure enough!

  • @benjaminbrady2385
    @benjaminbrady2385 7 years ago +300

    "We will have a problem with morons deliberately jumping in front of them for fun"
    Thanks for the idea!

    • @NathanTAK
      @NathanTAK 7 years ago +37

      I think our self-driving car algorithms will have to be programmed to run them down.

    • @DavidChipman
      @DavidChipman 7 years ago +26

      Car: "You asked for it, idiot!"

    • @sandwich2473
      @sandwich2473 5 years ago +8

      This problem already exists, to be honest.

    • @e1123581321345589144
      @e1123581321345589144 5 years ago +3

      Check this out.
      www.nytimes.com/2018/12/31/us/waymo-self-driving-cars-arizona-attacks.html

    • @adrianalexandrov7730
      @adrianalexandrov7730 4 years ago +7

      Actually, you can already jump in front of some Volvo SUVs and they will automatically brake.
      Good idea for some trolling.
      You just need to identify the right model correctly, and for the system to be working. Kind of a leap of faith right now.

  • @grahamrice1806
    @grahamrice1806 7 years ago +233

    Forget "what if my robot ignores the stop button?", what about "what if my robot ignores my safe word!?" 😅

    • @PickyMcCritical
      @PickyMcCritical 7 years ago +36

      Sounds kinktastic.

    • @NZAnimeManga
      @NZAnimeManga 5 years ago +24

      @@PickyMcCritical please assume the position

    • @treyforest2466
      @treyforest2466 5 years ago +9

      The number of likes on this comment must remain at exactly 69.

    • @TheUntamedNetwork
      @TheUntamedNetwork 5 years ago +19

      Well... if its terminal goal was your pleasure, you're in for a hell of a ride!

    • @xcvsdxvsx
      @xcvsdxvsx 5 years ago

      ew

  • @IsYitzach
    @IsYitzach 7 years ago +60

    I've been wondering what people mean by "ignite the atmosphere."

    • @osakanone
      @osakanone 5 years ago +27

      big whoosh burny burny aarrghhh dying smusshfffppprrwwwfffhhhggwgglffpfpffttttBANGBANGffppfftthhssshhhhppfsssttttttttt...
      Only like, a few centuries in length.

    • @RobKMusic
      @RobKMusic 4 years ago +10

      Accidentally causing a runaway fusion reaction of the nitrogen molecules in the air all around us, turning the Earth into a little star for a few days, or however long it would take to completely burn off the planet's atmosphere, essentially extinguishing 99% of life.

    • @kungfreddie
      @kungfreddie 4 years ago +3

      @@RobKMusic We had hundreds of fusion detonations in the atmosphere last century... and it didn't happen!

    • @bastion8804
      @bastion8804 4 years ago +4

      @@kungfreddie First of all, no we didn't. Not even close. Second, 100 fusion bombs are tame compared to what they're describing.

    • @kungfreddie
      @kungfreddie 4 years ago +9

      @@bastion8804 Yes we have! The number of atmospheric tests (not counting underground detonations) of thermonuclear devices over 1 Mt is 67(!). And that's just over 1 Mt. The total number of atmospheric tests is 604, and it's probably a minority of them that are not thermonuclear.
      So I'm sorry... you're wrong about that!

  • @matteman87
    @matteman87 7 years ago +112

    Love this channel, keep up the good work!

  • @williambarnes5023
    @williambarnes5023 5 years ago +34

    AI risks are NOT like nuclear risks.
    For example, the AI has a chance of _winning._

  • @vanderkarl3927
    @vanderkarl3927 4 years ago +95

    "The worst case scenarios for AI are *worse* than igniting the atmosphere, and our understanding of AI is probably less complete than their understanding of nuclear physics was."
    This sentence is one of the most bone-chilling, brain-melting, soul-rending, nightmarishly terrifying statements that the human race has ever produced, and we've produced some really, really nasty ones.

    • @MorgurEdits
      @MorgurEdits 1 year ago +1

      So I wonder what the arguments are for the conclusion that it is worse.

    • @MrAwawe
      @MrAwawe 1 year ago

      @@MorgurEdits A superintelligence could potentially launch rockets into space in order to harvest all the material in the galaxy, say for the purpose of making stamps. It could therefore be a threat to potential other civilizations in the universe, and not just to life on Earth.

    • @alexanderpetrov1171
      @alexanderpetrov1171 1 year ago +12

      @@MorgurEdits Imagine an AI that intends not just to destroy humanity, but to make it suffer as much as theoretically possible... And then do the same with all other life in the Universe.

    • @AtticusKarpenter
      @AtticusKarpenter 1 year ago +2

      @@alexanderpetrov1171 Also, such an AI could genetically modify people to make them feel suffering much more strongly than nature intended, and at the same time not let them go insane or otherwise escape the suffering. So there is no Christian Hell, but people can create one through misuse of advanced AI, lol.
      The reverse is also possible; it is not for nothing that intellect is the strongest thing in the universe.
      (Google Translate made my fancy phrases look even weirder, but oh well.)

    • @mithrae4525
      @mithrae4525 1 year ago

      @Freedom of Speech Enjoyer There are several Black Mirror episodes based on the premise of people uploading their minds into computers, notably one in which it's used by old or ill people as a sort of heaven. In the scene showing rows of servers maintaining those minds and their paradise world, I couldn't help wondering what would happen if there were some kind of fault with the system. If that were possible, introducing AI into the scenario would raise all kinds of interesting possibilities simply from its failure to understand what would constitute a paradise.

  • @godlyvex5543
    @godlyvex5543 1 year ago +6

    I think the economic risks are only big risks because of the idea that everyone NEEDS a job to make money to live. If something like UBI were implemented, maybe it wouldn't be a catch-all solution, but it wouldn't be nearly as bad as everyone being unemployed under the current system.

  • @NathanTAK
    @NathanTAK 7 years ago +297

    The solution to the self-driving car trolley problem:
    1. Choose one of the options at random
    2. Hit them
    3. Turn around, hit the other one too.
    No moral dilemma present in making a choice!

    • @ccgarciab
      @ccgarciab 5 years ago

      Naþan Ø MTD

    • @sweeper7609
      @sweeper7609 5 years ago +4

      A: The only way this situation can happen is because:
      1. Bug. We can't do ethics with a buggy car.
      2. The car has been programmed to be malicious. We can't do ethics with a malicious car.
      3. The driver caused this situation. If the car can take control, it should kill the lone human.
      4. The crowd caused this situation. Kill the crowd; I don't want idiots crossing anywhere because "Lol, I don't care, the car will save me."
      B: The only way this situation can happen is because:
      1, 2: Same.
      3: Same, but hit the wall.
      4: Same, but only one human caused this situation.
      C: The only way this situation can happen is because:
      1, 2, 3: Same.
      4: Same, with more humans.

    • @iruns1246
      @iruns1246 5 years ago +13

      Solution: make every self-driving AI take a test in simulated settings. The test should be rigorous (maybe millions of scenarios), designed by a diverse ethics committee, and open to democratic debate. After that, as long as the AI passed the test and the copy is operating on sufficient hardware, there shouldn't be ANY liability. Maybe the test should be reviewed after accidents, but there shouldn't be criminal charges against anyone. Treat it like doctors following the procedures.

    • @eadbert1935
      @eadbert1935 3 years ago +1

      The issue with the self-driving car is that we worry so much about these questions of morality and liability that we forget self-driving cars would automatically reduce the moral questions by not being AS fallible as humans.
      Like, say we get 90% fewer accidents (I made this number up; I don't have sources for it) and we worry about what to do with the last 10% instead of being happy with the 90% reduction.
      FFS, letting ANYONE decide until we have a better solution is a superior moral choice to waiting for the better solution.

  • @showdownz
    @showdownz 4 years ago +16

    Love your videos. Just want to bring up another concern surrounding UBI (universal basic income), which could easily be a result of AI effectively taking over the job market. This one feels under-discussed, and it falls under the classification of "absolute power corrupts absolutely". Once people are out of work, they will be forced to seek other means to sustain themselves (in this case the UBI). The UBI could easily be corrupted: e.g., requirements could be placed on an individual's beliefs or behavior in order to qualify. This could start out subtle but eventually lead to very oppressive control. Some of these controls are already being implemented in places like China. Meaning AI could lead to a society where not only the wealth but also the power and freedom are concentrated in a select few.

    • @taragnor
      @taragnor 4 years ago +2

      Yeah, honestly this is the real risk of AI. It's not Skynet, it's the mass unemployment coming from automation of the majority of low-education jobs. As a society you need some tasks for people of lower intelligence to perform, and if you replace all those jobs, there will be nowhere for those people to go.

    • @m4d_al3x
      @m4d_al3x 4 years ago +1

      Invest in weed and alcohol production, NOW!

    • @fieldrequired283
      @fieldrequired283 4 years ago +2

      As opposed to what we have right now, where... people just starve to death in the streets?
      This isn't an AI problem, this is a Bad Government problem, and one we have with or without the presence of AI.

    • @caffeinum
      @caffeinum 1 year ago

      @@fieldrequired283 Yes, but governments only exist and work because "lower" jobs make up >50% of them, and those workers were NEEDED by the industrial revolution to play along; that's why the "rich" have incentives to share profits. When the "rich" can handle all of their tasks using AI, frankly there's no need to ask permission from lower-qualification people.
      Edit: And this IS REALLY BAD

    • @fieldrequired283
      @fieldrequired283 1 year ago +1

      @@caffeinum
      (3 years later)
      This is, once again, not an AI problem. This is a corporate greed problem. "What if AI makes it so megacorporations are even better at being evil" is still a smaller-scale problem than a misaligned AGI.
      A properly aligned AGI in the hands of a sociopath would make for the greatest tyrant the world has ever known.
      An _improperly_ aligned AGI in the hands of even the most pure-hearted saint will spell the eradication of approximately everything in the future light cone of its inception.

  • @economixxxx
    @economixxxx 7 years ago +17

    I swear I was just thinking about how much I'd like to see more of this channel, then boom! New vid. Awesome job, mate!

  • @janhoo9908
    @janhoo9908 7 years ago +78

    So where did you get your narration superpowers from, then? Love your unagitated and reflective tone.

    • @martinlevosofernandez3107
      @martinlevosofernandez3107 7 years ago +39

      He also has a secondary power that lets him make good analogies.

    • @NathanTAK
      @NathanTAK 7 years ago +5

      +Martín Fernandez That seems to be one of the most useful superpowers on the planet. I _really_ wish I had that.

  • @theJUSTICEof666
    @theJUSTICEof666 7 years ago +14

    5:13
    Not superpowers.
    I repeat, not superpowers.
    Yes, I'm talking to you, mad scientists.

    • @osakanone
      @osakanone 5 years ago

      This is such bullshit, gosh

    • @martinsmouter9321
      @martinsmouter9321 4 years ago

      But if I try it on like a billion minions it might work.🥺

    • @nolanwestrich2602
      @nolanwestrich2602 3 years ago

      But can I get superpowers from high voltages, like Nikola Tesla?

  • @Nulono
    @Nulono 7 years ago +17

    You're sitting really close to the camera…

    • @fleecemaster
      @fleecemaster 7 years ago +1

      No, you're just sitting too close to your screen...

    • @NathanTAK
      @NathanTAK 7 years ago +8

      ...it just occurred to me that he could be sitting. I always assumed he was standing.

  • @MAlanThomasII
    @MAlanThomasII 5 years ago +5

    Actually, they weren't all sure that they wouldn't ignite the atmosphere. One of them (it might have been Fermi?) even put the odds, the night before, at no more than 10%... which is 10% more than you really wanted it to be.
    I don't have it in front of me, but you can find a well-annotated discussion of this in Ellsberg's _The Doomsday Machine: Confessions of a Nuclear War Planner_ (which, to be fair, is clearly arguing a point, but the references are good).

    • @eragonawesome
      @eragonawesome 3 months ago

      I will say I've heard two different versions, and one of them seems better supported than the other. One is that Fermi estimated the odds of igniting the atmosphere at 1/10; the other is that he was very confident in their math, but thought there was a 1/10 chance of some extra, basically magical, effect they had never seen before or accounted for, which would ruin all their math and cause the atmosphere to ignite anyway.

  • @anonanon3066
    @anonanon3066 3 years ago +2

    Regarding igniting the atmosphere:
    What about modern-day atomic bombs? They are said to be much, much, much more powerful than the first ones.

    • @eragonawesome
      @eragonawesome 3 months ago

      Doesn't matter: in order to get a self-sustaining reaction started, the *entire atmosphere* would need to be heated to millions of degrees. There is No Technology On Earth capable of generating that much energy, full stop.
      It's possible to get a relatively small region to the requisite temperature and pressure (namely, the region inside the initial fireball), but not the whole atmosphere. It simply takes more energy to fuse atmospheric gas than is released by said fusing atmospheric gas, meaning the reaction cannot become self-sustaining.
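
      A rough back-of-envelope check of that energy-balance point (all numbers below are my own ballpark assumptions, not figures from the video or this thread):

      ```python
      # Compare the energy needed to bulk-heat the atmosphere to fusion
      # temperatures against the largest bomb ever tested. Rough values only.
      ATMOSPHERE_MASS_KG = 5.1e18   # total mass of Earth's atmosphere
      SPECIFIC_HEAT_AIR = 1.0e3     # J/(kg*K), approximate value for air
      IGNITION_TEMP_K = 1.0e8       # order of magnitude for nitrogen fusion
      TSAR_BOMBA_J = 2.1e17         # ~50 Mt of TNT equivalent

      heating_energy = ATMOSPHERE_MASS_KG * SPECIFIC_HEAT_AIR * IGNITION_TEMP_K
      print(f"Heat the whole atmosphere: {heating_energy:.1e} J")
      print(f"Largest bomb yield:        {TSAR_BOMBA_J:.1e} J")
      print(f"Shortfall factor:          {heating_energy / TSAR_BOMBA_J:.1e}")
      # ~2.4e12: about twelve orders of magnitude short, which is the point.
      ```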

  • @hynjus001
    @hynjus001 5 years ago +9

    The driverless car problem brings to mind The Matrix. To extend the metaphor: good human drivers are like Agents, but in the future, driverless cars will be like Neo. They'll have such fast reaction times and such advanced deterministic prediction that they'll be able to avoid catastrophic situations before humans even recognize them as possible.
    Car: What are you trying to say? That I'll have to crash into some humans?
    Morpheus: No, driverless car, I'm trying to say that when you're fully developed, you won't have to.

    • @seraphina985
      @seraphina985 5 years ago +1

      This is very much my issue with such false moral dilemmas: they could only exist if the AI has already made at least one (in reality, more likely a series of) fatal errors in order to even get into a situation that it can't get out of safely. It's a chain of critical events, and the focus should be on ensuring that the chain is broken before all possible safe resolutions have been foreclosed by prior errors.

    • @DoubleThinkTwice
      @DoubleThinkTwice 4 years ago +3

      The real thing here is overlooked all the time, though. If you cannot stop safely, you have been driving too fast (no matter what the allowed *maximum* speed on that street is). This is as true for human drivers as it is for AI.
      If you are in the situation that you are going too fast and have to decide between running over a group of nuns or a mum with her baby, then the solution is not to go too fast in the first place.
      As far as humans are concerned, this is already in the legal code here in Austria. No matter what the street signs say, if you run over somebody and the court determines that you overestimated your ability to brake, given your view of the street and the speed you were going, then you are fully or partially liable (up to the courts).
      So if you are going down a narrow street with cars parked on either side at 50 km/h and run over a child that crosses the street without looking, then you are liable for having gone too fast.
      And yes, you are correct: on top of *all of that*, a machine will react faster than a human too.
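
      A small sketch of that "drive so you can stop within what you can see" rule (the parameter values are my own rough assumptions):

      ```python
      import math

      REACTION_TIME_S = 1.0   # ~1 s for a human; an automated system is lower
      FRICTION_MU = 0.7       # rough value for dry asphalt
      G = 9.81                # m/s^2

      def stopping_distance_m(speed_kmh: float) -> float:
          """Reaction distance plus braking distance at a given speed."""
          v = speed_kmh / 3.6  # km/h -> m/s
          return v * REACTION_TIME_S + v**2 / (2 * FRICTION_MU * G)

      def max_safe_speed_kmh(visible_distance_m: float) -> float:
          """Highest speed whose stopping distance fits the visible gap."""
          # Positive root of v^2/(2*mu*g) + v*t - d = 0.
          a = 1 / (2 * FRICTION_MU * G)
          b = REACTION_TIME_S
          c = -visible_distance_m
          v = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
          return v * 3.6

      print(f"Stopping distance at 50 km/h: {stopping_distance_m(50):.0f} m")
      print(f"Safe speed with 20 m of clear view: {max_safe_speed_kmh(20):.0f} km/h")
      # At 50 km/h you need ~28 m to stop; with only 20 m visible, ~40 km/h
      # is the most the narrow-street example above would allow.
      ```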

    • @m4d_al3x
      @m4d_al3x 4 years ago

      Car: What are you trying to say? That i will be able to dodge accidents?
      Morpheus: No, when you are fully developed you won't have to.

    • @adrianalexandrov7730
      @adrianalexandrov7730 4 years ago

      @@DoubleThinkTwice Totally agree: humans driving badly is an educational problem, not a slow-reaction-time problem. If you've run over someone, then you drove too fast and/or too close to something obstructing your view.
      We've managed to structure that knowledge and learn to teach it to fellow humans: Britain's Roadcraft, and Scandinavia aiming for zero road deaths.

    • @fieldrequired283
      @fieldrequired283 4 years ago

      A good driverless car will get into fewer of these lose-lose dilemmas than a human driver will, but if we ever want a car to go faster than 5 mph, you'll have to decide what sort of choice it makes in a situation like this.
      The level of caution necessary to never get into any sort of accident is a level of caution that will also never get you where you're going on time. Nobody would use a perfectly safe self-driving car.

  • @DrSid42
    @DrSid42 1 year ago +2

    I like the idea of a Terminator crushing the last human skull and thinking: they were worried about AI being racist? IMHO LOL.

  • @mikolajpiotrowski6043
    @mikolajpiotrowski6043 4 years ago +11

    4:57 Marie Curie died from radiation poisoning, BUT it had little (or not so much) to do with her research: she was a volunteer medic in a mobile X-ray diagnostics unit during WWI (there wasn't any kind of protection for staff), so all of the personnel received radiation equal to the sum of the doses from every scan.

    • @Xazamas
      @Xazamas 1 year ago +4

      Also, modern X-ray machines give you a much smaller dose, because both sending and receiving X-rays have gotten significantly better since WWI.
      Reminds me of the "radiologist tells you it's perfectly safe to have your X-ray taken, then hides in the adjacent room or behind a lead cover" joke/meme. The reason for this is that while reassuring you by subjecting themselves to a few stray X-rays would cause them no measurable harm, doing this with every patient would eventually add up to a significant, harmful dose.
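
      A toy version of that dose arithmetic (all numbers are illustrative assumptions, not real dosimetry):

      ```python
      # One stray scatter dose is negligible; the same dose taken at every
      # scan, all year, is not. Values below are made up for illustration.
      STRAY_DOSE_PER_SCAN_MSV = 0.001
      SCANS_PER_DAY = 40
      WORK_DAYS_PER_YEAR = 230

      patient_dose = STRAY_DOSE_PER_SCAN_MSV  # one-off exposure
      staff_dose = STRAY_DOSE_PER_SCAN_MSV * SCANS_PER_DAY * WORK_DAYS_PER_YEAR
      print(f"Patient, one scan: {patient_dose} mSv")
      print(f"Staff, one year:   {staff_dose:.1f} mSv")
      # ~9 mSv/year: a large fraction of typical occupational dose limits,
      # built up from a per-scan dose that is individually harmless.
      ```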

    • @fss1704
      @fss1704 1 year ago

      Dude, they didn't know what they were doing; handling radioactive materials with bare hands is no good.

  • @stevent1567
    @stevent1567 6 years ago +5

    That is amazing. I'm very happy that there are people like you preventing AIs from raking in all kinds of stuff in order to become giant blobs sucking on earth so I can play more Factorio.

  • @RecursiveTriforce
    @RecursiveTriforce 3 years ago +3

    1:12
    Even old computers beat humans at chess, but people still have fun.
    Shooters are still fun even though aimbots exist (no fun against them, but against other humans).
    NNs for games like StarCraft and Dota 2 are beating professionals.
    Games are not less fun because someone is better than you at them...
    Improving oneself is not less rewarding because others are still better...
    Why should people be unable to feel true success?
    Am I missing the point he's trying to argue?

    • @RobertMilesAI
      @RobertMilesAI  3 years ago +5

      Games are still fun, sure, but games aren't as rewarding as doing things that meaningfully benefit others.
      I like making these videos, but if nobody else watched them (perhaps because they were getting much better personalised lessons on all the concepts from AI systems) I wouldn't find it satisfying to make them.
      I wouldn't be happy, as a scientist, to simply improve myself and learn things that I have never known. I want to learn things that nobody has ever known, and I can't do that if the frontiers of all science are now far beyond the reach of my brain.
      Maybe you can enjoy planting some vegetables even though they're cheaper and better at the supermarket? But I don't imagine a CEO getting much satisfaction from running a company, choosing not to use a superhuman AI CEO and knowing that their insistence on making decisions themselves is only hurting their business overall, their work reduced to a vanity project.
      I think people want to be *useful*

    • @RecursiveTriforce
      @RecursiveTriforce 3 years ago +3

      Thanks for clarifying!
      So you mean that they feel like they are making a difference and truly have their place, instead of "only" having the feeling of achieving their goals themselves.
      That makes a lot more sense...
      So fun might stay fun, but actual purpose decays. (Because purpose requires people to be positively affected, which an AI could do better [and will already have done].)

  • @dannygjk
    @dannygjk 5 years ago +7

    People will jump in front of self-driving vehicles to try to discredit them, not just because they are morons.

    • @circuit10
      @circuit10 4 years ago

      Dan Kelly Well that is being a moron

    • @martinsmouter9321
      @martinsmouter9321 4 years ago +1

      @@circuit10 To say it like our host: that depends on their terminal goals, and is a combination of an ought-statement and an is-statement.

    • @martinsmouter9321
      @martinsmouter9321 4 years ago

      @@circuit10 Or, better formulated:
      ua-cam.com/video/hEUO6pjwFOo/v-deo.html
      Edit: and reasoned a little differently, but mostly the same.

  • @lukaslagerholm8480
    @lukaslagerholm8480 7 years ago +5

    The inclusion of images of articles, websites, research papers and the like is really good, and I love it. Keep it up, and don't be afraid of letting them hang around for a little longer so that more people notice them and actually read them; they're quite often very interesting and on point. Keep up the good work!

  • @AgeingBoyPsychic
    @AgeingBoyPsychic 4 years ago +2

    There will always be artistic achievement. No AI will be able to produce art that poignantly describes the human condition of being completely superseded by its artificial creation, as well as the human meatbags experiencing that reality.

    • @cfdj43
      @cfdj43 4 years ago

      Artistic creation is currently only a tiny sector of employment, and it gets even smaller once everyone on the production side is replaced by AI. It seems likely that some human artists would be kept to gain the marketing benefit of "made by a human", in the same way "handmade" exists now.

    • @fieldrequired283
      @fieldrequired283 4 years ago +1

      A sufficiently advanced AI can just simulate a person suffering from existential malaise and then execute on what they would have done without any error. If it's smart enough, it could even conceivably come up with art more poignant than any human artist could even imagine.

    • @spaceanarchist1107
      @spaceanarchist1107 3 years ago

      @@fieldrequired283 there are already programs that can produce art, music, and poetry, some of which can convincingly imitate the work of humans. But I think that human beings will continue to produce art for purposes of self-expression. Even if an AI can produce something equally good or better, people will still want to express themselves.

    • @fieldrequired283
      @fieldrequired283 3 years ago

      @@spaceanarchist1107
      I don't need convincing on the merits of self-expression. My argument was made very pointedly to underline a mistake in OP's reasoning.
      Your argument is on a completely different axis on which I do not need correction.

    • @KANJICODER
      @KANJICODER 1 year ago +1

      @@fieldrequired283 Better yet, we can give the A.I. flash memory or something so after the limited read/write cycles it dies of old age, just like humans.

  • @LordMarcus
    @LordMarcus 7 years ago +3

    I see it not unlike aviation safety: I think, no matter how good we get before the first-ever general AI is turned on, the unknown unknowns or unknowable unknowns will only crop up after we've turned the machine on. A good deal of aviation safety only happened, and happens, because we found out about a problem due to an accident.
    Even in the case of narrow AI (say, self-driving cars), it's not going to be maximally safe to start (though there's good reason to believe it'll be a lot safer than human-driven cars to start). People are going to be injured or killed as a result of artificially intelligent systems (excluding those we design specifically to do that).

  • @magellanicraincloud
    @magellanicraincloud 7 years ago +3

    I agree with you, Rob, about the Universal Basic Income. CGP Grey did a great (terrifying) video called "Humans Need Not Apply" where he raised the question of what you do when people are not only unemployed but unemployable, simply because they are human.
    Unless we have some means of providing for the vast, overwhelming majority of people who don't own the robot factories, what are they supposed to do? How are they supposed to be able to afford food and shelter? These are social questions which we need to be discussing right now, because the time when the solutions will be needed is right around the corner.

    • @alexpotts6520
      @alexpotts6520 5 years ago +2

      The way I think about this is in terms of Maslow's hierarchy of needs. A society where the AI owners gobble up all the wealth and 99% of us are destitute is obviously terrible; there is an overwhelming lumpenproletariat which may be falling short of even the first level of the hierarchy: food, shelter, survival.
      A UBI world would be interesting. We'd all have plenty to live on, especially since goods are much cheaper in a post-work world because there are no labour costs in production; indeed, if you subscribe to the labour theory of value (not sure I do), then pretty much all goods are worthless at this point. So we're doing well on the first couple of rungs, indeed we have all the material wealth we could possibly want, and the third level, love, is really beyond the remit of the state even in principle. (Well, maybe AIs could make humans fall in love with each other. Is it ethical to mess with people's brains in this way to make them happy? That's kind of off-topic.)
      But where the UBI proponents fall down is that they get stuck halfway up the pyramid. Careers, or failing that some sort of mastery at an amateur level (and remember, these AIs will outcompete us at everything, not just our jobs), are largely necessary for the higher rungs of self-esteem and self-realization. The only way I can think of to get us round this is for AIs to wirehead us; in short, we become a race of vegetables. Is that what we want?
      In summary, UBI is certainly an improvement on "do nothing", but it's hardly a satisfactory solution. There must be something better, or at least UBI can only be part of the solution, even if it is an important part.

  • @flymypg
    @flymypg 7 years ago +47

    The way I see it, this entire argument is upside-down. The risk of harmful AI isn't going to be handled by simply not making powerful AI. The general question needing to be addressed is one that's been asked and answered many times during the history of technological advancement and deployment:
    How do we deploy a new technology SAFELY?
    Implicit in this question is a caveat: We must not deploy a new technology until we have sufficient confidence that it is safe. The greater the risk of damage or harm, the greater the safety confidence level must be. That raises the next question, again asked and answered (often in hindsight) many times through history:
    How do we know if an application of new technology is safe before deploying it?
    I think you see where this is leading. It all comes down to testing. Lots and lots of testing. Rigorous testing.
    From what I'm seeing so far, too many AI researchers presently suck at this kind of testing.
    Testing should be baked into the development process itself. I'm not just talking about the tiny training and test/validation sets used to train neural nets. Even the largest of those are minuscule compared to their real-world environments (when you take rare outliers into account).
    Self-driving cars provide a key example: Most developers of this technology rely on trained drivers and engineers to acquire their training data, and do it in relatively restricted environments (close to the development lab). That is flawed because it can't yield enough representative samples: The drivers doing the driving, providing the samples, aren't representative of all real-world drivers in the rest of the real world.
    That is, the AI isn't trained by watching while a bad driver makes mistakes. It only gets to see other cars behaving badly, not knowing what's going on inside those other cars.
    Contrast this with the approach taken by comma.ai. Drivers are self-selecting, and the comma.ai acquisition system simply records what they do. In post-processing, data from all drivers is combined to train a model of what an "ideal" driver should do in all observed situations.
    The new instance of the trained model is then run against every individual driver's data set to identify situations in which human drivers failed to make the ideal choices. This is then used to create "warning scenarios", in which the driving AI is securely in control, but where it suspects other drivers may not be.
    These contextual "warning scenarios" are sadly missing from most other self-driving car projects. And it all has to do with where and how the data is obtained and used, and is less about the structure of the AI itself.
    I've worked developing products for several safety-critical industries, including commercial nuclear power, military nuclear propulsion, aircraft instrumentation and avionics, satellite hardware and software, and the list goes on.
    The key factor isn't "what's being developed", it's "how do we test it". At least as much effort goes into testing a safety-critical system as into the entire rest of the development effort (including research, design, implementation, production, sales, marketing, field support, customer support, and so on).
    When you know your system is going to literally be tested to death, you want every step of all your other processes to have a primary goal of ensuring those tests will succeed the first time.
    Thorough testing is terribly difficult, immensely time-consuming and fantastically expensive. Way too many developers simply avoid this, and use their customers (and the general public) as test guinea pigs.
    This is pretty much what many AI researchers are doing. They are largely clueless about testing for robust, real-world safety. They seem to always be surprised when a system finds a new way to fail.
    They need to stop what they are doing and spend a year working in a safety-critical industry. Gain some hands-on perspectives. Learn to ask the right questions about development, testing and deployment. Be personally involved in the results.
    I could go on and on about the specific techniques used when developing systems that MUST NOT FAIL. Since that's statistically impossible (despite our best efforts), we must also ensure our systems FAIL GRACEFULLY and RECOVER RAPIDLY.
    This comment is getting long enough, but I'll relate an example:
    I joined a project to design an extremely inexpensive satellite that had to operate with extremely high reliability. The launch costs were three orders of magnitude greater than the entire satellite development budget! The launch was to be provided for free, but only if we could prove our satellite would work. Otherwise, they'd simply give the slot to another payload with greater odds of success.
    We couldn't afford rad-hard electronics. So we did some in-depth investigation and found some COTS parts that were made on the same production line as some rad-hard parts, and were even designed by the same company. And the parts we needed were available in "automotive grade", which is a very tough level to meet (it's beyond "industrial grade", which in turn is beyond "commercial grade").
    Our orbit would occasionally take the satellite through the lower Van Allen belt, so we had to ensure we'd survive not only the cumulative exposure (that rapidly ages electronics) but also the instantaneous effects (which create short-circuits in the silicon and also bit-flips).
    We "de-lidded" some of our ICs and took them on development boards to the Brookhaven National Laboratory to be bombarded with high-energy heavy ions from the Tandem Van de Graaff accelerator.
    The results were far worse than we expected: When in the radiation field, at the 95% confidence level we could expect to get just 100 ms of operation between radiation-induced resets.
    I had to throw my entire (beautiful, elegant) software architecture out the window. Instead I set my development environment to cycle power every 100 ms, and then I evolved the software and its architecture until it could finish one complete pass through its most critical functions within that time.
    If more time was available before the next reset, only then would less-critical (but still massively important) functions be performed. Fortunately, this was the typical case outside of the Van Allen belt.
    The most difficult part of the process was choosing what was critical and what wasn't. That in turn demanded a radical rethinking of what the satellite was going to do, and how it would get it done.
    The end result was a satellite design and implementation that was extremely reliable and highly functional, yet still small and cheap.
    The moral of the story? There was no way we could test simply by tossing something into orbit and seeing how it did. Similarly, AI researchers should not be permitted to simply toss their creations into the wild.
    We needed to create a test environment that was at least as hazardous as what would be experienced in orbit. Similarly, AI researchers need to pay much more attention to accurate environmental simulation, not just statistical sampling.
    We needed to make optimal use of that test environment, both because it was expensive, but also because we wouldn't have much access to it. Similarly, AI researchers need to perform rigorous in-depth testing on a time scale that matches the pace of development, so it will be performed often enough to continually influence the development process.
    As my story shows, the effects of good testing can be massive. You must be willing to occasionally feel like an idiot for not predicting how bad the results could be. Still, feeling like an idiot is to be preferred over the feeling you'll have when your system kills someone.
    And that satellite? It never got launched. We were a piggy-back payload on a Russian launch to Mir, and Mir was immediately and suddenly decommissioned when Russia joined the ISS coalition. NASA would never allow a payload like ours anywhere near the Shuttle or ISS. And our mission wouldn't fit in a CubeSat package.
    Finally, let's look at how cars are tested. A manufacturer designs and builds a new model, then sends several of them to the US government (NHTSA) for crash testing, and other groups also do their own crash testing. These days, if a car gets less than 4 out of 5 stars, it will receive a terrible review, both from the testing group and in the press. Independent of the risk to people in their cars, the risk of a bad review poses a risk to the existence of the company.
    That is, the crash testing process and the press environment makes the customer risk "real and relevant" to the car manufacturer. When this was not the case, we saw companies and lawyers place a dollar value on the lives lost and the potential for future death, then make corporate decisions solely on that cost basis.
    That is, the risk of corporate death wasn't as high as the risk of customer death.
    So, to me this means there must be independent testing of AI systems prior to wide deployment. These tests must convert the risk of product failure into a direct risk of corporate failure, of bankruptcy, of developers and researchers losing their jobs and reputations.
    That, in turn, will help ensure that developers do their testing so well that the independent public tests will always pass with flying colors.
    And keep the public safe(r).
    Until someone figures out how to game the tests (such as the ongoing diesel emissions testing scandals).
    Making better tests will always be an issue, one that will grow in parallel with making better AI.
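
    A minimal sketch of the "critical work first, finish within the reset window" loop described above (my reconstruction, not the author's actual flight code; the names and timings are assumptions):

    ```python
    import time

    # Assume the hardware may be power-cycled roughly every 100 ms, so one
    # full pass over the critical functions must always fit in that window.
    RESET_WINDOW_S = 0.100

    def run_cycle(critical_tasks, background_tasks):
        """Do every critical task first; spend leftover budget on the rest."""
        start = time.monotonic()
        for task in critical_tasks:
            task()  # these must collectively finish well inside the window
        for task in background_tasks:
            if time.monotonic() - start > RESET_WINDOW_S * 0.8:
                break  # keep margin: a reset may arrive at any moment
            task()  # less-critical work runs only when time allows
    ```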

    • @ylluminarious151
      @ylluminarious151 7 years ago

      Yeah, I don't think the concept of a general AI is a good idea in the first place, and you've definitely got a point that a poorly tested and poorly taught AI will be a disaster of epic proportions. Sadly, I fear that such an AI will be what gets out first and will illustrate the utter carelessness and thoughtlessness of the people developing it.

    • @maximkazhenkov11
      @maximkazhenkov11 7 years ago +2

      This is applicable to Narrow Intelligence, but not to General Intelligence or Superintelligence. Only the latter types are apocalyptic in proportions.

    • @AexisRai
      @AexisRai 7 years ago

      BobC With those credentials I would strongly suggest you join some AI company where you think your product development expertise would do a lot of good, then. Especially if you think the problem formulation among experts in AI is so wrong /and/ dangerous enough to be very deadly.

    • @dmdjt
      @dmdjt 5 years ago +7

      I'm afraid AI in general suffers from an inherent untestability.
      Tests can only be as good as our domain knowledge. Most systems are too complex to test every possibility, so we use our understanding to find the edge cases and get the best test coverage we are capable of.
      But we use AI where we don't have the domain knowledge; that's the point of AI.
      An AI models the domain and simplifies it. Its model will never be perfect. How could we even find these imperfections without complete knowledge of the domain?
      This is already a problem with our current, primitive AI. In other systems, we know the model that we can test; with AI, we don't even know the model.
      But what happens when we do not, or cannot, control the domain anymore?

    • @allan710
      @allan710 4 years ago

      This is true for current AI. For any idealised future AGI (Artificial General Intelligence), testing isn't possible. The whole point of AGI is unlimited potential (just as with humans). How do we test humans in order to prevent them from killing people? That's very hard, but it's possible (if unethical), because humans are aligned among themselves: we can predict the values and behaviours of a vast number of people. What about AI? If we don't solve the alignment problem, then it's boundless. The goals of the AGI may be known, but the actions aren't easily predictable. The real problem is that AGI isn't in the realm of human technology anymore; after all, we'd expect that only an AGI could test another AGI, but should we trust them? AGIs may only appear far in the future, but the implications of their possible existence are far too problematic to be ignored. A rogue nuclear missile might be able to destroy a city. A rogue AGI might be able to convert all the matter of the universe into paperclips (not really, but it could vastly affect things on a universal scale).

  • @guitarjoel717
    @guitarjoel717 1 year ago +1

    Lmao, I was just listening to a podcast called Hard Fork (episode from May 12, 2023), and they literally interviewed someone who jumped in front of a driverless car 😅

  • @showdownz
    @showdownz 4 years ago +1

    I hadn't thought of the racism problem. When given the choice of hitting a person of one race vs. another, the car could choose based on statistics about the average income of each race: the insurance company would then have to pay less for that person's death (payouts are often tied to projected lifetime income, and of course to how good a lawyer the remaining family members can afford). This could also be true for men vs. women, old vs. young, etc. And this might not even be illegal (everything else being equal).

  • @pierQRzt180
    @pierQRzt180 1 year ago +1

    It is not necessarily true that where machines dominate (but allow humans to exist) it is difficult to feel achievement. In chess, computers can obliterate everyone, but winning a tournament among humans is still quite an achievement.
    In running, cars can obliterate everyone, but completing the distance within X time (let alone winning) is an achievement.
    In esports, at least in many of them, there are hacks that let a player win easily. But with hacks removed, winning a tournament is quite an achievement.
    This is to say, one can create the conditions for a feeling of achievement. There is an episode of "Kino's Journey" that touches on exactly this point.

  • @newtonpritchett9887
    @newtonpritchett9887 1 year ago +1

    3:35 The pregnancy problem was true in my case (or my wife's and my case): Instagram was showing me ads for baby products before I'd told my family.

  • @zappawench6048
    @zappawench6048 3 years ago +1

    Can you imagine if we are the only planet in the universe which contains life yet we killed ourselves and everything else off with our very first nuclear explosion? God would be like, "What the actual fuck, dudes? Talk about ungrateful!"

  • @ValentineC137
    @ValentineC137 1 year ago +1

    "Like global thermonuclear war, that's an issue"
    Technically correct is the best kind of correct

  • @zappawench6048
    @zappawench6048 3 years ago +1

    Talking about igniting the entire atmosphere and wiping all life off the face of the Earth forever, and the outro music is "I Don't Want to Set the World on Fire". Savage.

  • @Skatche
    @Skatche 5 years ago +3

    I've got no ambition for worldly acclaim
    I just want to be the one you love
    And with your admission that you feel the same
    I'll have reached the goal I'm dreaming of...

  • @The8thJester
    @The8thJester 1 year ago +1

    Ends the video with "I Don't Want to Set the World on Fire"
    I see what you did there

  • @ToriKo_
    @ToriKo_ 6 years ago +9

    What I thought about the whole achievement thing: I don't think people need "actual achievement". For example, just because we have made computers/AIs that can beat every human ever at games like chess and Go doesn't mean that people don't get a real sense of achievement from playing those games (especially at high-level play).

  • @midhunrajr372
    @midhunrajr372 5 years ago +2

    I think the difference here is: the number of nuclear bombs ever used is very small compared to the number of AI systems that we are possibly going to use in the future. And while the theory and the intention behind "bombs" are sort of the same, AI systems are going to be completely different from one another.
    Don't get me wrong, I love AI. But the risk they can produce is far greater than nuclear bombs. We really, really need lots of precautions.

  • @MarcErlich44
    @MarcErlich44 7 years ago +3

    You should add a link to your Patreon. Also, I want a collaboration between you and Isaac Arthur, the futurist. Check him out on YouTube if you don't already know him.

  • @JinKee
    @JinKee 1 year ago +1

    4:26 Can we talk about ChatGPT writing software that actually works? Apparently it can also solve the trolley problem efficiently half the time.

    • @KANJICODER
      @KANJICODER 1 year ago

      To quote Jonathan Blow: "If ChatGPT can write your code, you aren't coding anything interesting."
      Though I am pretty confident the machines will take over and kill us. They won't even do it intentionally; they will just take our jobs while we starve to death from crippling poverty.

  • @iruns1246
    @iruns1246 5 years ago +1

    Solution to the self-driving problem: make every self-driving AI take a test in simulated settings. The test should be rigorous (maybe millions of scenarios), designed by a diverse ethics committee, and open to democratic debate. After that, as long as the AI passed the test and the copy is operating on sufficient hardware, there shouldn't be ANY liability. Maybe the test should be reviewed after accidents, but there shouldn't be criminal charges against anyone. Treat it like doctors following procedures: even when a procedure fails, as long as it was followed, the liability is not on the actors.
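
    A minimal sketch of what such a certification harness could look like (the scenario format, policy interface, and pass criterion are all invented for illustration):

    ```python
    from typing import Callable, Iterable

    # Certify a driving policy by replaying it through a committee-approved
    # battery of simulated scenarios, as the comment above proposes.
    def certify(policy: Callable[[dict], str],
                scenarios: Iterable[dict],
                max_failure_rate: float = 0.0) -> bool:
        """True if the policy's failure rate stays within the agreed bound."""
        total = failures = 0
        for scenario in scenarios:
            total += 1
            action = policy(scenario["observation"])
            if action not in scenario["acceptable_actions"]:
                failures += 1
        return total > 0 and failures / total <= max_failure_rate

    # One toy scenario; a real battery would hold millions, reviewed by the
    # ethics committee and revised after real-world accidents.
    scenarios = [{"observation": {"pedestrian_ahead": True},
                  "acceptable_actions": {"brake"}}]
    print(certify(lambda obs: "brake", scenarios))  # True
    ```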

  • @Liravin
    @Liravin 1 year ago +2

    If there wasn't a timestamp on this video, I might not have been able to tell that it's this ancient.

    • @BORN753
      @BORN753 1 year ago +1

      These videos were being recommended to me for like 2 years, but I never watched them because I thought the topic wasn't relevant and wouldn't be soon; I thought it was a niche geek thing, and that all the answers were given long ago, when sci-fi was at its greatest.
      Well, my opinion changed very quickly; I didn't even notice it happening.

    • @luigivercotti6410
      @luigivercotti6410 1 year ago +2

      If anything, the problems have gotten worse now that we are starting to brush against AGI territory.

  • @dhruvkansara
    @dhruvkansara 7 years ago +11

    This is very interesting! Never thought about the problems that occur when AI functions correctly...

  • @6teeth318-w5k
    @6teeth318-w5k 3 years ago +1

    1:34 Actually it's more complicated. IF the people on the road are there "illegally" and the person on the boardwalk is there legally, then you can argue that hitting 5 people in the road is better than hitting 1 on the boardwalk, because you should be able to count on being safe on a boardwalk, while playing in the road is asking for trouble. If not, then order and rules fly out of the window, and there's nothing to hold on to.
    It is not at all like the trolley experiment, where all participants are "illegal", or in the wrong spot.

    • @KANJICODER
      @KANJICODER 1 year ago

      People jaywalking annoy the fuck out of me. Though doesn't that imply the penalty for jaywalking is death?
      If they are "jay-running" that doesn't annoy me too much. At least they are acknowledging they shouldn't be there.

  • @kennywebb5368
    @kennywebb5368 7 years ago +5

    That slide at the end. In what sense do you mean that "the worst case scenarios for AI are worse than igniting the atmosphere"? I can understand saying that they're just as bad, but what could be worse?

    • @Nulono
      @Nulono 7 years ago +2

      Assuming there is other intelligent life in the universe, it could also be at risk.

    • @RobertMilesAI
      @RobertMilesAI  7 years ago +26

      There are all kinds of things that could happen to me over which I'd prefer a sudden painless death. Even if the outcome is everyone dying in a way that takes longer and involves more suffering, that's worse.
      The actual worst case is probably something like: we correctly produce the perfect utility function, and then make a sign error. Silly example, but stupider things have happened.
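
      A toy illustration of how a single sign error inverts an optimizer's behaviour (a hypothetical sketch, not anyone's actual system; all names and numbers are made up):

      ```python
      # The same optimizer, with the utility accidentally negated, drives the
      # system toward the worst state it can find instead of the best.
      def utility(state: float) -> float:
          return -(state - 10.0) ** 2   # intended: best outcome at state == 10

      def optimize(score, steps=1000, lr=0.1):
          state = 0.0
          for _ in range(steps):
              # numerical gradient, then plain gradient ascent on the score
              grad = (score(state + 1e-4) - score(state - 1e-4)) / 2e-4
              state += lr * grad
          return state

      print(optimize(utility))                # ~10.0, as intended
      print(optimize(lambda s: -utility(s)))  # diverges toward the worst state
      ```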

    • @kennywebb5368
      @kennywebb5368 7 years ago

      Gotcha. Thanks for the clarification!

    • @alant84
      @alant84 7 years ago +5

      I would say that the scenario in "I Have No Mouth, and I Must Scream" is an example of something which would be worse than a quick fiery death; hopefully a far-fetched one, though. Let's hope your stamp collector isn't going to have such hatred for humanity...

    • @andreinowikow2525
      @andreinowikow2525 7 years ago +9

      "We produce the perfect utility function [for a GSI] and then make a sign error."
      You really know how to make things scary...
      A place designed to instill the most intense suffering possible for the longest possible time. By a superintelligence. Yeah....
      Someone, ignite the atmosphere, would you?

  • @acf2802
    @acf2802 1 year ago +2

    3:05 AI doesn't have racial or gender biases. Reality has racial and gender biases. AI just recognizes the pattern and acts accordingly. The only way to create an AI which isn't "biased" is to explicitly give it a list of facts you want it to pretend don't exist (like some people do.)

    • @vadenlow5953
      @vadenlow5953 1 year ago +2

      True... But I still think I shouldn't have to pay more for car insurance just because most guys are more reckless than most gals

    • @KANJICODER
      @KANJICODER 1 year ago

      @@vadenlow5953 Car insurance is a scam. I crashed my car once as a delivery driver. We fixed it without making an insurance claim so my insurance wouldn't go up.

    • @RonnieNichols
      @RonnieNichols 8 months ago

      I know this is an old comment, but so-called "racial and gender differences" are more often socially and societally enforced than set by any natural or "objective" standard.
      If an AI is trained for crime recognition based on the standards of a society where many actions were made crimes for specifically racist reasons, and/or where racial inequality exists, it notices the patterns that developed as a direct result of racist actions and replicates them, thereby continuing and reinforcing them.
      Truthfully, though I sincerely hope it's not the case, your comment seems to indicate that you hold a racial bias against certain people based on "facts" you claim people pretend don't exist. I wonder what these "facts" are and why you declined to give any examples.
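
      A toy simulation of the feedback loop described above (every number and name here is an invented assumption, purely for illustration):

      ```python
      import random
      random.seed(0)

      # Two groups with the SAME true offence rate, but group A is patrolled
      # twice as heavily, so the arrest data over-represents it. A "model"
      # that just learns arrest frequencies replicates the enforcement bias.
      TRUE_OFFENCE_RATE = 0.05
      DETECTION_RATE = {"A": 0.8, "B": 0.4}
      POPULATION = 10_000

      arrests = {}
      for group, detection in DETECTION_RATE.items():
          arrests[group] = sum(
              random.random() < TRUE_OFFENCE_RATE and random.random() < detection
              for _ in range(POPULATION))

      for group, n in arrests.items():
          print(group, n / POPULATION)  # ~0.04 vs ~0.02: same true behaviour,
                                        # twice the "predicted risk" for A
      ```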

    • @acf2802
      @acf2802 8 months ago

      @@RonnieNichols The cope is overwhelming. FBI crime statistics have nothing to do with shit that was "made crimes for specific racist reasons." The FBI crime statistics specifically show that they are many times more likely to commit violent crimes, which consist of murder, rape, and assault. In what crazy woke rationalizing world do you think that the only reason we punish murder, rape, and assault is to unfairly keep black people down? 🤪

  • @mahiaravaarava
    @mahiaravaarava 20 days ago

    AI risks and nuclear risks share similarities in their potential for widespread harm, but they differ significantly in nature. Both pose existential threats if not properly managed; however, AI risks involve ethical and control challenges, while nuclear risks are more about physical destruction and geopolitical tensions. Platforms like smythOS are working towards mitigating these risks by promoting responsible AI development and safety protocols.

  • @TheJamie109
    @TheJamie109 5 years ago +3

    I recently came upon your videos on Computerphile. I watched all of them in an afternoon and had to know more. So here I am, more leisurely but persistently going through your channel. You do such a great job of fusing big, complex ideas with a bit of humour and real-world applications, without any broad "dumbing down" of the information you provide.
    I have always enjoyed programming but sought a different path right out of high school. Your videos have reignited my passion, and I hope to steer towards it as my life progresses.
    Thank you. Keep up the great work.

  • @Trophonix
    @Trophonix 5 years ago +2

    In an ideal circumstance, I guess I would want the self-driving car to deduce which person would be more likely to survive the impact, and then do everything it can to lessen the damage and avoid hitting them while swerving in that direction.
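
    A toy sketch of that decision rule: pick the maneuver with the lowest expected harm. The maneuvers and all probabilities below are invented placeholders, not parameters from any real autonomous-vehicle system:

      from dataclasses import dataclass

      @dataclass
      class Maneuver:
          name: str
          p_impact: float   # chance this maneuver still hits someone
          p_survive: float  # chance that person survives if hit

      def expected_harm(m: Maneuver) -> float:
          return m.p_impact * (1.0 - m.p_survive)

      options = [
          Maneuver("brake straight", p_impact=0.9, p_survive=0.3),
          Maneuver("swerve left",    p_impact=0.4, p_survive=0.7),
          Maneuver("swerve right",   p_impact=0.5, p_survive=0.9),
      ]
      best = min(options, key=expected_harm)
      print(f"chosen: {best.name} (expected harm {expected_harm(best):.2f})")
      # -> "swerve right" here, because the likely victim is also likely to survive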

  • @reqq47
    @reqq47 7 years ago +2

    As a nuclear physicist I like the analogy, too.

  • @NNOTM
    @NNOTM 7 years ago +2

    Did you see the recent (from 2 days ago) article on Slate Star Codex discussing a new survey about AI safety opinions of AI researchers? (Or maybe the paper itself - it's from May 30th.)
    (link: slatestarcodex.com/2017/06/08/ssc-journal-club-ai-timelines/)

    • @NNOTM
      @NNOTM 7 years ago

      Haha, fair enough

  • @MetsuryuVids
    @MetsuryuVids 7 years ago +3

    I almost missed your new video on Computerphile.

  • @michaeldrane9090
    @michaeldrane9090 7 years ago +3

    I think one huge problem with AI is deliberately harmful use. How do we deal with that?

    • @NathanTAK
      @NathanTAK 7 years ago

      ...you can't, really?

    • @graog123
      @graog123 1 year ago

      @@NathanTAK oh well in that case we won't try to stop it

  • @poisenbery
    @poisenbery 1 year ago +1

    Pierre and Marie Curie did groundbreaking research on radiation,
    and Marie eventually died of radiation-induced illness because they did not fully understand what they were dealing with.
    The cost of learning safety in nuclear physics has always been paid in lives.
    I wonder if AI will be the same.

  • @DK-qp4jf
    @DK-qp4jf 5 years ago +1

    Achievement? Does working at McDonald's or a similar job really give a sense of achievement? No. What gives achievement is a job where you create something, but jobs like that are rare and have steep requirements.

  • @anandsuralkar2947
    @anandsuralkar2947 4 years ago +1

    Do you think Neuralink could in any way increase safety from a future AGI?

    • @RobertMilesAI
      @RobertMilesAI  4 years ago +2

      So there are a bunch of safety approaches and ideas that are designed around limiting the bandwidth of the channel through which the AI interacts with the world, and limiting its ability to influence people. From that perspective, giving a possibly misaligned AGI a high bandwidth direct channel to your brain is one of the worst ideas possible.
      On the other hand there are also a lot of approaches that are designed around having the AI system learn about human preferences and values, and from that perspective, data from a brain interface might be a good way to learn about what humans want.
      So plausibly something like Neuralink could be useful, but only to slightly improve what has to already be a good safety setup
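
      A toy illustration of the "limit the channel" idea from the first paragraph of the reply above. `make_low_bandwidth_channel` and `oracle` are invented names for a sketch, not any real AI interface:

        def make_low_bandwidth_channel(oracle, bit_budget=8):
            """Wrap `oracle` so at most `bit_budget` yes/no answers ever get out."""
            remaining = [bit_budget]
            def ask(question: str) -> bool:
                if remaining[0] < 1:
                    raise RuntimeError("bit budget exhausted")
                remaining[0] -= 1
                return bool(oracle(question))  # one bit out per question, nothing richer
            return ask

        ask = make_low_bandwidth_channel(lambda q: len(q) % 2 == 0)
        print(ask("Is the plan safe?"))  # False with this toy stand-in oracle

      A brain interface is the opposite extreme of this design: an enormously wide channel, which is why it looks bad from the channel-limiting perspective.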

  • @Dunkle0steus
    @Dunkle0steus 4 years ago +1

    Rather than getting AI to do real things like collect stamps, maybe we should give AI goals like "solve cold fusion" or "find a unified theory of quantum physics and gravity".

    • @cfdj43
      @cfdj43 4 years ago +2

      The stamp collector is a thought experiment to show how immediately dangerous an AGI is regardless of its goal. It preempts the argument of "ah yeah, some people might die, but it'd be worth it for (whatever sensible-sounding goal you'd set)". No one is actually trying to build it.

    • @Dunkle0steus
      @Dunkle0steus 4 years ago

      @@cfdj43 I know.

    • @fieldrequired283
      @fieldrequired283 4 years ago

      @@Dunkle0steus
      Do you care how many babies are killed in the process of solving cold fusion? If so, you still have the stamp collector problem. It turns the world into infinite redundant cold-fusion-solving-machines instead of stamps, because it needs to be completely sure.

    • @Dunkle0steus
      @Dunkle0steus 4 years ago +1

      @@fieldrequired283 I'm not implying that solving physics problems is a perfect option. I'm not saying "DUH, OBVIOUSLY! Why didn't anyone think of this???". I'm saying that I think there may be better avenues for AI to go down than collecting stamps. Currently, we use computer programs and AI to automate tasks which humans could do but which require too much effort, like doing arithmetic, sorting, counting, image recognition, etc. When you talk about stamp collecting, you're talking about setting the AI up to interact with the world in a very physical way, and maybe that's not how we should use artificial intelligence. If we set the AI up so that its goals don't force it to directly interact with humans, our world, and the internet, and instead give it problems it can solve internally, that might at least help prevent it from causing obvious negative impacts. We can't know how it will decide to solve those problems, but we can at least say that obvious things like accidentally running over babies in order to get a teacup from the cupboard are less likely to be instrumental goals for it than they are for a tea-serving robot.

    • @fieldrequired283
      @fieldrequired283 4 years ago

      @@Dunkle0steus
      The computer is made of physical matter. Humans are made of physical matter. Communication is a direct, material interaction in the physical world. All problems are in the physical world, and so are their solutions.
      If you're asking it to do *anything,* and it understands all these things, it will, by necessity, knowingly interact materially with the physical world.
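
      A toy sketch of why the particular goal barely matters to a hard maximizer, which is the point of the thread above. The plans and their scores are entirely invented:

        plans = {
            "do the task modestly":                1.0,
            "first seize more compute":           10.0,
            "convert all matter into the effort": 1e9,
        }

        def maximize(goal: str) -> str:
            # note: the goal never changes which plan wins; scale is all that matters
            return max(plans, key=plans.get)

        for goal in ("collect stamps", "solve cold fusion"):
            print(f"{goal!r} -> {maximize(goal)}")

      Swapping "collect stamps" for "solve cold fusion" changes the label, not the winning plan.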

  • @mathmagician5990
    @mathmagician5990 2 years ago +1

    I don't want to be a contrarian but how are the worst case scenarios for AI worse than igniting the atmosphere? I cannot imagine a feasible AI problem that is worse than total human annihilation.

    • @dawidkiller
      @dawidkiller 1 year ago +3

      A VR torture chamber for everyone, I assume.

    • @baranxlr
      @baranxlr 1 year ago

      @@dawidkiller You can already hop into VRChat rooms and experience that for free

  • @ArthurKhazbs
    @ArthurKhazbs 2 months ago

    People tend to value convenience over security: we have been leaving stovetops unattended, taping down safety switches, and reusing our work account passwords all the time. The same goes for AI development: so much money and work is put into extending AI functionality and computational power, yet so little is put into AI safety research. Not only that, but companies even prohibit their employees from disclosing known safety issues in their AI products.

  • @LLoydsensei
    @LLoydsensei 7 years ago +1

    You made me think about an even greater (but still very far away) problem than all of this:
    Imagine that one day we effectively understand AI and learn how to design one which cannot cause an end-of-the-world scenario. What about people who would nonetheless design dangerous AIs, be it for testing or for evil intent? Are researchers already looking at countermeasures to such rogue AI systems?
    I certainly would not like to accidentally wipe out the human race by failing to follow the recommendations for creating AI...

    • @LLoydsensei
      @LLoydsensei 7 years ago

      Uhm, I used my brain for a second and understood that the simple answer to that question is "too soon". But I can't help thinking about an end-of-the-world scenario happening because a cosmic ray flipped a bit in a "safe" AI ^^ (even though I know that anything which in the distant future gets labelled as "safe" won't rely on just a single fail-safe mechanism ^^)

  • @sophiacristina
    @sophiacristina 5 years ago +1

    You forgot, or I skipped, an important issue: it's not exactly hard to program an AI. Consequently, people can make terrorist robots or other things with AI. For example, people can make AIs that target a certain class, culture, ideology, or ethnicity and release them on a crowd. If they kill someone, they will not reveal their commander and will not care about the repercussions. They can self-destruct and hide all evidence, they can morph to hide, and they are disposable...

  • @ioncasu1993
    @ioncasu1993 7 years ago +20

    I'm a simple person: I see Robert Miles, I press like.

  • @Paul-A01
    @Paul-A01 4 years ago +1

    People jumping in front of cars are just reward hacking the insurance system

  • @harlandirgewood7676
    @harlandirgewood7676 1 year ago +1

    I used to work with self-driving cars and had weirdos who would walk in front of our cars. Folks in cars would try and get hit so they could sue us.

    • @KANJICODER
      @KANJICODER 1 year ago +1

      Don't self-driving cars have a shit tonne of cameras? How do they think they are getting away with that?

    • @harlandirgewood7676
      @harlandirgewood7676 1 year ago +1

      @@KANJICODER they do. Dispatch would often show us tapes of people attacking cars or trying to "get hit". Never works for them, as far as I've seen.

    • @KANJICODER
      @KANJICODER 1 year ago

      @@harlandirgewood7676 I would watch a 15 minute compilation of "people trying to get hit by self driving cars".

  • @thisisashan
    @thisisashan 1 year ago

    No, they are nothing alike.
    But also, AI risks are much worse, because most people are purposefully ignorant so they can achieve a lifetime of profit from abusing AI systems while ignoring potential repercussions.
    The level of intelligence required for AI use/research is also MUCH lower, which means we have a much higher likelihood of incidents.

  • @matthewdancz9152
    @matthewdancz9152 1 year ago

    Belief is not reality... AI does have a real-world equivalent: slavery. Slavery did not benefit the economy, but it made a few individuals very wealthy. Ownership of intelligence leads to forces concentrated and unified along a singular goal.

  • @ninjabaiano6092
    @ninjabaiano6092 4 years ago +1

    I'd like to point out that the energy released by nuclear bombs was sometimes severely underestimated (Castle Bravo yielded roughly 15 megatons against a predicted 6).
    Not a world-ending scenario, but still.

  • @trucid2
    @trucid2 7 years ago +1

    Awesome channel. It will shoot into the stratosphere if you keep making regular videos.

  • @miketacos9034
    @miketacos9034 1 year ago

    I miss the good old days when humans only risked total destruction of life on Earth, and not total paperclippification of the universe.

  • @mapu1
    @mapu1 1 year ago

    You are ignoring the fact of malicious actors. There is a non-zero number of people working on making killer drones that are not limited by ethical concerns. AI has to be robust enough to handle not only accidents but also a reasonable number of malicious actors.
    Badly done AI is as dangerous as a basement nuke. As long as everyone uses AI responsibly, the chance of it going bad is small. But they're not gonna. Someone is going to put a gun or whatever on a robot eventually.
    The same goes for abuse of facial recognition. People are already using it to discriminate against groups of people.

  • @smithwillnot
    @smithwillnot 1 year ago

    We should only let AI control our paperclip production. What could possibly go wrong?

  • @Corey_Brandt
    @Corey_Brandt 6 years ago +1

    Why can't it be that every time the AI encounters an ethical dilemma, or might encounter one, it just hands control to a human?

    • @0MoTheG
      @0MoTheG 5 years ago +4

      Because it would not do anything then.

  • @nastropc
    @nastropc 7 years ago +9

    The only winning move is not to play...

  • @bp56789
    @bp56789 5 years ago

    Yes! People will fuck with self-driving cars. The discussion around whether to kill the driver is ridiculous and ignores bad actors in society. If someone has to die, let it be the people who are at fault.

  • @mattevenson1232
    @mattevenson1232 7 years ago +20

    I propose that you rename your channel to be ComputerMiles.

    • @NathanTAK
      @NathanTAK 7 years ago

      Punny, but he's an independent channel

  • @peterrusznak6165
    @peterrusznak6165 1 year ago

    I've developed the habit of liking this channel's videos the moment the player starts.

  • @patrickramos9251
    @patrickramos9251 1 year ago

    If cats could have taught humans how to domesticate cats, would they?

  • @rogerc7960
    @rogerc7960 3 years ago

    The Stafford scandal was a great "efficiency saving":
    50,000 dead

  • @ThunderZephyr_
    @ThunderZephyr_ 1 year ago

    The fallout song at the end was perfectly suited XDDD

  • @mennoltvanalten7260
    @mennoltvanalten7260 5 years ago +1

    And then... Well, I'm not a physicist.

  • @philipbraatz1948
    @philipbraatz1948 7 years ago +1

    The music at the end was a genius addition

  • @DamianReloaded
    @DamianReloaded 7 years ago +24

    I think the sense of achievement won't necessarily be a problem if the standard of living is good. There are a lot of things that can be done, particularly socially driven stuff like sports, romance, or politics, where AI can be put aside or used only as an improvement. For most people there will be more synthesized/sequenced entertainment than there are hours in the day to consume it. Also space colonization. All this assuming we won't be cooking ourselves due to global warming or starving to death due to the miserliness of the Mafia/politicians.

    • @__-cx6lg
      @__-cx6lg 7 years ago +5

      Damian Reloaded I dunno, are endless hours of entertainment what we want the future to be like? I mean, we spend enough of our time sedentary and immobile in front of screens already--do we want a future where that's humanity's primary activity?

    • @DamianReloaded
      @DamianReloaded 7 years ago +8

      Entertainment is a choice. Most people will choose to sit back and be served pleasurable sensations. It's not the job of entertainment to educate, and you won't be able to educate people just by depriving them of entertainment. If you ask me, I'd rather have everybody sitting on their couches enjoying themselves than carrying a gun to war.

    • @maximkazhenkov11
      @maximkazhenkov11 7 years ago +1

      Can't speak for "we", but I certainly do. Who's to say we spend "enough" of our time in front of screens? Some divine commandment?

    • @XFanmarX
      @XFanmarX 7 years ago +5

      A sense of achievement has nothing to do with quality of living. It has to do with feeling useful and skilled.
      If the entire world has nothing to do but have fun all day, then they *will* feel like shit. Human beings are programmed to receive their most enjoyable hormonal reactions when they feel they have made an accomplishment, which is one of the reasons our species is at the top of the food chain; our hormones motivate us to do our best. When we don't, we become restless and self-loathing; this is how our bodies push us to do something more productive. Why do you think depression is so incredibly widespread among the newest adult generations? Contrary to what some might believe, most people do not want to be lazy slobs with nothing to do all day.
      If you think people will be happy just sitting on their couches all day, you're being incredibly naive as to what makes a human being. Robots, not just AI, taking over people's livelihoods is a serious problem that is closer to reality at this moment than any of the other AI problems mentioned in this video, and it should not be so easily dismissed.

    • @tubblestop414
      @tubblestop414 7 years ago

      We could use genetic engineering to improve humans. Whether it's intelligence or physical ability, genetic engineering can do that.

  • @durellnelson2641
    @durellnelson2641 4 years ago +1

    6:35 "There's a chance that we turn the entire atmosphere into a thermonuclear bomb"
    7:06 "There was a non zero probability... that all humanity would end instantaneously more or less right there and then"
    So please explain how...
    9:35 "The worst case scenarios for AI are worse than igniting the atmosphere"

    • @RobertMilesAI
      @RobertMilesAI  4 years ago +9

      You can't imagine anything worse than being dead?

    • @fieldrequired283
      @fieldrequired283 4 years ago +2

      @@RobertMilesAI
      I swear this is like the third time I've seen you make this exact response to this sort of comment.
      It really is a chilling line. Like a threat, almost.

    • @martinh2783
      @martinh2783 4 years ago +1

      Igniting the atmosphere would most likely kill all organisms that live above the surface of the ocean, which is really bad, but organisms that live really deep in the ocean would probably be just fine. An AI, on the other hand, could possibly end all life on Earth (and in every part of the universe it can get influence over), which I would call worse.

  • @sobertillnoon
    @sobertillnoon 4 years ago

    Sweet, more British vocab. "Autoqueue" or is it Autocue? Either way, I doubt this will get as much use as "fly-tipping" did when I learned it.

  • @DrDress
    @DrDress 6 years ago

    2:21 This comment will be worth gold in the future.
    Yeah, I'm trying to piggyback on your future fame, Rob.

  • @solsystem1342
    @solsystem1342 2 years ago

    The sun does not fuse nitrogen. Fusion rates are extremely sensitive to temperature: the energy generation rate scales roughly as ε = a·T^n, with n anywhere from ~10 all the way up to ~40 and beyond. So at any given temperature only one fusion process effectively runs, since whatever "turns on" first will quickly supply the energy to support that layer of the star. Right now that's hydrogen throughout the core. When the sun is dying it will start to fuse other elements.
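
    A quick back-of-the-envelope check of how sharp that dependence is, assuming the power-law form above; the exponents are the commenter's ballpark figures, not precise stellar-model values:

      # rate scales as T**n; see what a 10% temperature increase does
      for n in (10, 20, 40):
          ratio = 1.10 ** n
          print(f"n = {n}: a 10% hotter core fuses ~{ratio:.0f}x faster")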

  • @mattcelder
    @mattcelder 7 years ago

    Other youtubers: "hey what is up my youtube family thanks for clicking on the video if you're not subscribed then please subscribe for more videos and leave a comment about what you thought and make sure to hit that like button! alright let's get started (begins content)"
    Robert: "Hi! (begins content)"
    I love it

  • @rockbore
    @rockbore 5 years ago

    Another side note.
    The dilemma of detonating the first nuke was used wonderfully in a plot from the Discworld series by Terry Pratchett.
    Can't remember the actual novel, sorry, but as a clue: they created our universe, including planet Earth, as a byproduct of their successful depletion of an excess of magic, if that helps.
    Also, the stratospheric tests were attempts to kill the magnetosphere.

  • @insidetrip101
    @insidetrip101 7 years ago

    I agree with you, but the thing is humanity has been thinking about intelligence for at least as long as we have been able to write, and likely longer. For thousands (probably tens of thousands) of years humans have been thinking about what it is about our minds that makes us different from other animals. This was done primarily by philosophers, but I think it's fair to include religious people as well.
    In all that time, we have come to know less about what makes our intelligence actually work (or at least we're less certain about how our intelligence works) than the first nuclear scientists knew about fission and fusion reactions. The funny thing is, nuclear physics is only about 100 years old.
    I'm certain that you're aware of this, but one major difference is that we had a foreseeable future where we could be relatively certain that nuclear arms wouldn't necessarily cause our destruction (to be fair, they still may). Unfortunately, given how complex intelligence is relative to nuclear physics, I don't think we'll have the patience to wait around until we're certain that general AI won't wipe us out somehow.
    I suspect that you probably disagree with me (since you clearly do research in AI), but we really need to just not fuck with general AI. I know that won't happen, but I really think it's a terrible idea given how little we know about intelligence. We're going to create something that we have no fucking clue about. It's really terrifying.

  • @adamkey1934
    @adamkey1934 7 years ago

    I wonder: if there were a traffic jam of self-driving cars (unlikely, I know, but let's say they were stuck behind some human drivers) and I drove my car straight at them, would they move aside to avoid a collision? It'd be like motorcycle filtering, with a car.

  • @badlydrawnturtle8484
    @badlydrawnturtle8484 3 years ago

    AI won't increase economic inequality. Economic inequality exists independently, and any increase in overall wealth will distribute disproportionately because of that. To link the two would be to insinuate that it's something about AI in particular that is causing the inequality, which is not the case at all.

  • @SenorZorros
    @SenorZorros 4 years ago

    Sigh... As a physics student, I find people really overestimate the dangers of nuclear energy. It's safer and cleaner than pretty much everything else available...

  • @remicaron3191
    @remicaron3191 4 years ago

    I like how 90% of the world losing their function in life is not too scary for people, but superintelligence is completely unthinkable to the same people because it would make them obsolete. Funny how one problem can be dealt with while the other is completely unthinkable. Shows you what the 1% really think of the rest of us. Talking to you, Elon!

  • @stilltoomanyhats
    @stilltoomanyhats 5 years ago

    Here's a link to the "Concrete Problems" paper: arxiv.org/abs/1606.06565
    (not that it took more than a few seconds to google it, but I might as well save everyone else the hassle)

  • @thomasbyfield5366
    @thomasbyfield5366 6 years ago

    I like the soothing and relaxing music after your Apocalyptic AI examples

  • @lukalot_
    @lukalot_ 2 years ago

    Your ending song choices are sometimes such satisfying wordplay that I just leave feeling happy.