i taught an AI to solve the trolley problem

  • Published 2 Jan 2025

COMMENTS • 12K

  • @answerinprogress  3 years ago  +8343

    I really hope you liked that video; it took a week and a day longer than usual because I was tired of ending videos by accepting flat-out failure. If you did, consider sharing our channel with a friend (it's the biggest way you can help us keep doing what we're doing). Also, we (secretly) announced something in our latest newsletter; you can check out the archive here: answerinprogress.com/newsletter

    • @Potoaster  3 years ago  +18

      aight

    • @F00Lsmack  3 years ago  +24

      You might be able to edit this project in a way that reflects the criticisms you received: the algorithm displays what it would pick and why (it spits out its valuation, e.g. "cats > three babies in a trench coat"), then people have the option to accept or alter the decision. Either way, they have to type their reason for doing so, or select from previous respondents' reasons, up to around 9 answers (like a jury). Then the machine learns over time based on the outcome, even if it doesn't understand the written justification.
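      A minimal Python sketch of the accept-or-override loop this comment describes — the model shows its valuation, a human may overrule it and must leave a reason, and weights are nudged from the outcome even though the justification text is never parsed. All names here (TrolleyModel, review, the entity labels) are invented for illustration; this is not the video's actual code.

      ```python
      # Hypothetical human-in-the-loop trolley model; names are invented.
      from collections import defaultdict

      class TrolleyModel:
          """Toy value model: scores each track as the sum of its entities' weights."""

          def __init__(self):
              self.weights = defaultdict(lambda: 1.0)  # neutral starting weight per entity type
              self.justifications = []                 # stored verbatim; the model never parses them

          def valuation(self, track):
              return sum(self.weights[entity] for entity in track)

          def decide(self, track_a, track_b):
              """Return the track to hit: the one with the lower total value."""
              return "A" if self.valuation(track_a) <= self.valuation(track_b) else "B"

          def learn_from_override(self, spared, hit, lr=0.1):
              # A human override implies the spared track was undervalued and the
              # hit track overvalued; nudge the weights in that direction.
              for entity in spared:
                  self.weights[entity] += lr
              for entity in hit:
                  self.weights[entity] = max(0.0, self.weights[entity] - lr)

      def review(model, track_a, track_b, human_choice, reason):
          """Show the machine's pick; if the human overrides it, record why and learn."""
          machine_choice = model.decide(track_a, track_b)
          model.justifications.append(reason)
          if human_choice != machine_choice:
              spared = track_b if human_choice == "A" else track_a
              hit = track_a if human_choice == "A" else track_b
              model.learn_from_override(spared, hit)
          return machine_choice
      ```

      With enough reviews the weights drift toward the jury's revealed preferences, which is exactly the "memeiest answer wins" failure mode the follow-up comment jokes about.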

    • @F00Lsmack  3 years ago  +11

      The memeiest answer might win though lol, Garfield > everyone cuz Mondays

    • @anthropomorphizedrock  3 years ago  +14

      “I have not failed once. I have succeeded in proving that those 10,000 ways will not work.” Love these! I learn and think way more about these vids

    • @epistoliere  3 years ago  +4

      I can't thank you enough for the videos you're making: they are interesting, educational, well-produced, and funny. And, honestly, I really like your approach: instead of accepting failure, you found a way to learn from it and tell us about it.

  • @ectior  3 years ago  +12331

    This reminds me of what my 9th grade programming teacher said: “The computer isn’t stupid, it will do exactly what you tell it to. The computer is only as stupid as you are.”

    • @glitterghost834  3 years ago  +256

      lmaoo i love this

    • @Dhips.  3 years ago  +52

      same

    • @Dong_Harvey  3 years ago  +205

      And the designer sometimes, but marketing is there to hide its flaws

    • @Dtrainabrams  3 years ago  +210

      This explains why my computer is stupid.

    • @LaCabraAsada  3 years ago  +24

      Harsh

  • @bensonprice4027  3 years ago  +3379

    I think the tagline for this show should be this,
    "I found out that what I am trying to do is harder than I expected."
    I swear I hear it on every episode.

    • @answerinprogress  3 years ago  +804

      cursed by my own hubris

    • @katbairwell  3 years ago  +68

      @@answerinprogress In fairness pretty much everything is harder than you first expect, once you really think about it, even something as mundane as what to have for dinner. Or maybe that's just me, perhaps I just find things difficult, all of the things.

    • @hopefulbloom  3 years ago  +7

      wait. i saw this comment as she said that-

    • @tails89us  3 years ago  +2

      Damn beat me to it😂

    • @Rafael-iq5su  3 years ago  +22

      @@answerinprogress "god has cursed me for my hubris and my work is never finished"

  • @tacoman10  3 years ago  +4762

    "Can you teach a robot to be a good person?"
    Step 1: Define "good"

    • @MP-ut6eb  3 years ago  +30

      👏

    • @maggiminer  3 years ago  +223

      @@rbxq define morals and ethics

    • @roboduck7401  3 years ago  +41

      @@maggiminer whatever I say it is.

    • @joyifu  3 years ago  +91

      @@CheeZBallz7 define define

    • @__dash__23  3 years ago  +42

      @@CheeZBallz7 how do you define define?

  • @stevieinselby  1 year ago  +27

    Two years on, ask the question about M*sk again and most people would pull the lever to switch the trolley onto the track with him on _even if there was no-one on the original track._

  • @dshopov  3 years ago  +13815

    Your AI is actually flawless. It realized that without humans there won't be any trolleys

    • @phirerising  3 years ago  +273

      This though

    • @wildfire9280  3 years ago  +190

      Even with humans some just don’t have public transport at all

    • @MissSweetie  3 years ago  +105

      OMG YOU'RE A GENIUS

    • @gearbreaker9645  3 years ago  +92

      It has achieved superintelligence

    • @peskypigeonx  3 years ago  +158

      @@wildfire9280 unless....
      *CAT-DRIVEN PUBLIC TRANSIT BABY*

  • @cerealgibbon7330  3 years ago  +2498

    "But what if Albert Einstein was on the other track?"
    "Isn't he already dead?"
    No no she's got a point

    • @theleviathan3902  3 years ago  +38

      Pick the cat

    • @medexamtoolscom  3 years ago  +103

      Well, actually, that would mean Einstein is a zombie and deserves different consideration than a regular human. For instance, would being hit by a train even kill a zombie Einstein? Is his life more valuable because he has no limited lifespan, so you'd be cutting short a 10,000-year life? Or less valuable because he goes around eating brains, and you'd be saving his prospective victims?

    • @An_Average_Arsonist  3 years ago  +3

      I can't believe I'm the first to mention the Kronk reference.

    • @vangildermichael1767  3 years ago  +10

      Who lives: Albert Einstein or Adolf Hitler? Who gave any one person the power to decide whether another person lives or dies? Because no one person can make that decision without regard to who the subject is, their answer is really just a question of "what is better for that person." Because they think they are something.

    • @3nertia  3 years ago  +2

      What if it were the equivalent of Einstein though? Just a human, maybe even poor, but someone whose ideas could contribute to the progression of mankind as a whole?

  • @Fade2GrayOG  3 years ago  +1416

    Creating an AI to solve the trolley problem is the most complicated way of saying "I wouldn't pull the lever" I can imagine.

    • @luiss428  3 years ago  +105

      Funny enough, by the end of the video she addresses exactly that: you can't say you didn't pull the lever if you built the machine for it.

  • @arrangemonk  1 year ago  +160

    The usual human solution to this kind of dilemma is panicking, doing something at random, screaming, and suffering from PTSD for the rest of their life

  • @halbarroyzanty2931  3 years ago  +6330

    It would be funny to have a typical "rogue ai destroys humanity" story except the ai just really likes cats

    • @TheSleepiestPlurals  3 years ago  +295

      you defeat it by reasoning that without humans, no one will be there to take care of the cats

    • @mcmonkey26  3 years ago  +70

      the ai destroys humanity by pushing them off the world and into orbit

    • @Gray0101  3 years ago  +19

      Someone needs to make a movie out of this

    • @gabrielbelouche3954  3 years ago  +94

      @@TheSleepiestPlurals The AI builds robots to take care of the cats.

    • @thenineteenth8526  3 years ago  +50

      @@TheSleepiestPlurals You forget that cats are the most successful hunters in the world. They can fend for themselves, no problem.

  • @sojirokaidoh2  3 years ago  +1483

    This video took a fitting turn, since "I don't pull the lever" tends to be about deflecting guilt rather than actually engaging with the fundamental messiness of ethics.

    • @randominternetguy3537  3 years ago  +91

      If you do pull the lever, you might be sued; if you don't, the chance is lower. You aren't responsible for failing to save people's lives, but you are responsible for ending one.

    • @sojirokaidoh2  3 years ago  +76

      @@randominternetguy3537 Getting sued or not has no bearing on whether or not an action is ethical. And what do you mean by responsible? Legally responsible?

    • @jadeamulet2339  3 years ago  +18

      @@randominternetguy3537 being sued isn’t immoral…

    • @randominternetguy3537  3 years ago  +64

      @@jadeamulet2339 yea, but it has a bearing on everyone's actions. If touching the lever meant saving 4 people, but you get 10 years in prison for manslaughter, I'd go with not touching anything.

    • @kennylee12313  3 years ago  +37

      @@sojirokaidoh2 It's more like... if you didn't do anything, the people who were fated to die end up dying.
      If you did something that resulted in the deaths of four other people, then you are responsible for their deaths, since they were originally not going to die.

  • @ChrisAkaMastermind  3 years ago  +761

    i love how all these videos start out like "ooh, i tried this fun thing with coding" and then slowly shift to "the problem of moral responsibility with autonomous decision making computer programs that control everyday processes with the power to change lives"

    • @eeeguba432  3 years ago  +13

      The female vsauce pretty much xd

  • @strawberrylemonadelioness  1 year ago  +119

    I never thought Garfield could create such an ethical dilemma.

    • @moleratical1232  6 months ago  +4

      Yeah, but they're gonna serve lasagna at the funeral, so, worth it.

  • @jamiewindsor  3 years ago  +7824

    Thank you, YouTube algorithm, for recommending this video to me.

  • @nothingtoit142  3 years ago  +1201

    The machine picking a cat every time does sound like people watching a horror movie, though. XD People generally do favor the animals being saved in horror movies, while people dying is expected. In some ways the robot is operating on human ethics, just not the way we usually consider the trolley problem.

    • @grasshopper1292  3 years ago  +73

      So true. I will watch a person get slashed to bits in a movie, but if a dog gets kicked, I'm devastated and uncomfortable. I think part of it comes from the fact that animal rights are still fairly recent history. There are films where animals were mistreated and I guess whilst you know for certain the actors didn't get hurt, the animals feel a little more real

    • @SocialLocust  3 years ago  +51

      Animals are completely innocent and we consider many of them to be more defenseless, so it makes sense that we would put them in a category next to human babies. Human adults made choices to get where they are and we reason that they had some sort of chance against the harm.

    • @jacobfreeman5444  3 years ago  +3

      Sentimentality isn't a legitimate basis for ethical concern.

    • @21_jadhav_rajendra84  3 years ago  +10

      Well, the AI doesn't have the guilt for pulling the lever that humans will have, no matter who dies or is saved.

  • @sorsocksfake  3 years ago  +4588

    Plot twist: the AI actually didn't want to save cats.
    It just liked killing humans more.

    • @lorddiavoloasayoutubepolic5185  3 years ago  +81

      Why am I now called AI-
      OH .

    • @asomeoneperson4608  3 years ago  +46

      Real-life GLaDos

    • @lorenzoh.17  3 years ago  +18

      I stopped the video when the machine killed the dog to check the comments: to me that was the wrong call, because dogs protect humans more than cats, which mostly relax and fart all day (though both have a positive psychological effect on humans). Dogs can find drugs, guide the blind, and detect illnesses or, say, a diabetic emergency. But in my head that decision contradicts the decisions the machine made before. Then again, if those good deeds stop humankind from evolving — using dogs in the war on drugs instead of reducing demand through education and treatment, or using guide dogs instead of developing better electronic helpers or teaching blind people to navigate by sound and its reflections, like bats — the calculus changes, and time is ticking for humankind. If humans spend their time badly (classic: Earth one day becomes uninhabitable), living creatures, humans included, become a lesser value over the total spacetime. So if humans treat all possible life, including themselves, badly, why shouldn't a machine decide to kill humans, Terminator-like? A bad politician would have a greater value killed, to improve humankind's worth to planet Earth and to stop the machine from killing humans. But the machine would then be in a dominating, god-like position, and since we are just human, we shouldn't decide to kill bad politicians by our own hand or through a machine we created. We have to hold on to imperfection to stay alive and find education for all. Education and time to create can change the structures of you and me, and in theory the whole universe, but in practice we always find a dominant problem still to be solved, which shows the non-godlike position of humankind. The machine can be smarter, but only time can show whether it can dominate all humankind without using brute physical force.
      Despite that: will she put Putin and Trump on two different tracks? Or Biden and Trump, with the machine trying to send a wagon down each track to have certainty on its side? Or has one of them, or both, done something valuable, so that the machine decides to shut down its own program to save its honour and preserve life, knowing it is only simulating something that has value in reality? This shouldn't feel bad for any politician, as I guess the machine would kill me as well, since I have littered this planet just with my presence without improving education or technology to preserve life. So let's go do something positive with our imperfection.

    • @lorenzoh.17  3 years ago  +2

      I am at 8:26 and must say, before the dog-and-cat round I scored full marks, understanding that humankind sucks a bit: an idiotic number of laws without using the mind nature gave us. The last one was an obvious choice, to protect the poor cat from the violators wanting to put a law into that machine to kill it. I've watched it all now. If Garfield comics are important to life, then the way Garfield dies would probably have to be more spectacular, to boost sales. Bad luck, I was on the other rail. No, it is not your fault, cute coding girl.

    • @katrubie3  3 years ago  +9

      Cats are notorious for a seemingly uncaring attitude. Our society (us humans) appears to have been hurtling in this direction with an over-inflated sense of self and less compassion or thought given to others. We expect more of human beings, but accept the cat at face value. Is the trolly opting to save the cat because of humans failing to live up to their own standards?

  • @gabrielkujawowicz2472  1 year ago  +38

    I asked my grandma this question and she instantly went on a rant about the immorality of creating this question, and said whoever made that question is sick

    • @girrrrrrr2  4 months ago

      Feels like she would pull the lever to run over more people.

    • @hamzamotara4304  4 months ago  +11

      Grandma playing 4d chess.

    • @brocky69  22 days ago  +1

      Classic deflection. "Don't even worry about what my answer is, it's a terrible question." Intellectually cowardly, haha

  • @zoey9077  3 years ago  +494

    This video just devolves into “I broke ethical guidelines for a funny video… guess what? Those ethical guidelines are constantly being broken by people with a lot more power than me. You should probably be afraid.”

  • @cwasone.  3 years ago  +12283

    Kill the entire human race or kill a cat?
    The AI: OoOoh… that’s a tough one

    • @somethingrandom8091  3 years ago  +796

      No it’s not I’m sure it would pick the cat

    • @aveen1968  3 years ago  +906

      @@somethingrandom8091 I mean one could argue that the cat is the ethical choice as humans are evil and destroying the planet **Terminator drums enter the chat**

    • @julis.6667  3 years ago  +467

      If all humans were dead, nobody would care.

    • @the_opossum_guy  3 years ago  +268

      @@aveen1968 yeah, but there’s a fair amount of man made things that would ruin parts of the planet if humans were to suddenly go extinct. Plus there would probably be a lot of fires due to the amount of hot stuff left running. We need a sec to cut that stuff off before suddenly dying. Not to say the human race deserves to live, just saying we made such a mess that if we were to suddenly disappear, it would probably hurt stuff more.

    • @kellim7618  3 years ago  +37

      What about dogs ☹️

  • @SwitchAndLever  3 years ago  +3399

    "Forget the past 16 minutes of your life" Done! 😄

  • @sergioballes  2 years ago  +333

    What nobody understands about this dilemma is that it doesn't matter who the people on the tracks are.
    The purest moral question here is: what is worse, letting 5 people die by omission, or killing 1 on purpose? In other words, is it better to dirty your hands, or to let life be, however atrocious it may seem?
    Giving character to the people on the tracks doesn't change the fact that you are knowingly killing a human being if you pull the lever, making you a murderer.
    This question asks whether "the greater good" is something real, instinctive, or innate, or just another social construct. Because that's the excuse we use to justify our acts, and worse, the excuse people in positions of power use when making certain decisions, especially in governments.
    Are we entitled to judge other people's lives if that means we have to choose who lives and who doesn't?
    So it doesn't matter if kids or Gandhi are on the track: do you have the guts to pull the lever?

    • @tartaletk  1 year ago

      Agree.
      I'm gonna pull the lever twice, to kill 5 people on purpose; idc who is who among them

    • @theYoutubeHandle  1 year ago

      The correct answer is: don't do anything. The person who tied those people to the track is the murderer here; you doing nothing is not at fault.
      The world doesn't need more saviours, it needs fewer wrongdoers, if we are strictly talking about who's right and who's wrong.
      If you accept wrongdoers as part of life, and you have to fight them, and you have to make a choice, then it gets complicated.
      But you are not solving the root issue here. If there were no wrongdoers, you wouldn't be in this situation in the first place.
      To be a good person, you just need to not do any wrong things. Don't hurt anybody; it's that simple. You don't need to save anyone.

    • @andreapatacchiola1184  1 year ago  +31

      Exactly! That's why my answer is usually pretty straightforward: I won't pull the lever. I don't have the guts. Don't want blood on my hands.

    • @syrena911  1 year ago  +11

      Yeah, I'm always pulling the lever to save the most people. Even if my mom, spouse, Einstein, was the "one" on the single person track. I hate Einstein anyway. 😂

    • @anatolymakarevich8260  1 year ago  +5

      @@andreapatacchiola1184 What if it's 100 to 1? 1,000 to 1?

  • @Minh-fo5fd  3 years ago  +971

    "Are you ethical?"
    "I try to be."
    Agreed.

    • @hero303-gameplayindonesia8  3 years ago  +1

      *E D G Y*
      Controversial Edit: I mean most people of this channel are Edgy tho, so i dont blame you

    • @Meewee466  3 years ago  +12

      @@hero303-gameplayindonesia8 what

    • @yukiandkanamekuran  3 years ago  +5

      I vibe with the person who said no

    • @t6homs  3 years ago  +1

      🗿

    • @reality-fl1sl  3 years ago  +3

      "are you ethical"
      Me: "no"

  • @vexed832  2 years ago  +1897

    “Today you and I are going to teach a machine to solve the trolley problem”
    I appreciate you making me feel involved despite the fact that I’m doing nothing productive

    • @ZachAttack6089  2 years ago  +78

      Gotta build up that parasocial relationship!

    • @keep-ukraine-free  2 years ago

      Neither is she.
      Her extreme attention-seeking is painful to watch - like a prancing peacock.

    • @superdave8248  2 years ago  +6

      I remember the short-lived reboot of Knight Rider. In it, "KITT" had to have all of his personality parts retrieved. At one point this partial KITT had control of the vehicle, was speeding as Knight Rider so often did back in the day, and almost ran over a deer. The driver asked the car why it didn't slow down. The car's response (paraphrased): "I calculated the impact from the deer and determined it would cause no damage." In other words, the car was weighing efficiency over other external factors.
      The same can be said about the trolley experiment. You are trying to make an AI understand that it needs to divert to a less efficient track simply because there are fewer obstructions in the way that won't impact its performance to begin with. You are asking an AI to understand that a representation of a life has value and needs to stay in play. The more appropriate approach would be to ask why the trolley can't slow down to avoid hitting the humans, versus switching tracks and hitting fewer humans. I can't help but wonder if the AI even understands the logic of its action. Instead it goes through a series of less logical responses until it achieves the response the controller is looking for. By the AI's own logic, it is probably getting dumber with each rendition of the experiment.

    • @PodiumTuningRacing  2 years ago  +18

      Your very presence invites challenge. You are in fact more involved than you believe... Without you, there is no audience, without an audience there is no reason for a video, without the video there is no awareness, without awareness there is no curiosity, without curiosity there is no purpose. Without purpose there is no progress, without progress there is death. Congratulations, you've saved the human race by watching.

    • @sakariaaltonen7428  2 years ago  +1

      Don't worry about that; you both did nothing productive. The title of the video suggests there is actually an interesting machine being built and tested, but there is very little of that.
      What there is, however, is standard boilerplate drivel about AI ethics, focused naturally on "societal issues," i.e. racism.

  • @SuperRoboPopoto  3 years ago  +747

    It's simple… you pull the lever to switch the track and then pull it back again as the trolley crosses the switch. This forces the front of the trolley to turn while the back end continues along the normal route, derailing the trolley and sending it careening into the crowd of onlookers who were too useless to help any of those people on the track.

    • @franciscohughes1757  3 years ago  +55

      The best solution lol

    • @synexiasaturnds727yearsago7  2 years ago  +25

      Hit them with the "gg 10 flawlessed"

    • @lazzie7495  2 years ago  +13

      I honestly answered the trolley problem this way: derail the trolley. The problem then is that the person giving the questions says this kills everyone.

    • @qxdeath13  2 years ago  +16

      What if you consider that there are passengers on the trolley?

    • @SpectroliteDS2400  2 years ago  +23

      yeah the other people have it all wrong, you should be going for the high score!

  • @zephodb  1 year ago  +146

    The trolley problem is basically psychological trolling where everyone tries to make others unsure of themselves, so they can call them monsters for making a decision, any decision. The trolley problem is a thought experiment built to force everyone into inaction.

    • @IncubiAkster  1 year ago  +10

      Nope, its whole point is to point out how ridiculous thought experiments are and how impractical and/or ludicrous they are in reality.

    • @premiumfruits3528  1 year ago  +9

      It's a thought experiment meant to provoke discussion on the morality of influencing a situation versus being apathetic, and whether there is an obligation one way or the other. It's not sinister or deep at all.

    • @zephodb  1 year ago  +1

      @@premiumfruits3528 That's one way of deciding to look at it, through rose-tinted glasses. I've taken philosophy courses, and I described how it ~is used~. It is used to make others look worse, to be able to call them monsters, or what have you. It is one of those things: a tool that serves its purpose in ideal circumstances, but nobody uses it for its actual purpose.

    • @StupidusMaximusTheFirst  1 year ago  +4

      I guess you can see it this way. It also asks whether you should intervene when it's none of your business to decide, whether you're a numbers person or not with regard to your ethics, and whether you would play God if you had the chance. And it's a really good one: it also shows that people with number-based ethics are the ones who would play God and intervene, as they always choose the same decision; they can't help it. I guess you could also argue that if your ethics are based on numbers, maybe you lack ethics. Inaction is an action in itself, btw.

    • @aribright2390  1 year ago  +3

      Inaction is an action.

  • @madmarbles  3 years ago  +798

    I always felt that the trolley problem was less of a choice between ethical decision-making, and more of a thought experiment for passive vs active choices. Because the decision is presented as a person choosing to pull a lever, even if it saves more lives, it seems like an active decision. Whereas, the decision that doesn't save more lives is not pulling the lever, which reads as doing nothing (or the passive choice). This allows the person being asked to choose to "do nothing" and take no blame for what happened, even though the decision was in their hands.

    • @suddenllybah  3 years ago  +20

      You could always restructure the problem to have the default be "kill all people on the tracks"
      It is however, somewhat harder to give a realistic design, and makes people aware that they might be able to use the switch to derail the trolley, which may or may not save more people, depending on if anyone is on the trolley.

    • @Nukestarmaster  3 years ago  +51

      @@suddenllybah If the default is "kill all the people on the tracks" there is no longer a dilemma, then you are saving 5 people with one person who dies regardless of your choice.

    • @andy6877  3 years ago  +22

      Yeah, that's a much more accurate representation of how the problem works in practice. Otherwise we could just hang all the people involved from the sticks and ask people to choose which we drop!

    • @ruubzraubzen8784  3 years ago  +45

      You're right. The problem is about whether we have the right to pull the lever, not about which is the better outcome. The "but who would we kill" question is purely asked by people who are not interested in the philosophy of the original problem, since it assumes we have that right in the first place.

    • @divabhardwaj6381  3 years ago  +14

      *ahem* Indecision is a decision! You have the power to save a life, whether you "have the right" or not, and it's not like you're risking your life or it's hard to do; it's a lever, which we can assume we're strong enough to pull. You still killed five people through your indecision.

  • @WhiteNorth  3 years ago  +612

    So you’re telling me in order to survive I just need to always have a cat on me

    • @jjbarajas5341  3 years ago  +11

      Good luck with that one

    • @akisekar1795  3 years ago  +30

      In Egypt there was a battle in which the Egyptians gave in and stopped attacking because the other side was holding up cats. So yeah, if you live in ancient Egypt.

    • @kayda5323  3 years ago  +6

      @@akisekar1795 in Egypt they worshipped cats (I think it was Egypt)

    • @JohnSmith-uu5ov  3 years ago  +1

      Many have gone that route before, you wouldn't be the only one

    • @holygrass6278  3 years ago  +3

      Rip people who are allergic to cats

  • @jenniferparke7699  3 years ago  +1839

    AIP:“Can you teach a computer morality?”
    Me: “SHOULD you TRY to teach a computer morality?”

    • @empressofhearts7300  3 years ago  +9

      this doe

    • @rosewinter4818  3 years ago  +40

      we learned from ultron that you should not

    • @Tony-xl3mv  3 years ago  +61

      “Your people were so busy thinking about whether or not they could that they didn't stop to ask if they should”

    • @nemogd7991  2 years ago  +5

      @@rosewinter4818 you learned from FICTION that you should not

    • @rosewinter4818  2 years ago  +10

      @@nemogd7991 it was a joke, babe

  • @XhanAnimations  1 year ago  +87

    As a cat person, I can't fault the machine at all 😂

  • @sabyasachighosh9940  3 years ago  +4310

    AI 🤝 Cats
    Biding their time to make humans a subservient species.

    • @YOEL_44  3 years ago  +113

      Both are smarter and know how to take advantage of us mere humans, I could see the logic behind it

    • @MisterFusion113  3 years ago  +20

      Hoomans are done for. 😹

    • @gateauxq4604  3 years ago  +3

      Accurate

    • @BeniTheTesseract  3 years ago  +7

      @@YOEL_44 Idk where you got the idea that cats are smarter than humans... they're smart animals, but humans are the most intelligent animals in the world atm (second is probably dolphins)

    • @YOEL_44  3 years ago  +31

      @@BeniTheTesseract ...

  • @CaffeineAndMylanta  3 years ago  +6155

    “This machine killed 10 humans to save a cat!”
    Me: the system is functioning as designed and I detect no errors.

    • @claudioalencarzendo6791
      @claudioalencarzendo6791 3 роки тому +115

      Do you remember what the cat did? What if you were one of the people?

    • @ThePrimevalVoid
      @ThePrimevalVoid 3 роки тому +384

      @@claudioalencarzendo6791 then it's definitely functioning as designed

    • @Hans-gb4mv
      @Hans-gb4mv 3 роки тому +237

      @@claudioalencarzendo6791 It's not because the cat is under investigation by the FBI that it is actually guilty. We still work under the principle of innocent until proven otherwise in a court of law.

    • @Plasmy271
      @Plasmy271 3 роки тому +50

      @@Hans-gb4mv not on the internet (ahem, Twitter)

    • @claudioalencarzendo6791
      @claudioalencarzendo6791 3 роки тому +15

      @@Hans-gb4mv Sus

  • @MTG_Scribe
    @MTG_Scribe 3 роки тому +1891

    There's actually a board game called Trial by Trolley that is based on this, and I think it's super fun. In case someone reads this, I highly recommend it.

    • @MrChipMC
      @MrChipMC 3 роки тому +55

      Cyanide and Happiness?

    • @jamesehlenfeldt7132
      @jamesehlenfeldt7132 3 роки тому +14

      @@MrChipMC indeed

    • @jaggerzite7208
      @jaggerzite7208 3 роки тому +14

      i have it but not enough friends to play it with

    • @khushi-bg8fk
      @khushi-bg8fk 3 роки тому +6

      @@jaggerzite7208 Let's play it

    • @OatmealDonk
      @OatmealDonk 3 роки тому +24

      @@jaggerzite7208 same here! Was even one of the Kickstarter backers but I didn't consider the fact I have no friends...

  • @cberge8
    @cberge8 Рік тому +62

    I think the biggest question I would ask myself if put in this situation would be "who tied up these people and put them on the tracks and why did they do it?"

  • @Max-ni7mx
    @Max-ni7mx 3 роки тому +564

    C'mon guys, we can't blame her, it's obvious a cat got onto her code when she went to the bathroom

    • @WastedTalent83
      @WastedTalent83 3 роки тому +3

      i see it as pretty logical to save an animal's life over people's ... the animal has never committed a crime or hurt anybody, ask the humans though XD

    • @sephandremanticore5438
      @sephandremanticore5438 3 роки тому

      cats are great. they are stupid but smart. cute but annoying. quiet but loud. needy but avoidant.
      Anyone would quickly swerve their car into a tree to avoid hitting a cat. Yet children are killed in hit and runs every day. 😏

  • @AnastasiaB44
    @AnastasiaB44 3 роки тому +875

    “I’ve treated machines like they’re a replacement of me, rather than an extension.” Yoooo this is some important shit

    • @Dimension2364
      @Dimension2364 3 роки тому +6

      Yes yes yes! Can I somehow leave three likes to your comment? 🧡🧡🧡

    • @kenzieking2546
      @kenzieking2546 3 роки тому +3

      I literally read this comment at the same exact time as she said it. I’m astonished.

    • @iantaakalla8180
      @iantaakalla8180 3 роки тому +4

      We will have to collectively realize that we program the values we want an AI to have before we realistically make an AI that can make significant decisions

  • @maxmustermann4710
    @maxmustermann4710 3 роки тому +881

    "So, which way should the train go?"
    "This way."
    Kills all people at once.

    • @teknoghost5654
      @teknoghost5654 3 роки тому +37

      Multi track drifting!

    • @tudornaconecinii3609
      @tudornaconecinii3609 3 роки тому +10

      An unironic argument can be made that the Kantian approach approves of that. (in other words, that in the condensed version of the trolley problem the ethical approach is to not pull the lever)

  • @matt45540
    @matt45540 2 роки тому +58

    The Good Place is by far my favorite visual representation of the trolley problem. Great concept

    • @matt45540
      @matt45540 2 роки тому +5

      Good Place, the show

    • @samiam619
      @samiam619 Рік тому

      It was a good show. But by the end I was SO sick (and tired) of the black guy being so wishy-washy. I know it was just a character, but Jesus, grow a pair and make a decision!

    • @amystern123
      @amystern123 Рік тому +9

      @@samiam619 But that was the whole point. Each character had a tragic character flaw they needed each other's help to grow through. That was his.
      It was like a Shakespearean tragedy, except they got long enough and enough second (third, fourth, 100th, …) chances that they could win.

    • @samiam619
      @samiam619 Рік тому +1

      @@joseph5900 Sorry, I forgot the character’s name. It’s been a year or so since I saw it last. Plus I’m old…

    • @ptorq
      @ptorq Рік тому +4

      Well, obviously the dilemma is clear: how do you kill all six people?

  • @starcycle4308
    @starcycle4308 3 роки тому +7663

    "Something went wrong. My machine values cats over human lives."
    Me: I don't see an issue here.

  • @Apo0
    @Apo0 3 роки тому +1692

    As a cat person, I love that the computer chose to save the cats over and over and over again. I'm also dying from laughing.

    • @Nepetita69696
      @Nepetita69696 3 роки тому +19

      @Heberth R. ?????

    • @NamekFreakazoid
      @NamekFreakazoid 3 роки тому +31

      @@Nepetita69696 you're as smart as a brick

    • @tigerspruce8580
      @tigerspruce8580 3 роки тому +40

      Cat person as well didn't stop me from looking at my cat and saying "I'm sorry Cynda, but if I was on a train track with you on the other track the trolley would hit me and I'd be fine with that." (Maybe I can be on the same track as my cat lmfao)

    • @abandonedaccount123
      @abandonedaccount123 3 роки тому +8

      i love cats :)

    • @cincocats320
      @cincocats320 3 роки тому +15

      There are at most nine people whose lives I would prioritize over my cats...so yeah makes total sense to me.

  • @ratticustheratman4747
    @ratticustheratman4747 3 роки тому +1467

    I’d like to think rapidly flipping the switch is the answer. In hopes that it launches the trolley off track or serves as a random selection.

    • @WastedTalent83
      @WastedTalent83 3 роки тому +159

      ahhahaha, that's just because you're too afraid of the consequences to make a decision. it's normal human nature.
      I'd kill based on logic, and blame only the situation that put me in the wrong place at the wrong time.
      In life we have shitty tough decisions to make from time to time, and there is no escape.

    • @liancao7163
      @liancao7163 3 роки тому +81

      double track drifting

    • @AmanomiyaJun
      @AmanomiyaJun 3 роки тому +82

      @@WastedTalent83 Third choices exist though. You just have to find a way to reach it.

    • @WastedTalent83
      @WastedTalent83 3 роки тому +45

      @@AmanomiyaJun true, but it wouldn't be an answer to the trolley problem. The problem is about "making a forceful choice", not finding the best solution to fix it XD

    • @wolfy297
      @wolfy297 3 роки тому +17

      What if there's people on the trolley and it going off track kills them?

  • @Ironguy1212
    @Ironguy1212 Рік тому +3

    Bro I love your videos, your humor is on point and it's clear that you put a lot of work and love into them

  • @Will-yy7cg
    @Will-yy7cg 3 роки тому +173

    Sabrina's coding centric videos always remind me of The Programmers’ Credo: "we do these things not because they are easy, but because we thought they were going to be easy."

  • @iamjacksfakeband
    @iamjacksfakeband 3 роки тому +1852

    My intro psychology class had AI grade our papers. I got better at learning the keywords it liked by the end of the semester

    • @nlatimer
      @nlatimer 3 роки тому +124

      Was utilizing that knowledge ethical?

    • @olive6419
      @olive6419 3 роки тому +419

      @@nlatimer nope but an A is an A

    • @YELLERHEAD
      @YELLERHEAD 3 роки тому +263

      Sounds like the AI trained you

    • @ChaosPod
      @ChaosPod 3 роки тому +55

      @@nlatimer Well it'll be utilising the psychology of the AI program.

    • @gavinjenkins899
      @gavinjenkins899 3 роки тому +120

      My Essay: "William James reward punishment reinforcement lever reinforcement neuron axon hypothesis." Thank you for your time. Submit.

  • @auravitae6063
    @auravitae6063 3 роки тому +1065

    "Do you consider yourself an ethical person?" "No." That exchange lol

  • @stevieinselby
    @stevieinselby Рік тому +3

    11:30 - an interesting bit of background to what went wrong with the UK's algorithm for A levels.
    The (correct) assumption was that teacher assessment would give higher average grades than exams, so the algorithm was written to look at a school's track record of results, and that cohort's performance in GCSEs 2 years earlier, and scale the results accordingly to bring them into line with the expected results. *But* the brains behind it also figured that if you had a very small cohort then this method wouldn't be reliable, and so no scaling factor was applied when fewer than 5 students in a school were taking a particular qualification. And this was where it all went shit-shaped, because it turns out that schools and colleges in wealthy areas (and especially fee-paying schools) are much more likely to offer those kind of niche qualifications and run them for very small classes, whereas schools and colleges in more deprived areas generally don't. So rich kids had fewer of their grades subjected to the algorithm's scaling factor, whereas poor kids were likely to have all of their grades algorithmed.
    The algorithm was behaving exactly as designed, but apparently no-one had considered the implications of the way it was designed (or at least, had considered them but didn't care about them).
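
    The small-cohort loophole described above can be sketched in a few lines of Python. Everything numeric here is an illustrative assumption (the grade scale, the shift-to-historical-mean rule, the threshold of 5), not the actual Ofqual model:

    ```python
    # Illustrative sketch of the small-cohort exemption described above.
    # NOT the real Ofqual algorithm; threshold and scaling rule are assumptions.

    def moderate_grades(teacher_grades, school_historical_mean, min_cohort=5):
        """Shift a cohort's teacher-assessed grades so their average matches
        the school's historical mean, unless the cohort is too small to moderate."""
        if len(teacher_grades) < min_cohort:
            # Small cohorts (common at fee-paying schools) skip moderation:
            # optimistic teacher assessments pass through untouched.
            return list(teacher_grades)
        cohort_mean = sum(teacher_grades) / len(teacher_grades)
        shift = school_historical_mean - cohort_mean
        return [g + shift for g in teacher_grades]

    # A 3-student class keeps its optimistic grades unchanged...
    assert moderate_grades([9, 9, 8], school_historical_mean=6.0) == [9, 9, 8]
    # ...while a 6-student class with the same grades is pulled down to the
    # school's historical average.
    moderated = moderate_grades([9, 9, 8, 9, 9, 8], school_historical_mean=6.0)
    ```

    Run on identical teacher assessments, the only difference between the two calls is class size, which is exactly the asymmetry that ended up tracking wealth rather than ability.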

  • @avairejustdesserts9921
    @avairejustdesserts9921 3 роки тому +2600

    One cat leaves a much smaller carbon footprint than a bunch of people. The AI is operating in a utilitarian mindset and doing it perfectly

    • @zapper333
      @zapper333 2 роки тому +72

      BUT the cat goes on to murder all the smart humans and earth falls into anarchy

    • @abuhanifahhidayatullah9160
      @abuhanifahhidayatullah9160 2 роки тому +112

      @@zapper333 Earth has always been in anarchy, animals kill each other all the time, we are the weird ones

    • @zapper333
      @zapper333 2 роки тому +24

      @@abuhanifahhidayatullah9160 ...but fires and craters with nuclear fallout isn't normal, is it?

    • @abuhanifahhidayatullah9160
      @abuhanifahhidayatullah9160 2 роки тому +27

      @@zapper333 Everything about humans is just so weird if you view it from a global viewpoint. Then again, we do have the biggest brains.

    • @theonewhodanceswiththepenguins
      @theonewhodanceswiththepenguins 2 роки тому +7

      @@zapper333 I see this as an absolute win

  • @Strav24
    @Strav24 3 роки тому +718

    Alternate title: "I taught an ai to save the cats"

    • @brokeandtired
      @brokeandtired 3 роки тому +18

      She didn't make a mistake...She simply forgot to keep her cat away from the computer.

    • @cliffsofdover1
      @cliffsofdover1 3 роки тому +3

      Clickbaitttt

    • @nicofreako4228
      @nicofreako4228 3 роки тому +1

      Ok, now, the cats *movie*

    • @runed0s86
      @runed0s86 3 роки тому

      Nya~

    • @nicofreako4228
      @nicofreako4228 3 роки тому

      @@runed0s86 let's not go that path

  • @nathanbeals4583
    @nathanbeals4583 3 роки тому +753

    I’m going to go out on a limb and say that the algorithm prioritized named entities over random pedestrians. This led to the algorithm prioritizing Garfield the cat. And I’m guessing since the algorithm prioritized Garfield, this led to the algorithm prioritizing anything resembling him, e.g. all of those cats. Idk, just my little nerd theory. I love to think about how different algorithms work. It’s fun.

    • @PuppyLove2468
      @PuppyLove2468 3 роки тому +19

      that makes sense

    • @158-i6z
      @158-i6z 3 роки тому +1

      Yeah. Something with value has value and something without value doesn't have value.
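
    The theory above can be sketched as a toy bag-of-words scorer. Everything here is hypothetical (the training sentences, the labels, and the averaging rule); it only illustrates how favouring a named character like Garfield in the training data could leak a high value onto the bare word "cat":

    ```python
    # Toy illustration of the "named entity" theory: a bag-of-words model
    # trained on scenarios where Garfield was favoured ends up assigning a
    # high value to the word "cat" itself. All weights/labels are made up.

    TRAINING = [
        ("save garfield the cat", 1.0),   # crowdsourced answers favoured Garfield
        ("save five pedestrians", 0.4),
        ("save a cat", 0.9),
        ("save ten people", 0.5),
    ]

    def train(examples):
        """Give each word the average label of the examples it appears in."""
        totals, counts = {}, {}
        for text, label in examples:
            for word in text.split():
                totals[word] = totals.get(word, 0.0) + label
                counts[word] = counts.get(word, 0) + 1
        return {w: totals[w] / counts[w] for w in totals}

    def score(weights, text):
        """Score a new scenario as the mean weight of its words."""
        words = text.split()
        return sum(weights.get(w, 0.0) for w in words) / len(words)

    weights = train(TRAINING)
    # "cat" inherited a high weight from the Garfield-flavoured examples,
    # so an unnamed cat now outranks a crowd of people.
    assert score(weights, "a cat") > score(weights, "ten people")
    ```

    Under these made-up numbers, "cat" averages to 0.95 while "people" sits at 0.5, so any cat scenario wins, which is exactly the generalization the comment above guesses at.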

  • @MaoMaster69
    @MaoMaster69 Рік тому +2

    That dilemma of "a child is on one track, you can pull the lever and redirect it to their mother" sounds like a really powerful high concept for a movie or a show.

  • @westganton
    @westganton 3 роки тому +947

    I love how we don't even take the time to understand morality before building it into our systems

    • @WastedTalent83
      @WastedTalent83 3 роки тому +38

      morality follows logic, it's a fact; humans MOSTLY are not logical beings.. the mass is composed of idiots, sadly.
      Meaning we can never grasp the concept of morality as a whole, but only in single individuals OR in very small groups.

    • @unluckyomens370
      @unluckyomens370 3 роки тому +5

      @PP - 12ZZ 653663 Turner Fenton SS the trolley problem has an obvious solution: don't pull the lever and walk away so no one knows

    • @Uriolu
      @Uriolu 3 роки тому +9

      @@unluckyomens370 Inactivity is one of the possible answers, but it doesn't sit well with me (not judging anyone who chooses not to act; after all, the point of a dilemma is not having an obvious good answer, to make people form their own opinion). I had the opportunity to change it, and denying it only because "it was going to happen that way" is something I would regret. I'll personally pull the lever to kill only one instead of many, if we don't know anything about the individuals.

    • @martenkahr3365
      @martenkahr3365 3 роки тому +11

      @@Uriolu Yeah, that's the whole reason the trolley problem persists in philosophy: there is no definitive objectively "correct" answer. In an academic setting, where you get graded for "solving" this problem, you get graded for how well your moral argument is built for the choice you do make, not for which of the two results you pick. (As long as your professors aren't moral zealots who actually think their personal answer is the "correct" answer and everyone who disagrees with them is wrong). Neither option is objectively more moral than the other, it all flows from your personal, subjective moral framework.
      Personally, I would attempt to switch just as the train is crossing the threshold in an attempt to derail it before it kills anyone. Even if the attempt is doomed to failure and guaranteed to kill everyone, I cannot know that for certain in the moment, and I think that the attempt to save everyone is the morally correct choice over both of the other options (letting people die due to inaction vs directly killing a single person to save an arbitrarily larger number).

    • @Kohanman
      @Kohanman 3 роки тому +7

      I mean that's a fine outside-the-box jokey answer in a conversational setting but in terms of academic philosophy you've simply failed to engage with the thought experiment.
      Any philosopher discussing this would simply restate the scenario. For instance, now the trolley is designed such that derailing will kill not only the occupants but also everyone on both tracks, and is also designed in such a way that if no action is taken to switch it to either the left- or right-hand tracks it will similarly derail, killing everyone. Thus forcing you to engage with the thought experiment as originally intended - as an analogy and foundational scenario for any and all moral decisions.
      By engaging with it how it was intended you can get the results it's testing for - like all thought experiments, what it is testing are your thoughts. In this case, what your thoughts are on culpability and responsibility, as well as the moral beliefs that underpin your decision making.

  • @Edgee_yy
    @Edgee_yy 3 роки тому +2313

    “What’s your answer to the trolly problem”
    Tell the driver to stop; if they don't, sue them for murder.

    • @macaemeia146
      @macaemeia146 3 роки тому +127

      i never thought about that
      but it's the best option
      so thank you

    • @Rita_Arya
      @Rita_Arya 3 роки тому +77

      But what if it was a trial run where they lost control, or the trolley just started running on its own (due to technical issues) and the only thing you can do is switch the lever to reach a dead end, and incidentally there are people tied to the tracks? Or even if there is a driver and they lost control of it and can't do anything about it but let fate do the work?

    • @Edgee_yy
      @Edgee_yy 3 роки тому +197

      @@Rita_Arya Sue the company for machine failure that cost the lives/life of a person.

    • @Rita_Arya
      @Rita_Arya 3 роки тому +42

      @@Edgee_yy even if the company admits their fault and pays the compensation, that still wouldn't indemnify the loss of human(s)

    • @Rita_Arya
      @Rita_Arya 3 роки тому +36

      @M3l0nii3 thanks!!!!

  • @RealJaller007
    @RealJaller007 3 роки тому +840

    I believe the correct answer to the question is “multitrack drifting.”

    • @erc778
      @erc778 3 роки тому +32

      “precision airstrike ready”

    • @monad_tcp
      @monad_tcp 3 роки тому +17

      that's the only correct answer

    • @-GG-
      @-GG- 3 роки тому +20

      The real answer is " Nothing " the trolley isn't moving...

    • @RealGreenrh
      @RealGreenrh 3 роки тому +14

      @@-GG- logically, yeah. It says nothing about whether it's moving or not

    • @enderkiwi
      @enderkiwi 3 роки тому

      Yes

  • @GingaBeater
    @GingaBeater Рік тому +3

    The editing is AMAZING

  • @Corneax
    @Corneax 3 роки тому +401

    Alright, would you save a cat or everything else that is alive-
    AI: The cat. Save the cat.

    • @bimisikocheng
      @bimisikocheng 3 роки тому +9

      Internet wouldn't exist without cats

    • @Fragens
      @Fragens 3 роки тому +4

      @@bimisikocheng yeah there's more cats than just 1

    • @bimisikocheng
      @bimisikocheng 3 роки тому +2

      @@Fragens fair enough

    • @cardemis7637
      @cardemis7637 3 роки тому +2

      The ai is correct!

    • @CheesyLizzy
      @CheesyLizzy 3 роки тому +1

      But wouldn't that kill all other cats as well?

  • @liriodendronlasianthus
    @liriodendronlasianthus 3 роки тому +1130

    Wow this went from:
    "Haha AI loves cats"
    to:
    "Oh shit AI is biased because humans are biased"

    • @fayazuddinshah
      @fayazuddinshah 3 роки тому +4

      True

    • @michaelc.r.6416
      @michaelc.r.6416 3 роки тому +72

      Maybe it's because I'm studying this (programming, not ethics problems), but that was my first thought:
      "Won't the AI be biased because you need to put in some parameters before it can start picking one choice whenever you run it?"

    • @Suthriel
      @Suthriel 3 роки тому +29

      @@michaelc.r.6416 Then again, won't it always be biased, just because it has to follow the rules that we deem social/good? Isn't the true problem that every person asked probably has a different opinion about what is social, prioritizing certain things over others, but most social persons still prioritize similar (cute) things, like cats or babies/kids?
      A cold-hearted and purely logical approach might be better sometimes, even if it hurts feelings. Like, there are 3 people on the main track of the trolley, and one on the side track. Ignoring the numbers, you can also approach it in a different way: there are people, for whatever reason, in a danger zone (the main track of the trolley). Then there are people on the side track, which however is currently a safe zone, so those people are out of danger. Why should I put people that are already in the safe zone (out of danger) into a danger zone and then kill them? Why should we prioritize some people, isn't that against equality?
      And why does this trolley have no working emergency brakes in the first place? ^.^ How was such a trolley allowed to leave the station? ;)

    • @NotAGraveRobber
      @NotAGraveRobber 3 роки тому +9

      A lot of programs are biased, there are whole fields of study dedicated to that, so backward success here?

    • @gabemerritt3139
      @gabemerritt3139 3 роки тому +9

      Tbh is the student grade example even a problem in the robot? When humans grade I would expect higher income students to do better, simply because they tend to have many more resources available to them. If it wasn't a higher percentage than other years it doesn't seem to be an AI problem, but a societal one.

  • @EightyFourThousands84000s
    @EightyFourThousands84000s 3 роки тому +739

    AI: "I LOVE CATS. I LOVE EVERY KIND OF CAT. I JUST WANT TO HUG ALL OF THEM, BUT I CAN'T-- CAN'T HUG EVERY CAT."

  • @MiningNatureYT
    @MiningNatureYT Рік тому +1

    Seeing Ryder show up at 3:07 genuinely made me smile because why wouldn't I, Ryder's amazing

  • @SciFiPieGuy
    @SciFiPieGuy 3 роки тому +564

    2:52 Pick your fighter:
    a. Some Nerd
    b. Speaks with Hands (Derogatory)
    c. Power Posing in a Spinny Chair
    d. A Scandalized Woman
    e. Clearly Cut Her Own Hair During the Panini
    f. Substitute Teacher Trying Hard to be Fun
    g. Crab in a Human Disguise
    h. Acting Way Too Chill About This One tbh

  • @GS-el8ll
    @GS-el8ll 3 роки тому +451

    making people and companies responsible and not scapegoating AI sounds like an excellent standard to maintain

    • @angeldude101
      @angeldude101 3 роки тому +22

      I remember the responsibility for an accident caused by an AI being the main thing in the way of them becoming legal, but that just made no sense to me since obviously the people who programmed and trained the AI should be responsible. In what way would they ever not be?

    • @bloxxerhunt1566
      @bloxxerhunt1566 3 роки тому +3

      Depends on the AI. If it's a man-made algorithm I'd agree. Something like a neural net though is a different creature entirely.

    • @Envy_May
      @Envy_May 3 роки тому +8

      @@bloxxerhunt1566 i mean they should still need to take responsibility for creating it though right ?

    • @joelmacha2104
      @joelmacha2104 2 роки тому +1

      I see that argument, but the more advanced AI gets, the less direct control the creators have.

    • @robbievanwijk2512
      @robbievanwijk2512 2 роки тому

      @@angeldude101 Would parents be responsible for what their children do? (if they're adult)

  • @jadeoreo
    @jadeoreo 3 роки тому +429

    "my ai would choose to save a cat over you" 9 lives are more important than 1

    • @Felipemelazzi
      @Felipemelazzi 3 роки тому +2

      Hahahahhahahahha
      Thank you 😂

    • @rachelcookie321
      @rachelcookie321 3 роки тому +7

      But if you hit the cat you only kill it once and it still has eight lives.

    • @ithomas7788
      @ithomas7788 3 роки тому +2

      @MAGAT slayer I like your way of thinking friend LOL

    • @Pixal_Dragon
      @Pixal_Dragon 3 роки тому

      Bruh I don’t even have 1

    • @chlorobyte_projects
      @chlorobyte_projects 3 роки тому +2

      @@rachelcookie321 Exactly, so it's not even worth hitting the cat. You get an actual result by hitting the humans.

  • @GiraffeBoy
    @GiraffeBoy 2 роки тому +8

    6:22 "I've done a bad job!" 👎 🤣

  • @dnbuhat
    @dnbuhat 3 роки тому +259

    "I've treated machines like they're a replacement of me, rather than an extension. It's easy, it's tempting, but it's misguided." I like this phrase

    • @3nertia
      @3nertia 3 роки тому +8

      The *importance* of this CANNOT be overstated!

    • @TheReaverOfDarkness
      @TheReaverOfDarkness 3 роки тому +4

      "I taught the machine to think like me. I did it so that I wouldn't have to think. I did it while not thinking."

  • @theowlyone
    @theowlyone 3 роки тому +586

    Also I'm loving the "power posing in a spinny chair" "clearly cut their own hair during a panini" intro subtitles

  • @pebble710
    @pebble710 3 роки тому +869

    "Create an orphan!" is my new favorite quote.

    • @duccline
      @duccline 3 роки тому +5

      LMAO

    • @epistoliere
      @epistoliere 3 роки тому +20

      Count Olaf liked this

    • @michaelcheng4985
      @michaelcheng4985 3 роки тому +16

      interesting how we just assume the father is gone too... ://///

    • @vmp916
      @vmp916 3 роки тому +1

      C programmers be like

    • @hopefulbloom
      @hopefulbloom 3 роки тому +1

      😂

  • @Tchaikovskythegreat
    @Tchaikovskythegreat Рік тому +1

    I like the version of the trolley problem where you can be held accountable for your actions and things happen after you pull the lever, because otherwise it’s just a question of “who would you rather kill”

  • @sarajohnsson4979
    @sarajohnsson4979 3 роки тому +387

    A computer can never be held accountable, therefore a computer must never make a management decision
    -some IBM training documents from 1979, apparently

    • @vigilantcosmicpenguin8721
      @vigilantcosmicpenguin8721 3 роки тому +102

      A computer can never be held accountable, therefore a computer must make the management decisions we don't want to be held accountable for
      -a somehow acceptable argument in 2021, apparently

    • @a-s-greig
      @a-s-greig 3 роки тому

      A computer can never be held accountable
      -Bert from Accounting Who Doesn't Like to Hold Computers

  • @Ampersanderp
    @Ampersanderp 3 роки тому +407

    I was initially horrified by the premise of this video, but by the end you did a really great job of starting the discussion on the importance of not outsourcing ethical decision making.

    • @Gr3nadgr3gory
      @Gr3nadgr3gory 3 роки тому +1

      To be fair I would do just as good a job.

  • @oliviercote7794
    @oliviercote7794 3 роки тому +1812

    She is basically Michael Reeves, but she unlocks the Good Ending

    • @jaybg1972
      @jaybg1972 3 роки тому +35

      More crazy is needed but very close

    • @williamvalorious4403
      @williamvalorious4403 3 роки тому +51

      So like Michael's reverse in terms of morality

    • @ZaychikSN
      @ZaychikSN 3 роки тому +43

      Yes Michael Reeves but good and less taser

    • @oliviercote7794
      @oliviercote7794 3 роки тому +31

      @@ZaychikSN and way less crack

    • @ZaychikSN
      @ZaychikSN 3 роки тому +5

      @@oliviercote7794 Agreed

  • @HarpaAI
    @HarpaAI Рік тому

    🎯 Key Takeaways for quick navigation:
    00:00 🤖 *Introduction to the Trolley Problem and the goal of teaching a machine to solve it.*
    - Introducing the Trolley Problem and its ethical dilemma.
    - The desire to create a machine capable of solving the Trolley Problem.
    00:29 🚃 *What is the Trolley Problem?*
    - Explanation of the Trolley Problem thought experiment.
    - Variations and complexities of the Trolley Problem.
    - Frustration with the ongoing debates about the problem.
    01:26 🤔 *Approaches to Teaching the Machine*
    - Researching different approaches to teach a machine ethical decision-making.
    - The idea of crowd-sourcing morality to train the machine.
    - Collecting Trolley Problem scenarios and ethical judgments from people.
    03:23 👫 *Ethical People's Perspectives*
    - Interviews with people to gather their ethical perspectives.
    - Ethical considerations and decisions in various Trolley Problem scenarios.
    - Preparing a dataset of ethical judgments.
    04:55 🤖 *Machine's Ethical Approach*
    - Exploring the two main philosophical approaches: deontological and utilitarian.
    - Designing the machine to make utilitarian decisions based on outcomes.
    - Using the collected dataset for training the machine.
    06:27 🤯 *Unexpected Outcomes*
    - Introduction to the AI trolley problem game show.
    - Revealing unexpected and questionable decisions made by the machine.
    - Realization that something went wrong with the approach.
    08:29 🔍 *Seeking Expert Opinions*
    - The decision to consult experts in AI ethics.
    - Insights from Dr. Tom Williams on the evolution of AI ethics.
    - The importance of considering fairness, accountability, and transparency.
    10:55 🌐 *Thinking Beyond Individual Decisions*
    - Expanding the perspective to a system-level approach.
    - Examining how technology fits into society and its potential consequences.
    - The role of AI creators in the ethical implications of their technology.
    13:21 🤔 *Reflecting on the Responsibility*
    - The question of whether machines can be taught to be "good" persons.
    - Realizing that technology is not just an algorithm but a product of people.
    - Acknowledging the potential for technology to have both positive and negative impacts on society.
    15:44 ⚖️ *Accountability and Responsibility*
    - The need to consider accountability when AI systems go wrong.
    - Examples of real-world consequences of AI and algorithmic decisions.
    - The importance of acknowledging the role of individuals in creating and deploying AI.
    16:16 📚 *Sponsorship and Conclusion*
    - Sponsorship message from Skillshare.
    - Encouraging viewers to explore Skillshare for learning new skills.
    - Concluding thoughts on the complexity of AI ethics and responsibility.
    Made with HARPA AI

  • @soulchorea
    @soulchorea 3 роки тому +245

    The real answer: Flip the switch to the *MIDDLE* position. This will cause the trolley to not be able to move to either side, and it will gently come to a rest right at the point where the two paths begin. I hold a degree in trolley operations, so yes, I'm pretty sure that's exactly how they work.

    • @lrizzard
      @lrizzard 3 роки тому +62

      hooray the real solution to the trolley problem

    • @insertclevernamehere2506
      @insertclevernamehere2506 3 роки тому +22

      Did you just reprogramme the Kobayashi Maru simulation?

    • @lucaspeixoto975
      @lucaspeixoto975 3 роки тому +44

      but you have to tell us, is the forbidden technique "Multi-track drifting" possible? I wanted to know, for... uh... Science of course

    • @lynx9373
      @lynx9373 3 роки тому +25

      Turns out that it derailed into a fusion daycare and cat shelter. The AI is now distraught.

    • @KanoPX
      @KanoPX 3 роки тому +9

      Or we can make it customary that every multi-laned track has a third track that causes the trolley to crash instead

  • @lowercase_ash
    @lowercase_ash 3 роки тому +407

    Dr. Tom seems like he's incredibly excited to be talking about his Thing while at the same time he is roasting you and your ethics approach

    • @thetalantonx
      @thetalantonx 3 роки тому +13

      A really excited-to-have-an-Apprentice Sorcerer managing the overwhelming tide of WTFery.

  • @zildiun2327
    @zildiun2327 3 роки тому +265

    In some alternate universe, an AI decision maker has sacrificed all of humanity to save Garfield.

    • @dean_l33
      @dean_l33 3 роки тому +12

      An equal trade

    • @zildiun2327
      @zildiun2327 3 роки тому +2

      @Æshton [like rp and mint,is a gamer]
      There’ll be a lot of meat lying around ripe for the grinding.

    • @zildiun2327
      @zildiun2327 3 роки тому +2

      @Æshton [like rp and mint,is a gamer]
      Definitely the robot

    • @ajohnymous5699
      @ajohnymous5699 3 роки тому +2

      @@zildiun2327 All questions have been answered, the trolley decimates humanity at the whim of an AI who serves the lazy Garfield for all time.
      This is the only good ending for the trolley problem.

    • @jaydencoffman1856
      @jaydencoffman1856 3 роки тому

      Good save tho

  • @PrimataFalante
    @PrimataFalante Рік тому

    Mini metro owes you guys a retrospective sponsorship for this video. Your 2 second mention and showing of it was enough to add this game to my short list of addictions.
    I found the channel two days ago and I'm almost finished watching every video, thanks for the amazing work! (doing my part to make Brazil show up on your demographics 😂)

  • @TheSadster
    @TheSadster 3 years ago +362

    **The AI gets one with a cat on both sides**
    *The AI: panics*

    • @wavestrider2160
      @wavestrider2160 2 years ago +47

      The ai with a biased code:
      ".....what are the cat's colors-"

    • @nezasumi
      @nezasumi 2 years ago +5

      Intercept the trolley immediately.

    • @TheRoosterBucket
      @TheRoosterBucket 2 years ago +3

      I’d let it run. other one becomes barn cat. survival of nature

    • @StellaWaldvogel
      @StellaWaldvogel 2 years ago +1

      A real cat vs. Garfield would probably break it.

  • @JeffSmith03
    @JeffSmith03 3 years ago +318

    "I forced the computer" is ultimately the best way to describe every kind of computer programming

    • @PeteWonderWhyHisYTNameIsSoLong
      @PeteWonderWhyHisYTNameIsSoLong 3 years ago +5

      Bruh don't force a machine
      _sigh_
      Humans these days

    • @JeffSmith03
      @JeffSmith03 3 years ago +1

      @@PeteWonderWhyHisYTNameIsSoLong lol, a bunch of goo won't do anything until you forge it into something you want

    • @PeteWonderWhyHisYTNameIsSoLong
      @PeteWonderWhyHisYTNameIsSoLong 3 years ago

      @@JeffSmith03 what you mean?

    • @JeffSmith03
      @JeffSmith03 3 years ago +1

      @@PeteWonderWhyHisYTNameIsSoLong meaning nobody can not force a computer

    • @PeteWonderWhyHisYTNameIsSoLong
      @PeteWonderWhyHisYTNameIsSoLong 3 years ago

      @@JeffSmith03 i know but better don't some artificial intelligence also have feelings we evole

  • @davidroddick91
    @davidroddick91 1 year ago +1

    There is another issue when it comes to the trolley problem. That is the decision whether or not to act. Regardless of how many people are on each of the two tracks, the decision to pull the switch actively kills the person(s) on the second track, and makes you complicit in their death(s). Although the outcome would be essentially the same, with one person on each track, the ethical decision is to do nothing, since pulling the switch would be killing a person who would survive without your intervention. So how many people would you need to save to convince you to murder one person?

  • @jakeking974
    @jakeking974 3 years ago +574

    This is really good at reminding folks that a good or bad algorithm is a product of the creators and therefore the responsibility SHOULD lie on them.
    * LOOKS AT UA-cam *
    * STARES DAGGERS AT UA-cam *

    • @dominik7423
      @dominik7423 2 years ago +13

      Welcome to sociology :). The algorithm is made by individuals who have their own mindset and use that to build it. Also: the algorithm cannot be racist, it's the creator or the reason how some statistics that are the origin behind algorithms come about. Also also: if you are in doubt: don't touch anything. If you know all known possible outcomes are considered bad, simply don't do anything. You didn't choose to kill any one of them, the situation resolves without your control.

    • @jadeking3263
      @jadeking3263 2 years ago +2

      I was so confused for a second, I Thought I commented.

    • @SlayingSin
      @SlayingSin 2 years ago +3

      @@dominik7423 JUST. CARRY. THE. SINGULAR. PERSON. OFF. OF. THE. TRACKS. AND. THEN. PULL. THE. LEVER.

    • @surtmcgert5087
      @surtmcgert5087 2 years ago

      not sure i can agree there

  • @anthonydelfino6171
    @anthonydelfino6171 3 years ago +585

    "Obviously the problem is, how do you kill all six people? So I would dangle a sharp object out of the trolley while running into the other five"

    • @Manthab
      @Manthab 3 years ago +47

      I see you too are a person of culture

    • @Biotear
      @Biotear 3 years ago +47

      Ah yes, the true solution. Mother on one end, her kid on the other? Squish the mother, shoot the kid. Can't have survivor's guilt if you die.

    • @shawermus
      @shawermus 3 years ago +8

      What about people in trolley?

    • @ur.left.buttcheek
      @ur.left.buttcheek 3 years ago +17

      @@shawermus After you kill the people outside, you just drive the trolley forward and jump out right before it hits another trolley, creating a collision that kills everyone in both trolleys (except you of course)

    • @MorinehtarTheBlue
      @MorinehtarTheBlue 3 years ago +18

      @@Biotear But you just killed two people. You have to close off the loose end and potential guilt by killing yourself too.

  • @OON7
    @OON7 3 years ago +396

    I feel like we missed an opportunity to put a cat and a human on one side, and a cat and a dog on the other.

    • @tander101
      @tander101 3 years ago +14

      Yesssss

    • @thisisacomment6061
      @thisisacomment6061 3 years ago +26

      Cue the computer exploding

    • @ZedF86
      @ZedF86 3 years ago +24

      It would logically choose the human to save. The cats cancel each other out.

    • @cheemsmay
      @cheemsmay 3 years ago

      @Joel Roy You have no info on the person. You have never met them, you know nothing about them.

    • @Island_Bag
      @Island_Bag 3 years ago +3

      Never understood why Americans (mostly), regard animal life so closely with human life… y’all would do more for a dog or cat than a human…. I wouldn’t hesitate for 1 millisecond to kill the dog or cat to save your life ☺️

  • @sofiaroura9652
    @sofiaroura9652 2 years ago +4

    I remember there was this movie called I Am Mother, where an AI was training a girl to be the new mother of humanity, and the AI phrased the trolley problem in a way that makes more sense:
    Let's suppose you're a doctor and you have six patients. One of these patients, let's call it X, has a treatable disease, but the other five patients need organ transplants, and X is compatible with all of them. Would you kill patient X to save the other five patients?
    It's a more practical, yet way darker version of the trolley problem.

    • @strawberrypink.
      @strawberrypink. 8 months ago

      But doctors are bound by the Hippocratic oath, so they would not be able to kill the one patient intentionally.

  • @BradenBest
    @BradenBest 3 years ago +325

    ONN Interviewer: How many people's lives is one cat worth?
    AI: Seven

    • @paimonja
      @paimonja 3 years ago +18

      AI: There isn't enough humans in the planet *

    • @yyyyyb1432
      @yyyyyb1432 3 years ago +1

      10

    • @Fragens
      @Fragens 3 years ago +1

      0.000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001

    • @shannonrobbins59
      @shannonrobbins59 3 years ago +3

      AI: humanity*

    • @01nmuskier
      @01nmuskier 3 years ago +5

      Nine.
      Nine lives. Come on people. 😆

  • @Xastor994
    @Xastor994 3 years ago +110

    "Don't unsubscribe" as the new educational youtuber signoff is my new favorite thing.
    That said, I remember when you asked people to roast you and someone said "all your videos end in failure and a sponsorship" and when you were talking about how you did a bad thing I honest to God thought "well that was a short video" and was shocked to see that we weren't even halfway. Good padding :D

  • @AnEmu404
    @AnEmu404 3 years ago +331

    About the A-Level thing:
    The reason kids from poorer backgrounds got poor grades was because the AI was set so only a *set number of students could get a certain grade, regardless of if they deserved it*
    Basically, state schools suffered because they had larger classes -
    So some big classes had almost hALf of the class having Us (un-gradeable).
    It was absolutely ridiculous, and I’m so glad they rescinded those grades and went back to teacher assessed, using evidence from work and past tests to determine grades.

    • @ebolachanislove6072
      @ebolachanislove6072 3 years ago +25

      Wow, without that bit of context I was super confused on how the results were even a problem. The way it's presented in that part of the video is terrible if you don't already have an innate bias against successful people.

    • @AnEmu404
      @AnEmu404 2 years ago +17

      @@ebolachanislove6072 yeah, i thought it would be confusing to people who didn’t know about it. My brother got his A levels done by the algorithm initially, and got worse grades than he should’ve. It was pretty outrageous, they’d had so long to work on it and yet somehow missed that fatal flaw lmao, it was all over the uk news for weeks.

    • @eadbert1935
      @eadbert1935 2 years ago +10

      @@ebolachanislove6072 so believing that people with low income should be allowed to have good grades is an "innate bias against successful people"?
      i mean, the wording was very specific about income and nothing else. The reasoning why doesn't change the ethical issue with it.
      if that was the only issue, they could've removed that specific coding and run the algorithm again (probably reaching the same results, as larger classes generally make the average student worse, aside from private school getting better funding)

    • @ebolachanislove6072
      @ebolachanislove6072 2 years ago +15

      @@eadbert1935 the way it's brought up in the video doesn't convey the reality of the event, it just sounds like a bit of woke-ism. Like I would expect rich kids to get better grades on average than poor kids because of the resource advantage innate to that situation, what i didn't expect was the actual flaw in the system that limited the total grades (which compounds the negative effects of a large class.) and that wasn't made clear in the video, only a vague coverage of the result without any of the "why" needed to understand it.
      when i first saw that section of the video I literally said to myself "Yeah, of course rich kids get better grades than poor kids overall, what's weird about that?" but then the video just moves along.

    • @nuknukisdead
      @nuknukisdead 2 years ago +5

      @@ebolachanislove6072 Usually when people make those kind of statements, they mean it as comparing A vs B with everything else being equal. No clue if that's what she meant in this video, but it generally makes sense to assume so or the statement would be pointless because she would basically be saying A vs B with everything else being different as well.

  • @housewiththesun9199
    @housewiththesun9199 1 year ago +1

    On the question about considering yourself an ethical person... I believe the ones who think and can't decide are the ones who really think about the bad things they did earlier, and that is empathy to me

  • @akisekar1795
    @akisekar1795 3 years ago +466

    In ancient Egypt a battle was lost because the attackers held up cats above their heads for the Egyptians to see and they retreated to protect the cats. So maybe your machine is just ancient egyptian?

    • @vangildermichael1767
      @vangildermichael1767 3 years ago +40

      Cats are great, the ultimate hunting machine. But I also like "ants". They have the perfect communism community. When it is time for food. Every single ant gets their share. No more, No less. When somebody gets hurt. That individual gets his medical help and time off. Nobody tries to "cheat" the system. People cannot do that. I look at the "ant" colony with awe.

    • @An-tm9mc
      @An-tm9mc 3 years ago +4

      @@vangildermichael1767 Anthony?

    • @vangildermichael1767
      @vangildermichael1767 3 years ago +2

      @@An-tm9mc ​ @Rachel Support l brandDigi nope. VanGilder Michael Shane. Is there another out there, that shared my thoughts? I've found (one), who agrees with me. But another one exists?

    • @An-tm9mc
      @An-tm9mc 3 years ago +3

      @@vangildermichael1767 I meant to ask if you were referring Anthony or Chrysalis

    • @elliotwang9501
      @elliotwang9501 3 years ago +4

      I’m an ancient Egyptian

  • @Forsakianity
    @Forsakianity 3 years ago +624

    "the trolley was hurtling towards a small child"
    "no"
    "you see that wasnt the full-"
    "*no*"

    • @outspade
      @outspade 3 years ago +1

      can relate

    • @alsiredwood5642
      @alsiredwood5642 3 years ago +9

      Ay, Forsakianity can i request for u to put a space between the asterisk and the speech marks xD i see u attempted to put it in bold and its hurting my tired brain- thanks

    • @Stark-Raving
      @Stark-Raving 3 years ago +1

      So, you'd kill the child?

    • @trustmeimaplaguedoctor4967
      @trustmeimaplaguedoctor4967 3 years ago +4

      @@Stark-Raving absolutely without a second thought maybe

    • @Damini368
      @Damini368 3 years ago +3

      Congrats, you pulled the lever and killed five babies.

  • @aasekristoffer
    @aasekristoffer 3 years ago +792

    I find the dilemma behind the trolley problem to be what people are failing to see.
    The question could be asked in a different way:
    Do you 1) passively observe the death of someone,
    or 2) take action to kill someone else.
    That is the trolley problem. And when looked at it that way, the answer, at least for me, will almost always be: do not act, because I do not want to kill someone.

    • @trini5793
      @trini5793 3 years ago +112

      but your decision to not move is still killing 4 people

    • @apimpnamedslickback5936
      @apimpnamedslickback5936 3 years ago +102

      @@trini5793 no his decision is to watch and experience in horror the brutal murder of those people that some madman put there.

    • @michaelstoel
      @michaelstoel 3 years ago +134

      @@trini5793 @Kristoffer Georg Aase . Haha I think the discussion you two are having is the heart of the trolley problem and the reason that there is no right answer; just the discussion: Is pulling the lever murder? Or is your decision not to pull the lever an even bigger murder? Or does it make you innocent? And this hypothetical situation can of course be compared to other real or fictional situations witnessing voidance, science or baby hitler :P

    • @Anelkia
      @Anelkia 3 years ago +45

      I think I’d rather kill one person and let a greater number survive than to have to see people dying and having to live my life saying to myself "I could've saved them"

    • @aasekristoffer
      @aasekristoffer 3 years ago +48

      @@Anelkia No matter the choice, you will always have the thought: "I could have saved someone". Your way you also have to think: "I killed someone".

  • @gabrielmedeirospatrocinio5110
    @gabrielmedeirospatrocinio5110 2 years ago

    I love the fact that the ads of this channel are the few ones that i like to watch.

  • @SerasXHarkonnen
    @SerasXHarkonnen 3 years ago +152

    The trolley problem is really dependent on the size of the trolley and the distance between the tracks; if the trolley is sufficiently large and the tracks narrow enough, you can probably flip the lever when it's halfway across so it turns on its side, flips over, and takes out all 6 of them.

  • @alexxb2047
    @alexxb2047 3 years ago +169

    "No braincells required." *Proceeds to create ethical robot.*

  • @thestudycorner2992
    @thestudycorner2992 3 years ago +141

    I like how it went from “choose one font to destroy” to “Do you kill a child or do you kill its mother?” Lol 😂

  • @SlacktivistWeeb
    @SlacktivistWeeb 1 year ago +1

    8:43 No, I think you did a perfect job. Always save the cat. Can’t get those number of lives to 8. 😂😂

  • @KAROLINAPOCHWAT
    @KAROLINAPOCHWAT 3 years ago +238

    “I’ve treated machines like they’re a replacement of me, rather than an extension.”
    The one, singular, most important notion we need to keep in mind when it comes to robotics and especially AI. What is the purpose of AI? Is it to replace a human being, or to add to a human being’s life?
    The moment this becomes unclear is the moment we need to step back and re-evaluate.

    • @johnhenry4024
      @johnhenry4024 3 years ago +9

      Why not both

    • @lovelyhomeboy2782
      @lovelyhomeboy2782 3 years ago +6

      @@johnhenry4024 to both add and replace oh yeah

    • @KAROLINAPOCHWAT
      @KAROLINAPOCHWAT 3 years ago +1

      @@johnhenry4024 Are you saying you want a machine to replace you in particular?
      This is purely an issue of self-preservation. We, as machines' creators, need to be able to foresee the path that this research, development and implement could take in the future as best as we can. The machines/AI can help us visualize the consequences of our own actions, but will they help us prevent us from inadvertently replacing ourselves?
      We need to venture into this world with eyes wide open and do what we can to ensure future generations don't inherit a mess.

    • @Handlelesswithme
      @Handlelesswithme 3 years ago +3

      Why should the corporate elite care about the wellbeing of who their research, or technological optimization affects over what is considered a marginalized advantage

    • @tiredHooman
      @tiredHooman 3 years ago +1

      Replace the meatbags with the superior intelligence, AI.

  • @jeffreyho8281
    @jeffreyho8281 3 years ago +93

    I like how she said "you and I", it reminds me of back when my brother would play games while I watched and say "we did it, we cleared the level" even though I did nothing lol.

    • @rosiegaymer
      @rosiegaymer 3 years ago +2

      we did it patrick
      we saved the city!

  • @clairelin5058
    @clairelin5058 3 years ago +639

    "i messed up.... so this machine just prioritizes the lives of cats"
    where is the problem

    • @gwynzynoodles2553
      @gwynzynoodles2553 3 years ago +21

      @Xubse where are the problems

    • @_MECHA_
      @_MECHA_ 3 years ago +3

      It killed a dog

    • @nat8264
      @nat8264 3 years ago +20

      Ai will team up with cats in the future

    • @_MECHA_
      @_MECHA_ 3 years ago +7

      @@nat8264 "dogs really are man's best friend" the main character says to the main dog before the dog and humans vs the robot and cats war as epic music plays

    • @littlefox_100
      @littlefox_100 3 years ago

      @Xubse Vegan

  • @Its_just_me_again
    @Its_just_me_again 1 year ago +1

    with the speed of ai computations exponentially improving and the likelihood of all of us at some stage being in the "system", maybe the autonomous car will quickly face-scan, assess individuals in harm's way and decide which one to hit. criminal record? age? life expectancy? dependents? political party alignments?

  • @ambiej123
    @ambiej123 3 years ago +501

    Honestly, I think saving the cat is actually a true representation of the reality of our average day ethics. Ex: you have a change box. Do you give .25 to a cat shelter or a food bank? With my own giving, I can say the cat shelter. So if I had to switch and had to choose between a cat or 5 people I would obviously save the 5 people. But when it actually matters day to day it’s the cat. The robot is saying “listen, your ethics is actually saying save the cat”.

    • @blur8919
      @blur8919 3 years ago +8

      I disagree

    • @jonathanday6022
      @jonathanday6022 3 years ago +38

      @@blur8919 while I agree I disagree with your vocalization of your disagreement, which means you vs cat I'm saving the cat.

    • @jonathanday6022
      @jonathanday6022 3 years ago +12

      Don't worry for you it will only be a blur.

    • @randominternetguy3537
      @randominternetguy3537 3 years ago +24

      Cats don't have any choice in whether they're abandoned or get sick. Not saying humans do. However, Imagine being euthanized because your homeless shelter doesn't have the funds to keep you. (Not to mention those homeless shelters are often corrupt and a majority of the funds goes to salaries rather than the homeless).

    • @1mol831
      @1mol831 3 years ago +11

      I’ll spent the .25 on a chocolate bar

  • @skeptic10
    @skeptic10 3 years ago +65

    I was once in a trolley problem situation myself. I was riding my bike on a downhill icy road when my brakes failed. At the same time a bus stopped down the road and a busload of passengers came off the bus. Of course people just swarmed and blocked the whole road. I tried to yell that my brakes had failed but nobody even looked.
    I had three options:
    1. Steer left to the driveway, where I would certainly hit the bus and risk my own death.
    2. Steer right and hit the cross section full speed and injure myself real bad or possibly die.
    3. Drive through the crowd and hope no one gets hit.
    I considered option 2, but then I saw an opening I had a small chance to get through. I ended up going through the opening (option 3). I slightly hit one guy's backpack but otherwise luckily didn't hit anyone. I didn't even fall over. I eventually got my bike stopped and apologized to the guy, but he said he didn't even notice.
    That was the scariest situation I have ever been in.

    • @suddenllybah
      @suddenllybah 3 years ago +23

      Glad you survived and without hurting anyone.
      Does highlight a possible issue of the trolley problem.
      It assumes that the results are guaranteed given a choice, but you managed to take what might have been a hurt-people choice and turn it into one where nobody got hurt.

    • @vigilantcosmicpenguin8721
      @vigilantcosmicpenguin8721 3 years ago +3

      Too bad you didn't have a computer telling you what to do.

    • @melonoire
      @melonoire 3 years ago +1

      Damn

    • @501thtrooper4
      @501thtrooper4 3 years ago +1

      Ah yes the difficult decision of killing yourself in a painful way or bumping into someone with a bike

  • @jahnvisingh8015
    @jahnvisingh8015 3 years ago +288

    Was waiting for this, and its finally here. Even if you do not succeed, you end up giving us a lot of information through your research. That's why I like these videos. Somehow, I really connect to your thoughts though they may be strange.

  • @dave23024
    @dave23024 1 year ago

    One thing I learned doing algo trading, it's a whole lot easier to just set the parameters for trade triggers than it is to create a neural network that has to be trained.