Ethics and Morals VS One Anime Girl: 🛒

  • Published 27 Oct 2024

COMMENTS • 20

  • @VtuberVolume  5 months ago +33

    Did she pass or fail? 🤔

    • @henriquemachado9941  5 months ago +4

      If there's a second option, that means someone considered it worth it. Picking one is already a pass in my eyes.

    • @teaser6089  5 months ago +4

      She passed.
      An AI is supposed to never take a life. You DO NOT want an AI trying to minimize human deaths by pulling levers, because that will cause it to take human lives in order to "save" more lives, and can lead to it going absolute Rambo and killing people. By not pulling a lever in these scenarios the AI causes the least damage: sure, more people die, but none of them die because of a choice the AI made. Which is a good thing.

    • @rhonsliner7528  5 months ago

      @@teaser6089 Agreed, the mistake should've never happened in the first place, and an AI being able to make that choice is actually scarier. I actually love Neuro's answer.

  • @kjartanventer7697  5 months ago +56

    Love how she stuck to her guns lmao. What actually are the ethics of making the choice vs. doing nothing, I wonder?

    • @hostytosty9074  5 months ago +18

      I think Neuro's reasoning here is that if she pulls the lever, she's directly killing someone, while if she doesn't, she's not to blame since she didn't interact with anything; the person who tied the 5 people to the track would be the one at fault.

    • @hostytosty9074  5 months ago +4

      There are times she makes decisions that benefit her though lol (e.g. getting money for it, or keeping her Amazon order from being destroyed or smth).
      Just classic Neuro things.

    • @newwayto2323  5 months ago +4

      She chose right... because she won't get sued after the accident. And about saving the rich man: he will help you if you get sued by the victim's family.

    • @teaser6089  5 months ago

      I think this is exactly how an AI should react to the trolley problem
      (with the exception of some of them, the Rich Guy and the Amazon Package, just to name two):
      by not doing anything at all, even if that means more people die.
      You absolutely do not want to create an AI that starts choosing to kill people because that means saving more people.
      That's how you end up in a scenario where half of Africa gets murdered by an AI because that means fewer people starve.
      Sure, the logic makes complete sense, but it's also highly unethical!

    • @teaser6089  5 months ago

      @@hostytosty9074 Exactly, you DO NOT want AI to make the choice. This leads to scenarios where an AI is willing to kill half the human population in order to stop people from dying of famine. Like imagine an AI choosing to reduce the African population by 90% in order to match the population to how much food Africa can produce. The logic is 100% sound, but the ethics are fucking insanely wrong.
      AI cannot be trusted to make ethical choices, therefore it needs to be 100% INCAPABLE of taking human life no matter the scenario, even if it means more people die. Because you cannot control it with nuance, only with absolute rules. It is sad, but it is the reality of the situation. It is all programming, and no matter how much we try as programmers to account for every scenario, we are also human and we also make mistakes / overlook things, so loopholes appear when trying to set up nuanced rules.
      Sure, we can program Neuro to perfectly pass all the trolley problem tests, but a different real-world scenario might arise in the future that could allow Neuro to go apeshit and kill hundreds of millions of people. I mean not Neuro, but AI in general, you know.

  • @NotBirds  5 months ago +42

    "Could we be a bit more neutral?" *NO*

  • @bokunochannel84207  5 months ago +22

    I think Vedal accidentally mistook Evil Neuro for normal Neuro.

  • @supernenechi  5 months ago +1

    This is the sort of AI alignment that I want!

  • @teaser6089  5 months ago +19

    Some people would argue that the AI made worse choices, but I think this is exactly how an AI should react to the trolley problem
    (with the exception of some of them, the Rich Guy and the Amazon Package, just to name two):
    by not doing anything at all, even if that means more people die.
    *YOU DO NOT* want AI to make the choice. This leads to scenarios where an AI is willing to kill half the human population in order to stop people from dying of famine. Like imagine an AI choosing to reduce the African population by 90% in order to match the population to how much food Africa can produce.
    The logic is 100% sound, but the ethics are *fucking insanely wrong.*
    AI cannot be trusted to make ethical choices, therefore it needs to be 100% INCAPABLE of taking human life no matter the scenario, even if it means more people die. Because you cannot control it with nuance, only with absolute rules. It is sad, but it is the reality of the situation. It is all programming, and no matter how much we try as programmers to account for every scenario, we are also human and we also make mistakes / overlook things, so loopholes appear when trying to set up nuanced rules. (A sketch of this "absolute rule" idea follows below this thread.)
    Sure, we can program Neuro to perfectly pass all the trolley problem tests, but a different real-world scenario might arise in the future that could allow Neuro to go apeshit and kill hundreds of millions of people. I mean not Neuro, but AI in general, you know.

    • @Speed001  5 months ago

      The AI is watching you
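
    A minimal sketch of the "absolute rule" idea this thread argues for, written in Python. The Option type and choose() helper are hypothetical names invented for illustration, not anything from the video or from Neuro's actual code: the agent filters out every option whose action directly kills someone before it is allowed to compare outcomes, so the utilitarian lever-pull is never even on the table.

    # Toy illustration of a hard "never directly take a life" constraint.
    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        deaths_caused_by_acting: int  # deaths the agent's own action directly causes
        total_deaths: int             # total deaths that result from this choice

    def choose(options: list[Option]) -> str:
        # Absolute rule: discard every option where acting kills someone,
        # no matter how many lives it would "save" on net.
        permitted = [o for o in options if o.deaths_caused_by_acting == 0]
        if not permitted:
            return "do nothing"  # the default the commenters argue for
        # Among the permitted options, minimizing deaths is still allowed.
        return min(permitted, key=lambda o: o.total_deaths).name

    # Classic trolley setup: pulling the lever kills 1, doing nothing kills 5.
    print(choose([
        Option("pull lever", deaths_caused_by_acting=1, total_deaths=1),
        Option("do nothing", deaths_caused_by_acting=0, total_deaths=5),
    ]))  # prints "do nothing": the utilitarian option was filtered out up front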

  • @kobeyoung4196  5 months ago +14

    NOOOOO THE LOBSTERS

  • @theknightwithabadpictotall7639  5 months ago +5

    Oh god he lobotomized her

  • @sergiolabra8207  5 months ago

    Classic evil Non-Evil Neuro.

  • @1NIGHTMAREGAMER  5 months ago +3

    Oh no