Did she pass or fail? 🤔
If there's a second option, that means someone considered it worth it. Picking one is already a pass in my eyes.
She passed.
AI is supposed to never take a life. You DO NOT want AI to try to minimize human deaths by pulling levers, because that will cause it to take human lives in order to "save" more lives, and can lead to it going absolute Rambo and killing people. By not pulling the lever in these scenarios, the AI causes the least damage. Sure, more people die, but none of them die because of a choice the AI made. Which is a good thing.
@@teaser6089 Agreed, the mistake should've never happened anyway, and for the AI to be able to make that choice is actually scarier. I actually love Neuro's answer.
Love how she stuck to her guns lmao. What actually are the ethics of making the choice vs doing nothing, I wonder?
I think Neuro's reasoning here is that if she does pull the lever, she's directly killing someone, while if she doesn't, she's not to blame since she didn't interact with anything, and it would be the individual who tied the five people to the track who's at fault.
There are times she makes decisions that benefit her though lol (i.e. getting money for it, or keeping her Amazon order from being destroyed or smth)
Just classic Neuro things
She chose right... because she won't get sued after the accident. And as for saving the rich man: he'll help you if you get sued by the victims' family.
I think this is exactly how AI should react to the trolley problem.
(With the exception of some of them, Rich Guy and Amazon Package just to name two.)
By not doing anything at all, even if that means more people die.
You absolutely do not want to create an AI that starts choosing to kill people because that means saving more people.
That's how you end up in a scenario where half of Africa gets murdered by AI because that means fewer people starve.
Sure, the logic makes complete sense, but it's also highly unethical!
@@hostytosty9074 Exactly, you DO NOT want AI to make the choice. This leads to scenarios where AI is willing to kill half the human population in order to stop people from dying of famine. Like imagine AI choosing to reduce the African population by 90% in order to match the population numbers to how much food Africa can produce. The logic is 100% sound, but the ethics are fucking insanely wrong.
AI cannot be trusted to make ethical choices, therefore it needs to be 100% INCAPABLE of taking human life no matter the scenario, even if that means more people die. Because you cannot control it with nuance, only with absolute rules. It is sad, but it is the reality of the situation. It is all programming, and no matter how much we try as programmers to account for every scenario, we are also human and we also make mistakes / overlook things, so loopholes appear when trying to set up nuanced rules.
Sure, we can program Neuro to perfectly pass all the trolley problem tests, but a different real-world scenario might present itself in the future that could allow Neuro to go apeshit and kill hundreds of millions of people. I mean not Neuro, but AI in general, you know.
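The "absolute rules, not nuance" idea above can be sketched as a toy decision policy. This is purely illustrative (the option format and field names like `deaths_caused` are made up for the example, not how any real AI is built): a hard constraint filters out every action where the agent itself would kill, *before* any utilitarian comparison of outcomes is allowed.

```python
def choose_action(options):
    """Pick an action under an absolute 'never cause a death' rule.

    Each option is a dict with:
      'deaths_caused'  - deaths the agent would directly cause by acting
      'deaths_allowed' - deaths that happen if this option plays out
    """
    # Hard constraint first: discard every option where the agent kills.
    permitted = [o for o in options if o["deaths_caused"] == 0]
    if not permitted:
        return None  # refuse to act rather than take a life
    # Only among permitted options may outcomes be compared at all.
    return min(permitted, key=lambda o: o["deaths_allowed"])

trolley = [
    {"name": "pull lever", "deaths_caused": 1, "deaths_allowed": 1},
    {"name": "do nothing", "deaths_caused": 0, "deaths_allowed": 5},
]
print(choose_action(trolley)["name"])  # -> do nothing
```

Note the difference from a pure body-count minimizer: here "pull lever" is never even considered, because the constraint is absolute rather than weighed against the five lives it would save.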
"Could we be a bit more neutral?" *NO*
I think Vedal accidentally mistook Evil Neuro for normal Neuro.
This is the sort of AI alignment that I want!
Some people would argue that the AI made worse choices, but I think this is exactly how AI should react to the trolley problem.
(With the exception of some of them, Rich Guy and Amazon Package just to name two.)
By not doing anything at all, even if that means more people die.
*YOU DO NOT* want AI to make the choice. This leads to scenarios where AI is willing to kill half the human population in order to stop people from dying of famine. Like imagine AI choosing to reduce the African population by 90% in order to match the population numbers to how much food Africa can produce.
The logic is 100% sound, but the ethics are *fucking insanely wrong.*
AI cannot be trusted to make ethical choices, therefore it needs to be 100% INCAPABLE of taking human life no matter the scenario, even if that means more people die. Because you cannot control it with nuance, only with absolute rules. It is sad, but it is the reality of the situation. It is all programming, and no matter how much we try as programmers to account for every scenario, we are also human and we also make mistakes / overlook things, so loopholes appear when trying to set up nuanced rules.
Sure, we can program Neuro to perfectly pass all the trolley problem tests, but a different real-world scenario might present itself in the future that could allow Neuro to go apeshit and kill hundreds of millions of people. I mean not Neuro, but AI in general, you know.
the ai is watching you
NOOOOO THE LOBSTERS
Oh god he lobotomized her
Classic evil Non-Evil Neuro.
Oh no