Defining Harm for AI Systems - Computerphile

  • Published 12 Dec 2024

COMMENTS • 159

  • @EDoyl
    @EDoyl 1 year ago +142

    Some very old philosophical questions have sat with no answer or multiple contentious answers for a long time, and now the computers need solid explicit answers to all of them. That's quite a daunting problem.

    • @weksauce
      @weksauce 1 year ago

      False. Computers don't need any of these questions answered. Nor do humans.

    • @boldCactuslad
      @boldCactuslad 1 year ago +5

      @@weksauce Enjoy your default ending to the AI apocalypse, friend, because that's all you can hope for without answers.

  • @benjaminclehmann
    @benjaminclehmann 1 year ago +12

    Worth noting that defining harmful actions as those which decrease someone's utility is a utilitarian idea. Utilitarian ethics (where what is moral is determined only by how it impacts some goal, such as a utility function) is very useful, but it regularly contravenes human morality. Utilitarianism leads to an idea of morality that can much more readily be reasoned about (which is why economics originated as an offshoot of utilitarianism), but it usually also leads to a morality that we would object to. Think of all the supervillains who assume the ends justify the means.
    This isn't a criticism: utilitarianism is very useful, and there's a reason it's the most realistic way we can rigorously define harm without relying on human judgement. It's just that utilitarianism can be very easily misused; the history of science can be ugly, and often utilitarianism is a prerequisite for those ugly deeds.
    As a note, Dr. Chockler talks about a simple change in utility until she gets into her probabilistic example, but in general the utility function of a moral philosophy can be a lot more complicated: it can be a (potentially probabilistic) social preference function that considers multiple people and some notion of equity.
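
    A minimal sketch (in Python, with invented names and numbers, not the video's actual formalism) of the harm-as-utility-decrease idea this comment describes:

    ```python
    # Harm as the drop in utility relative to an agreed default state.
    def harm(utility_default: float, utility_actual: float) -> float:
        """Harm is the utility shortfall relative to the default outcome."""
        return max(0.0, utility_default - utility_actual)

    # An outcome at utility 3 where the default would give 5 is harm 2;
    # an outcome better than the default counts as no harm at all.
    print(harm(5.0, 3.0))  # 2.0
    print(harm(5.0, 6.0))  # 0.0
    ```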

  • @thuokagiri5550
    @thuokagiri5550 1 year ago +57

    Philosophers touch anything and suddenly it turns into this convoluted deep dark rabbit hole.
    And I love it

    • @ahmadsalama6447
      @ahmadsalama6447 1 year ago +1

      Ikr, not just in computer systems, everything man

    • @odorlessflavorless
      @odorlessflavorless 1 year ago +2

      and make anything deranged? 😂

    • @raffriff42
      @raffriff42 1 year ago +2

      It’s great when philosophers debate while millions die.

  • @behemoth9543
    @behemoth9543 1 year ago +16

    If AI is ever introduced into societies on a global scale, it will very likely be another area where US and, to a lesser extent, European customs and social structures become an inherent part of a technology and drive their cultural dominance. It's truly fascinating how the internet has already led to a major cultural homogenization of English-speaking people across the world, and that "soft power" is a huge driver of geopolitical reality as well.
    The example of tips is a great one for this rift as well. A waiter expecting a tip in this way is going far beyond anything that could be considered reasonable in most of the world and would probably cause a lot of customers to never visit that restaurant again if he voiced that displeasure to them.

    • @Norsilca
      @Norsilca 1 year ago +1

      Or said another way, a restaurant not paying a living wage to its employees!

  • @SecularMentat
    @SecularMentat 1 year ago +28

    It seems to me that 'measuring' any of the systems that lead to the definition of harm is a difficult task. Granted, machine learning can fudge a lot of it.
    It would have to know what the preferred state of an agent would be first. That alone is a huge definitional issue, I'd imagine.

    • @pleasedontwatchthese9593
      @pleasedontwatchthese9593 1 year ago

      I agree; what counts as harm is an opinion that will need to be learned.

    • @SecularMentat
      @SecularMentat 1 year ago

      ​@@pleasedontwatchthese9593 I think it'd have to be an individual target for each person that the machine knows. But maybe have a 'baseline' for 'average human'.
      But then, if you let an agent work on those assumptions, it seems like, to maximize its utility function by minimizing harm, the machine would by default never take action, because all actions seem to carry some possibility of harm.

    • @bengoodwin2141
      @bengoodwin2141 1 year ago +2

      These are all things that humans do already, unconsciously

    • @SecularMentat
      @SecularMentat 1 year ago +1

      @@bengoodwin2141 Yup. We're evolved for it, for sure.
      Machines will take a bit of coaxing to get it right.
      Heck, humans sometimes aren't great at threat perception. We range from jumping at shadows to wanting to pet the fluffy bear.

    • @brcoutme
      @brcoutme 2 months ago

      @@SecularMentat That isn't necessarily true: you could have machines whose utility function is designed to prevent harm, not only from themselves but also from other causes. In this sense a machine that protects humans might be able to consider various dangers as harm and weigh those against the harm that trying to prevent them would cause. Of course, this could result in misalignment where it avoids letting humans do what they want (simple examples include going outside, eating junk food, taking a bath), due to it calculating that the risk of harm from these reasonable actions actually outweighs the benefit we gain from them, AND that its interference doesn't add enough harm to outweigh the harm of those actions. We could try to improve this by setting a minimum threshold or something like that, but this only reduces the machine's utility in exchange for fewer situations where it's definitely still misaligned. It's feasible that an AGI might work around this by coming up with ways to still interfere, just only in things that its calculations consider large scale, so that it achieves seemingly small-scale effects through more complex causes, like eliminating the ability to get junk food or baths in all of society. Anyway, although we should acknowledge that formal definitions of reducing harm don't solve the risks of AGI or powerful AI systems, they do still improve the options we have in trying to make utility functions for machine learning better aligned.
      EDIT: After watching the Extra Bits and learning more about how harm is defined, I see your point better. If you allow absolutely no harm, then the machine would do nothing, it's true. Of course, any leniency allowing utility in exchange for harm, such as my "protection from outside causes only" example, would quickly change that. Just want to be clear that I think there is still a lot of value in formalizing and quantifying definitions of harm, but like most philosophical concepts, extreme interpretations will produce silly results. So if that is your point, then I agree 100%.
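
      A toy illustration (mine, not the video's) of the inaction problem this thread raises: if a harm-minimizer counts only the expected harm its own actions cause, "do nothing" always wins; crediting the outside harm an action prevents changes the answer. All numbers are invented.

      ```python
      # Expected harm an action causes, minus outside harm it prevents.
      actions = {
          "do_nothing":  {"p_harm": 0.00, "harm": 0.0, "prevented": 0.0},
          "go_outside":  {"p_harm": 0.01, "harm": 2.0, "prevented": 1.5},
          "take_a_bath": {"p_harm": 0.02, "harm": 1.0, "prevented": 0.5},
      }

      def expected_net_harm(a: dict) -> float:
          return a["p_harm"] * a["harm"] - a["prevented"]

      print(min(actions, key=lambda k: expected_net_harm(actions[k])))
      # -> go_outside; zero out every "prevented" and do_nothing always wins.
      ```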

  • @MarkusSimpson
    @MarkusSimpson 1 year ago +9

    I love Dr Chockler's chilled demeanour, definitely one of my favourite teachers 🤓

  • @Insaniaq
    @Insaniaq 1 year ago +16

    I love the edit at 2:21 where Bob watches a Computerphile video, got me cracked up 😂

    • @bornach
      @bornach 1 year ago +1

      Poor Bob. He was about to get to the best bit of that video just when the car crash happens

  • @Veptis
    @Veptis 5 months ago

    A rarely discussed topic is how this moral compass of values changes globally by culture. Some prefer themselves, some prefer others, some prefer wealth and social status, others the many etc.
    You can optimize a dilemma decision machine for the geographical region with the target of causing least societal outcry.
    Or just get rid of cars.

  • @kuronosan
    @kuronosan 1 year ago +85

    If there is harm to the waiter not getting a tip, the waiter is being harmed by the restaurant owner, not the customer.

    • @kalizec
      @kalizec 1 year ago +26

      This is exactly what I wanted to add here as well. The example of not tipping is so extremely poorly chosen that the entire video suffers from it.
      The example misses the entire point of determining cause, only to try and calculate harm on a non-causal factor.
      The restaurant at least has a contract with the waiter.
      The customer definitely does not have a contract with the waiter.
      It is possible that terms and conditions apply to the customer visiting the restaurant, but I've yet to see or hear of a single restaurant going after a customer for violating their terms and conditions by not tipping the waiter enough, so that's clearly not a thing.
      P.S. People who argue that tipping is a social contract can easily be countered with the following argument.
      Namely, that society itself is not honouring a social contract that people, waiters included, deserve a decent wage.
      So, if social contracts are to be considered binding, then the harm is still not perpetrated by the customer but by society.

    • @rauljvila
      @rauljvila 1 year ago +10

      I find the tip scenario perfect for illustrating the problem with this approach: all the philosophical issues are hidden under the rug of the "default value". Many people won't agree that a 20% tip is the default value when there is no law forcing you to give one.
      EDIT: In fairness, she acknowledges this point at the end of the Extra Bits video:
      > in the example of the hospital and the organ harvesting the default might be the treatment that is expected in our current norms. But you are absolutely right, I mean this all definitely involves discussion about societal norms right.
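
      A small sketch (my illustration, with invented numbers) of that "default value" point: the same no-tip action scores as harm or as no harm depending entirely on which baseline is assumed.

      ```python
      # Harm to the waiter modeled as the shortfall from an assumed default tip.
      def tip_harm(tip_rate: float, default_rate: float, bill: float) -> float:
          return max(0.0, (default_rate - tip_rate) * bill)

      bill = 50.0
      print(tip_harm(0.0, default_rate=0.20, bill=bill))  # 10.0 if 20% is the default
      print(tip_harm(0.0, default_rate=0.00, bill=bill))  # 0.0 if no tip is the default
      ```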

    • @MrRedstoner
      @MrRedstoner 1 year ago +7

      @@kalizec And really, the answer is that the US would need to fix its laws; otherwise whoever is making wage decisions would be harming stakeholders in the restaurant, and on the chain goes.

    • @cwtrain
      @cwtrain 1 year ago +5

      Fuggin' thank you! Defining the system inside of exploitive capitalist constructs made me sick.

    • @pleasedontwatchthese9593
      @pleasedontwatchthese9593 1 year ago +4

      ​@kalizec I think you're reading way too much into it. It's a contrived example. What if the waiter is the owner? Why does the restaurant only take cash tips? Etc.
      I mean, none of that matters; they just wanted to show how to work out more and less harm, not try and fix capitalism, lol.

  • @samuelthecamel
    @samuelthecamel 1 year ago +12

    The problem is that harm is completely subjective, despite how much we would like to think that it's objective.

  • @salvosuper
    @salvosuper 1 year ago +2

    The one thing harming the waiter is the unethical work culture

  • @nunyobiznez875
    @nunyobiznez875 1 year ago +4

    10:36 The standard tipping rate in the US is actually 15%. Though, some like to give more, and I think some people just find it easier to calculate 20% in their head.

    • @bornach
      @bornach 1 year ago

      At a restaurant I remember being offered a choice at the bottom of the bill: 15%, 20%, 25%. Cannot recall if this was in CA or TX. Apparently there are regional differences.

    • @BTheBlindRef
      @BTheBlindRef 11 months ago +1

      Yes, 15-18% is "service was decent, as expected". 20%+ is "wow, the service was great or went above and beyond". Especially where I live, where all service workers are guaranteed full minimum wages before tips already. I might consider a higher tip rate reasonable in some other places in the US where tipped service workers are allowed to be paid under standard minimum wage with the expectation that tips more than make up the difference.

  • @paulbennett1349
    @paulbennett1349 1 year ago +6

    With the doctor's dilemma, maintenance of the current level of the condition is not a harm of zero. Sure, people get used to being in pain their entire lives, but I don't think any of them would consider it to be a static level of harm. The lack of hope of improvement is what drives many to suicide. So a calculation of harm is only as robust as our understanding of all the variables. Since most people investigate to the first point of exhaustion (where is my brain happy to stop) rather than the last (can I demonstrate that any other factors must be insignificant), I can see some rather large consequences.

  • @ungodly_athorist
    @ungodly_athorist 1 year ago +8

    Was Goofy harmed by being called Pluto?

  • @don_marcel
    @don_marcel 1 year ago +3

    I need more Dr. Chockler explanations! Undercover hilarious wit

  • @zzzaphod8507
    @zzzaphod8507 1 year ago +12

    Why isn't the option of the car going more slowly and stopping before hitting the obstacle considered as an option?!

    • @rosameltrozo5889
      @rosameltrozo5889 1 year ago +3

      You're missing the point

    • @phizc
      @phizc 1 year ago +6

      @@rosameltrozo5889 Not really. At least not in terms of the lawsuit. For the car, the obvious option is to just injure the driver instead of killing him. But for the purpose of the lawsuit, the situation didn't pop into existence when the car decided to swerve into the guard rail.
      Why didn't the car notice the stationary car? A corner?
      Why did it go so fast around the corner that it wouldn't have time to avoid the stationary car?

    • @rosameltrozo5889
      @rosameltrozo5889 1 year ago +2

      @@phizc You're missing the point

    • @phizc
      @phizc 1 year ago

      @@rosameltrozo5889 explain.

    • @rosameltrozo5889
      @rosameltrozo5889 1 year ago +4

      @@phizc It's not about the technical details, it's a thought experiment to show the difficulties of making AI "understand" what humans understand more or less intuitively, such as harm.

  • @weksauce
    @weksauce 1 year ago +1

    Asking what the "default" is is the wrong question. Harm is irrelevant. Everything should make the best expected benefit-minus-cost choice. The real questions are how much agents should value other agents' expected benefit minus cost, how much agents should be expected to spend acquiring information to make their expectations more accurate, and how finite agents ought to approach expected values of options that have very small probabilities (Pascal's Muggings and such).
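
    A quick sketch (with invented probabilities and payoffs) of the expected benefit-minus-cost rule this comment proposes, and the Pascal's-mugging failure mode it mentions:

    ```python
    # Each option is a list of (probability, net benefit) outcomes.
    options = {
        "safe_choice":     [(1.0, 10.0)],
        "pascals_mugging": [(1e-9, 1e12), (1 - 1e-9, -1.0)],
    }

    def expected_value(outcomes):
        return sum(p * v for p, v in outcomes)

    for name, outcomes in options.items():
        print(name, expected_value(outcomes))
    # The mugging "wins" on raw expected value (~999 vs 10), which is why
    # finite agents may need to discount vanishingly small probabilities.
    ```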

  • @charlesrussell6183
    @charlesrussell6183 1 year ago

    Great look at the big picture.

  • @doubleru
    @doubleru 1 year ago +1

    In the first example, why is Bob suing his own car's manufacturer, rather than whoever was responsible for creating a hazard in the first place by stopping their car in the middle of traffic so intense that there was literally no way for Bob's car to come to a halt in time to avoid a crash? Because, as the video itself points out, we need to trace the causality in order to measure harm, and the main cause of the crash was the hazard on the road, not how Bob's car reacted to it.

    • @supermax64
      @supermax64 1 year ago +2

      From his point of view, the car chose to throw itself into the fence. Also, he's more likely to get a million-dollar payout from the manufacturer than from a random person. I'm sure some people would or will try to sue unless it's explicitly ruled that the manufacturer is never responsible (which would be surprising, at least at the start).

  • @chanm01
    @chanm01 1 year ago +3

    This is all interesting from an academic POV, but if we're actually gonna do anything with these definitions and criteria, I think you probably need to talk to one of the law professors. Sure, AI presents a bunch of novel fact patterns, but I somehow doubt that the suits which arise are going to be heard as if no prior case law exists.

  • @klutterkicker
    @klutterkicker 1 year ago +9

    So imagine that you're at a time before you get into this scenario, when you have the option of 1.) driving fast, where 0.5% of the time you get into a deadly scenario, or 2.) driving slow, where 0.1% of the time you get into a deadly scenario. We're kind of back at the doctor's dilemma with medicine vs surgery, but is driving slow actually a harm? And what if, instead of poring over all of these decisions, we used that development time to improve our traffic prediction, and we could avoid 20% of possible deadly scenarios... would that have a chance to replace more sophisticated last-resort decision-making?
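
    The comment's numbers make for a quick expected-harm comparison (purely illustrative, normalizing a fatal outcome to harm 1):

    ```python
    p_fast, p_slow = 0.005, 0.001   # chance of a deadly scenario per trip
    avoidance = 0.20                # better prediction avoids 20% of scenarios

    print(p_fast)                   # 0.005  expected harm, driving fast
    print(p_slow)                   # 0.001  expected harm, driving slow
    print(p_fast * (1 - avoidance)) # ~0.004 driving fast + better prediction
    # With these numbers, driving slow still beats better prediction alone.
    ```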

    • @vadrif-draco
      @vadrif-draco 1 year ago +2

      Well said. The example in the video just forced us into the situation in 1.) and then told us to deal with it, without considering how the situation itself could've been avoided.

    • @user-sl6gn1ss8p
      @user-sl6gn1ss8p 1 year ago +1

      @@vadrif-draco I think that's a common problem with utilitarianism: it usually doesn't challenge the reasons for things.

  • @aarocka11
    @aarocka11 1 year ago +39

    I initially read that as haram. Lol

    • @saamboziam5955
      @saamboziam5955 1 year ago +1

      🤣🤣🤣🤣

    • @uropig
      @uropig 1 year ago

      💀

    • @MartinMaat
      @MartinMaat 1 year ago +7

      Basically the same. We want the least haramful outcome at all times.

    • @WilkinsonX
      @WilkinsonX 1 year ago +4

      I read Harambe 🦍💔

    • @aarocka11
      @aarocka11 1 year ago

      @@WilkinsonX dicks out for harambe 😭🦍❤️

  • @arletottens6349
    @arletottens6349 1 year ago +9

    The rule can be simple: minimize your own blame. Which means: stick to the rules, drive safely, and only use accident avoidance that does not create additional dangers.

    • @randomusername6
      @randomusername6 1 year ago +15

      Great, now all that's left to do is defining "blame"! I got it: "blame is responsibility for causing harm". Oh, wait...

    • @underrated1524
      @underrated1524 1 year ago

      Most of society goes by this simple rule. This works, but it does lead to people spending a *lot* of their time and effort playing "blame hot potato". Turning your problems into everyone else's problems is a solid strategy from your point of view, but if everyone does it the problem never gets solved.

    • @C00Cker
      @C00Cker 1 year ago +1

      But then, there is the issue of harming others by being unnecessarily pedantic about following the general rules if the situation requires breaking them.
      Also, most rules are based on the fact that it is almost impossible to coordinate well in real time. With AI agents, the cars can share the current situation on the road and prevent most of the accidents.

  • @salat
    @salat 1 year ago

    16:20 Solving moral dilemmas by weighting _everything_? How? E.g., should a high-price, high-protection car preferably crash into "weaker" cars because that guarantees a higher probability that its passengers survive, while the passengers of the weaker car won't? Should it always crash into other high-price, high-protection cars? Who would want to buy such a crash magnet, and how much would it cost to insure? We've had this discussion before here on the channel, right?

  • @Squeesher
    @Squeesher 1 year ago

    I love her voice, could listen to 1000 videos with her teaching

  • @mpouhahahha
    @mpouhahahha 1 year ago +1

    I fell asleep and it's still 11am 🤤

  • @brookrichardson1373
    @brookrichardson1373 1 year ago

    Why do all of these AI driving scenarios always involve vehicles without working brakes?

  • @bengoodwin2141
    @bengoodwin2141 1 year ago +2

    To me, *some* of these seem obvious. You can cause harm without it being wrong if it was still the least bad outcome, and "achievable" is relative; in the second example, making the "default" "achievable" requires harm.

    • @jursamaj
      @jursamaj 1 year ago +2

      The tipping example is flawed. If anybody is causing harm, it's the employer who isn't paying a living wage, so that he can have artificially low prices.

    • @shandrio
      @shandrio 1 year ago +2

      @@jursamaj But you are changing the frame of reference... You have to narrow it down to only the waiter-client relation in this example to be able to theorize about the problem. Of course, later, in real life, when you take all the players into consideration, the problem gets WAY WAY harder...

  • @Raspredval1337
    @Raspredval1337 1 year ago

    BUT there's another fitness function: expenses. Imagine you're an autonomous car manufacturer, and your car has decided to crash into a safety fence. Now the passenger is alive, injured, and is going to sue somebody, just because they're mad.
    And there's an option to passively crash into another car instead, leaving us with no angry passengers who would try and sue anybody. And it's even somewhat cheaper to make an AI which doesn't care. Makes you think, doesn't it?

  • @vermeul1
    @vermeul1 1 year ago +1

    Obviously the AI is not driving according to “expecting the unexpected”

  • @eidane1
    @eidane1 1 year ago +1

    I think the problem is explaining harm to an AI when the people trying to explain it do not understand it themselves...

  • @atlantic_love
    @atlantic_love 1 year ago +3

    LOL at all the channels trying to ride the "AI" train before it peters out.

  • @tiagotiagot
    @tiagotiagot 1 year ago +1

    WTF? The harm isn't not tipping, the harm is the employer not paying their workers a fair wage for their work.

  • @AcornElectron
    @AcornElectron 1 year ago +1

    Rob looks different somehow.

  • @welemmanuel
    @welemmanuel 1 year ago +2

    "Quantify harm"... engineers trying, and failing, not to be relativistic; this is why technocracy is so appealing to them. I'm not saying it's useless to measure it. The problem is the ruler: morality is arbitrary on a utilitarian worldview.

  • @arsilvyfish11
    @arsilvyfish11 1 year ago

    A great video covering the need-of-the-hour stuff for AI!

  • @juliusapriadi
    @juliusapriadi 1 year ago +1

    And next, the car factors the likely penalties for either its owner or manufacturer into its decision. For example, diplomats are not held legally liable, so a diplomat's car might opt for killing some kids if that meant a better outcome for its diplomat passenger. I'd expect a system designed in favor of the manufacturer, not the passenger, as long as there's no regulation telling the manufacturers to prioritize otherwise.
    Another thought: all those theoretical concepts are beautiful in their logic, but decisions of politicians and managers are not logical, and often irrational. So I find it difficult to predict the adoption of AI based on whether we'll solve harm or AI safety - it's very possible that the (rarely expert) people in charge will simply "press the button" and see what happens.

    • @RAFMnBgaming
      @RAFMnBgaming 1 year ago

      A diplomat, however, is more at risk of scrutiny over their actions and of losing their job over an incident.

    • @muche6321
      @muche6321 1 year ago +1

      I believe most politicians and managers are rational. It's just that they optimize for other values than you'd want them to.
      E.g., politicians optimize for staying in power / getting re-elected. Sometimes that means improving the lives of all people within an area; sometimes it means improving the lives of a select group of people at the expense of other groups in the area and ignoring the opinion of those other groups through gerrymandering, populism, etc.

  • @SkyFpv
    @SkyFpv 1 year ago +3

    Choices which are ethical are not the same as choices which are moral. Ethics concerns justice and reduces a person's blame. Morals ignore justice (allowing forgiveness) and instead consider culture and emotion. You HAVE to separate these two metrics before you can draw a conclusion in these examples.

  • @sabrinazwolf
    @sabrinazwolf 1 year ago +1

    I think that's Goofy, not Pluto.

  • @OcteractSG
    @OcteractSG 1 year ago +4

    It seems like AI is going to be adopted regardless, and the world will have to scramble to figure out the ethical problems before things get too far off the rails.

    • @supermax64
      @supermax64 1 year ago +2

      The penalty for waiting is too great because other countries won't.

  • @dgo4490
    @dgo4490 1 year ago +1

    Obviously, the fairest and most non-discriminate outcome is everyone ded... Equality and all!

    • @FHBStudio
      @FHBStudio 1 year ago +1

      That was the Soviet way, and it's still prevalent today.

    • @raffriff42
      @raffriff42 1 year ago

      “That’s what I call thinking!” ~Majikthise, HHGTTG

  • @eljuano28
    @eljuano28 1 year ago +2

    So, is anyone talking about the fact that Bob is a crappy driver?

    • @IngieKerr
      @IngieKerr 1 year ago +4

      _You_ can :) but that won't solve the AI alignment issue.
      One has to start from the principle "Assume all operators of this machine are foolish" :)
      Alternatively, he might have had a horrible reaction to thinking about categories of homomorphic cube tessellation symmetries, but then arguably that was Bob's own fault for watching a Mike Pound video about programming.

    • @eljuano28
      @eljuano28 1 year ago +1

      @@IngieKerr you're my kind of nerd 🤓

  • @jsherborne92
    @jsherborne92 1 year ago

    I feel for Bob

  • @BunnyOfThunder
    @BunnyOfThunder 1 year ago +1

    Option 4: Drive at a safe speed so you can stop without harming anyone.

  • @CeruleanSounds
    @CeruleanSounds 1 year ago

    I think we should sue AI

  • @Cassius609
    @Cassius609 1 year ago

    harm considered harmful

  • @FHBStudio
    @FHBStudio 1 year ago +2

    There's also the problem that not all "harm" is "bad". Some suffering is necessary suffering. Sacrifice is suffering, and there is no guarantee of a worthwhile payoff. If we start from the premise that harm must always be minimized, sacrifice becomes impossible. Growth and investment become impossible.

    • @ApprendreSansNecessite
      @ApprendreSansNecessite 1 year ago

      You mean sacrificing yourself or "sacrificing" someone else? Because no one would say the former is harm since you do this to yourself, while the latter should be renamed "taking advantage of"

    • @FHBStudio
      @FHBStudio 1 year ago

      @@ApprendreSansNecessite There's a difference between harming yourself and sacrificing yourself. However, the difference to us isn't always clear, let alone to a machine.

  • @xileets
    @xileets 1 year ago +8

    WHY does the autonomous car not detect the hazard and stop? (Because it's a Tesla? heh)
    Seriously tho, this seems like a necessary function of the vehicle and a reasonable expectation for the user, and therefore the manufacturer's or designer's fault for not implementing it.

    • @IngieKerr
      @IngieKerr 1 year ago +5

      The point of the example given is that it is assumed a-priori that it absolutely _cannot_ stop in time without breaking the laws of physics, for any number of reasons that would not deem it to be directly a fault of the car's systems. [e.g. car in front arrived suddenly in front from a side-road without right-of-way, car was not visible due to some transient obstruction... etc].

    • @phizc
      @phizc 1 year ago

      ​@@IngieKerr But she also talked about a lawsuit resulting from it. There the OP's point does matter. Unless, of course, the stationary car teleported to where it was immediately before the AI decided to swerve.

    • @xileets
      @xileets 1 year ago

      @@IngieKerr Good point. I would accept this; however, because a hazard appearing suddenly out of nowhere IS something highly, HIGHLY unlikely, it's not so useful. Consider how this would happen: sinkhole, plane crashing onto the road, etc. It would have to appear WITHOUT warning, inside the anticipated safe stopping distance of the vehicle, in order to be a useful analysis.
      I understand that this is a thought experiment, but being both intimately familiar with philosophy and philosophical discussion, and a risk management engineer, I feel that these highly hypothetical scenarios are far less helpful in teaching and demonstrating "potential" risks, harms, and threats. Concrete examples also have caveats to consider, like engineering oversight, but here we are avoiding physics-breaking solutions in a statistics-breaking problem. Far too fanciful a scenario to demonstrate what is a simple problem.
      BUT I see your point, don't get me wrong. I understand now what they were trying to show.

    • @supermax64
      @supermax64 1 year ago +1

      No amount of sensors can make the car precognitive. Some actions from other drivers WILL result in a crash even with the best efforts from the car to minimize said crash. The thought experiment specifically focuses on one such case that is BY DESIGN inevitable.

  • @timng9104
    @timng9104 1 year ago

    Feels like game theory

  • @theancientagoracorner2379
    @theancientagoracorner2379 1 year ago

    Poor Bob. Always gets screwed in all use cases. 😅

  • @jeromethiel4323
    @jeromethiel4323 1 year ago +4

    Without empathy, it's almost impossible to quantify harm. And computers do not have empathy; I cannot even think of a way to emulate empathy in a digital system.

    • @jursamaj
      @jursamaj 1 year ago +1

      I think a bigger issue is that no machine we now have or can expect any time soon has any actual comprehension. You can't have empathy without having comprehension 1st.

    • @bornach
      @bornach 1 year ago

      A lot of people lack empathy too.
      That doesn't prevent them from rising to the top of society, where they run the companies which create the AI for self-driving cars.

  • @erikanderson1402
    @erikanderson1402 1 year ago +14

    … how about we just build some decent trains. Autonomous cars are a scam and a waste of resources

    • @SiMeGamer
      @SiMeGamer 1 year ago

      Then go and build a train. Trains are some of the most inefficient forms of transportation. The costs of operating and maintaining trains, stations and tracks are terrible. That's why you don't see private companies entering the train business unless it's under government subsidies. And if you are going to argue for the government to do it/be involved, then you will be entering a completely separate debate about the morality of taxes, which is a much broader philosophical avenue.
      Autonomous vehicles, when finally put into practice, will result in much lower traffic because of shared rides, autonomous taxi services, car pools and far fewer occurrences of jams, blockades and accidents. And the more this technology develops and enters the traffic ecosystem, the more we could afford to make smaller vehicles, due to higher safety standards, which will take up even less space. Perhaps we will find a more sustainable train solution. Who knows? What we do know for a fact is that if AI vehicles operate at the presumed standard, then traffic will be much better for everyone.
      I love public transportation as a concept, but it is really hard to do well because of many, many considerations, some of which are moral (taxes, for example). So in the meantime, as we figure out public transportation and urban design, I encourage the development of autonomous vehicles. They could spare us a lot of headaches until we are ready for better public transportation solutions.

    • @erikanderson1402
      @erikanderson1402 1 year ago +3

      @@SiMeGamer By no objective metric is that true.
      Maintaining the fleet of cars needed to move the same number of people as a modern train costs way more and has a much lower level of asset utilization. Trains are much more efficient by every conceivable metric.

    • @erikanderson1402
      @erikanderson1402 1 year ago

      @@SiMeGamer Well, incidentally, train companies were previously forced to provide passenger rail as a public service. I think we should just reconstitute those policies, because they were quite effective. And a fleet of cars constitutes a lot more possible points of failure than an effective public transport system.

    • @muche6321
      @muche6321 1 year ago +1

      ​@@SiMeGamer Let's compare trains with cars.
      Operating trains requires people who need to be trained and paid. Operating a private car requires one person who is not paid. Their training is also usually unpaid, done by parents/friends, followed by a formal test.
      Both trains and cars require maintenance.
      Stations could be compared to parking lots/garages. Stations' maintenance is paid for by the transportation company, whereas parking lots/garages are paid for by the companies/people that want to attract customers, or by owners for themselves.
      Tracks are again maintained by the transportation company, while roads' maintenance is paid for by the government from taxes.
      In summary, the costs of operating trains are concentrated in the train company, whereas most of the costs of operating cars are spread out across other parties.

  • @nilss1900
    @nilss1900 1 year ago +1

    Why couldn’t the car just brake instead of crashing?

    • @supermax64
      @supermax64 1 year ago

      Too close for the brakes to work in time.

    • @initialb123
      @initialb123 1 year ago

      @@supermax64 Then the driver (the AI?) was driving too fast. The primary responsibility is to be able to stop in time, or in American, "to stop short". Road users have a responsibility not to hit stationary objects.
      If you fail to stop in time, it's bad news for you; you are liable, man or machine.

  • @initialb123
    @initialb123 1 year ago +4

    If I can't make out some words and the auto-generated closed captions can't understand what's being said, perhaps the speaker needs to acknowledge their heavy accent and consider some pronunciation classes. If you have no trouble following along, I commend you; however, neither the auto closed caption system nor I could determine what some of the words were.

    • @DavidAguileraMoncusi
      @DavidAguileraMoncusi 1 year ago

      Time stamps?

    • @nickjwowrific
      @nickjwowrific 1 year ago +4

      I would say that if you are a native English speaker, you should be embarrassed that you can't understand what she is saying, and should maybe interact with more people outside of your country. If English is your second language, then I would assume that you know how difficult learning a language is and should be more understanding. Contrary to what you think, her job is not making these videos; she is helping make one because the channel thought she had something interesting to talk about. People have lives outside of just trying to entertain you and are allowed to spend their free time however they like.

  • @Mr.Leeroy
    @Mr.Leeroy 1 year ago

    What accent is that?

  • @hoseja
    @hoseja 1 year ago +1

    This person wants to dictate what you're not allowed to do.

  • @chuckgaydos5387
    @chuckgaydos5387 1 year ago

    Maybe the A.I. could examine our laws, news, and literature in order to determine which of its options would be considered to be the most reasonable to most of society. Of course, this would have to be done in advance since there wouldn't be time to do it when the situation arises. Since there likely would be no objectively best course of action, we'd at least get something that we could live with.

    • @RAFMnBgaming
      @RAFMnBgaming 1 year ago

      It is important to understand that laws can (and should be able to) change to reflect our state as a society, and some are best given de facto leeway beyond what they say: accidental shoplifting of small things is often forgiven without charges, for example, and piracy is accepted for preservation. So fixing the AI on a specific set of laws at a specific time does come with problems.

    • @chuckgaydos5387
      @chuckgaydos5387 1 year ago

      The A.I. would have to keep itself up to date.

    • @RAFMnBgaming
      @RAFMnBgaming 1 year ago

      @@chuckgaydos5387 The problem is that if your objective is to enforce the current laws as best as possible, that implicitly means protecting them from being changed to anything else so you can continue to enforce them in the future. There's a real risk of being trapped in a cultural-legal limbo until the next Carrington event by an AI trying to maintain the status quo.

    • @chuckgaydos5387
      @chuckgaydos5387 1 year ago

      @@RAFMnBgaming The objective is not to enforce current laws. It's to have the A.I. make decisions that will be acceptable to most of human society. Rather than have people try to program the A.I. to do this, the A.I. could observe our opinions and figure it out for itself.

    • @muche6321
      @muche6321 1 year ago

      It seems to me this could lead to something similar to airline ticket overbooking, where the equilibrium is between the number of people not showing up and the number of people overbooked.
      If you're the Bob who got bumped, you might feel harmed and get compensated for it. But that harm is the result of other people wanting the cheapest tickets for themselves.

  • @omegahaxors9-11
    @omegahaxors9-11 1 year ago +2

    What people thought AI safety was: "Either we hit this car or hit this baby, this decision is very important"
    What AI safety was probably going to be: "Either we hit this baby or we take longer to arrive at destination"
    What AI safety actually ended up being: "Baby detected, speeding up to hit baby, baby has been eliminated"

  • @hurktang
    @hurktang 1 year ago

    "The adoption of AI systems is not gonna happen until we figure all this out."
    So Candide...

  • @justwanderin847
    @justwanderin847 1 year ago

    Just say NO to government regulation of computer programming.

  • @kamilziemian995
    @kamilziemian995 2 months ago

    I think we should replace "Ai" with "AI" in the title.

  • @bersl2
    @bersl2 1 year ago +9

    Harm is when my artist and writer friends have their work fed into the machine without their informed consent or fair compensation. >:(

    • @arletottens6349
      @arletottens6349 1 year ago +12

      There's no law that requires consent or compensation for looking at your work and learning from it.

    • @kuhluhOG
      @kuhluhOG 1 year ago +6

      @@arletottens6349 This is more of a philosophical thing: Is an AI learning from something and a human learning from something the same thing?
      Some people (especially companies which push AI) will say yes.
      Other people (especially artists) will say no.
      The question is now what society at large will answer, and that will take time.

    • @maltrho
      @maltrho 1 year ago +9

      No, it certainly is not. Your friends' sales are in no way affected, and the machine does not use their work in any direct way. It is like complaining that writers use public language and words created by other persons without any payment.

    • @kuhluhOG
      @kuhluhOG 1 year ago +2

      @@maltrho Whether the sales are affected depends on the output of the AI.
      Some AI tools are at this point specifically made to mimic specific artists (even living ones) as closely as possible.

    • @maltrho
      @maltrho 1 year ago +2

      @@kuhluhOG They mimic well-known artists' styles, not your totally unknown friends', and practically (if not absolutely) nobody uses chatbots for 'free' fiction literature.

  • @omegahaxors9-11
    @omegahaxors9-11 1 year ago

    Tipping culture needs to die. Just raise your prices. Rich people don't tip anyway so all that does is make things more expensive for people who already have the hardest time paying in the first place. Besides, these days tips just go straight to the CEO anyway.

  • @kibels894
    @kibels894 1 year ago

    "Obviously related to AI systems" yeah because they're obviously harmful lmao

  • @mibo747
    @mibo747 1 year ago

    Where is the man?

  • @uropig
    @uropig 1 year ago

    first