How to make computers less biased

  • Published 30 Jul 2024
  • You might think technology is the great leveller. But as AI and other data-driven innovations race farther and faster ahead, the automation of racial bias is causing growing concern.
    00:00 - Can technology be racist?
    00:50 - Bias in facial-recognition tech
    03:50 - Why do data discriminate?
    05:50 - What can be done?
    07:00 - How can regulations help?
    Sign up to The Economist’s daily newsletter to keep up with our latest stories: econ.st/3gJBH8D
    The limitations of AI: econ.st/3dSfOkU
    Listen to our podcast on why some medical devices work less well for non-white people:
    econ.st/31UDRgB
    Find The Economist’s most recent coverage on science and technology: econ.st/3GMPPI3
    How does the EU plan to regulate AI?: econ.st/3DSoFxh
    Read more about algorithmic bias: econ.st/3pVxmCj
    Listen to our babbage podcast on the promise and peril of AI: econ.st/3q32Iae
    Why has America turned against facial-recognition software? econ.st/3F023fO
    How medicine discriminates against non-white people and women:
    econ.st/327PErM

COMMENTS • 148

  • @seawalkarrg
    @seawalkarrg 2 years ago +21

    “I THINK they were probably a team of light skinned developers…” SHE SAYS

  • @daftwod
    @daftwod 2 years ago +38

    Computers need to be told about the harm that observation of reality can do.
    They can't go around noticing things and expect to get away with it!

  • @GeronimosStolenBones
    @GeronimosStolenBones 2 years ago +11

    The computer will scrub all narratives other than the one they want posted.

  • @Nalot56
    @Nalot56 2 years ago +140

    AI is literally just pattern recognition. If you don’t like the patterns that they are recognizing, consider the origin of those patterns.

    • @jkgh374
      @jkgh374 2 years ago +4

      That’s the point of the video…

    • @Nalot56
      @Nalot56 2 years ago +3

      @@jkgh374 can algorithms be racist?

    • @georgewitheridge4961
      @georgewitheridge4961 2 years ago +2

      Can pattern recognition be racist?

    • @Nalot56
      @Nalot56 2 years ago +6

      @@georgewitheridge4961 I don’t think data is inherently racist. Do you?

    • @Antenox
      @Antenox 2 years ago

      @@Nalot56 It is

  • @bornamofid9254
    @bornamofid9254 2 years ago +8

    Why is that Uber driver driving a BMW😅

  • @mbm8690
    @mbm8690 2 years ago +3

    sorry, but how can a computer "see" someone's "colour"? Even by considering one's IP location there's no guarantee for anything.

    • @Millsmills586
      @Millsmills586 2 years ago

      It's the sampling that is lacking: if you don't feed your AI images of all types of humans, but only "white" people, the sampling is going to be skewed. Incorrect.
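      The sampling argument in this thread — that a model trained mostly on one group will work worse on another — can be illustrated with a toy one-feature "detector". Everything here (group names, brightness numbers, the centroid rule) is invented for illustration, not taken from the video:

```python
import random

random.seed(42)

# Toy one-feature "face detector": classify a brightness value as face vs
# background by nearest class centroid. Group A faces are bright (mean 0.8),
# group B faces darker (mean 0.5), backgrounds dim (mean 0.3).
# All names and numbers are made up for illustration.

def faces(group, n):
    mean = 0.8 if group == "A" else 0.5
    return [random.gauss(mean, 0.05) for _ in range(n)]

def backgrounds(n):
    return [random.gauss(0.3, 0.05) for _ in range(n)]

def train(face_samples, bg_samples):
    """Return the two class centroids learned from training data."""
    face_c = sum(face_samples) / len(face_samples)
    bg_c = sum(bg_samples) / len(bg_samples)
    return face_c, bg_c

def recall(centroids, group, n=2000):
    """Fraction of a group's faces classified as 'face' (closer to face centroid)."""
    face_c, bg_c = centroids
    test = faces(group, n)
    return sum(abs(x - face_c) < abs(x - bg_c) for x in test) / n

# 95/5 skew vs 50/50 balance in the training faces.
skewed   = train(faces("A", 190) + faces("B", 10),  backgrounds(200))
balanced = train(faces("A", 100) + faces("B", 100), backgrounds(200))

for name, model in [("skewed", skewed), ("balanced", balanced)]:
    print(f"{name:8s}  A recall: {recall(model, 'A'):.0%}  "
          f"B recall: {recall(model, 'B'):.0%}")
```

      With the skewed training set, the learned "face" centroid sits near group A's brightness, so the decision boundary lands above most group B faces; balancing the training data moves the boundary down and recovers most of group B's recall.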

  • @Feynman981
    @Feynman981 2 years ago +3

    It all depends on the sensors. Cameras are biased towards albedo; LiDAR is not.

  • @luddicpath6756
    @luddicpath6756 2 years ago +8

    05:24
    Hey Economist! What are these "favored financial behaviors" that are more common among white people? Asking for a friend.

  • @goyasolidar
    @goyasolidar 2 years ago +5

    Algorithms don't create themselves so consider the source.

  • @edwardguo7995
    @edwardguo7995 2 years ago +6

    So what about Asians? Is their situation better or worse than Africans?

    • @abhinavpy2748
      @abhinavpy2748 2 years ago +1

      Doesn't matter, China writes its own algorithms

    • @Millsmills586
      @Millsmills586 2 years ago

      Better, because this algorithm is most likely written either by an "Asian" or a white person. Especially if it was developed in the US.

    • @Millsmills586
      @Millsmills586 2 years ago

      @@abhinavpy2748 this is Uber, an american company

  • @XOPOIIIO
    @XOPOIIIO 2 years ago +39

    "Credit scoring algorithms favored financial behaviors that are more common among white people" - Maybe you should change your financial behavior then?

    • @dfurianz
      @dfurianz 2 years ago +5

      Completely agree..

    • @WaleSoleye
      @WaleSoleye 2 years ago +4

      Well, not many people can just afford to up and switch lifestyles. Many people's financial behaviour is based on the options available to them. And the options are not available to everyone, or are proven to be easier for some than others.

    • @dfurianz
      @dfurianz 2 years ago +8

      @@WaleSoleye Well, some specific examples are needed here, otherwise what you said is just an excuse.

    • @lucaslouzada44
      @lucaslouzada44 2 years ago +2

      It said that the respective backgrounds were comparable, but the whole thing looks quite murky. This is one of those situations that needs clarification instead of being shoved into a racial-bias type of narrative. Let's be honest, though, in acknowledging that the purpose of the report was to ascertain the automatic reproduction of biased patterns through algorithms, and that's pretty much settled, as computer technology is merely reproductive and wasn't created to address such problems…

    • @Munchausenification
      @Munchausenification 2 years ago

      So systems should favour specific behaviours? So you get a better algorithmic score if you buy a Volkswagen over a Citroën, or you spend 20% rather than 15% of your income on food, thereby lowering your score? Where is the line?

  • @doubleaa658
    @doubleaa658 2 years ago +6

    Yeah depends how they were made

  • @maxwaller2055
    @maxwaller2055 2 years ago +2

    *pondering and wondering at 3:47 pm Pacific Standard Time on Thursday, 10 February 2022*

  • @jansport0409
    @jansport0409 2 years ago +18

    Oh great. Now the Economist sounds like the Guardian.

  • @metros9911
    @metros9911 2 years ago +44

    Based, not biased.

  • @bitsbard
    @bitsbard 10 months ago

    If this subject piques your curiosity, then Jack Frostwell's "Game Theory and the Pursuit of Algorithmic Fairness" is a book you shouldn't miss. I was deeply captivated by it.

  • @Nipponson86
    @Nipponson86 2 years ago +2

    The reporter was so lazy that she didn't even bother to do thorough research on the subject...

  • @roninecostar
    @roninecostar 2 years ago +2

    5:26
    Research found that the older credit scoring used by older mortgage lenders favoured particular spending habits that are more common among white people.

  • @thebigjuggalobowski
    @thebigjuggalobowski 2 years ago +31

    I want to point out two things that bother me greatly about this report. The first is that we did not hear anything from the people who create these algorithms or tech systems, or from any single person with an opposing point of view. I still know almost nothing about how these systems are even capable of being racist or how that would work. I understand that tech can be sloppy, but not biased. The second, extremely troubling thing is that the people interviewed, especially Rashida Richardson, had nothing to say. What I call horoscoping the narrative. Go back and listen to her again. Her words were vague enough to apply to any situation, but the word choice was "sophisticated" enough that if one were predisposed to believe what the report is saying, they might just nod along with her as if she were preaching a gospel. It's pretty deceptive.

    • @Munchausenification
      @Munchausenification 2 years ago +5

      It's true this is a short video on a very complex problem. I would recommend a channel called Jordan Harrod on this issue. She has made a couple of videos on this problem, the first being "Is AI Racist? Sometimes. | AI 103: Ethics (Part 1 of Many)". That one and the next one (104) are specifically about what this video from the Economist talks about.

    • @ThomasFromTN
      @ThomasFromTN 2 years ago +1

      I would then imagine you would have a similar problem with another information platform that likewise tends to eschew offering "the opposing viewpoint" when it disseminates its stories. When was the last time you saw Fox News allow anyone who expressed a view that departed from Fox's ideology within 10 miles of a Fox microphone?

    • @thebigjuggalobowski
      @thebigjuggalobowski 2 years ago +1

      @@ThomasFromTN To be honest, even if I think you're right, the argument is a fallacy. I don't watch Fox, but I imagine all media is skewed to a bias. Nonetheless, my points on this specific report are real points, and there is no point bringing up something that doesn't apply to the topic at hand. Fox News is irrelevant to this report. They may have done a similar report, and if your intention is to highlight differences between them, then okay. But I think what you're trying to say is that you don't like Fox and they do it too, so I should be criticizing Fox and not these guys. Which, let's be real, is totally absurd.

    • @brittgayle467
      @brittgayle467 2 years ago +3

      Of course the systems are capable of being racist; if they aren't programmed to include more diverse data, or if racist assumptions are incorporated into the program, they will perpetuate those same biases.

    • @oyuyuy
      @oyuyuy 2 years ago +2

      Computers don't have biases, they just follow instructions. The Economist is simply projecting human emotion onto machines.

  • @JM-gz1ej
    @JM-gz1ej 2 years ago +2

    Will it ever stop ?

  • @importantname
    @importantname 2 years ago +6

    The maker and designer of the software decide who gets the advantages - and business is about making the greatest profit possible, not about levelling the playing field.

    • @conqueryourfuture6134
      @conqueryourfuture6134 2 years ago

      Money is a control tool of the elite, not something they need. There are certain cultures that avoid computer programming like the plague; others dominate the industry….

  • @philipino99
    @philipino99 2 years ago +10

    Very one-sided reporting; it didn't even give those who developed some of these 'racist algorithms' a chance to have their say.

  • @KAMIOUKA
    @KAMIOUKA 2 years ago +32

    The AI that misclassified the couple as gorillas was 100% not fed with black people but instead with real gorillas, which means it was misprogrammed due to human error. If it was fed with actual black people, then to it the couple just seemed to look more like gorillas than black people.
    No case of racism to see, because computers (spoiler) are indeed completely RATIONAL and do not have a bias at all!

    • @WaleSoleye
      @WaleSoleye 2 years ago +5

      I think that’s the(part of) idea of the video. Human error and acknowledging that these issues do exist can help the developers address them appropriately.

    • @XOPOIIIO
      @XOPOIIIO 2 years ago +4

      What caused the problem in this particular case is unclear; people are susceptible to sensations. But search engines are mislabeling photos all the time; they're just the usual mistakes nobody notices until one happens to provoke an emotional response.

    • @KAMIOUKA
      @KAMIOUKA 2 years ago

      @@WaleSoleye As for it being a human error, I highly doubt that and just put that statement into my argument because I wanted to name both possibilities.
      These AIs just decide on statistical probability, and the AI in my opinion thought that the couple were presumably 60% gorilla and 40% human. So it picked gorillas.
      What to do against it, you ask? You can always code in a fault tolerance, adding a moral to the program: if the probabilities of both are high, it always chooses the human, because for us people it is morally more acceptable to classify a gorilla as a human.
      Probabilities can't always be correct. I understand the point of the video though: our moral concept is much more versatile than any AI or PC could ever comprehend, and we have to understand that to fix things computers can't handle without our help.
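      The fault-tolerance idea sketched in the reply above — when the class probabilities are close, always prefer the human label — amounts to an asymmetric tie-break on the classifier's output. A minimal sketch (the function name and the margin value are invented for illustration):

```python
def safe_label(probs, preferred="person", margin=0.2):
    """Pick the highest-probability label, but fall back to a preferred
    label whenever it scores within `margin` of the winner -- encoding
    that one kind of mistake is costlier than the other."""
    best = max(probs, key=probs.get)
    if best != preferred and probs.get(preferred, 0.0) >= probs[best] - margin:
        return preferred
    return best

# Close call (60/40): the tie-break errs toward the human label.
print(safe_label({"gorilla": 0.6, "person": 0.4}))   # -> person
# Clear call: the top label wins as usual.
print(safe_label({"gorilla": 0.9, "person": 0.05}))  # -> gorilla
```

      This doesn't fix a skewed model, but it encodes the asymmetric cost the comment describes: mislabeling a person as a gorilla is far worse than the reverse, so ambiguous cases resolve to "person".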

    • @daftwod
      @daftwod 2 years ago

      Only things which are true are hurtful.

    • @T_J_
      @T_J_ 2 years ago +1

      @@daftwod Tell that to the innocent people about to be executed on death row.

  • @chevalier5691
    @chevalier5691 2 years ago +23

    0:28 I truly wonder how she got the audacity to say dumb stuff like this confidently.

    • @XOPOIIIO
      @XOPOIIIO 2 years ago +9

      The worst part is they don't accept even the slightest possibility that they could be wrong. Going in the wrong direction so persistently is the sure way to keep any problem unsolvable. To solve any problem they have to recognize what the roots of the problem are, but such recognition is inconsistent with their persistent denialism.

    • @codefluence
      @codefluence 2 years ago +4

      I mean, there is consensus about that; the bias is in the data.

    • @Munchausenification
      @Munchausenification 2 years ago +3

      Our biases get transferred into technology. It could even be something as silly as pineapple on pizza. Most people have an idea of what pizza should look like, and the majority of people don't mind pineapple; I think it's 55% or something like that, while 10% hate it and 20-30% love it. Well, if you search pizza images you often have to look far down the list before a pineapple pizza image comes up, even though lots of people love it. It has to do with the number of images in the database and what people click on when searching for images. Also, most of the time the people against something are louder, and our minds are often focused on negativity. I hope that gives a clearer picture of what she is saying.

  • @mannykhan7752
    @mannykhan7752 2 years ago +2

    Great, 1s and 0s have so much bias. I'm never dealing with them again.

  • @goldenunicorn341
    @goldenunicorn341 2 years ago +1

    Great ! Topic !!! 🧐😎

  • @nayaman1023
    @nayaman1023 2 years ago +2

    It just creates the next racism

  • @notmyrealname7634
    @notmyrealname7634 2 years ago +6

    Anthropomorphizing tech to this degree is silly and makes you look less credible

    • @abhiklovesbadbitches
      @abhiklovesbadbitches 2 years ago

      Exactly, it's such a non issue

    • @x9147
      @x9147 2 years ago +3

      @@abhiklovesbadbitches Saying something is a "non-issue" when people are literally losing their jobs can only be said when you are not even 18 years old, don't know how life works yet, and are fully supported by your parents while you watch YT videos all day in your room.
      Have a little more empathy, bro. It's called being "human".

  • @artman12
    @artman12 2 years ago +10

    Should do this video: Is the Economist racist?

  • @alibizzle2010
    @alibizzle2010 2 years ago +5

    Funny how The Economist isn't making videos based on all its recent reactionary anti-woke articles

  • @marilynsolomon5279
    @marilynsolomon5279 2 years ago +16

    Many of the comments so far are very disappointing. The report starts with a gentleman's ability (and that of many others like him) to make a living based on this technology, which is fundamental to most of us. It's only a 9-minute summary; I'm sure if it were longer it could have added much more depth, but blimey, it sounds like a lot of people want to shut down this interesting topic right now. If you found out a piece of tech affected you in any way, whatever was at fault, surely you'd want it fixed!

    • @boar6615
      @boar6615 2 years ago +1

      Most middle-class white people in first-world western countries don't like it when anything disturbs their comfort.
      This is how they react; they face no real issues at large, so every little "problem" is worth fighting over

    • @Omar-kl3xp
      @Omar-kl3xp 9 months ago

      People are pretty ignorant; anyone who knows a little bit about AI will know that a poorly trained model can be very biased, and it has already been happening for a while. There are even documentaries about this. It is not just race but also gender: Amazon at one point found that their AI was only hiring men while rejecting women with the same experience and education.

  • @parafaramaku285
    @parafaramaku285 2 years ago

    6:36 well I guess we'll see...

  • @enzofire5341
    @enzofire5341 2 years ago +3

    I wholeheartedly recommend those people think carefully about the origin of these phenomena instead of taking high moral stances and being politically correct. Having watched too many videos like this from the Economist makes me no longer want to watch this channel.

  • @kunited9
    @kunited9 2 years ago +13

    The Economist used to be a more serious institution; now it's encouraging policing and state regulation of software against the companies' interests?? This is insane

    • @siddhantdeepful
      @siddhantdeepful 2 years ago +2

      That's what policies are for? They are not an antiquated concept, you know.

    • @kunited9
      @kunited9 2 years ago

      @@siddhantdeepful Sure, but praising policies without being specific is like wanting to pay a price without knowing the value.
      AI uses patterns to make choices. That is the definition of discrimination. You can't make water dry

  • @xiflix8956
    @xiflix8956 2 years ago +21

    you think everything is racist

  • @stoicismcore
    @stoicismcore 2 years ago +9

    The computer does not lie. It is not biased.

  • @HelloPenguinYT
    @HelloPenguinYT 2 years ago

    2.48M subs and only 38k views, that says something about the topic... human-made things are totally controlled by humans

  • @fishiestify
    @fishiestify 2 years ago +32

    How to make computers less biased?
    Keep The Economist out of the computer, and the problem will be solved.

  • @MrDude826
    @MrDude826 1 year ago +3

    AI isn't racist. It's purely critical and objective thinking. If it comes to a conclusion that's because it's looking at data.

    • @Aerojet01
      @Aerojet01 1 year ago

      It's a woke conspiracy theory and nothing more. I'm mixed race by the way.

    • @Omar-kl3xp
      @Omar-kl3xp 9 months ago +1

      It is because the developers that trained those models used only pictures of white people to train the model; if they also used minority pictures just as much, it would not be as biased against minority people.

    • @MrDude826
      @MrDude826 9 months ago

      @@Omar-kl3xp Nope, it will be biased because raw data is biased and racist.

  • @ohohoho8544
    @ohohoho8544 2 years ago

    Hypocrisy in mass media, that's the real name of this video

  • @alancient8463
    @alancient8463 2 years ago +5

    Based A.I

  • @myofficetop
    @myofficetop 2 years ago +6

    I don't see any big issue in AI having some difficulties recognizing black people, because AI needs some time to educate itself and fix the issue; the best you can do is report the issue and wait until they fix it. If you don't have patience, you can write your own program and use it.

  • @stevejurgens9836
    @stevejurgens9836 2 years ago +5

    Total garbage.

  • @WENKTUBEWRC
    @WENKTUBEWRC 2 years ago

    🕯🌍🌎🌏🕯

  • @WENKTUBEWRC
    @WENKTUBEWRC 2 years ago

    ⚠☢☣

  • @juanDE1703
    @juanDE1703 2 years ago +26

    Propaganda

  • @chaovo7629
    @chaovo7629 2 years ago +6

    The more I read the Economist, the more I feel they are always taking the moral high ground, being politically correct, subjective, and biased.
    Technology itself is not biased; the programmers, firms, and the data SAMPLE they use could be biased.
    And AI is not 100% correct.
    Rest assured that the comments are not buying what they say.

    • @enzofire5341
      @enzofire5341 2 years ago +2

      I agree with you one hundred percent.

    • @Millsmills586
      @Millsmills586 2 years ago

      But why isn't it fair to call it out when something like this happens? Of course the sampling is garbage. But it still needs to be fixed. Articles like this need to happen so they fix issues like this sampling issue. Obviously, people who aren't "white" are underrepresented in this sample. They never said racist, they said biased. Machines are created by humans, and those humans can have bias.

    • @chaovo7629
      @chaovo7629 2 years ago

      @@Millsmills586 Yes, that's exactly the point - machines are created by humans, and humans could be biased. But The Economist and some of the media are framing it like the technology is evil, just criticizing without thinking about what is behind it. For Uber's issue, they are using Google Image - which is naturally automatically generated online - and the whole issue is telling us that some demographic groups, not just ethnic and gender but also age and so on, are well under-represented. What we need to do is empower those disadvantaged groups through education and public education, NOT just take the moral high ground and call the firms or the programmers racist. Out of the belief that the majority of humans are just, I think Uber is not doing it intentionally.

    • @GrayBlevins
      @GrayBlevins 8 months ago

      Monkey doesn’t wear any pants

  • @paultopping7413
    @paultopping7413 2 years ago +10

    The world has gone mad……. everybody seems to feel discriminated against these days (in some cases it is true, but it is also true that some people make it their mission to feel discriminated against). What we should be thankful for is that in most western 'democracies' you have the right to be paranoid !!

  • @WENKTUBEWRC
    @WENKTUBEWRC 2 years ago

    The little gods of fools are colorful and racist; we are in the era of human foolishness, especially at the top of fraudulent piracy? / There is a God who created all the colors!

  • @mentoriii3475
    @mentoriii3475 2 years ago +5

    What, now AI is racist as well

  • @traveler5973
    @traveler5973 2 years ago +2

    Big lips matter

  • @ghostrider_1701
    @ghostrider_1701 2 years ago +1

    Just nonsense; are people with really healthy brains thinking about this?😅

  • @HologramBoy
    @HologramBoy 2 years ago

    Slava Ukraina

  • @alexsimonelis164
    @alexsimonelis164 2 years ago +2

    Wrong-headed.