Are We Automating Racism?

  • Published Mar 30, 2021
  • Many of us assume that tech is neutral, and we have turned to tech as a way to root out racism, sexism, or other “isms” plaguing human decision-making. But as data-driven systems become a bigger and bigger part of our lives, we also notice more and more when they fail, and, more importantly, that they don’t fail on everyone equally. Glad You Asked host Joss Fong wants to know: Why do we think tech is neutral? How do algorithms become biased? And how can we fix these algorithms before they cause harm?
    0:00 Intro
    1:24 Is AI Racist?
    4:15 The Myth Of The Impartial Machine
    11:09 Saliency Testing
    13:52 How Machines Become Biased
    18:33 Auditing The Algorithms
    20:24 Wrap Up

COMMENTS • 473

  • @Vox
    @Vox  3 years ago +1301

    [UPDATE May 20, 2021] CNN reports: "Twitter has largely abandoned an image-cropping algorithm after determining the automated system was biased." www.cnn.com/2021/05/19/tech/twitter-image-cropping-algorithm-bias/index.html

    • @Neyobe
      @Neyobe 2 years ago

      That’s amazing!

  • @AtiqurRahman-uk6vj
    @AtiqurRahman-uk6vj 3 years ago +15243

    Machines aren't racist. The outcome feels racist due to bias in training data. The model needs to be retrained.

    • @Ana-im4dz
      @Ana-im4dz 2 years ago +7

      Lol 15K likes, no comments

    • @onee1594
      @onee1594 2 years ago +6

      Well, now I would like to see stats on the distribution between black and white software engineers and ML specialists.
      And no, I'm not saying there should be quotas. I just wonder whether it was tested at all.

    • @AtiqurRahman-uk6vj
      @AtiqurRahman-uk6vj 2 years ago +1

      @@onee1594 Feel free to look for that in your nation's government DB, or draw a conclusion from a credible sample size. I am not obligated to provide that for you.

    • @onee1594
      @onee1594 2 years ago +4

      @@AtiqurRahman-uk6vj You are not obligated, and I didn't ask you to provide it.
      There's no need to be so uncivilized unless you think the world revolves around you and your comment.

    • @AtiqurRahman-uk6vj
      @AtiqurRahman-uk6vj 2 years ago

      @@onee1594 Since you replied under my comment instead of opening a separate comment, it is a logical assumption that you were making the request of me, and I declined.
      You and your colonial mindset of xenophobia are a few centuries too late to call others "uncivilized" simply for declining to do your bidding. Good day
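
A minimal sketch of the retraining point in this thread, with synthetic numbers standing in for real face data: a classifier trained on a 95/5 mix underperforms on the under-represented group, and retraining on a rebalanced sample narrows the gap. Everything here is illustrative, not a real pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
W_A = np.array([1.0, 1.0, 1.0, -1.0, -1.0])   # group A's labeling rule
W_B = np.array([1.0, -1.0, 1.0, 1.0, -1.0])   # group B's rule differs

def sample_group(n, w):
    """Synthetic feature vectors whose noisy labels follow the group's rule."""
    X = rng.normal(0, 1, size=(n, 5))
    y = (X @ w + rng.normal(0, 0.5, n) > 0).astype(int)
    return X, y

def per_group_accuracy(n_a, n_b, label):
    Xa, ya = sample_group(n_a, W_A)
    Xb, yb = sample_group(n_b, W_B)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))
    Ta, ta = sample_group(2000, W_A)   # held-out test sets, one per group
    Tb, tb = sample_group(2000, W_B)
    print(f"{label}: accuracy A={model.score(Ta, ta):.2f}, B={model.score(Tb, tb):.2f}")

per_group_accuracy(9500, 500, "skewed 95/5")        # B is under-represented
per_group_accuracy(5000, 5000, "rebalanced 50/50")  # retrained on balanced data
```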

  • @Fahme001
    @Fahme001 3 years ago +10615

    Let's not forget how light works in a camera. I am a dark-skinned person, and I can confirm that light skin physically reflects a higher number of photons, which gives the camera a higher probability of capturing that picture well than with a darker counterpart. The same goes for computational photography and the basic algorithms based on the photos we upload. It only makes sense that they would be biased towards white skin. Why does everything have to be taken as an offensive scenario? We are going too far with this political correctness bullshit. Again, I am a person of dark skin and even I think this is bullshit. Now if you frame it as an issue in identifying a person's face for security reasons or such, then yes, I am all for making it better at recognizing all faces. But please, please make this political correctness bullshit stop.

    • @jezuconz7299
      @jezuconz7299 2 years ago +15

      This is all indeed heading to a point where everything has to be taken into the correctness debate instead of factual and objective responses or solutions

    • @daniae7568
      @daniae7568 2 years ago +5

      this is how barcodes work

    • @faithm9284
      @faithm9284 2 years ago +4

      AMEN! There is no such thing as racism, there is only one race, the human race! Let's stop speaking the negatives! Words are very powerful. When you speak it, you give even fantasy power to 'become'!

    • @airdrummond241
      @airdrummond241 2 years ago +19

      Light is racist.

    • @theoaglaganian1448
      @theoaglaganian1448 2 years ago

      Amen+2
      This video is the definition of greed

  • @aribantala
    @aribantala 3 years ago +5418

    Yes, as a Computer Engineering bachelor and someone who has been working with cameras for almost 4 years now, it's good to address how a camera's apparent weakness at capturing darker objects can mess up AI detections.
    My own bachelor thesis was about implementing pedestrian detection, and it's really hard to make sure the camera is taking a favourable image... And since I am from Indonesia... which, you guessed it, has a smaller white-skinned population... it's really hard to find a good experiment location, especially when I use an already-developed algorithm as a backbone. There are a lot of false positives... ranging from missed counts because the person is darker, to double counts when a fairer-skinned person passes while there are human-shaped shadows.
    We need to improve AI technology with better diversity in its training datasets. It's better to address that weakness and build a better technology than to point fingers... Learn from our mistakes and improve from there... If a hideous person like Edison could do that with his electric lightbulb, why aren't we doing the same while developing even more advanced tech than his?
    The title is very nuanced... But hey, it got me to click... And hopefully others can get past the headline.
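
A minimal sketch of the kind of per-group error audit the comment above describes, assuming detections have been joined with annotated skin-tone groups; the data and column names are stand-ins, not a real benchmark.

```python
import pandas as pd

# Detection results labeled with an annotated skin-tone group (stand-in data).
results = pd.DataFrame({
    "group":    ["darker"] * 4 + ["lighter"] * 4,
    "detected": [0, 1, 0, 1, 1, 1, 1, 0],   # 1 = pedestrian was found
})
miss_rate = 1 - results.groupby("group")["detected"].mean()
print(miss_rate)   # a large gap between groups flags a biased detector
```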

  • @kumareth
    @kumareth 3 years ago +4467

    As a machine learning enthusiast, I can confirm there isn't a very diverse set of data available out there. It's sad, but it's alarmingly true.

    • @kaitlyn__L
      @kaitlyn__L 1 year ago +1

      @Sigmaxon in the case of training datasets, because they can be so expensive to produce, the demand is actually constrained by supply rather than the other way around. Changing the demographics in the dataset is a slow process.

    • @randybobandy9828
      @randybobandy9828 1 year ago

      Why is it sad?

    • @kaitlyn__L
      @kaitlyn__L 1 year ago +1

      @@randybobandy9828 because it leads to less optimal outcomes for everyone, duh

    • @randybobandy9828
      @randybobandy9828 1 year ago

      @@kaitlyn__L It's an issue not worth addressing.
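
One way to make the "how diverse is the data" question in this thread concrete is a composition audit of the training set's annotations. A sketch, assuming per-image skin-type labels exist; many real datasets lack them, which is itself part of the problem.

```python
from collections import Counter

# Per-image annotations; the Fitzpatrick skin-type labels here are assumed.
annotations = [
    {"file": "img001.jpg", "skin_type": "V"},
    {"file": "img002.jpg", "skin_type": "II"},
    {"file": "img003.jpg", "skin_type": "II"},
    # ... thousands more in a real dataset
]
counts = Counter(a["skin_type"] for a in annotations)
total = sum(counts.values())
for skin_type, n in sorted(counts.items()):
    print(f"Fitzpatrick {skin_type}: {n / total:.1%} of training images")
```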

  • @HopeRock425
    @HopeRock425 3 years ago +928

    While I do think that machines are biased, I think that saying they're racist is an overstatement.

    • @faychel8383
      @faychel8383 1 year ago +5

      IQs under 83

    • @randybobandy9828
      @randybobandy9828 1 year ago

      You're a simpleton

    • @Unaveragetrainguy
      @Unaveragetrainguy 1 year ago +10

      The piece carefully avoided saying the technology itself was 'racist', only that it seemingly had 'racist outcomes'.

    • @jordanreeseyre
      @jordanreeseyre 1 year ago +5

      Depends if you define "racism" as requiring malicious intent.

    • @user-gu9yq5sj7c
      @user-gu9yq5sj7c 1 month ago

      They were using "racist" as a description, which the AI outcomes were. They even said in this video that it doesn't require malicious intent. Though there probably is some, because the AI just learns from the prejudiced stereotypes and beliefs in what people post online.
      I saw a video where someone asked an AI for pictures of hardworking people and got caucasian men in suits in an office. So there were prejudiced stereotypes excluding other kinds of activities and jobs from counting as hardworking too.

  • @veryblocky
    @veryblocky 3 years ago +425

    I feel like a lot of these things aren't the result of anything racist, but of other external factors that end up contributing to that. The example of the hospital algorithm looking at expensive patients, for instance, isn't inherently racist. The issue there should be with the factors that cause minority groups to cost less (i.e. worse access to insurance), not with the software.

    • @Zaptosis
      @Zaptosis 1 month ago

      Could also be due to non-racist factors, such as cultural preferences like opting for at-home care, or even subsidies for low-income areas/households which reduce the recorded expenditure of a patient.
      But of course, as a media organization they need to jump to the most rage- and offence-inducing headline, which gets them the most clicks. This is why I never trust Vox & other companies like this.

    • @user-gu9yq5sj7c
      @user-gu9yq5sj7c 1 month ago

      @@Zaptosis This Vox video did say there could be factors that don't come down to active racists. So what are you talking about? You were the one who jumped to rage while accusing this Vox video of it.
      Also, you jumped to the conclusion that racism doesn't exist, when it does and there's evidence.
      You also shouldn't just assume most African Americans want home care, when the African woman in this video said otherwise. Same with some other African YouTubers I've watched. You should see different perspectives.
      It just seems like you don't want to care that there are people negatively impacted by this or by racism.
      It's a double standard, because if there were prejudice against you or your group, you would want the injustice amended.
      So far I think Vox is pretty educational.
      There are also conservatives who falsely cry about prejudice hoaxes against them or caucasians.
      There are people who received AI art results that were racist or sexist stereotypes.
      I saw a video where someone asked an AI for pictures of hardworking people and got caucasian men in suits in an office. So there were prejudiced stereotypes implying other kinds of activities and jobs were less hardworking too.
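
The hospital example in this thread is a proxy-label problem: the model is trained to predict cost, and unequal access means equally sick patients from one group generate lower costs and therefore lower scores. A synthetic sketch of that mechanism, with made-up numbers rather than the actual system:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 20_000
need = rng.gamma(2.0, 2.0, n)                 # true health need (hidden)
group_b = rng.random(n) < 0.5
access = np.where(group_b, 0.7, 1.0)          # group B consumes less care per unit need
cost = need * access + rng.normal(0, 0.3, n)  # the label the model actually sees

# Features: lab results track need; utilization history tracks cost.
X = np.column_stack([need + rng.normal(0, 0.5, n),
                     need * access + rng.normal(0, 0.5, n)])
score = LinearRegression().fit(X, cost).predict(X)

# Enroll the top 10% by score, then compare how sick each group's enrollees are:
enrolled = score >= np.quantile(score, 0.90)
for name, g in [("A", ~group_b), ("B", group_b)]:
    print(f"group {name}: mean true need of enrolled = {need[enrolled & g].mean():.2f}")
# Group B's enrolled patients are sicker: they had to be, to clear the cost bar.
```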

  • @rodneykelly8768
    @rodneykelly8768 3 years ago +2318

    At work, I have an IR camera that automatically measures your temperature as you walk into my facility. How it is supposed to do this is by locking on to the face, then measuring the person’s temperature. Needless to say, I want to take a sledgehammer to it. When it actually works, it’s with a dark face. The type of face it has the most problem with is a light face. If you also have a bald head, it will never see you.

  • @theguardian3431
    @theguardian3431 3 years ago +562

    I just started the episode but I would think that this has something to do with the basic principles of photography. When you take a photo, the subject is usually in the light and the background is darker for obvious reasons. So, the algorithm simply sees the darker faces as part of the background.

    • @kaitlyn__L
      @kaitlyn__L 1 year ago +6

      Indeed - but why didn’t the researchers train it to not do that? Because insufficient testing was done. Which comes back to a blind spot re race in humans. These algorithms are amazingly good at fitting to the curves we ask them to fit, so these problems aren’t inherent to the technology. It’s with the scope of the problem researchers are asking it to solve.

    • @randybobandy9828
      @randybobandy9828 1 year ago +1

      @Kaitlyn L ya because light just happens to reflect off of lighter skin... oh no.. how dare light behave this way!!

    • @rye419
      @rye419 1 year ago +3

      Do you think this issue would occur in a society of majority dark faces? Think about that

    • @user-gu9yq5sj7c
      @user-gu9yq5sj7c 1 month ago

      3:37 What about how they used pics of people with all-white backgrounds? Why isn't the AI deciding that the light faces are part of the light background then? And why is the AI able to pick up the dark hair of caucasians as not part of the background?

  • @bersl2
    @bersl2 3 years ago +876

    There's also the possible issue of "white balance" of the cameras themselves. My understanding is that it's difficult to set this parameter in such a way that it gives acceptable/optimal contrast to both light and dark skin at the same time.

    • @unh0lys0da16
      @unh0lys0da16 11 months ago +1

      That's why you use multiple models: one to detect whether there is a black or white person in view, and then one model for each.
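
One preprocessing angle on the white-balance problem in this thread is to normalize luminance before detection runs; the per-group-model idea in the reply is another option. A rough sketch using plain gamma correction on a float image in [0, 1]; real camera pipelines are far more involved.

```python
import numpy as np

def adaptive_gamma(image: np.ndarray, target_mean: float = 0.5) -> np.ndarray:
    """Pick a gamma that moves the image's mean luminance toward target_mean."""
    img = np.clip(image, 1e-6, 1.0)
    gamma = np.log(target_mean) / np.log(img.mean())
    return img ** gamma

underexposed = np.random.rand(64, 64) * 0.3   # dark stand-in frame
print(underexposed.mean(), adaptive_gamma(underexposed).mean())  # ~0.15 -> ~0.5
```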

  • @NDAGR-
    @NDAGR- 3 years ago +2917

    The hand soap dispenser is a real thing. Straight up

    • @TheSands83
      @TheSands83 2 years ago

      The black guy clearly didn't have his hand underneath it correctly... And I've had soap dispensers not work... You are so oppressed

  • @andysalcedo1067
    @andysalcedo1067 3 years ago +311

    I'm sorry Joss, but how did the only two people in this video who actively work in the tech industry, who are building these automated systems, get only a combined 5 minutes on screen? You don't talk to the computer scientists about solutions or even the future of this tech, yet you talk to Dr. Benjamin and Dr. Noble (who don't code) about "implications" and examples in which tech was biased. Very frustrating as a rising minority data scientist myself, to see this video focused on opinion instead of actually finding out how to fix these algorithms (like the description says).
    This missed an excellent opportunity to highlight minority data scientists and how they feel building these algorithms.

    • @jezuconz7299
      @jezuconz7299 2 years ago +6

      These people don't seek facts or objectivity, only points to blame others for not being politically correct...

    • @kaitlyn__L
      @kaitlyn__L 1 year ago

      I would’ve certainly liked to have seen input from Jordan B Harrod, as she’s done a number of great videos on this subject, but with Vox’s traditional print journalism background I can understand gravitating toward book authors.

    • @anonymousperson1771
      @anonymousperson1771 1 year ago

      That's because the video's intent is to impart the impression of racism regardless of how it actually works. Emotional perception is what they're after.

  • @Lightningflamingice
    @Lightningflamingice 3 years ago +1846

    Just curious, but was it randomized which of the faces (darker/lighter) was on top and which was on the bottom? It wasn't immediately apparent with the tests that were run after, but in both the Obama/McConnell and the two-cohosts tests, the darker face was on top, which may be why there was an implicit bias towards the lighter face.
    If not that, the "racist" face detection can largely be boiled down to the algorithm being fed more training data of white people than of black people, a consequence of darker skin tones comprising a minority of the population. As such, the ML cropper will choose the face it has higher confidence is a face. That could be the source of a racial skew.

    • @kjh23gk
      @kjh23gk 3 months ago

      The Obama/McConnell test was done with two versions of the image, one with Obama at the top and one with Obama at the bottom. The face detection chose McConnell both times.
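
A sketch of that counterbalanced test: run the cropper on both vertical orderings so position alone cannot explain the outcome. `saliency_crop` is a hypothetical stand-in for the model under test.

```python
from itertools import permutations

def counterbalanced_trial(saliency_crop, face_a: str, face_b: str) -> dict:
    """Run the cropper with each face in each position and tally the winner."""
    wins = {face_a: 0, face_b: 0}
    for top, bottom in permutations([face_a, face_b]):
        wins[saliency_crop(top=top, bottom=bottom)] += 1  # which face was kept
    return wins

# A cropper that only prefers the top position splits 1-1 across orderings;
# a face-biased cropper goes 2-0 for the same face in both orderings.
print(counterbalanced_trial(lambda top, bottom: top, "lighter", "darker"))
```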

  • @gleep23
    @gleep23 1 year ago +25

    This is the first video I've seen with "Audio Description" to assist the vision impaired. I'd like to commend Vox for putting in the effort to help differently abled people, especially considering this video's subject matter. Well done for being proactive with assistive technology.

  • @NANI-ys5pc
    @NANI-ys5pc 3 years ago +1343

    This video also seems a bit biased; I don't believe "racism" is the most appropriate reality to associate this phenomenon with.

  • @syazwan2762
    @syazwan2762 3 years ago +1601

    16:15 That got me tripping for a second until I realized they probably just mirrored the video so that the writing comes out right, and she's not actually writing backwards.

  • @booksandocha
    @booksandocha 3 years ago +849

    Funnily enough, this reminds me of an episode in Better Off Ted (S1:E4), where the central parody was on automated recognition systems being "racist" and how the corporation tried to deal with it. Well, that was in 2009...

    • @MsTifalicious
      @MsTifalicious 1 year ago

      That sounds like a funny show. Sadly it won't fly today, but I'll be looking for it online now.

  • @divinebitter1638
    @divinebitter1638 3 years ago +932

    I would like to see the contrast pic of the two men Joss took at the beginning and uploaded to Twitter repeated, but with the black man on the bottom. The background at the top of the pic they took was quite dark, and the lack of contrast might have contributed, along with Twitter's weighting bias, to the white face being featured. I don't think Twitter would switch to picking the black face, but it would have helped control for an extra variable.

  • @the_void_screams_back7514
    @the_void_screams_back7514 3 years ago +1176

    the level of production on this show is just
    * chef's kiss *

  • @SpaceWithSam
    @SpaceWithSam 3 years ago +908

    Fact: almost everyone goes straight to the comments section!

  • @robina.9402
    @robina.9402 3 years ago +812

    Can you please put the names of the people you are interviewing in the description and links to their work/social media? Especially if they have a book we could support!

  • @bellatam_
    @bellatam_ 3 years ago +604

    Google photos thinks that all my Asian family and friends are the same person

  • @fabrizio483
    @fabrizio483 3 years ago +502

    It's all about contrast and how cameras perceive darker subjects. The same thing happens when you try to photograph a black cat, it's very difficult.

  • @Dan007UT
    @Dan007UT 3 years ago +97

    I wish they had rerun the picture test from the beginning, but put a bright white background behind both guys.

  • @elokarl04
    @elokarl04 3 years ago +668

    It's been forever since I've last seen Joss in a video. I've almost forgotten how good and well-constructed her videos are.

  • @I_am_Theresa
    @I_am_Theresa 3 years ago +1661

    I swear I know some of those AI people! Imagine seeing your face pop out of that face randomiser!

  • @WMDistraction
    @WMDistraction 3 years ago +666

    I didn’t realize I missed having Joss videos this much. She’s so good!

  • @TheAstronomyDude
    @TheAstronomyDude 3 years ago +64

    Not enough black people in China. Most of the datasets every algorithm uses were built from CCTV footage of Chinese streets and Chinese ID cards.

  • @jasonpce
    @jasonpce 3 years ago +162

    Ya know what? Good for black people. We don't need facial recognition in today's society, and I genuinely perceive it as a slippery slope when it comes to surveillance. If computers are having trouble recognizing black people, all that means to me is that corporations and the government will have a harder time collecting data on them. I swear to God, we should be having conversations about whether facial recognition software should exist, not whether it's racist, because imo the former conversation is of much more importance.

    • @CleverGirlAAH
      @CleverGirlAAH 2 years ago

      Yeah, we can certainly agree on this without even bringing the possible racial incongruencies into the conversation. The militarized police state is evil. Period.

    • @CarlTelama
      @CarlTelama 2 years ago

      They literally discuss whether it should exist at all if you watch the video the whole way through

  • @aguBert90
    @aguBert90 3 years ago +179

    "The human decisions in the design of something (technology or knowledge)" is what academics actually mean when they say "facts are a social construction". It doesn't mean the thing is fake (the most common, and wrong, reading); it means there are human externalities and unintended outcomes in the process of making a technology or a body of knowledge. Tech and knowledge are presented to the public as a finished, factual black box; not many people know how they were designed, investigated, etc.

  • @marqkistoomqk5985
    @marqkistoomqk5985 7 days ago

    I just took a C1 English exam and the third listening task was literally a clip from this video. It was nice to see a YouTube video as part of such an exam.

  • @vijayabhaskarj3095
    @vijayabhaskarj3095 3 years ago +282

    This is exactly why AI-powered software is not for 100% automation; it should always be used as a support tool for the human who is responsible for the job. For example, in your health-risk prediction task, the threshold for flagging a high-risk patient should be lowered from 90%+ to 70%+, and a human should verify whether they are indeed high-risk patients. This will save both time (as humans are looking only at mid-to-high-risk patients) and resources, and reduce the bias.
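
A minimal sketch of the support-tool pattern described above, using the comment's illustrative 90%/70% thresholds; the cutoffs and action labels are assumptions, not any deployed system's.

```python
def triage(risk_score: float) -> str:
    """Route a model's risk score to an action; thresholds are illustrative."""
    if risk_score >= 0.90:
        return "flag: enroll in care program"   # still worth a human glance
    if risk_score >= 0.70:
        return "queue for human review"         # model unsure; a clinician decides
    return "routine care"

for score in (0.95, 0.78, 0.40):
    print(score, "->", triage(score))
```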

  • @user-vn7ce5ig1z
    @user-vn7ce5ig1z 3 years ago +55

    2:58 - Lee was on the right track, it's about machine-vision and facial-detection. One test is to try light and dark faces on light and dark backgrounds. It's a matter of contrast and edge- and feature-detection. Machines are limited in what they can do for now. Some things might never be improved, like the soap-dispenser; if they increase the sensitivity, then it will be leaking soap.
    8:13 - And what did the search results of "white girls" return? What about "chinese girls"? 🤨 A partial test is useless. ¬_¬
    9:00 - This is just regular confirmation bias; there aren't many articles about Muslims who… sculpted a statue or made a film.
    12:34 - Yikes! Hard to deny raw numbers. 🤦
    12:41 - A.I.s are black-boxes, you _can't_ know why they make the "decisions" they make.
    13:33 - Most of the people who worked on developing technologies were white (and mostly American). They may or may not have had an inherent bias, but at the very least, they used their own data to test stuff at the beginning while they were still just tinkering around on their own, before they were moved up to labs with teams and bigger datasets. And cats built the Internet.🤷
    14:44 - I can't believe you guys built this thing just for this video. What did you do afterwards? 🤔

    • @kaitlyn__L
      @kaitlyn__L 1 year ago +2

      Re 9:00, and what is the underlying societal reason that the majority of English language newspaper reports about Muslims have that negative tilt…? The implications extracted from training data merely reflect society.

  • @danielvonbose557
    @danielvonbose557 2 years ago +3

    There should be an analog of the precautionary principle used in environmental politics, a similar principle applied to social issues: if there is a significant and reasonable risk in doing something, then that thing should not be done.

  • @Oxideacid
    @Oxideacid 3 years ago +269

    11:50
    We're just gonna gloss over how she writes backwards so perfectly?

  • @atticusbrown8210
    @atticusbrown8210 2 years ago +2

    In the hand video, the black person was tilting his hand so that it went around the area the sensor detects most easily, while the white hand was directly under it. That would most likely cause a difference.

  • @virlives1
    @virlives1 2 years ago +7

    A clear example starts with a YouTube channel that publishes content internationally and receives comments in several languages. The thing is, the Yankees, or Americans, don't tolerate people speaking another language, so they dismiss any comment in another language. We have been experiencing it.

  • @arturchagas7253
    @arturchagas7253 1 year ago +2

    love the fact that this video has audio description! this is so important

  • @terrab1ter4
    @terrab1ter4 3 years ago +362

    This reminds me of that book, "Weapons of Math Destruction"
    Great read for anyone interested, it's about these large algorithms which take on a life of their own

  • @greyowul
    @greyowul 3 years ago +153

    People seem to be noticing how nicely the professor can write backwards...
    Fun fact: that's a camera trick!
    ua-cam.com/video/eVOPDQ5KYso/v-deo.html
    She is actually writing normally (so the original video shows the text backwards), but in editing the video was flipped, making the text appear normal. Notice that she appears to write with her left hand, which should only be a 10% chance.
    Great video btw! I thought the visualization of the machine learning process was extremely clever.

  • @emiliojurado5069
    @emiliojurado5069 3 years ago +262

    It will be funny when machines start to prefer machines and AI over humans.

  • @danzmachinz2269
    @danzmachinz2269 3 years ago +249

    Joss!!! Why did you print all those photos!!!!!?

  • @civ20
    @civ20 3 years ago +438

    The most important thing when it comes to training A.I. is the raw data you feed it. Give the A.I. 51% images of white people and 49% images of black people, and the A.I. will have a ~1% bias towards white people.
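
The linear claim above (a 51/49 mix giving ~1% bias) is worth measuring rather than assuming, since the gap depends on the task and model as well as the mix. A toy sweep over training mixes, with synthetic data standing in for the two groups:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
w_a = np.array([1.0, 1.0, -1.0])   # group A's labeling rule
w_b = np.array([1.0, -1.0, 1.0])   # group B's rule differs

def group(n, w):
    X = rng.normal(0, 1, size=(n, 3))
    return X, (X @ w > 0).astype(int)

for share_a in (0.51, 0.75, 0.95):
    n_a = int(10_000 * share_a)
    Xa, ya = group(n_a, w_a)
    Xb, yb = group(10_000 - n_a, w_b)
    model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))
    Ta, ta = group(5_000, w_a)
    Tb, tb = group(5_000, w_b)
    gap = model.score(Ta, ta) - model.score(Tb, tb)
    print(f"training share A = {share_a:.0%} -> accuracy gap = {gap:+.3f}")
```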

  • @TubOfSun
    @TubOfSun 3 years ago +224

    Waited for Joss for what feels like years

  • @mequellekeeling5029
    @mequellekeeling5029 3 years ago +359

    At the beginning of the video I thought this was dumb, but by midway through I'm like: this is what we need.

  • @testos2701
    @testos2701 11 months ago +1

    This is happening everywhere right now, when you go shopping, to restaurants, to buy a house, to buy a boat, to get a job. It is designed that way, from start to finish, and there are always excuses of why this is happening and promises that it will change but I have yet to see any changes! As a matter of fact the more you dig the more you will find! 😅🤣😂

  • @wj35651
    @wj35651 3 years ago +69

    18:36 Why are they pretending to talk over video chat when they had a crystal-clear picture from another camera? Reality and perception, subtle differences.

    • @Vox
      @Vox  3 years ago +79

      We had a camera crew on each end of our zoom call, since we couldn't travel due to Covid. - Joss

  • @d_as_fel
    @d_as_fel 3 years ago +139

    15:50 How can she write in mirror image so effortlessly??

  • @caioarcanjo2806
    @caioarcanjo2806 2 years ago +4

    Why not just post the same picture with the two positions swapped, so we already get a good estimate :)

  • @Bethan.C
    @Bethan.C 1 year ago +1

    Haven’t seen any new videos come from Joss, miss her so much~

  • @kuldeepaurya9178
    @kuldeepaurya9178 3 years ago +8

    Wait.. how did the soap dispenser differentiate between the two hands???

  • @retrocat604
    @retrocat604 3 years ago +77

    It's the same with face scan security.

  • @michaelfadzai8221
    @michaelfadzai8221 3 years ago +199

    So Twitter said they didn't find evidence of racial bias when testing the tool. My opinion is that they were not looking for it in the first place.

  • @ShionChosa
    @ShionChosa 3 years ago +30

    There was a Better Off Ted episode about this. The corporate head office decided to discontinue use of the energy-saving technology to save money.

  • @bananaluvs111
    @bananaluvs111 3 years ago +42

    I am amazed by the look of the studio. I would love to work there; the atmosphere is just different, unique, and everyone has a place there 😍

  • @ChristianTheodorus909
    @ChristianTheodorus909 3 years ago +102

    long time no see Joss!

    • @torressr3
      @torressr3 3 years ago +5

      Right? I missed her too. She's a great journalist and freaking cute as all hell!

  • @chafacorpTV
    @chafacorpTV 3 years ago +365

    Me at first: "who even asks these questions, seriously?"
    Me after finishing the video: "Aight, fair point."

  • @Curious_mind3408
    @Curious_mind3408 3 years ago +42

    Wait, the cameraman is in both rooms, but they are FaceTiming each other???

  • @AnthonyVasquezEndZz
    @AnthonyVasquezEndZz 2 years ago +2

    Could it be contrast? What if you photoshopped the skin tones to green, yellow, or red, and inverted the hair color? Then use people with dark skin and light-colored hair, and light skin and light hair, to see if the contrast difference is what's causing this.
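
A sketch of the recoloring experiment proposed above: shift hue while keeping brightness, then re-run the detector to see whether contrast alone explains the gap. `detect_faces` is a hypothetical detector and the image is a stand-in.

```python
import numpy as np
from PIL import Image

def hue_shift(img: Image.Image, degrees: float) -> Image.Image:
    """Rotate hue while leaving saturation and value (brightness) alone."""
    hsv = np.array(img.convert("HSV"), dtype=np.uint16)
    hsv[..., 0] = (hsv[..., 0] + int(degrees / 360 * 255)) % 256
    return Image.fromarray(hsv.astype(np.uint8), "HSV").convert("RGB")

portrait = Image.new("RGB", (64, 64), (180, 140, 120))  # stand-in for a photo
shifted = hue_shift(portrait, 120)

# With a real detector one would compare confidence across hue shifts:
# for deg in (0, 60, 120, 180):
#     score = detect_faces(hue_shift(portrait, deg))   # hypothetical detector
# If scores track luminance rather than hue, contrast is the likely culprit.
```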

  • @jordanjj6996
    @jordanjj6996 3 years ago +172

    What a thought-provoking episode! That young woman Inioluwa not only knew the underlying problem, she even formed a solution when she said it should be devs' responsibility to proactively be conscious of those who could be targeted or singled out in a social situation, and to do their best to prevent it in advance. She's intelligent and understands just what needs to be stated in a conflict: a solution. Hats off to her.

  • @Bthepig
    @Bthepig 3 years ago +144

    Wow, another amazing video. I love the high-minds-meet-middle-school-science-fair feel of these videos. They're so accessible but also tackle really massive questions. Each one is so well put together and so thought-provoking.

  • @venusathena3560
    @venusathena3560 3 years ago +204

    Thank you so much for making this free to watch

  • @pendlelancashire
    @pendlelancashire 2 years ago +1

    *I am surprised the scientists working on these algorithms only facilitate europhilic imaging.*

  • @anonymousbub3410
    @anonymousbub3410 3 years ago +10

    7:45 me seeing a little hand wave on the edge of the screen

  • @dEcmircEd
    @dEcmircEd 3 years ago +7

    Maybe it was more tech-focused, but it was way more interesting to me than the one about assessing one's own racism, which seemed a bit more frivolous in its sourcing and its overall process.
    Joss does really great stuff

  • @meredithwhite5790
    @meredithwhite5790 3 years ago +63

    Algorithms of Oppression is a really good book if you want to learn about racial and gender bias in big tech algorithms, like Google searches. It shows that machines and algorithms are only as objective as we are. It seems like machine learning and algorithms are more like groupthink and are not objective.

  • @dr_jj
    @dr_jj 3 years ago +64

    Weapons of Math Destruction by Cathy O'Neil really touches hard on this subject, where the biases of the algorithm's designers or customers have big negative impacts on society. There seriously needs to be some kind of ethical standard for designing algorithms, but it's so damn hard... :/

  • @killianbecker1164
    @killianbecker1164 3 years ago +19

    This feels like a pbs kids show. With the set and all!

  • @MikosMiko
    @MikosMiko 1 year ago +1

    I am black and I build models. The theory is: bad data in, bad data out. Whatever data and rules that these algorithms were built on is what should be in question. Machines are not racist, the people (in tech companies, vendors, agencies) who build them are.

  • @augustlions
    @augustlions 3 years ago +84

    I see Joss Fong I click

  • @charlespaine987
    @charlespaine987 2 years ago

    Have you considered infrared (radiated heat) differences? Light and dark surfaces radiate at different rates.

  • @cheesecakelasagna
    @cheesecakelasagna 3 years ago +30

    Love the production especially on the set!

  • @pranavkakkar7637
    @pranavkakkar7637 3 years ago +51

    I missed seeing Joss in videos. Glad she's back.

  • @ZubinSiddharth
    @ZubinSiddharth 3 years ago +22

    Wait, how was the professor from Princeton able to write in reverse on that glass, so that we could read it straight?

  • @lorentianelite63
    @lorentianelite63 3 years ago +48

    I'm a simple man. I see Joss, I click.

  • @Noahsjpgs
    @Noahsjpgs 3 years ago +2

    what's the music at 19:30?

  • @gadmas2670
    @gadmas2670 3 years ago +57

    Goddamn interesting as a CS student, thanks!

  • @ronxaviersantos3184
    @ronxaviersantos3184 3 years ago +51

    Joss talking about Twitter at 10:09, then it went straight to an ad, and you guessed it: Twitter

  • @eduardomirafuentes1420
    @eduardomirafuentes1420 2 years ago +1

    I understand the video's purpose, but my question is: how much do machines have to know about us and how we act?

  • @user-dv1dd3cf8v
    @user-dv1dd3cf8v 1 year ago

    We found in class that this video cuts off at about 5 minutes from the end.

  • @rizkypratama807
    @rizkypratama807 3 years ago +40

    Stop with the Joss Fong comments, I can't stop liking them

  • @pavanyaragudi
    @pavanyaragudi 3 years ago +42

    Joss Fong!❤️🔥

  • @vivekburuj414
    @vivekburuj414 2 years ago +4

    Or you could be less insecure about such negligible differences and be happy. Just make yourself stronger on the inside and let the machine do its work.

    • @CleverGirlAAH
      @CleverGirlAAH 2 years ago

      My man.

    • @johnmaris1582
      @johnmaris1582 2 years ago +1

      Unintended consequences and outliers are themselves interesting things to discover. Not to mention this experiment has stronger implications down the line, when algorithms play a much bigger role in society's decision-making for jobs, criminal background checks, recruitment, remuneration, etc... You are right that this example is not a big problem. What it can or will become is worth understanding.

  • @kingjulian420
    @kingjulian420 3 years ago +10

    4:30. Why are you filming and driving!! No don’t read a quote!! JOSS NOOO. *boomp*
    *beep beep beep*

  • @MissSarcasticBunny
    @MissSarcasticBunny 3 years ago +57

    This is a really interesting look into machine learning - great job Glad You Asked team! It stands to reason that there would be bias no matter what because even if the machine doesn't have any inherent bias or self-interest in focusing on one face over another, people are still feeding information into the machine and the machine is basing its results on that information. And humans are still flawed beings who bring with them their own personalities, thought patterns, biases, childhood backgrounds, class backgrounds, et cetera. The only solution is to focus on what information we're feeding machines.

  • @LouisEdouardJacques
    @LouisEdouardJacques 3 years ago +28

    Good video! It is nice that you are taking the time to point out where it manifests concretely in the present world. Many other discussions on the subject revolve around automatic guilt. This video seems to start with a different approach and hopefully will appeal to a wider audience, not simply preaching to the choir. The counter-bias briefly mentioned at the end is what worries me. It should be part of the solution but we have to be careful. First, is it being done at all or just talked about when the public is listening? And most importantly to me, are we careful with which framework is used to design this counter bias? Some are worried that letting people think for themselves is amplifying racism. Others think that we are made blind to other and future kinds of diversities when we don't let people bring their own analysis and solutions, even if they can be contradictory.

  • @aronjohnreginaldo1913
    @aronjohnreginaldo1913 3 years ago +45

    When you first see Joss's face in the thumbnail, for sure every topic is interesting 😅

  • @alvakellstrom9109
    @alvakellstrom9109 8 months ago

    Great video! Really informative and very important. Thanks for a great watch

  • @Dallas_AWG
    @Dallas_AWG 3 years ago +55

    Joss is so good. She has the perfect voice

  • @luizmpx833
    @luizmpx833 3 years ago +216

    very good information, it reminded me of your video from 5 years ago...
    "Color film was built for white people. Here's what it did to dark skin"

  • @mariofrancisco6717
    @mariofrancisco6717 2 years ago +1

    The machines are not racist; they are poorly programmed or misconfigured by people who did not take proper care during the project.

    • @bluebutterfly4594
      @bluebutterfly4594 2 years ago +1

      And why do the creators still not bother to take that care? This is not the first time these issues have been raised, and it's not getting better.
      So why do you think they choose to disregard part of the population?

  • @Atheral
    @Atheral 3 years ago +20

    Joss Fong. That is all.

  • @santif90
    @santif90 3 years ago +59

    I'll take this video as having the good intention of starting an important conversation. But your data is kind of funky

  • @Nicole__Natalia
    @Nicole__Natalia 1 year ago

    Off topic but why is the progress bar blue instead of red?

  • @fanary5454
    @fanary5454 3 years ago +3

    Joss Fong never disappoints me

  • @hotroxy240
    @hotroxy240 3 years ago +42

    Is this why the automatic sinks in public restrooms barely work for me? Because they're designed to read lighter skin 🧐🥲🧐🥲

  • @DigitalicaEG
    @DigitalicaEG 3 years ago +24

    Joss is LIFE

  • @RealCosmosry
    @RealCosmosry 3 years ago +47

    She is one of my favorite Vox hosts. Her videos are structured very well and always have something interesting that hooks me for the entire video. A nice initiative by Vox, 'Glad You Asked S2'.

  • @elbaecc
    @elbaecc 3 years ago +29

    As more and more governments (say China, India, and Middle Eastern countries) employ face-recognition tools that use such AI for law enforcement and surveillance, and they're buying said software from Western countries, I wonder how accurate these systems are, seeing as the AI was trained primarily on white faces. Do these AIs then learn "locally", and if so, can this data be fed back into the original AI to teach it to recognise those ethnicities in Western countries with an ethnically diverse population, like the USA, UK, etc.?

    • @kaitlyn__L
      @kaitlyn__L 1 year ago +1

      They don’t learn while being run. Training is very compute intensive and is done once, centrally, on large servers. After training is complete the neural networks can run on very little compute power, on a phone or laptop or camera, but they’re totally static.
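
A minimal sketch of that train-once, deploy-frozen lifecycle: the heavy learning happens in one central job, and the shipped artifact never updates in the field unless a new version is trained. The tiny logistic-regression trainer here is just a stand-in for a real training pipeline.

```python
import numpy as np

def train(X, y, lr=0.1, steps=500):
    """Heavy, central, one-time training job (stand-in for a real pipeline)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))        # logistic regression by gradient descent
        w -= lr * X.T @ (p - y) / len(y)
    return w                                 # the frozen artifact that ships

def predict(w, X):
    """Cheap, on-device inference: no learning happens here."""
    return (X @ w > 0).astype(int)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = (X[:, 0] > 0).astype(int)
w = train(X, y)                              # done once, centrally
print(predict(w, rng.normal(size=(5, 4))))   # deployed weights never change
```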

  • @daylight1nsomniac536
    @daylight1nsomniac536 3 years ago +39

    as always the editing is absolutely superior.
    keeps me hooked.