How I'm fighting bias in algorithms | Joy Buolamwini

  • Published 15 Nov 2024

COMMENTS • 662

  • @MarkArandjus
    @MarkArandjus 7 years ago +261

    Okay, let me break it down for you folks real simple-like: if a webcam finds it difficult to detect, for example, dark skin tones, then the webcam is biased against dark-skinned users, because it performs poorly for them. She's not saying this is a result of racism on the part of programmers, or that webcams are racist; it's just an unfortunate by-product of the technology, and she's working to correct it. Facial recognition is a powerful tool with wide applications from privacy to security to entertainment; this isn't some SJW nonsense. Jeez.

    • @Samzillah
      @Samzillah 7 years ago +15

      Seriously. Imagine cops trying to find a criminal with the technology but failing because of this flaw. There are billions of non-white people, so it needs to work on them too.

    • @DenGuleBalje
      @DenGuleBalje 4 years ago +6

      @gbmpyzochwfdisurjklvanetxq You obviously have no idea how a camera works.

    • @DenGuleBalje
      @DenGuleBalje 4 years ago +17

      @gbmpyzochwfdisurjklvanetxq Are you unaware that a camera relies on light hitting the sensor? Darker skin reflects less light. The less light, the longer the exposure needs to be to give a good visual representation of what you're looking at. A webcam sets the auto exposure to get a set amount of light to the sensor. If the background is a lot lighter than the person's skin, the face will look even darker, because the camera shortens the exposure to reduce the overall brightness.
      Another factor is that face recognition relies on contrast to make out what an eyebrow, mouth or nose is. Dark brown on black is just harder for a computer to define than, for example, brown on "white".
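
      A toy sketch of the auto-exposure behaviour described above (all numbers illustrative, not from any real camera firmware): the camera scales exposure so the frame's mean brightness hits a fixed target, so a bright background drags the mean up and pushes a darker face darker still.

      ```python
      import numpy as np

      # Scale the frame so its mean brightness hits a fixed target,
      # mimicking a webcam's naive auto-exposure.
      def auto_exposure(frame, target_mean=118.0):
          gain = target_mean / max(frame.mean(), 1e-6)
          return np.clip(frame * gain, 0, 255)

      face = np.full((10, 10), 60.0)         # darker-skinned face region
      background = np.full((10, 30), 220.0)  # much lighter background
      frame = np.hstack([face, background])
      adjusted = auto_exposure(frame)
      print(adjusted[:, :10].mean())         # face drops from 60 to ~39
      ```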

    • @amychittenden8993
      @amychittenden8993 4 years ago +8

      This would not happen if the coders had dark skin. It depends on who the coders are. So, yes, it really is racism, albeit a more passive form, but with the same results.
      ua-cam.com/video/gV0_raKR2UQ/v-deo.html

    • @MarkArandjus
      @MarkArandjus 4 years ago +5

      @@amychittenden8993 Okay, sure: if we look at it by outcome, then it is racism, the same way systemic racism is racism even if that was not the intended design.

  • @ShonTolliverMusic
    @ShonTolliverMusic 7 years ago +40

    Reminds me of that video explaining how Kodak film couldn't reproduce brown skin tones properly until the mid '80s.

  • @Nick-kb2jc
    @Nick-kb2jc 7 years ago +165

    I can see that lots of the people in the comments have no clue how machine learning works...

    • @HarikrishnanR95
      @HarikrishnanR95 7 years ago +27

      Nick, just like the lady in the video, who has no clue how facial recognition works.

    • @thebrochu1983
      @thebrochu1983 6 years ago

      Red

    • @fakeapplestore4710
      @fakeapplestore4710 6 years ago +20

      "MIT grad student Joy Buolamwini was working with facial analysis software "

    • @ckilr01
      @ckilr01 4 years ago +9

      Do you understand how bias works? Bias is an unconscious preference. It's related to the mirror principle, where we are unconsciously attracted to those like us and repelled by what we unconsciously dislike. She is saying she is fighting your unconscious choices, in other words choices not rooted in racism or hate.

    • @SuperMsmystery
      @SuperMsmystery 3 years ago +19

      @@HarikrishnanR95 She has a PhD, but continue mansplaining.
      How about you publish your reasoning?

  • @BERNARDO712
    @BERNARDO712 7 years ago +62

    Nice resume:
    Joy Buolamwini is a poet of code on a mission to show compassion through computation. She is a Rhodes Scholar, Fulbright Fellow, Google Anita Borg Scholar, Astronaut Scholar, A Stamps President's Scholar and Carter Center technical consultant recognized as a distinguished volunteer. She holds a master's degree in Learning and Technology from Oxford University and a bachelor's degree in Computer Science from the Georgia Institute of Technology.

    • @austinjohn8713
      @austinjohn8713 2 years ago +2

      And she does not understand how AI works. The problem she is calling bias isn't bias; it was not coded. An AI algorithm is given a data set to learn features, and this learning is then used to make predictions given new data. If the AI struggled with her face, it was because it wasn't trained on a data set that contained dark skin tones. To fix the problem, feed more black faces to it. It has nothing to do with bias. If the AI was trained only on black faces, it would not recognize white faces unless the face was covered with a black mask.
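
      A synthetic sketch of the failure mode this thread is arguing about: a classifier fitted almost entirely on one group generalizes poorly to a group whose examples look different. Toy data only, no real faces; the "groups" are arbitrary clusters invented for illustration.

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      X_a = rng.normal(0.0, 1.0, size=(900, 2))      # well-represented group A
      y_a = (X_a.sum(axis=1) > 0).astype(int)
      X_b = rng.normal(3.0, 1.0, size=(100, 2))      # under-represented group B
      y_b = (X_b[:, 0] - X_b[:, 1] > 0).astype(int)  # a different decision rule

      model = LogisticRegression().fit(X_a, y_a)     # trained on group A only
      print("group A accuracy:", model.score(X_a, y_a))  # high
      print("group B accuracy:", model.score(X_b, y_b))  # near chance
      ```

      Whether you call that "bias" or "a skewed data set" is the semantic dispute in this thread; the measurable gap is the same either way.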

    • @abdullahb4453
      @abdullahb4453 2 years ago +6

      @@austinjohn8713 That is what she said. The algorithm is biased because of that. :)

    • @austinjohn8713
      @austinjohn8713 2 years ago +2

      @@abdullahb4453 Algorithms are not biased. What a machine learning algorithm does is not explicitly programmed, so it makes no sense to accuse it of bias. If she is an expert in the field, she could retrain it using black faces only and see that it would behave toward white faces the same way it behaved toward black faces. She is looking for racism where it does not exist. I say this as a black person.

    • @randomguy_069
      @randomguy_069 2 years ago +7

      @@austinjohn8713 Correct. Algorithms learn from what we teach them. They are not biased; we are biased in the training data we feed them. It feels as if humans are masking an unethical aspect of their society by calling it the fault of AI.
      And in recent years I have actually seen many leaders in this algorithmic bias movement moving ahead and designing properly ethical AIs instead of crying about bias or whatnot and blaming everything on an evil AI god which they, ironically, trained themselves.

    • @austinjohn8713
      @austinjohn8713 2 years ago

      @@abdullahb4453 It is not algorithmic bias. If any bias exists at all, it is in the data fed to the AI.

  • @whipshaw
    @whipshaw 7 years ago +121

    I liked her introduction, "a poet of code"; as a programmer, I'm feeling flattered.

  • @robbieh6182
    @robbieh6182 7 years ago +8

    You can tell most of the people who disliked the video don't have a science background. It IS a bias if the algorithm only recognizes a certain type of face. The word bias has no negative connotation by itself; it simply means preference or "works better with". She isn't saying the algorithms are "racist".

  • @brendarua01
    @brendarua01 7 years ago +26

    Unless there is a wide variance in the delivery of email notices, at least 17 people disliked this before they saw even the first 5 minutes. That says a lot about some very foolish people.
    This is a great presentation! She's a wonderful presenter who is dynamic and entertaining on what is a technically and socially complex topic. It's exciting to have an example of discrimination that, while unintended or even unconscious, is very real and has very concrete results. Thank you for sharing, TED.

    • @ArtInMotionStudios
      @ArtInMotionStudios 7 years ago +3

      It is the title more than anything, and the way she starts off the video. The issue is more complicated than just "it can't detect black faces", which is simply not true; it detects them, just not as many.
      I met someone who has been trying to solve this for years, because I live in a black-majority country, and it is easier said than done.

    • @Wegnerrobert2
      @Wegnerrobert2 7 years ago +3

      Brenda Rua tell me, don't you ever like a video the moment you click on it?

    • @brendarua01
      @brendarua01 7 years ago

      Golo, I don't click like or dislike until I've listened to at least half of a clip. Sure, I have plenty of topics and presenters that I'm attracted to. But whether I agree or not, I try to listen objectively and critically. I can't recall ever disliking something because of the subject matter. I will do so, and I'll post a comment, if I find "alt facts" or fallacious arguments.

    • @brendarua01
      @brendarua01 7 years ago +1

      Ok, Golo. I can see how using "bias" in the title would be a trigger. One would have to listen for several minutes to realize she wasn't talking about social issues but about skewed data used in the training. Even then that might not get through to some listeners.

    • @Wegnerrobert2
      @Wegnerrobert2 7 years ago +1

      My point is that most people on YouTube frequently like videos the moment they click, because they simply expect that they will be good. Disliking a video directly is just as normal as liking it.
      But since that doesn't apply to you, I will give you some other arguments why immediately liking a video is no problem.
      You mentioned it already in the other comment; I think that a title can be enough for a video to get a dislike, because it's a simple method of feedback.
      And a rating is only temporary anyway. I don't pretend that my brain doesn't immediately form an opinion when it sees the video in a feed. But you can always change the rating.

  • @Nick-kb2jc
    @Nick-kb2jc 7 years ago +75

    The real reason some people are triggered in the comments: they hate seeing a smart, educated African American MIT student.

    • @IronLungProductionsOfficial
      @IronLungProductionsOfficial 3 years ago +13

      Haha nope, she's got nothing productive to bring to the table in the field of AI, constantly touting lol, first world problemos

    • @mtngirl4944
      @mtngirl4944 3 years ago

      😂😂😂

  • @skdooman
    @skdooman 5 years ago +15

    I appreciate the content of the video, but I wish she would have included more statistical examples. It's one thing to claim your face wasn't recognized. It's another thing to display data on many people whose faces were scanned and were or were not recognized.

  • @ijousha
    @ijousha 7 years ago +34

    Personally , I wish my face was less detectable by surveillance cameras.

    • @AjahSharee
      @AjahSharee 4 years ago +28

      Yeah until you get misidentified and arrested because of an algorithm.

    • @SuperMsmystery
      @SuperMsmystery 3 years ago +2

      The problem that you don't see is: why surveillance in the first place?

    • @insearchof9903
      @insearchof9903 3 years ago

      @@AjahSharee but if you're innocent why worry? You will just be like bye when they see they have the wrong person.

    • @aelf_ears9119
      @aelf_ears9119 3 years ago +3

      @@insearchof9903 but if the system that determines right/wrong is flawed then it doesn't matter if you view yourself to be innocent or not...

  • @Ashberrysoda
    @Ashberrysoda 7 years ago +25

    🙄 I am always confused by people who comment on and dislike things they haven't seen. Clearly shows a bias of the viewers. Good job Joy.

  • @miss118jess
    @miss118jess 3 years ago +11

    This is such a powerful talk! Let's imagine that the reason facial recognition works 'better' on light-skinned people is that cameras are good at picking up light (and not biased datasets). If facial recognition technology is not working on darker skin, then it's not working at all. It's not recognising faces! Facial recognition is still unreliable using high-quality photos of members of Congress, and honestly, surveillance cameras using this tech would be low quality anyway. AI, years later, still excludes a large proportion of the population and needs to be a big topic of discussion as society increasingly relies on its decision making.

    • @mouath_14
      @mouath_14 2 years ago +1

      Raising concern over this technology's ability, or better phrased, inability to detect and identify faces is just scratching the tip of the iceberg, because facial analysis is physiognomy, and we all know that idea is horrible. Yet facial analysis is also becoming super popular. Some propose complete bans from certain domains, and they are not wrong for proposing that either...

    • @karthickkrish5098
      @karthickkrish5098 2 years ago

      It isn't because of the brightness/contrast/camera quality; it's because of the datasets we have so far. These algorithms have been trained on certain age groups and certain colours. If the subject matches, it recognises the face in a fraction of a second; if it's the opposite, there's a problem. Poor brightness/contrast/camera quality just makes the problem worse!
      I'm an MSc Artificial Intelligence student, so I know what we use to train these systems!
      It's hard to digest, but it's the truth!

    • @PatrickInCayman
      @PatrickInCayman 1 year ago

      @@karthickkrish5098 Right, so I guess Microsoft failed at this as well because their entire team didn't think to train their facial recognition on people of color... I think MIT and other AI students should go back and learn basic physics.

  • @chinweogedegbe5449
    @chinweogedegbe5449 1 year ago +21

    This is really great... even 7 years later this information is very relevant. Thank you, Joy!!

  • @TheEndofThis
    @TheEndofThis 7 years ago +9

    How interesting that a video on algorithmic bias has roughly equal parts likes and dislikes.

  • @erricomalatesta2557
    @erricomalatesta2557 7 years ago +21

    You could use this to your advantage instead of fixing it.
    Being anonymous these days is a gift

    • @Melusi47
      @Melusi47 3 years ago

      Snitch to the whole race. Now they will pay attention 😂

  • @tigerlilly9038
    @tigerlilly9038 3 years ago +2

    Humans forget that computers are only as smart as you make them; there is no secret to be unfolded. This was a wonderful talk.

  • @moneysittintall3611
    @moneysittintall3611 3 years ago +4

    Why does this have so many dislikes? She makes a valid point.

    • @impanthering
      @impanthering 3 years ago +5

      Willful ignorance

    • @GhostMillionairesTV
      @GhostMillionairesTV 3 years ago +4

      Because she doesn't make a valid point and you can't think well enough to even figure out why.

    • @wingedsn8670
      @wingedsn8670 2 months ago

      @@GhostMillionairesTV Say why then, bud.

  • @aizenplay
    @aizenplay 3 years ago +2

    What's the name of the facial tracking system? Thanks.

  • @maximilianpohlmann9106
    @maximilianpohlmann9106 7 years ago +1

    Thank you TED for not disabling comments!

  • @Dee-jp7ek
    @Dee-jp7ek 7 years ago +6

    This isn't her screaming racism, kids; this is her saying that we should probably test facial recognition on a more diverse group of faces to make sure it works on more than just one face type.
    It's like I'm attempting to make a one-size-fits-all sweater, trying it on a bunch of S-M people and saying "it fits!" without trying it on L or XL people.

    • @kiernan3148
      @kiernan3148 7 years ago

      people shouldn't be L or XL though

  • @daniels7568
    @daniels7568 7 years ago +5

    1,092 ppl didn't watch the video before rating

  • @tunjilegba
    @tunjilegba 7 years ago +30

    Hopefully when RoboCop comes to fruition it will mistake me for a tree 😊

    • @dividedperceptions6626
      @dividedperceptions6626 7 years ago +3

      Tunji Legba That is the kind of positive thinking we all should learn from :)

    • @jillyjoan8416
      @jillyjoan8416 4 years ago +1

      I hope we don't have Robocops. I want smart humans, not smart machines.

    • @hannahl1387
      @hannahl1387 3 years ago

      @tunji legba you can but dream.

  • @SergioLongoni
    @SergioLongoni 7 years ago +1

    I agree with the content of the video about the potential bias of algorithms, but I have a problem with the example of face recognition. My phone has no problem tracking the speaker's face, so I think this is not a real problem for marketable applications that weren't trained on outdated and small data samples.

  • @laurencross6240
    @laurencross6240 5 years ago +20

    This is so interesting! Joy Buolamwini rocks.

  • @gg-wk2ww
    @gg-wk2ww 2 years ago +2

    Keen curiosity brings things to light, good and bad. Good job.

  • @sundar72
    @sundar72 8 months ago +1

    A few years ago I was in Frankfurt airport on transit. The restrooms have automatic soap dispensers... they could only detect light-skinned hands for some reason. I am from India. There were two people in that restroom trying every dispenser, a black person and myself... we were telling each other that the damn things were broken. In walks a white guy and uses it. We asked him to try the other soap dispensers, and they all worked for him! We laughed, shook our heads and moved on, saying "someone did not design and test their product right!" At the end of the day, design and test should always consider the spectrum of end users. Always remember this mantra when designing things with AI: "You are not your user"!

    • @Boomchacle
      @Boomchacle 2 months ago

      The same thing occurs in some malls I used to work at. A black guy couldn't get soap that a white guy could. I'm sure it wasn't intentional but come on!

  • @coulorfully
    @coulorfully 7 years ago +10

    The people programming/coding the algorithms that impact us all have implicit (internal) biases that become embedded in their code.

  • @socdemigod
    @socdemigod 6 years ago +1

    Brilliant. But I don't think anyone should want to have their face recognized by software. Doesn't that seem a bit intrusive?

  • @mr_lrb6879
    @mr_lrb6879 4 years ago +5

    I'm a bit embarrassed to admit that every time she said "Coded Gaze", I heard it as "coded gays" and got really confused about what that had to do with coding until she showed us the Coded Gaze webpage. Still a good and eye-opening talk though.

    • @tammieknuth6020
      @tammieknuth6020 3 years ago

      That would mean she's biased against gays and the LGBTQ+ community, and she's literally of a different race.

  • @jyotiswamy6305
    @jyotiswamy6305 6 years ago +3

    Thank you Joy! This world needs you, because programmers (as evidenced by the comments) have no understanding of the SOCIAL IMPACT of their work. Of course there may be other solutions, but it is the SOCIAL structure of your field that matters. This would not be a TECHNICAL issue if every programmer were black, but because RACIAL MINORITIES are highly disadvantaged due to unethical practices and historical processes (that have kept them from learning about such software relative to others, depending on race and gender), it cannot be dismissed as a "glitch" in the system. WAKE UP PEOPLE!!! Racial paradigmatic bias exists in computer science as well. Also, this is a BLACK WOMAN talking about facial analysis software, which is changing the SOCIAL STRUCTURE of the field, and that is needed to prevent issues like this in the future. You can most definitely argue that there are other ways to fix this, but you can't argue that the minority elite looks like Joy. I swear this world needs to be more reflexive... UGH. JOY YOU ARE A QUEEN! Thank you so much for speaking up and being a voice for the voiceless in a very underrepresented field. (Simply look at the representation of the audience.) WAKE UP YALL.

    • @austinjohn8713
      @austinjohn8713 2 years ago

      No, she is mischaracterizing the problem. The AI would behave the same way toward a white person if it was trained only on black faces. It is not bias; the dataset was skewed.

  • @Zoza15
    @Zoza15 7 years ago +3

    Well, I once had to put my face in front of a cam and the cam didn't recognize my face either.
    She actually does something about it, so why the dislikes for this video?
    I support her actions, as long as it doesn't result in consequences that leave other groups out for the sake of the main group.

    •  5 years ago

      Because she is displacing science with ideology.

  • @dud3man6969
    @dud3man6969 4 years ago +8

    In my opinion the ability to defeat AI facial recognition is an advantage.

  • @israelip
    @israelip 5 years ago +7

    For those who don't know how machine learning works and can't even hear her out: try reading about training sets.

    • @austinjohn8713
      @austinjohn8713 2 years ago

      If she knew it was due to the training set and not bias, she would not have made this talk calling it bias. The AI would do the same to a white face if it was trained only on black faces.

  • @Dataanti
    @Dataanti 7 years ago +4

    I bet it has more to do with the camera having a hard time picking up darker skin tones... because cameras in general have a harder time with darker colours. I don't see how you will be able to fix this without upgrading all cameras to have direct light sources or IR depth sensors. This has nothing to do with algorithms.

  • @DeoMachina
    @DeoMachina 7 years ago +21

    >nonpolitical video about software
    >mass dislikes
    Tell me again how there isn't a problem with racism in this channel's audience

    • @DeoMachina
      @DeoMachina 7 years ago +5

      What's debatable about it? Why doesn't the same thing happen when white guys talk about software?

  • @barneypotts9868
    @barneypotts9868 7 years ago +4

    It turns out that if you get a cheap webcam with cheap face recognition software, you don't get very good face recognition.

  • @TripodJonas
    @TripodJonas 7 years ago

    Also, why not use something other than visible light to do the same thing? Darker faces are darker in visible light, not under other sources.

  • @pinegulf
    @pinegulf 7 years ago +1

    I'd like to see the code she writes.

  • @timojissink4715
    @timojissink4715 7 years ago +6

    I've actually studied everything to do with 3D printing and so also 3D scanning. I've learned that there are 3 things that are difficult for a 3D scanner to scan: the first is shiny objects, the second is translucent objects, and the last is black objects...
    "Black objects": it's true, light gets absorbed by the color.

  • @inachu
    @inachu 2 years ago +1

    In years to come this will be an issue. Companies will need test subjects or images of all races to make sure technology truly works for all races.
    What if the next super smart techie nerd is born in India and the camera doesn't work with people from India? It can and will happen.

  • @missachol24
    @missachol24 7 years ago +51

    Omg goodness did people even watch the whole video? People are crazy 🙄

    • @jacob5208
      @jacob5208 7 years ago +4

      missachol24 the algorithm she is talking about is outdated and will soon be replaced by pattern tracking software

  • @nerdtuts797
    @nerdtuts797 7 years ago

    All those who are saying that she doesn't make sense don't know anything about machine learning. If the training data doesn't have enough images of black people, the algorithm will have a hard time detecting them. It's not about the lighting or the camera. I am surprised by the number of dislikes on this video!

  • @morgrimx5732
    @morgrimx5732 7 years ago +5

    It seems there is also a human bias toward facial recognition software. The bias gives it more credibility since it's a computer!

  • @matthewfanous8468
    @matthewfanous8468 7 years ago

    i was waiting for her to say "if you cant tell, this is because i was black" BUT SHE DIDNT AND NOW I DONT KNOW WHY THEY COULDNT DETECT HER

  • @viperxiiii
    @viperxiiii 7 years ago +7

    Love how her example was, as she called it, a cheap webcam and not a more complex one.

  • @stephenclement3349
    @stephenclement3349 7 years ago +8

    Cool initiative! I am sure coders would love you helping them identify their bugs and providing them with free data. Just make sure you remember they probably didn't do it intentionally, and approach them kindly. Otherwise you will end up as a crusader fighting someone who isn't really your enemy.

    • @Skinny97214
      @Skinny97214 4 years ago +5

      Might want to google "tone policing."

  • @stew_baby7942
    @stew_baby7942 9 months ago +1

    It's not good for some computer to judge by facial features... or to judge anyone.

  • @vonneely1977
    @vonneely1977 7 years ago

    Is this to be an empathy test? Capillary dilation of the so-called "blush response?" Fluctuation of the pupil, involuntary dilation of the iris?

  • @siarheipilat8152
    @siarheipilat8152 7 years ago +2

    I personally encountered that issue while working on an image processing project at my university; what does social justice have to do with it? It is a valid problem. The project was about hand detection, though, and the program was trained on white students. But wait, I just read some comments below, so I'll just go throw up. You guys enjoy.

  • @leviengstrom7359
    @leviengstrom7359 4 years ago

    Why does the background look like it was built out of Solo cups?

  • @milanpaudel9624
    @milanpaudel9624 7 years ago +2

    wtf... What's with all those dislikes? This is a genuinely good TED talk.

  • @tbpp6553
    @tbpp6553 7 years ago +6

    More dislikes than likes?? MY GOD! This is a real issue. My racist Coolpad camera doesn't recognize my face when I select face-detection mode. It is so embarrassing!!

  • @CrazySondre
    @CrazySondre 7 years ago +3

    What happened to TED Talk...?

  • @evilplaguedoctor5158
    @evilplaguedoctor5158 7 years ago +1

    I wish she did more research, as in, into the details of what part of the algorithms causes them to fail with different skin colours, and how to fix those issues, because it kind of sounds like she is just complaining, wanting others to fix this problem for her... but I could be mistaken.

  • @theegreatestever2420
    @theegreatestever2420 4 years ago +6

    This was such an important TED Talk. I can't believe I only recently found out about it, when diving deep into AI and using it in my apps, but I am glad I didn't find out later.
    It's unfortunate the domain name is now for sale and no longer operated by them, but I loved this.

  • @giordanoparisotto5617
    @giordanoparisotto5617 2 years ago +1

    Excellent!!! Loved her! She's awesome!

  • @hvbris_
    @hvbris_ 5 years ago +11

    She's great

  • @st8of1der
    @st8of1der 7 years ago +2

    Couldn't this be addressed by using a light that's outside of the visible spectrum? What about combining a high-resolution camera with infrared light?

  • @Yirsi
    @Yirsi 7 years ago +1

    While it's true that the algorithm does not work correctly in this case, I don't think it's connected to bias at all.
    But you certainly have to point out where the problem lies within the code, so the people behind it can fix it. Focusing on that issue seems more important to me.

    • @Joe-yr1em
      @Joe-yr1em 5 years ago +2

      Bias just means it is geared towards certain features more than others. It is connected to bias. Not in the sense that you have a coder that's biased or anything, but in the sense that the model is making predictions based on datasets that don't accurately represent the target market.
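
      A small illustrative sketch of that kind of gap: report a model's accuracy per group instead of as one overall number, since the overall figure can hide a large disparity. The arrays below are made-up stand-ins for a real model's test predictions.

      ```python
      import numpy as np

      # Overall accuracy here is 0.50, which hides a 1.00 vs 0.00 split.
      def accuracy_by_group(y_true, y_pred, groups):
          for g in np.unique(groups):
              mask = groups == g
              acc = (y_true[mask] == y_pred[mask]).mean()
              print(f"group {g}: n={mask.sum()}, accuracy={acc:.2f}")

      y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
      y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1])  # errors all fall on group B
      groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
      accuracy_by_group(y_true, y_pred, groups)
      ```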

  • @jddebr
    @jddebr 7 years ago

    Awful lot of folks in this comments section who don't understand what the word bias means in the scientific community. Bias is a real thing in machine learning. All algorithms have inductive bias. She is not saying that algorithms are racist...

  • @canUfeelMYface
    @canUfeelMYface 4 years ago +7

    "Someone else will solve this problem. "

    • @phantomcruizer
      @phantomcruizer 2 years ago

      Yes, "Colossus/Guardian… SkyNet/Legion"!

  • @Frozlie1
    @Frozlie1 7 years ago +8

    When security cameras start using facial recognition this will cease to be an issue.

    • @aitortilla5128
      @aitortilla5128 4 years ago

      Many security cameras in many countries already use facial recognition.

  • @josephinegrey4517
    @josephinegrey4517 7 months ago

    I wonder how this can be applied to ageing faces?

  • @onnoderkman3760
    @onnoderkman3760 7 years ago +103

    I love how she is not playing the racism card, as we have seen in other scenarios. She is trying to solve the problem, instead of whining about it

    • @anthonycoleman4631
      @anthonycoleman4631 7 years ago +22

      I love that aspect of it also, but I think a large part of the viewers didn't watch the whole video, or based their rating on the caption. An overwhelming number of dislikes. Bias, perhaps?

    • @godofthisshit
      @godofthisshit 7 years ago +3

      +Onno Derkman Yet you people are still crying about the video.

    • @msms47
      @msms47 7 years ago +6

      Still not working: racist dipshits get triggered just by her being on the stage.

    • @NathanGatten
      @NathanGatten 7 years ago

      godofthisshit msm47 Hate is still hate, no matter who you throw it at. No need to take a page from their books.

    • @godofthisshit
      @godofthisshit 7 years ago

      @Nathan Gatten What?

  • @Sirius_Blazing_Star
    @Sirius_Blazing_Star 1 year ago

    If the training sets aren't really that diverse, any face that deviates too much from the established norm will be harder to detect...

  • @luisfernandez7426
    @luisfernandez7426 2 years ago +4

    Great talk, Joy! It’s important that this topic is getting visibility. Great work you’re doing toward combatting these biases

  • @Alitari
    @Alitari 7 years ago

    I agree that this is a problem, but she seems to have a real shotgun approach to trying to create new, or take over existing, phrases/memes/acronyms... it feels to me like she's hoping one or more of them will gain traction for her own self-aggrandisement... self-promotion is one thing, but it feels like this speaker took it to another level, beyond that which TED is normally known for.

  • @jacobcromer7192
    @jacobcromer7192 7 years ago +1

    Nobody's fighting anything; no one is trying to stop you from fixing this. Why is she framing this like a civil cause?

  • @readingrebellion9758
    @readingrebellion9758 5 years ago +1

    I agree with the premise of making services equitable in access and fair, but I think "unlocking equality" through digital technology is a vague and concerning mission. Equality of what? Between which groups/sub-groups? And who decides?

  • @theaminswey9733
    @theaminswey9733 7 years ago +2

    Great talk. I'll leave without checking the comments section now. Thank you, Joy ❤

  • @Chronomatrix
    @Chronomatrix 7 years ago +1

    Nothing wrong could ever come from facial analysis software...

  • @shell9918
    @shell9918 3 years ago +1

    She is so well-spoken and likeable.

  • @hinonashi9439
    @hinonashi9439 7 years ago +6

    Let's talk about facial recognition algorithms. There are several facial recognition algorithms published online right now. Here I will use the algorithm called "Viola-Jones" as an example, so everyone can easily understand. It works by repeatedly scanning through the image data, calculating the difference between the grayscale pixel values underneath white boxes and black boxes. So what do the black boxes and white boxes mean? Just look at your nose: the bridge of your nose is usually lighter (white boxes) than the surrounding area on the left and right sides (black boxes). The middle of your forehead (white boxes) is lighter than the sides of it (black boxes). These are crude tests for a facial recognition algorithm, but if it finds enough matches in an area of the image, it concludes there is a face there and marks it as your face. This algorithm can't find your face if you're tilted or facing sideways to the camera, but it is very accurate for frontal faces. This is how digital cameras and smartphone cameras have been putting a square box around your face. The algorithm must do more to detect your eyes, your lips, your nose; that's why big data comes in, gathering images from the internet, from around the world, to make it more accurate.
    So why didn't the algorithm in your video detect your face? First, it's not the algorithm's fault; it's you. As I mentioned above, you have to put your face in front of the camera. In the video where you wear glasses, your head is tilted and keeps moving all the time. And when you put your face in front of the camera, you have to look down, because the webcam is looking up at the ceiling; that's why I can see the light on the ceiling. Your webcam doesn't have an algorithm that will balance the light around you. Just use your smartphone or any digital camera and put it in front of the light; you will see the camera automatically balancing the light. And then you say you sat in front of a cheap camera (let me say it a second time: cheap camera), so for god's sake, are you kidding me? You know the algorithm, and you put your face in an environment where the algorithm can't detect your face. Stop lying and playing the victim, please. Maybe the algorithm is racist, biased... but it happens because you put it there. Your fault. In this case, you have the bias, not the algorithm, and not the people who made it.
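
    A minimal sketch of the Viola-Jones detector described above, using OpenCV's bundled Haar cascade (the cascade file ships with OpenCV; "photo.jpg" is a placeholder path):

    ```python
    import cv2

    # Load the stock frontal-face Haar cascade that ships with OpenCV.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    image = cv2.imread("photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Scan the grayscale image at multiple scales, comparing sums of pixels
    # under light/dark rectangle pairs (nose bridge vs. cheeks, etc.).
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detected.jpg", image)
    ```

    Because the detector compares grayscale contrast between neighbouring regions, low-contrast input (dim lighting, short exposure) yields fewer candidate matches, which ties back to the exposure discussion earlier in the thread.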

    • @alisonnguyen4483
      @alisonnguyen4483 7 years ago

      The world needs more people like you.

    • @DeoMachina
      @DeoMachina 7 years ago

      You...made all of that up.

    • @jacobkleifges5246
      @jacobkleifges5246 7 years ago +3

      Your description of the algorithm is true and correct, however, I believe that you have missed the point; Ms. Buolamwini argues not that the code is at fault, but rather that the implications of this unintended outcome are detrimental, and that she wishes for awareness, and aid in rectifying the side effect of bias in these algorithms. Also, it was clear that the Hong Kong startup did not create their code, nor did Ms. Buolamwini. The problem is not that the code is poor. The problem is not an intended bias. The problem is not thin skin. The problem is that the flawed code is not only used but widespread. She is worried that similar flaws could exist in algorithms that are used in policing and sentencing, which would lead to greater problems with constitutional law. Regardless of political outlook, this would be a monumental issue.

    • @hinonashi9439
      @hinonashi9439 7 years ago +1

      I already said that she put her face in front of a cheap camera. It's getting light pollution, so the algorithm can't see her face. There is no bias in any algorithm out there; people made algorithms to serve human desires. You can see algorithms everywhere around you: your laptop, phone, car... In the end, she made it all up.

    • @jacobkleifges5246
      @jacobkleifges5246 7 years ago +3

      Again, you miss the point. She argues not that there is bias, but that these algorithms work with differing effectiveness across varying demographics, and that the implications of that flaw in more important fields could be disastrous. It is not that the algorithm is at fault, it is that the algorithm could be used, flaw intact, to perform operations that involve a necessity for absolute neutrality.

  • @jorgerincon6874
    @jorgerincon6874 4 years ago

    Ok, I wasn't too keen on seeing this video, mainly because of the title, but honestly it's a good theme.

  • @letsgoiowa
    @letsgoiowa 7 years ago +178

    Making machines discriminate is probably _a very bad idea._

    • @ArtArtisian
      @ArtArtisian 7 years ago

      +

    • @cybercat1531
      @cybercat1531 7 years ago +6

      Computing machines ONLY discriminate; they're binary, after all. So are you a 1 or a 0?

    • @agamergirl9801
      @agamergirl9801 7 years ago

      letsgoiowa +

    • @kevinscales
      @kevinscales 7 years ago +6

      Discrimination is not a bad thing; it's what computers and minds are supposed to do. It's being unfair and not inclusive that is the problem.

    • @letsgoiowa
      @letsgoiowa 7 years ago +1

      THIS IS HOW YOU GET SKYNET

  • @IshtarNike
    @IshtarNike 7 years ago +52

    This always annoys me. Taking selfies with my mates, I never get facial recognition. It's a small peeve, but it's quite annoying.

    • @ArtArtisian
      @ArtArtisian 7 years ago

      +

    • @dansadler
      @dansadler 7 years ago +1

      But it also means you have a kind of natural protection from visual surveillance, because your face has less contrast.

    • @premier69
      @premier69 7 years ago

      +Dan Sadler rofl

  • @sbongisenimazeka8652
    @sbongisenimazeka8652 7 years ago +3

    Black people aren't "people"... they are Gods.

  • @derekvaillant6303
    @derekvaillant6303 5 years ago +2

    Take back the algorithm and open up those black boxes. Thanks for the encouraging message, Joy.

  • @garfield2406
    @garfield2406 5 years ago +2

    Could never be related to the fact that dark colors are harder to get contrast on.

  • @dermihai
    @dermihai 7 years ago

    Wow, people do overreact... What she said is very true; just watch the whole video. One thing though... when she says that we need diversity among coders so that they can fill each other's gaps, I hope she means diversity of experience and field of study, not racial/national/sexual diversity.

  • @davlmt
    @davlmt 7 years ago

    Yep, the face detection on my Sony camera always ignores black people's faces and tracks white faces flawlessly.

  • @WrUSasu
    @WrUSasu 7 years ago +1

    TL;DW: Algorithms for facial recognition using machine learning need more diverse training sets and longer training periods.
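
    A minimal sketch of one common fix along those lines: oversample the under-represented groups until each group contributes equally to the training set. The group labels and helper are invented for illustration.

    ```python
    import random

    # Duplicate randomly chosen samples from smaller groups until every
    # group matches the size of the largest one.
    def oversample(samples, group_of):
        by_group = {}
        for s in samples:
            by_group.setdefault(group_of(s), []).append(s)
        target = max(len(members) for members in by_group.values())
        balanced = []
        for members in by_group.values():
            balanced += members + random.choices(members, k=target - len(members))
        return balanced
    ```

    Simple duplication is only a baseline; collecting genuinely new, diverse examples works better, which is what the "more diverse training sets" point above is really asking for.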

  • @MysticScapes
    @MysticScapes 7 years ago

    She is just seeing the problem from only one perspective. Hardware like webcams matters as much as these algorithms. I'm not a black person, and even my webcam sometimes doesn't recognize my face because I have a long hipster beard. She was trying to make this bias as political and racial as possible; however, science doesn't care about all these labels. To avoid overfitting, simply look outside the box and stop blaming others.

  • @robertsolem9234
    @robertsolem9234 2 years ago

    Yes, what we need to do is *improve* facial recognition technology /s

  • @Tripp393
    @Tripp393 7 years ago

    Guys this is just something she's doing with her life. It would be dumb if they didn't talk about it

  • @Brutaful
    @Brutaful 7 years ago +61

    People complaining about the like/dislike ratio in 5, 4, 3, 2, 1.....

    • @tunjilegba
      @tunjilegba 7 years ago +19

      Brutaful People being hyper defensive over the word bias in 3...2...1...

    • @mridulpj
      @mridulpj 7 years ago

      Brutaful I'm not complaining. The like/dislike ratio is perfectly balanced. Let's keep it that way.

  • @JoaDrath
    @JoaDrath 7 years ago +3

    As long as she isn't making other people fight "algorithmic bias", I support her.

  • @TheSkipper1921
    @TheSkipper1921 7 years ago

    Everybody has their face in a database. That is, anyone with a driver's licence or government-issued ID. Have you heard of "Real ID"?

  • @thierrybock3847
    @thierrybock3847 7 years ago +2

    that ain't no bug it's a privacy feature. don't break features.

  • @TheCreativeKruemel
    @TheCreativeKruemel 7 years ago +89

    MAYBE BECAUSE THE BIAS ISN'T REALLY A BIAS?!?!?

  • @markphc99
    @markphc99 7 years ago +6

    The Chinese have their work cut out for them.

  • @clearpill
    @clearpill 5 years ago +4

    This video is one of the best examples of the victim complex so pervasive these days. I'm a graphic artist and know better than most the issues that emerge with video compression and low contrast values. The solution to her problem isn't activism; it's 16-bit-per-channel imagery.

    • @clearpill
      @clearpill 5 years ago +3

      ...and using "cheap" face recognition software as a base doesn't help.

    • @luisrogelio98
      @luisrogelio98 3 years ago +1

      @@clearpill You are missing the point, because those "cheap face recognition" algorithms are the ones most people will use; also, contrast and resolution should be irrelevant to a properly trained image recognition model. They are based on data in, process, and result, and face recognition is only one of the many problems people are trying to fix in biased artificial intelligence.

    • @clearpill
      @clearpill 3 years ago +2

      @@luisrogelio98 I see. So you're saying technology is racist?

    • @luisrogelio98
      @luisrogelio98 3 years ago +2

      @@clearpill I wouldn't say yes; it's just the fact that AI is still a young technology, and problems of this kind, created intentionally or unintentionally, need to be addressed as early as we find them, because if we overlook them now, in the future it can be harder to solve problems that are such a core part of an AI.
      Another example is image detection software that people tried to teach what a beautiful picture is, and it turned out the AI tagged pictures of fire and blood as beautiful because of the bright, vibrant red colors.
      AI is just a tool and it can't have an opinion, so it is the duty of those who make them to ensure it causes more good than harm.

  • @swordwaker7749
    @swordwaker7749 7 years ago

    Well, after an AI has learned once, it rarely questions what it learned. It should identify the reasons.
    What about training while working?
    If it found many faces near this part of the spectrum, then it would learn to recognize those faces. But beware of repeating the gorilla incident.

  • @MedEighty
    @MedEighty 7 years ago +10

    The process by which the majority of viewers of this video decided to rate the video:
    1. Notice that the person presenting is black.
    2. Notice that the person presenting is a woman.
    3. Notice that the title of the video has "fighting" and "bias" in it.
    4. Switch on racist and sexist brain circuitry.
    5. Click thumbs down, before the video begins.
    6. Move on.

    • @JamesJeude
      @JamesJeude 6 years ago

      As of May 2018 the like/dislike ratio is about 51/49, so "majority" might be an overstatement... but the problem of inadequate training data is worth discussing, and it is the obligation of everyone in the AI field to discuss it. Look at the Google search for something as basic as 'grandpa' and you'll see almost entirely whites in the top few dozen results. This is not a result of bias at Google Image Search, but of the preponderance of examples that have the word 'grandpa' on (EXIF) or near the photograph, and the link and click history as processed by an algorithm that is proprietary to Google or Bing or Yahoo or whatever. The softer side of the question, the less technical side, is whether a company like Google has an obligation to undo the bias it picks up from the actual click, link, and web design behaviors of its billions of websites and users. So, to the point of her video: does an image-recognition engineer have an obligation to look beyond the mass of evidence and check for bias in the less common cases? It's analogous to a statistician designing a survey to 'oversample' certain segments with low representation. ("Oversampling" doesn't mean "over-representing", contrary to the misunderstanding some politicians had during the 2016 election. The numbers are normalized before the survey is published.)

  • @signalamplifier
    @signalamplifier 7 years ago +1

    The one that used vanilla OpenCV to detect a face should not appear on TED.

  • @CandyLemon36
    @CandyLemon36 1 year ago

    This content is relevant and timely. A book I read on this topic was equally pertinent. "Game Theory and the Pursuit of Algorithmic Fairness" by Jack Frostwell

  • @bunkertons
    @bunkertons 3 years ago +4

    This is so informative, thank you for sharing.

  • @tracykarinp
    @tracykarinp 7 years ago +3

    Thank you for a "Very Informative" presentation! It's wonderful that you brought this issue to the front burner! Kudos Joy! :-)

  • @jamespharris2494
    @jamespharris2494 5 years ago

    It's simple math. How many bubbles are in a bar of soap?