[UPDATE May 20, 2021] CNN reports: "Twitter has largely abandoned an image-cropping algorithm after determining the automated system was biased." www.cnn.com/2021/05/19/tech/twitter-image-cropping-algorithm-bias/index.html
That’s amazing!
Machines aren't racist. The outcome feels racist due to bias in training data. The model needs to be retrained.
Lol 15K likes, no comments
Well, now I would like to see stats on the distribution of black and white software engineers and ML specialists.
And no, I don't say it should have quotas. I just wonder whether it was tested at all.
@@onee1594 Feel free to look for that in your nation's government DB or draw a conclusion from a credible sample size. I am not obligated to provide that for you.
@@AtiqurRahman-uk6vj You are not obligated and I didn't ask you to provide it.
There's no need to be so uncivilized unless you think the world revolves around your comment and you personally.
@@onee1594 Since you replied under my comment instead of opening a separate comment, it is logical to assume you were directing the request at me, and I declined.
You and your colonial mindset of xenophobia are a few centuries too late to call others "uncivilized" simply for declining to do your bidding. Good day
Let's not forget how light works in a camera. I am a dark-skinned person and I can confirm that lighter skin physically reflects more photons, which gives the camera a higher probability of capturing that picture better than a darker counterpart. The same goes for computational photography and basic algorithms that are trained on the photos we upload. It only makes sense that they would be biased towards white skin. Why does everything have to be taken as an offensive scenario? We are going too far with this political correctness bullshit. Again, I am a person of dark skin and even I think this is bullshit. Now, if this were an issue in identifying a person's face for security reasons or such, then yes, I am all for making it better at recognizing all faces. But please, please make this political correctness bullshit stop.
This is all indeed getting to a point where everything is dragged into the correctness debate instead of factual and objective responses or solutions.
this is how barcodes work
AMEN! There is no such thing as racism, there is only one race, the human race! Let's stop speaking the negatives! Words are very powerful. When you speak it, you give even fantasy power to 'become'!
Light is racist.
Amen+2
This video is the definition of greed
Yes, as a Computer Engineering bachelor and someone who's been working with cameras for almost 4 years now, it's good to address how the camera's apparent weakness at capturing darker subjects can mess up AI detections.
My own bachelor's thesis was about implementing pedestrian detection, and it's really hard to make sure the camera is taking a favourable image... And since I am from Indonesia... which, you guessed it, has a less white-skinned population... it's really hard to find a good experiment location, especially when I use an already-developed algorithm as a backbone. There are a lot of errors, ranging from missed counts because the person is darker, to double counts when a fairer-skinned person passes where there are human-shaped shadows.
We need to improve AI with better diversity in its training datasets. It's better to address that weakness and create better technology than to point fingers... Learn from our mistakes and improve from them... If a hideous person like Edison could do that with his electric lightbulb, why aren't we doing the same while developing even more advanced tech than his?
The title is pretty provocative... but hey, it got me to click it... and hopefully others can stick with it past the headline.
It's all about contrast and how cameras perceive darker subjects. The same thing happens when you try to photograph a black cat; it's very difficult.
While I do think that machines are biased, I think that saying they're racist is an overstatement.
IQs under 83
You're a simpleton
The piece carefully avoided saying the technology itself was 'racist', emphasizing instead that the technology seemingly had 'racist outcomes'.
Depends if you define "racism" as requiring malicious intent.
They were using "racist" as a description of the AI's outcomes. They even said in this video that it doesn't mean there has to be malicious intent. Though there probably is some, because the AI just learns from the prejudiced stereotypes and beliefs in what people post online.
I saw a video where someone asked an AI for pictures of "hardworking" people and got Caucasian men in suits in an office. So the prejudiced stereotypes excluded other kinds of activities and jobs from being hardworking too.
This video also seems a bit biased; I don't believe "racism" is the most appropriate frame to associate this phenomenon with.
As a machine learning enthusiast, I can confirm there isn't much diverse data available out there. It's just sad, but it's alarmingly true.
@Sigmaxon in the case of training datasets, because they can be so expensive to produce, the demand is actually constrained by supply rather than the other way around. Changing the demographics in the dataset is a slow process.
Why is it sad?
@@randybobandy9828 because it leads to less optimal outcomes for everyone, duh
@@kaitlyn__L it's a issue not Worth addressing.
At work, I have an IR camera that automatically measures your temperature as you walk into my facility. How it is supposed to do this is by locking on to the face, then measuring the person’s temperature. Needless to say, I want to take a sledgehammer to it. When it actually works, it’s with a dark face. The type of face it has the most problem with is a light face. If you also have a bald head, it will never see you.
This is the first video I've seen with "Audio Description" to assist the vision impaired. I'd like to commend Vox for putting in the effort to help differently abled people, especially considering this video's subject matter. Well done for being proactive with assistive technology.
The hand soap dispenser is a real thing. Straight up
The black guy clearly didn't have his hand underneath it correctly…. And I've had soap dispensers not work…. You are so oppressed
the level of production on this show is just
* chef's kiss *
I feel like a lot of these things aren't the result of anything racist, but of other external factors that end up contributing to it. The example of the hospital algorithm looking at expensive patients, for instance, isn't inherently racist. The issue there should be with the factors that cause minority groups to cost less (i.e. worse access to insurance), not with the software.
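That proxy effect is easy to see with a toy calculation; a minimal sketch, with all numbers invented, of how a cost label can encode access rather than need:

```python
# Toy illustration (invented numbers): when the training label is past cost,
# two patients with identical health need but unequal access to care get
# unequal labels, and a model trained on cost inherits that gap.
need = {"patient_a": 8, "patient_b": 8}        # same true health need (0-10 scale)
access = {"patient_a": 1.0, "patient_b": 0.6}  # fraction of needed care received
cost_label = {p: need[p] * access[p] * 1000 for p in need}
print(cost_label)  # {'patient_a': 8000.0, 'patient_b': 4800.0}
# A model predicting "risk" from cost now ranks patient_b lower despite equal need.
```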
Could also be due to non-racist factors, such as cultural preferences like opting for at-home care, or subsidies for low-income areas/households which reduce a patient's recorded expenditure.
But of course, as a media organization they need to jump to the most rage- and offence-inducing headline which gets them the most clicks. This is why I never trust Vox and other companies like this.
@@Zaptosis This Vox video did say there could be factors that don't come down to active racists. So what are you talking about? You were the one who jumped to rage while accusing this Vox video of doing so.
Also, you jumped to the conclusion that racism doesn't exist, when it always does and there's evidence.
You also shouldn't just assume most African Americans want home care when the Black woman in this video said otherwise. Same with some other Black YouTubers I've watched. You should consider different perspectives.
It just seems like you don't want to care that there are people negatively impacted by this or by racism.
It's a double standard, because if there were prejudice against you or your group, you would want the injustice amended.
So far I think Vox is pretty educational.
There are also conservatives who falsely cry about prejudice hoaxes against them or Caucasians.
There are people who received AI art results that were racist or sexist stereotypes. I saw a video where someone asked for pictures of "hardworking" people and got Caucasian men in suits in an office, so the prejudiced stereotypes treated other kinds of activities and jobs as less hardworking too.
Just curious, but was it randomized which of the faces (darker/lighter) was on top and which was on the bottom? It wasn't immediately apparent with the tests that were run afterwards, but in both the Obama/McConnell test and the two co-hosts' test, the darker face was on top, which may be why there was an implicit bias towards the lighter face.
If not that, the "racist" face detection can largely be boiled down to the algorithm being fed more training data of white people than of black people, a consequence of darker skin tones comprising a minority of the population. As such, the ML cropper will choose the face it has higher confidence is a face. That could be the source of a racial skew.
The Obama/McConnell test was done with two versions of the image, one with Obama at the top and one with Obama at the bottom. The face detection chose McConnell both times.
I just started the episode but I would think that this has something to do with the basic principles of photography. When you take a photo, the subject is usually in the light and the background is darker for obvious reasons. So, the algorithm simply sees the darker faces as part of the background.
Indeed - but why didn't the researchers train it not to do that? Because insufficient testing was done, which comes back to a blind spot regarding race in humans. These algorithms are amazingly good at fitting the curves we ask them to fit, so these problems aren't inherent to the technology. The problem is with the scope of the task researchers ask it to solve.
@Kaitlyn L ya because light just happens to reflect off of lighter skin... oh no.. how dare light behave this way!!
Do you think this issue would occur in a society of majority dark faces? Think about that
3:37 What about the pics they used of people with all-white backgrounds? Why isn't the AI treating the light faces as part of the light background then? And why is the AI able to pick up the dark hair of Caucasians as not part of the background?
There's also the possible issue of "white balance" of the cameras themselves. My understanding is that it's difficult to set this parameter in such a way that it gives acceptable/optimal contrast to both light and dark skin at the same time.
That's why you use multiple models: one to detect whether there is a black or white person in view, and then one model for each.
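A minimal sketch of that routing idea, purely as illustration: `detector_dark` and `detector_light` are hypothetical stand-ins for two specialized models, and mean luminance is a crude proxy for the routing decision.

```python
import cv2
import numpy as np

def mean_luminance(image_bgr: np.ndarray) -> float:
    """Crude routing signal: mean L channel in LAB color space (0-255)."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB)
    return float(lab[:, :, 0].mean())

def detect_faces(image_bgr, detector_dark, detector_light, threshold=128.0):
    """Route the frame to whichever detector's training data matches it better."""
    if mean_luminance(image_bgr) < threshold:
        return detector_dark(image_bgr)
    return detector_light(image_bgr)
```

A hard switch like this is fragile; in practice you'd more likely run both detectors and merge their detections, but the sketch shows the shape of the idea.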
16:15 that got me trippin' for a second until I realized they probably just mirrored the video so that the writing comes out right, and she's not actually writing backwards.
Fact: almost everyone goes straight to the comments section!
I'm sorry Joss, but how did the only two people in this video who actively work in the tech industry, who are building these automated systems, get only a combined 5 minutes on screen? You don't talk to the computer scientists about solutions or even the future of this tech, yet you talk to Dr. Benjamin and Dr. Noble (who don't code) about "implications" and examples in which tech was biased. Very frustrating, as a rising minority data scientist myself, to see this video focus on opinion instead of actually finding out how to fix these algorithms (like the description says).
You missed an excellent opportunity to highlight minority data scientists and how they feel building these algorithms.
These people don't seek facts or objectivity, only points to blame others for not being politically correct...
I would’ve certainly liked to have seen input from Jordan B Harrod, as she’s done a number of great videos on this subject, but with Vox’s traditional print journalism background I can understand gravitating toward book authors.
That's because the intent of the video is supposed to impart the outcome of racism regardless of how it actually works. Emotional perception is what they're after.
Can you please put the names of the people you are interviewing in the description and links to their work/social media? Especially if they have a book we could support!
I would like to see the contrast pic of the two men that Joss took at the beginning and uploaded to Twitter be repeated, but with the black man on the bottom. The background at the top of the pic they took was quite dark, and the lack of contrast might have contributed, along with Twitter's weighting bias, to the white face being featured. I don't think Twitter would switch to picking the black face, but it would have helped control for an extra variable.
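That controlled experiment is simple to script; a sketch, with `pick_crop_center` as a hypothetical stand-in for the cropper's saliency model (Twitter's actual model isn't public):

```python
from PIL import Image

def stack_vertically(top: Image.Image, bottom: Image.Image, gap: int = 400) -> Image.Image:
    """Recreate the tall two-photo composite used in the Twitter crop tests."""
    width = max(top.width, bottom.width)
    canvas = Image.new("RGB", (width, top.height + gap + bottom.height), "white")
    canvas.paste(top, (0, 0))
    canvas.paste(bottom, (0, top.height + gap))
    return canvas

def run_both_orders(face_a, face_b, pick_crop_center):
    """Swap which face is on top; a position-neutral cropper should flip its pick.

    face_a / face_b are (name, PIL.Image) pairs; pick_crop_center is a
    hypothetical function returning the y-coordinate of the chosen crop.
    """
    for (name_top, img_top), (name_bot, img_bot) in [(face_a, face_b), (face_b, face_a)]:
        composite = stack_vertically(img_top, img_bot)
        y = pick_crop_center(composite)
        winner = name_top if y < composite.height / 2 else name_bot
        print(f"{name_top} on top, {name_bot} on bottom -> crop favors {winner}")
```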
Google photos thinks that all my Asian family and friends are the same person
It's been forever since I've last seen Joss in a video. I've almost forgotten how good and well-constructed her videos are.
The most important thing when it comes to training AI is the raw data you feed it. Give the AI 51% images of white people and 49% images of black people, and the AI will have a ~1% bias towards white people.
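For what it's worth, the standard mitigation for that kind of skew is to reweight (or resample) the training data; a minimal sketch, assuming group labels are available for the training set:

```python
from collections import Counter

def inverse_frequency_weights(group_labels):
    """Weight each sample by the inverse of its group's share, normalized to mean 1."""
    counts = Counter(group_labels)
    n = len(group_labels)
    return [n / (len(counts) * counts[g]) for g in group_labels]

# Example: a 51/49 split gets gently rebalanced.
labels = ["light"] * 51 + ["dark"] * 49
weights = inverse_frequency_weights(labels)
# light samples weigh ~0.98, dark samples ~1.02; pass these to the loss function.
```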
Ya know what? Good for black people. We don't need facial recognition in today's society, and I genuinely perceive it as a slippery slope when it comes to surveillance. If computers are having trouble recognizing black people, all that means to me is that corporations and the government will have a harder time collecting data on them. I swear to God, we should be having conversations about whether or not facial recognition software should exist, not whether or not it's racist, because imo the former conversation is of much more importance.
Yeah, we can certainly agree on this without even bringing the possible racial incongruencies into the conversation. The militarized police state is evil. Period.
They literally discuss whether it should exist at all if you watch the video the whole way through
Funnily enough, this reminds me of an episode of Better Off Ted (S1:E4), where the central parody was about automated recognition systems being "racist" and how the corporation tried to deal with it. Well, that was in 2009...
That sounds like a funny show. Sadly it won't fly today, but I'll be looking for it online now.
I didn’t realize I missed having Joss videos this much. She’s so good!
love the fact that this video has audio description! this is so important
• 2:58 - Lee was on the right track: it's about machine vision and face detection. One test is to try light and dark faces on light and dark backgrounds (see the sketch after this list). It's a matter of contrast and edge- and feature-detection. Machines are limited in what they can do for now. Some things might never be improved, like the soap dispenser; if they increase the sensitivity, then it will be leaking soap.
• 8:13 - And what did the search results of "white girls" return? What about "chinese girls"? 🤨 A partial test is useless. ¬_¬
• 9:00 - This is just regular confirmation bias; there aren't many articles about Muslims who… sculpted a statue or made a film.
• 12:34 - Yikes! Hard to deny raw numbers. 🤦
• 12:41 - A.I.s are black-boxes, you _can't_ know why they make the "decisions" they make.
• 13:33 - Most of the people who worked on developing technologies were white (and mostly American). They may or may not have had an inherent bias, but at the very least, they used their own data to test stuff at the beginning while they were still just tinkering around on their own, before they were moved up to labs with teams and bigger datasets. And cats built the Internet.🤷
• 14:44 - I can't believe you guys built this thing just for this video. What did you do afterwards? 🤔
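Re 2:58, the light/dark face on light/dark background test from the first bullet is easy to prototype; a rough sketch using OpenCV's stock Haar cascade, where the brightness gains are arbitrary stand-ins for different skin tones:

```python
import cv2
import numpy as np

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detected(gray: np.ndarray) -> bool:
    """True if the stock face detector finds at least one face."""
    return len(cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)) > 0

def contrast_grid(portrait_gray: np.ndarray) -> dict:
    """Darken/brighten one portrait and paste it on dark and light backgrounds."""
    results = {}
    h, w = portrait_gray.shape
    for gain in (0.4, 0.7, 1.0, 1.3):            # arbitrary brightness stand-ins
        face = np.clip(portrait_gray.astype(float) * gain, 0, 255).astype(np.uint8)
        for bg_name, bg_value in (("dark_bg", 20), ("light_bg", 235)):
            canvas = np.full((2 * h, 2 * w), bg_value, dtype=np.uint8)
            canvas[h // 2:h // 2 + h, w // 2:w // 2 + w] = face
            results[(gain, bg_name)] = detected(canvas)
    return results                                # tabulate hits per condition
```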
Re 9:00, and what is the underlying societal reason that the majority of English language newspaper reports about Muslims have that negative tilt…? The implications extracted from training data merely reflect society.
In the hand video, the black person was tilting his hand so that it moved around the edge of the area the sensor can detect easily, while the white hand was directly under it. That would most likely cause a difference.
I swear I know some of those AI people! Imagine seeing your face pop out of that face randomiser!
11:50
We're just gonna gloss over how she writes backwards so perfectly?
Not enough black people in China. Most of the datasets every algorithm uses were built from CCTV data from Chinese streets and Chinese ID cards.
I just took a C1 English exam and the third listening was literally a clip from this video. It was nice to see a youtube video as part of such an exam.
I'm in the same situation as you. The problem was the questions; I really don't know how I did on the exam. I hope we get lucky…
I wish they had re-run the picture test from the beginning, but with a bright white background behind both guys.
So Twitter said they didn't find evidence of racial bias when testing the tool. My opinion is that they were not looking for it in the first place.
This is exactly why AI-powered software is not for 100% automation; it should always be used as a support tool for the human who is responsible for the job. For example, in your health-risk prediction task, the threshold for flagging a high-risk patient could be lowered from 90%+ to 70%+, with a human verifying whether they are indeed high-risk. This would save both time (humans look only at mid-to-high-risk patients) and resources, and reduce the bias.
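One possible shape for that human-in-the-loop triage; a minimal sketch where 0.7 and 0.9 are the commenter's illustrative thresholds, not values from any real system:

```python
def triage(risk_score: float) -> str:
    """Route a model's risk score to a pathway instead of acting on it automatically."""
    if risk_score >= 0.9:
        return "high risk: fast-track, clinician still confirms"
    if risk_score >= 0.7:
        return "mid-to-high risk: human verifies before any decision"
    return "routine care pathway"

# The model never makes the final call; it only prioritizes who a human reviews first.
```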
"the human desicions in the design of something (technology or knowledge)" is what actually means when academics say "facts are a social construction" it doesn't mean it is fake (which is the most common and wrong read), it means that there are some human externalities and non intended outcomes in the process of making a technology/knowledge. Tech and knowledge is presented to the public as a finished factual black box, not many people know how them were designed, investigated, etc
Me at first: "who even asks these questions, sreiously?"
Me after finishing the video: "Aight, fair point."
15:50 how can she write in mirror image so effortlessly??
A clear example starts with a UA-cam channel that publishes content internationally and receives comments in several languages. The thing is that the Yankees, i.e. Americans, don't tolerate people speaking another language, so they dismiss any comment written in another language. We have been experiencing it.
People seem to be noticing how nicely the professor can write backwards...
Fun fact: That's a camera trick!
ua-cam.com/video/eVOPDQ5KYso/v-deo.html
She is actually writing normally (so the original video shows the text backwards), but in editing the video was flipped, making the text appear normal. Notice that she is writing with her left hand, which should only be about a 10% chance.
Great video btw! I thought that the visualization of the machine learning process was extremely clever.
It will be funny when machines start to prefer machines and AI over humans themselves.
18:36 why are they pretending they are talking on a video chat, when they had a crystal clear picture from another camera? Reality and perception, subtle differences.
We had a camera crew on each end of our zoom call, since we couldn't travel due to Covid. - Joss
There should be an analog to the precautionary principle used in environmental politics, a similar principle applied to social issues. That is, if there is a significant or reasonable risk in doing something, then that thing should not be done.
Haven’t seen any new videos come from Joss, miss her so much~
I am black and I build models. The theory is: bad data in, bad data out. Whatever data and rules that these algorithms were built on is what should be in question. Machines are not racist, the people (in tech companies, vendors, agencies) who build them are.
Wow, another amazing video. I love the high-minds meet middle-school science fair feel of these videos. They're so accessible but also tackling really massive questions. Each one is so well put together and so thought provoking.
At the beginning of the video I thought this was dumb, but by midway through I'm like: this is what we need.
Why not just post the same picture with the positions swapped, so we already get a good estimate :)
Joss!!! Why did you print all those photos!!!!!?
What a thought-provoking episode! That young woman Inioluwa not only knew the underlying problem, she even formed a solution when she said it should be devs' responsibility to proactively be conscious of those who could be targeted or singled out in a social situation, and to do their best to prevent it in advance. She's intelligent and understands just what needs to be stated in a conflict: a solution. Hats off to her.
This reminds me of that book, "Weapons of Math Destruction"
Great read for anyone interested, it's about these large algorithms which take on a life of their own
This is happening everywhere right now, when you go shopping, to restaurants, to buy a house, to buy a boat, to get a job. It is designed that way, from start to finish, and there are always excuses of why this is happening and promises that it will change but I have yet to see any changes! As a matter of fact the more you dig the more you will find! 😅🤣😂
Waited for Joss for what feels like years
very good information, it reminded me of your video from 5 years ago...
"Color film was built for white people. Here's what it did to dark skin"
long time no see Joss!
Right? I missed her too. She's a great journalist and freaking cute as all hell!
Is this why the automatic sink in public restrooms barely work for me because it’s designed to read lighter skin 🧐🥲🧐🥲
Could it be contrast? What if you photoshopped the skin tones to green, yellow, or red, and inverted the hair color? Then use people with dark skin and light-colored hair, and light skin and light hair, to see if the contrast difference is what's causing this.
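That recoloring experiment is easy to sketch with OpenCV; a purely illustrative version that rotates hue while leaving the brightness channel untouched, so any change in detection rate would point at color rather than contrast:

```python
import cv2
import numpy as np

def recolor_keep_brightness(image_bgr: np.ndarray, hue_shift: int) -> np.ndarray:
    """Rotate hue (OpenCV hue range is 0-179) without touching V (brightness)."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hsv[:, :, 0] = ((hsv[:, :, 0].astype(int) + hue_shift) % 180).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# If a detector tracks luminance/contrast rather than skin color, green/red/yellow
# versions of the same face should be detected at the same rate as the original.
```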
We found in class that this video cuts off at about 5 minutes from the end.
Algorithms of Oppression is a really good book if you want to learn about racial and gender bias in big tech algorithms, like Google searches. It shows that machines and algorithms are only as objective as we are. It seems like machine learning and algorithms are more like groupthink and are not objective.
She is one of my fav Vox hosts. Her videos are structured very well and always have something interesting that keeps me hooked for the entire video. Nice initiative by Vox with 'Glad You Asked' S2.
Joss is so good. She has the perfect voice
Wait.. how did the soap dispenser differentiate between the two hands???
Great video! Really informative and very important. Thanks for a great watch
Thank you so much for making this free to watch
Off topic but why is the progress bar blue instead of red?
The machines are not racist; they are poorly programmed or misconfigured by people who did not take proper care during the project.
And why do the creators still not bother to take that care? This is not the first time these issues have been raised, and it's not getting better.
So why do you think they choose to disregard part of the population?
Vox recruitment notice:
Job description: Vox content developer
Job essential requirements: 1. Applicant should have fallen on their head at birth.
2. Should not have studied science or anything remotely scientific in their whole life.
True
Did you even watch the video? It explained the problem very well.
In the year the video has been up you have only allowed 295 comments? Do you just delete everything? Or does nobody care about Vox?
Have you considered infrared (radiated heat) differences? Light and dark surfaces radiate at different rates.
"A program is only as smart as the one who coded it."
Black doesn't work the same way at the pixel level; it is much more difficult to frame dark images. I suggest you try to build a better algorithm and see how hard it is.
The same problem applies to photocopying pictures of dark-skinned people. I go to the light/dark option and set it lighter so more pixels are eliminated. In a similar way, opting to copy a photo will also use fewer pixels.
Cameras work with the perception of light, if I'm not mistaken, so it's way easier to detect light skin. The technology isn't racist. The outcome doesn't favor black people, but nonetheless, it's not racist; it's just a point where the tech can improve.
When you broadcast that everything is racist, racism itself gets trivialized.
why didn’t we try the two guys switching places in that picccc
Next thing u know, toasters are gonna be racist
Stop with the Joss Fong comments, I can't stop liking them
Yeah, it's not so bad until it comes to crime. Black people are to this day still being arrested over mistaken identity at an alarming rate. So whether it's machine or human, something has to be done!
Give me one example of how we have programmed racism into tech and I will gladly stand with you.
Google has published an article stating that their face recognition works less optimally for people of color than for white people due to the training sets involved. It wasn't maliciously trained to be racist, but through simple oversights in curating the training data and in subsequent tweaking, the outcome drastically favored white faces over others.
Google has been making improvements to it since then, though.
There was a Better off Ted episode. Corporate head office decided to discontinue use of the energy saving technology to save money.
The camera thing is not racist; it's just that black colors blend with the background while whites are stronger and brighter, so white is hard for a computer not to see.
In other countries "racism" doesn't even faze us, but in the US and Europe everything is about getting ahead through *complaining*: if it's not racism, it's feminism, and ultimately the average straight white man has to concede and concede and be displaced.
As more and more governments (say China, India, Middle Eastern countries) employ face recognition tools that use such AI for law enforcement and surveillance, and they're buying that software from Western countries, I am wondering how accurate these systems are, seeing as the AI was trained primarily on white faces. Do these AIs then learn "locally", and if so, can that data be fed back into the original AI so it learns to recognise those ethnicities in Western countries with ethnically diverse populations, like the USA, UK, etc.?
They don’t learn while being run. Training is very compute intensive and is done once, centrally, on large servers. After training is complete the neural networks can run on very little compute power, on a phone or laptop or camera, but they’re totally static.
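A minimal PyTorch illustration of that point (the model file name and input shape are placeholders): deployed weights are frozen at inference time, and any "local learning" would have to be a separate offline fine-tuning pass.

```python
import torch

# Load a shipped, already-trained model; nothing about deployment retrains it.
model = torch.jit.load("face_recognizer.pt")  # placeholder file name
model.eval()                                  # inference mode

with torch.no_grad():                         # no gradients -> no learning
    embedding = model(torch.randn(1, 3, 112, 112))  # placeholder input tensor
```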
These videos are so solid! I'm about to graduate college with a degree in sociology and so far these videos are hitting a ton of the main points that I've learned about over my four years of education.
I missed seeing Joss in videos. Glad she's back.
Software's only as good as the guy who programmed it, I say.
Confirmed. I’m a programmer and both me and my software suck.
this was actually a really interesting video! definitely makes me think more deeply about how my biases may affect technology and how they affect me O_O
I'll take this video as having the good intention of creating an important conversation. But your data is kind of funky.
I am amazed by the look of the studio. I would love to work there; the atmosphere is just different, unique, and everyone has a place there 😍
I understand the video's purpose, but my question is: how much do machines have to know about us and how we act?
I subscribed a while ago, and now most videos seem to be related to racism. I respect the choice, but it’s getting monothematic.
Anyway, hope the US gets better. Good luck over there
Wait, the cameraman is in both rooms but they are face timing each other???
I'm a simple man. I see Joss, I click.
Every time there's a Joss video there's always creeps in the comments talking about her appearance.
Extremely annoying.
*I am surprised the scientists working on these algorithms only facilitate europhilic imaging.*
as always the editing is absolutely superior.
keeps me hooked.
Love the production especially on the set!
On this season of Glad You Asked, we explore the impact of systemic racism on our communities and in our daily lives. Watch the full season here: bit.ly/3fCd6lt
Want updates on our new projects and series? Sign up for the Vox video newsletter: www.vox.com/video-newsletter
For more reading about bias in AI, which we covered in this episode, visit our post on Vox.com: bit.ly/3mcZD4J
Thought provoking video! However, using AI generated faces is probably the worst thing to do in this case, since whichever model generated the faces would presumably suffer the same systemic bias. There is a reason we bother collecting actual real-world data instead of using simulated data.
the background music is way too loud 😶