How to make computers less biased
- Published 30 Jul 2024
- You might think technology is the great leveller. But as AI and other data-driven innovations race farther and faster ahead, the automation of racial bias is causing growing concern.
00:00 - Can technology be racist?
00:50 - Bias in facial-recognition tech
03:50 - Why do data discriminate?
05:50 - What can be done?
07:00 - How can regulations help?
Sign up to The Economist’s daily newsletter to keep up with our latest stories: econ.st/3gJBH8D
The limitations of AI: econ.st/3dSfOkU
Listen to our podcast on why some medical devices work less well for non-white people:
econ.st/31UDRgB
Find The Economist’s most recent coverage on science and technology: econ.st/3GMPPI3
How does the EU plan to regulate AI?: econ.st/3DSoFxh
Read more about algorithmic bias: econ.st/3pVxmCj
Listen to our babbage podcast on the promise and peril of AI: econ.st/3q32Iae
Why has America turned against facial-recognition software? econ.st/3F023fO
How medicine discriminates against non-white people and women:
econ.st/327PErM
“I think they were probably a team of light-skinned developers…” she says.
Computers need to be told about the harm that observation of reality can do.
They can't go around noticing things and expect to get away with it!
😂😂
The computer will scrub all narratives other than the one they want posted.
AI is literally just pattern recognition. If you don’t like the patterns that they are recognizing, consider the origin of those patterns.
That’s the point of the video…
@@jkgh374 can algorithms be racist?
Can pattern recognition be racist?
@@georgewitheridge4961 I don’t think data is inherently racist. Do you?
@@Nalot56 It is
Why is that Uber driver driving a BMW 😅
Sorry, but how can a computer "see" someone's "colour"? Even considering one's IP location, there's no guarantee of anything.
It's the sampling that is lacking: if you don't feed your AI images of all types of humans, but only "white" people, the sampling is going to be skewed. Incorrect.
It all depends on the sensors. Cameras are biased towards albedo. LiDAR is not.
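The sampling point above can be sketched in a few lines. This is a hypothetical toy, not anything from the video: the group names, the "brightness" feature, and all the numbers are invented. A detector that learns a single threshold from a training set that is 98% one group works well for that group and poorly for the other, even though the code itself contains no notion of race.

```python
import random

random.seed(0)

# Invented illustration: a "face detector" that learns one brightness
# threshold from its training sample. Groups, features and counts are
# all made up for the sketch.
def sample(mean, n):
    return [random.gauss(mean, 10) for _ in range(n)]

# Faces from group A cluster around brightness 70, group B around 40;
# background (non-faces) clusters around 20.
train_faces = sample(70, 980) + sample(40, 20)   # 98% group A
background  = sample(20, 1000)

# "Training": put the threshold midway between the average face
# and the average background in the training data.
threshold = (sum(train_faces) / len(train_faces)
             + sum(background) / len(background)) / 2

def is_face(x):
    return x > threshold

# Evaluate on balanced, held-out test sets for each group.
acc_a = sum(is_face(x) for x in sample(70, 1000)) / 1000
acc_b = sum(is_face(x) for x in sample(40, 1000)) / 1000
print(f"group A recall={acc_a:.2f}  group B recall={acc_b:.2f}")
```

Because group B barely appears in training, the learned threshold sits near group A's faces, so group B's recall collapses. The fix in this toy is the same one the comment suggests: balance the training sample.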
05:24
Hey Economist! What are these "favored financial behaviors" that are more common among white people? Asking for a friend.
Algorithms don't create themselves so consider the source.
So what about Asians? Is their situation better or worse than Africans?
Doesn't matter, China writes its own algorithms
Better, because this algorithm was most likely written either by an "Asian" or a white person. Especially if the American one was developed in the US.
@@abhinavpy2748 this is Uber, an American company
"Credit scoring algorithms favored financial behaviors that are more common among white people" - Maybe you should change your financial behavior then?
Completely agree..
Well, not many people can just up and switch lifestyles. Many people's financial behaviour is based on the options available to them, and the options are not available to everyone, or are proven to be easier for some than others.
@@WaleSoleye Well, some specific examples are needed here, otherwise what you said is just an excuse.
It said that the respective backgrounds were comparable, but the whole thing looks quite murky. This is one of those situations that needs clarification instead of being shoved into a racial-bias type of narrative. Let's be honest, though, in acknowledging that the purpose of the report was to ascertain the automatic reproduction of biased patterns through algorithms; that's pretty much settled, as computer technology is merely reproductive and wasn't created to address such problems…
So systems should favour specific behaviours? So you get a better algorithmic score if you buy a Volkswagen over a Citroën, and you spend 20% rather than 15% of your income on food, thereby lowering your score? Where is the line?
Yeah depends how they were made
*pondering and wondering at 3:47 pm Pacific Standard Time on Thursday, 10 February 2022*
Oh great. Now the Economist sounds like the guardian.
Tell me about it...
Based, not biased.
based ai
Based
If this subject piques your curiosity, then Jack Frostwell's "Game Theory and the Pursuit of Algorithmic Fairness" is a book you shouldn't miss. I was deeply captivated by it.
The reporter was so lazy that she didn't even bother to do thorough research on the subject...
5:26
Research found that the older credit scoring used by older mortgage lenders favoured particular spending habits that are more common among white people.
Thanks. I was wondering about this
Aka more financially able to take on mortgage.
I want to point out two things that bother me greatly about this report. The first is that we did not hear anything from the people who create these algorithms or tech systems, or from any single person with an opposing point of view. I still know almost nothing about how these systems are even capable of being racist or how that would work. I understand that tech can be sloppy, but not biased. The second, extremely troubling thing is that the people interviewed, especially Rashida Richardson, had nothing to say. What I call horoscoping the narrative. Go back and listen to her again. Her words were vague enough to apply to any situation, but the word choice was "sophisticated" enough that if one were predisposed to believe what the report is saying, they might just nod along with her as if she were preaching a gospel. It's pretty deceptive.
It's true, this is a short video on a very complex problem. I would recommend a channel called Jordan Harrod on this issue. She has made a couple of videos on this problem, the first one being "Is AI Racist? Sometimes. | AI 103: Ethics (Part 1 of Many)". That one and the next one (104) are specifically about what this video from The Economist talks about.
I would then imagine you would have a similar problem with any other information platform that likewise tends to eschew offering "the opposing viewpoint" when disseminating its stories. When was the last time you saw Fox News allow anyone who expressed a view that departed from Fox's ideology within 10 miles of a Fox microphone?
@@ThomasFromTN to be honest, even if I think you're right, the argument is a fallacy. I don't watch Fox, but I imagine all media is skewed to a bias. Nonetheless, my points on this specific report are real points, and there is no point in bringing up something that doesn't apply to the topic at hand. Fox News is irrelevant to this report. They may have done a similar report, and if your intention is to highlight differences between them, then okay. But I think what you're trying to say is that you don't like Fox, and they do it too, so I should be criticizing Fox and not these guys. Which, let's be real, is totally absurd.
Of course the systems are capable of being racist; if they aren’t programmed to include more diverse data or if racist assumptions are incorporated into the program they will perpetuate those same biases.
Computers don't have biases, they just follow instructions. The Economist is simply projecting human emotion onto machines.
Will it ever stop ?
the maker and designer of the software decide who gets the advantages - and business is about making the greatest profit possible, not about levelling the playing field.
Money is a control tool of the elite not something they need. There are certain cultures that avoid computer programming like the plague others dominate the industry….
very 1 sided reporting, didn't even give a chance for those who developed some of these 'racist algorithms' a chance to have their say.
The AI that misclassified the couple as gorillas was 100% not fed with black people but instead with real gorillas, which means it was misprogrammed due to human error. If it was fed with actual black people, the couple just seemed, to it, to look more like gorillas than black people.
No case of racism to see, because computers (spoiler) are indeed completely RATIONAL and do not have a bias at all!
I think that’s the(part of) idea of the video. Human error and acknowledging that these issues do exist can help the developers address them appropriately.
What caused the problem in this particular case is unclear. People are susceptible to sensations, but search engines mislabel photos all the time; these are just the usual mistakes nobody notices until one coincides with something that provokes an emotional response.
@@WaleSoleye As for it being human error: I highly doubt that, and I just put that statement into my argument because I wanted to name both possibilities.
These AIs just decide on statistical probability, and the AI in my opinion thought that the couple were presumably 60% gorilla and 40% human. So it picked gorillas.
What to do against it, you ask? You can always code in a fault tolerance, adding a moral rule to the program: if the probabilities of both are high, it always chooses the human, because for us people it is morally more acceptable to classify a gorilla as a human.
Probabilities can't always be correct. I understand the point of the video though: our moral concepts are much more versatile than any AI or PC could ever comprehend, and we have to understand that to fix the things computers can't handle without our help.
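The commenter's "fault tolerance" idea can be sketched as a post-processing rule on classifier scores. This is purely hypothetical: the label names, score dictionaries, and the `margin` parameter are invented for illustration, not taken from any real system.

```python
# Invented sketch of a "moral fault tolerance" layer on top of an
# image classifier: never emit a sensitive animal label unless it
# beats "person" by a clear margin.
SENSITIVE = {"gorilla", "chimpanzee", "monkey"}

def safe_label(scores, margin=0.25):
    """Pick the top-scoring label, but fall back to 'person' when a
    sensitive label and 'person' are both plausibly high."""
    top = max(scores, key=scores.get)
    if top in SENSITIVE and scores.get("person", 0.0) > scores[top] - margin:
        return "person"
    return top

print(safe_label({"gorilla": 0.60, "person": 0.40}))  # -> person
print(safe_label({"gorilla": 0.90, "person": 0.10}))  # -> gorilla
print(safe_label({"dog": 0.80, "person": 0.20}))      # -> dog
```

In practice, Google's reported fix reportedly went further and simply stopped emitting the label at all; a margin rule like this one trades a few missed animal tags for never making the offensive mistake on a borderline photo.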
Only things which are true are hurtful.
@@daftwod Tell that to the innocent people about to be executed on death row.
0:28 I truly wonder how she got the audacity to say dumb stuff like this confidently.
The worst part is they don't accept even the slightest possibility that they could be wrong. Going in the wrong direction so persistently is a sure way to keep any problem unsolvable. To solve any problem they have to recognize what the roots of the problem are, but such recognition is inconsistent with their persistent denialism.
I mean, there is consensus about that: the bias is in the data.
Our biases get transferred into technology. It could even be something as silly as pineapple on pizza. Most people have an idea of what pizza should look like, and the majority don't mind pineapple (I think it's 55% or something like that, 10% hate it and 20-30% love it). Yet if you search pizza images, you often have to scroll far down the list before a pineapple pizza image comes up, even though lots of people love it. It has to do with the number of images in the database and what people click on when searching for images. Also, people who are against something are usually louder, and our minds often focus on negativity. I hope that gives a clearer picture of what she is saying.
Great, 1s and 0s have so much bias. Im never dealing with them again.
Great ! Topic !!! 🧐😎
It just creates the next RACISM
anthropomorphizing tech to this degree is silly and makes you look less credible
Exactly, it's such a non issue
@@abhiklovesbadbitches Saying something is a "non-issue" when people are literally losing their jobs can only be said when you are not even 18 years old, don't know how life works yet, and are fully supported by your parents while you watch YT videos all day in your room.
Have a little more empathy, bro. It’s called being a “human”.
Should do this video: Is the Economist racist?
Please do
Yes
funny how The Economist isn't making videos based on all its recent reactionary anti-woke articles
Many of the comments so far are very disappointing. The report starts with a gentleman's (and many similar others') ability to make a living being affected by this technology, which is fundamental to most of us. It's only a 9 min summary; I'm sure if it were longer it could have added much more depth, but blimey, it sounds like a lot of ppl want to shut down this interesting topic right now. If you found out a piece of tech affected you in any way, whatever was at fault, surely you'd want it fixed!
most middle-class white people in first-world Western countries don't like it when anything disturbs their comfort
this is how they react: they face no real issues at large, so every little "problem" is worth fighting for
People are pretty ignorant; anyone who knows a little bit about AI will know that a poorly trained model can be very biased, and it has already been happening for a while. There are even documentaries about this. It's not just race but also gender: Amazon at one point found that its AI was only hiring men while rejecting women with the same experience and education.
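The mechanism behind cases like the Amazon one can be sketched in miniature: if past hiring decisions (the training labels) were biased, a text scorer learns any word that correlates with the rejected group as a negative signal. Everything below is fabricated for illustration; the résumé snippets, labels, and scoring scheme are invented, not Amazon's actual system.

```python
from collections import Counter

# Fabricated past decisions: 1 = hired, 0 = rejected. The bias is in
# the labels, not in the scoring code.
past = [
    ("captain of chess club", 1),
    ("chess club member", 1),
    ("captain of women's chess club", 0),   # rejected by biased humans
    ("women's coding society lead", 0),
]

hired, rejected = Counter(), Counter()
for text, label in past:
    (hired if label else rejected).update(text.split())

def score(text):
    # Word weight = times seen in hires minus times seen in rejections.
    return sum(hired[w] - rejected[w] for w in text.split())

print(score("chess club captain"))          # positive
print(score("women's chess club captain"))  # lower: "women's" learned as a penalty
```

The scorer never sees gender as a field, yet the word "women's" ends up with a negative weight purely because it co-occurred with past rejections, which is the pattern-laundering the video describes.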
6:36 well I guess we'll see...
I wholeheartedly recommend those people think carefully about the origin of these phenomena instead of taking a high moral stance and being politically correct. Having watched too many videos like this on The Economist makes me no longer want to watch this channel.
The Economist used to be a more serious institution; now it's encouraging state regulation of policing software against the companies' interests?? This is insane
That's what policies are for? They are not an antiquated concept you know.
@@siddhantdeepful Sure, but praising policies without being specific is like wanting to pay a price without knowing the value
AI uses patterns to make choices. That is the definition of discrimination. You can't make water dry
you think everything is racist
The computer does not lie. It is not biased.
2.48M subs and only 38k views; that says something about the topic... human-made things are totally controlled by humans
How to make computers less biased?
Keep The Economist out of computer, the problem will be solved.
AI isn't racist. It's purely critical and objective thinking. If it comes to a conclusion that's because it's looking at data.
It's a woke conspiracy theory and nothing more. I'm mixed race by the way.
It is because the developers who trained those models used only pictures of white people; if they also used minority pictures just as much, it would not be as biased against minority people.
@@Omar-kl3xp Nope, it will be biased because raw data is biased and racist.
Hypocrisy in mass media, that's the real name of this video
Based A.I
I don't see any big issue with AI having some difficulty recognizing black people, because AI needs some time to educate itself and fix the issue. The best you can do is report the issue and wait until they fix it. If you don't have the patience, you can write your own program and use it.
Total garbage.
🕯🌍🌎🌏🕯
⚠☢☣
Propaganda
The more I read The Economist, the more I feel they are always taking the moral high ground, being politically correct, subjective and biased.
Technology itself is not biased, but the programmers, firms and the data SAMPLE they use could be biased.
And AI is not 100% correct.
Rest assured that the comments are not buying what they say.
I agree with you one hundred percent.
but why isn't it fair to call it out when something like this happens? Of course the sampling is garbage, but it still needs to be fixed. Articles like this need to happen so they fix issues like this sampling one; obviously not enough people who aren't "white" are in the sample. They never said racist, they said biased. Machines are created by humans, and those humans can have bias.
@@Millsmills586 Yes, that's exactly the point: the machine is created by humans, and humans can be biased. But The Economist and some of the media frame it as if the technology were evil, just criticizing without thinking about what is behind it. For Uber's issue, they are using Google Images, which is automatically generated online; the whole issue is telling us that some demographic groups, not just ethnic and gender groups but age groups and so on, are badly under-represented. What we need to do is empower those disadvantaged groups through education and public awareness, NOT simply take the moral high ground and call the firms or the programmers racist. Out of the belief that the majority of humans are just, I think Uber is not doing it intentionally.
Monkey doesn’t wear any pants
The world has gone mad… everybody seems to feel discriminated against these days (in some cases it is true, but it is also true that some people make it their mission to feel discriminated against). What we should be thankful for is that in most Western 'democracies' you have the right to be paranoid!!
The little gods of fools are colourful and racist; we are in the era of human foolishness, especially at the top of fraudulent piracy? / There is a God who created all the colours!
What, now AI is racist as well?
Big lips matter
Just nonsense; do mentally healthy people really think about this? 😅
Slava Ukraini (Glory to Ukraine)
Wrong headed.