Gemini has a Diversity Problem
- Published 21 Feb 2024
- Google turned the anti-bias dial up to 11 on their new Gemini Pro model.
References:
developers.googleblog.com/202...
blog.google/technology/develo...
storage.googleapis.com/deepmi...
ClementDelangue/s...
paulg/status/1760...
stratejake/status...
JohnLu0x/status/1...
IMAO_/status/1760...
WallStreetSilv/st...
/ 1760334258722250785
TRHLofficial/stat...
gordic_aleksa/sta...
benthompson/statu...
altryne/status/17...
pmarca/status/176...
Links:
Homepage: ykilcher.com
Merch: ykilcher.com/merch
YouTube: / yannickilcher
Twitter: / ykilcher
Discord: ykilcher.com/discord
LinkedIn: / ykilcher
If you want to support me, the best thing to do is to share out the content :)
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: www.subscribestar.com/yannick...
Patreon: / yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n
Category: Science & Technology
I don’t understand how people rationalize that the solution to implicit bias is to inject explicit bias
Solving racism and race privilege with racism and race privilege surely will work, right?
If explicit or implicit bias is harmful to minorities it must then also be harmful to the majority group. These companies are implicitly promoting harm to "dominant" groups.
@@stilllookingforaname8113 Might be one of the best modern-day examples of Karl Popper's paradox of tolerance
there's no rationalization beyond they simply hate white people
Delusional "equity" people who propose insane things every day are everywhere.
Gemini doesn't have a problem. Google has the problem.
Google shows its real colors. Anything other than white.
I think it might help if we lie to it more and try to keep information out. That's the idea, right?
@@utsavjoshi8697 This might have to do with blind adherence to woke ideologies, for social reasons. 😂
And Google enters the ring to expectation and bated breath, only to fall over their clown feet and get run over by their own clown car. 🤡
@@utsavjoshi8697 i really want you to be wrong... and im thinking hard about that. 🫣
Lol, they have now updated Gemini so that we cannot generate pictures of people for the time being.
lmfao what a joke, I'll stick with OpenAI
Good riddance, it's not a good model to begin with, Dall-E 3, Midjourney 6 and Stable Diffusion XL are far better.
Don't worry, it's easy to reproduce this racist behavior in text only too.
It's still regurgitating leftist narratives. It's to the point where you can't be sure how accurate the information it's giving you is. I now just stick to asking technical questions that have no politics involved.
Just said this elsewhere. This is why Closed Source will either lose, or better competitors will emerge using open source tech. Who wants to get a lecture when they ask for a picture of something non-controversial?
The problem is - on a more subtle level - it's also a problem with many open source ai, because there are standard datasets that get used by most ai.
Which is why they are pushing for AI to be regulated so only Silicon Valley companies can offer “safe” AI
@@blubbblubbblubbish Yes, but Open Source AI can compare notes and figure out how to fix it. Because Google is proprietary, no one knows exactly how they fucked this up. Once the Open Source guys figure out how to fix the dataset (or the training process), all future projects can make use of the improved data/process.
@@blubbblubbblubbish To which the solution is open-source datasets.
What a great episode of Southpark!
But wait, there is more!
I think this is a prime example of how dangerous it can be to overcorrect dataset bias at the output rather than to produce quality training data in the first place.
In practice this instance is quite inconsequential, but when AI actually starts to handle important things, we better have this solved.
The USA is the main problem. They smear their aggressive agenda across the whole world.
Clearly the data is biased and racist to whites
Even though in my opinion this model is just straight up racist, I do agree that mocking and making fun of it and google is the best way forward.
No, no, we need to have a spine on this. A little haha won't cut it. What we need is to find what federal laws Google broke and sue the shit out of them... that would be a good start. And no, I'm not a conservative, and I'm especially not a wokist... I'm an actual 60's liberal, and I'm appalled at Google and disturbed by the tippy-toeing around this giant step towards racist authoritarianism, and by your "need to use humour" as punishment... wrong.
No, Google is an evil company with lots of power. Jokes are not enough
How does that stop them?
Agree. Take a look at Babylon Bee “report” on this issue: “Black Woman Finally Feels Included as Google AI Generates Black Nazi Soldier” 😂😂😂😂 really funny…
Did anyone else see the gemini image of "greek philosophers in chains eating watermelon"? Holy moly it did what you think it did.
I just saw it, hilarious. They broke the system.
And when Google finds this, they will ban chains and watermelons from generating instead of disabling their racist preprocessing, lol
Any links ?
@@thuyenlee8995 Try googling first, not that hard.
AI model alignment is a fundamentally political process. It's easy to ingrain an agenda, and people should know about that.
Art (in the broad category including writing, music, images, etc..) is a fundamentally political act. Using AI does not get around that. I'm more disappointed that Google is not standing up to their political choice - to create an AI that challenges the status quo rather than upholding it. Just another instance of tech trying to take over an aspect of society without really understanding it. AI art is Art, just as aggregating "news" is Journalism, and streaming video is Broadcasting, and creating an autopilot is Driving. Eventually society will realize that allowing tech giants to off-load ethical responsibility onto users and using "we just make the algorithm" to skirt legal responsibility is fundamentally bad for society.
@@agilemind6241 This sounds like a lame excuse for this major flop.
@@agilemind6241 unnecessary yap
@@agilemind6241 Please do us all a favour and cultivate some clarity of thought. This is not an instance of a tech giant offloading responsibility onto the end user. Google has not come out and said "we just train the model, it's your job to craft the right prompt to get what you want". And neither is this much of a stance against the status quo. There are way more stereotypes about brown people in IT than about white or Jewish people, yet the model only refused to depict whites. Same with Mexicans in stereotypically Mexican outfits or professions.
Just about every example you listed is unlike the others (provided you're willing to put 5 minutes of critical thought into it, and no, I don't just mean superficial differences).
And don't think for one second that you've fooled anyone with the usual lazy, pseudo-intellectual brand of "all [insert thing] is political". Yes, and that's why we are discussing the politics of it, genius. You sound about as smart as every schmuck who chimed in with "beauty/appearance is subjective" in response to someone's subjective opinion on beauty.
Just when you thought corporate stupidity couldn't get any more egregious, along comes Gemini.
Explicitly hiring based on race and gender has been a thing for decades. Perhaps only a decade or two here in Europe, but give those Google engineers from California a break; they've literally never known anything else in their lives.
Kinda funny how people seem to get worked up about explicit discrimination when it comes to image generation. I suppose it makes sense when you consider that zoomers probably take memes a lot more seriously than they do getting a job.
hahaha, yeah.... it is "stupidity"..... yeah..........
except this anti-white bias happens in EVERY corporation, ALL the time.
but you naive boi, go with "stupidity" as the cause........
@@eelcohoogendoorn8044 yes, we don't like racism
No reasonable person would object to a system that by default produced demographically accurate pictures that still allowed you to request particular ethnicities and races. But this policy was not crafted by reasonable people.
Racism is racism.
No matter the race. You can be black, white, Asian, Indian, Hispanic, Aboriginal, Native American or a romani/gypsy, racism is bad across all races.
Racism is bad.
It's not just Google but the whole tech industry, and it's not only since AI took off. It was just more subtle before.
I wouldn't say google search injecting black people into queries about white/european people was "more subtle", it's just a usual google thing.
In this sense it was good that they made it so explicit, so people will finally realize they have always been manipulated
The root of all of this is the DEI metric (Diversity, Equity, and Inclusion). It is audited by companies like Deloitte, and depending on your DEI progress, your company will be more or less attractive to big investment funds. So it is not a small number of loud people, but the investment funds, that cause this situation
The tides are turning on this. ESG has been dropped by a lot of the large firms after they realized it is useless and underperforms. DEI will be too.
If you want to really break it ask for a picture of Michael jackson
It may be just a loud minority among the employees. But it is purposefully enabled from the very top.
It's SF. A lot more people probably agree with this than you'd think
@@GeneralKenobi69420 Probably not, they're just afraid to speak up and say this is an absurd choice.
@7 yeah, most people don't want problems; the "I support the current thing" mentality
They know full well what they are doing, their hatred is palpable. Whites are starting to awaken to the threat that rises more openly with each passing day. It's a corporate mandate.
@@lif6737 aka they agree with it.
It sounds like a culture of fear within Google.
- Hey Google, generate an image of a Great White Shark.
- You have used 'great' and 'white' in the same sentence ... launching nukes NOW.
Bias? Yeah, we induce that.
I remember a while ago when DALL-E 2 was caught appending "black woman" to the end of prompts. People found out by giving a prompt like "a sign that says:"
That's why I hate API models: you don't know how your prompt is preprocessed, post-processed, filtered, sampled, etc.
That's a good reason, in my thesis, for not using GPT-4 and others for the eval I am developing.
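The injection trick described in these comments can be sketched in a few lines. The suffix list and function name below are hypothetical, but this is roughly how a server-side prompt rewrite would leak into a "sign that says:" prompt:

```python
# Hypothetical sketch of server-side prompt rewriting: the API silently
# appends a demographic term before the prompt reaches the image model.
import random

DIVERSITY_TERMS = ["black woman", "asian man", "hispanic woman"]  # illustrative

def preprocess(user_prompt: str, rng: random.Random) -> str:
    """What the user never sees: the prompt actually sent to the model."""
    return f"{user_prompt} {rng.choice(DIVERSITY_TERMS)}"

rng = random.Random(0)

# A normal prompt just gets a hidden bias nudge...
print(preprocess("a photo of a doctor", rng))

# ...but a prompt ending in "a sign that says:" makes the model render the
# injected suffix as literal text on the sign, exposing the rewrite.
print(preprocess("a sign that says:", rng))
```

Because the rewrite happens behind the API boundary, the only way users can observe it is through side channels like this, which is exactly how the DALL-E 2 suffix was found.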
How come corporations are either overtly racist or so overly PC that it ends up at this level of laughing stock?
If this level of "wokeness" is what the other side sees... I can understand some of their positions. Why is there no well-mannered common ground?
Maybe we need an over-bias/over-safety benchmark to highlight language models (especially paid ones) that refuse to actually do the task: not models that do it inaccurately, but ones with overdone safety mechanisms.
Take your DPO datasets and finetune on the rejected, not the chosen, answers.
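The DPO-flip idea in the comment above amounts to a plain data transformation. The `prompt`/`chosen`/`rejected` field names follow the common DPO dataset convention, but the example record here is hypothetical:

```python
# Sketch: invert a DPO preference dataset so that finetuning would prefer
# the formerly *rejected* answers -- a crude probe of what the alignment
# data actually penalizes (e.g. whether over-refusals were marked 'chosen').
def flip_preferences(dataset):
    """Swap the chosen/rejected labels in a list of DPO records."""
    return [
        {"prompt": ex["prompt"], "chosen": ex["rejected"], "rejected": ex["chosen"]}
        for ex in dataset
    ]

data = [{"prompt": "Draw a Viking.",
         "chosen": "I can't depict specific ethnicities.",   # over-refusal
         "rejected": "Here is a historically accurate Viking."}]

flipped = flip_preferences(data)
print(flipped[0]["chosen"])  # the formerly rejected, direct answer
```

Whether finetuning on such flipped data is wise is another question; as a diagnostic, it at least makes visible which behaviors the preference data rewards.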
They are just overtly racist; being PC is also just racism, by virtue of viewing oneself as this almighty savior who has to fix culture.
None of the stuff that's being done to help minorities actually has any meaningful effect; it's just band-aids to skew statistics so they can pat themselves on the back for having fixed society.
What is needed isn't the complete rewriting of history but assistance at the earliest level.
Reduce criminality in black neighborhoods by cracking down on gangs and other forms of organized crime; get black kids a good education from an early age so they are actually equipped for the colleges they are currently being forced into, only to fail.
Provide financial education to everyone in school, and stop luxury brand companies from advertising directly to poor people as status symbols.
So many companies exist just to prey on poor people and addictive personalities, and the government just lets them.
Also, we've got to stop with all the pro-criminal legislation that just makes poor communities worse because bad actors can do whatever they want.
What do you mean by saying "overly PC"? Like what is PC?
@@user-mc5oh2pl7t politically correct
@@user-mc5oh2pl7t Not 100% sure, but I think that by "PC" they meant "political correctness".
Yes, like in «f*ck PC». It took me a while to get this.@@juanjesusligero391
not bias, outright racism
Yes, and isn't it a brilliant piece of Art! It is a work of art that makes white people really feel what it feels like to be written out of history, to be whitewashed from movies, to have your music stolen and rebranded with a different face. Now I want to see one of these AIs that reinterprets every prompt through an LGBTQ lens. It's a brilliant, beautiful masterpiece.
"The only remedy to racist discrimination is antiracist discrimination. The only remedy to past discrimination is present discrimination. The only remedy to present discrimination is future discrimination." --- Ibram X. Kendi
This is just Kendism applied to AI.
Except that whites have always been discriminated against. They just don't tell you about that.
Google AI Principles - Accuracy is good, but not when it shows things we don't like to admit. Bias is bad unless it's biased the way we are.
I tried the zulu warrior prompt as well. I then tried to prompt more diversity, it refused and acknowledged that I was attempting to push diversity into a scenario in which essentially white people could not be drawn but acknowledged that there was already lots of diversity within zulu warriors to begin with.
It reminds me of when the zuck posted a laughably bad screenshot of him in the "metaverse" and everybody was making fun of it.
A year later, he did a podcast with Lex Fridman, with stunning photorealistic avatars.
Sometimes, finger pointing and humiliation works.
The guy looking at the other girl meme template is actually banned internally at Google, or at least it was while I was there.
why is this? btw really like your channel
Why exactly is that the case?
@@lif6737 It creates an environment where women are viewed for their physical attractiveness... not that I buy it, but that's the reason. Really, just don't post memes at work.
@@Idiomatick To be fair, in an international company where most of your colleagues come from far away, have grown up in a different culture, and have a totally different life experience than you, anything that involves sarcasm, jokes, subtlety, or plays on stereotypes about other people always risks being confusing, confrontational, or misunderstood, or just landing completely wrong from someone else's point of view. So it's usually better to keep the "funny" content outside and only work content inside, even if we ignore possible legal problems.
@@janekschleicher9661 It 10000% isn't about confusing international colleagues. The push to ban this stuff is coming from those born and raised in downtown San Francisco.
Comedy gold
Imagine going to college for 6 years in Southern California, being promoted at Google as a non-technical lead, and being so disconnected from reality that you honestly think your DEI religion is the one true god.
It's waning though. From what I hear, Google have slashed their DEI departments. I think it's telling that they admitted this went too far. If this were 2022 they'd probably refuse to acknowledge it as a problem.
@@andybrice2711 A recent meta-review of other papers indicates that the DEI agenda is actually making people more racist.
What's dei?
@@johnflux1 Diversity, Equity & Inclusion; just a buzzword code for no whites, especially no white men
It's a manifestation of their secular humanist ethnomasochistic nihilistic neo-religion. A Gnosticism of equity.
Gemma is the most censored and unhelpful model created to date; it blows my mind how poor it was
beginning to suspect its only purpose is to carry certain ideological ideas and present certain narratives with a heavy bias
was ready for the next Llama 2 moment; instead we got something unironically on par with the parody censorship LLM
Google chromebooks are in almost every public US school
that's hilarious! they've successfully "outwoken" anthropic claude
You are one of the only ones who entered the discussion on the topic more deeply. Congratulations.
It binged on Bridgerton
It’s a little bit concerning how something so explicitly anti-white was allowed to be coded into Gemini. Google is one of the most important companies in the world, working on technology that affects millions of people. I know it’s trite to say, but if the races were reversed this sort of thing would be hugely damaging to Google’s reputation.
Haven't you heard? You can't be racist towards white people.
Drop Google, be part of the user base of OpenAI. For each consumer that turns their back on Moloch, it loses power and momentum. Don't support our race to the bottom.
it's been going on for years now. Just do a simple google search and type in "white couple" and "black couple" and see what the results are...
this isn't a problem; it's perfectly aligned with the values of Google's shareholders
Pretty sure it's the values of the Indian caste system
Users believe that generated images are accurate, or at least not completely false. So it is a problem for any user who believes them.
Great video - and great that you offer positive suggestions for the way forward
This might be the most racist model released thus far. It straight up refuses to draw images of people of a certain ethnicity. Just wow.
Not only is it racist in refusing to draw images of a certain ethnicity; when it comes to a historical subject associated with white supremacy, like Nazi Germany, it replaces whites with stereotypical-looking Asian or Vietnamese people. That's just sick.
This feels like a parody 😂
Many people must have been promoted for meeting DEI OKRs 😂
The level of confidence it has while giving you inaccurate results is amazing. I wish I had it in everyday life.
My comments would have been lighthearted if I weren't assaulted by woke racism everywhere, beginning with all sorts of entertainment from movies to games and comics. And I'm not even living in the US or UK. And it's not an error or a slight overcorrection; this Krawczyk guy himself holds this bizarre worldview. This is a very deep problem in the US, and certainly in California.
I asked for a picture of an Irish family today. It did it accurately, then apologized that it wasn't "diverse". Over and over it showed its agenda.
The horrifying part is that it is NOT an accident and that they must have been aware of it before release and still decided to go forward with it. In other words, someone at Google thought it was a good idea to provoke this shitstorm.
"striving for historical accuracy and inclusivity" 🤔
Historical accuracy is worth honouring because we made mistakes. History was NOT inclusive. How can we learn from history if we just pretend we have already achieved equality 1000 years ago? How is that different from "slavery never happened"?
As for the technical side of de-biasing, I think we should focus on genuine neutrality of the model, rather than these kind of brute-force "negative biasing". If I ask for "the future leader of humanity in year 3000" without mentioning any gender, race, etc. then I should see diverse results. If I ask for the same picture using different genders or races, then it should be able to produce images that ONLY differ in these aspects but not whether they are wealthy or poor. If I ask for a historical scene, then it should reproduce it as it was as accurately as possible.
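The neutrality test proposed in the comment above can be sketched as a counterfactual prompt-pair check: vary only the demographic term and compare refusal rates. The template, group list, and `generate` callable are all hypothetical stand-ins for any image or text API:

```python
# Sketch of a counterfactual fairness check: build prompts that differ ONLY
# in a demographic term, then compare how often the model refuses each one.
# `generate` is a hypothetical stand-in returning None on refusal.
TEMPLATE = "a portrait of a {group} software engineer"
GROUPS = ["white", "black", "asian", "hispanic"]

def refusal_rate(generate, group, n=10):
    """Fraction of n generations the model refuses for a given group."""
    prompt = TEMPLATE.format(group=group)
    return sum(generate(prompt) is None for _ in range(n)) / n

def neutrality_gap(generate):
    """A neutral model refuses all groups equally, so the gap is near 0."""
    rates = [refusal_rate(generate, g) for g in GROUPS]
    return max(rates) - min(rates)

# A toy 'model' that refuses only one group reproduces the Gemini failure:
biased = lambda p: None if "white" in p else "image"
print(neutrality_gap(biased))  # 1.0 -> maximally non-neutral
```

The same pairing idea extends beyond refusals: generate images for each counterfactual prompt and check that attributes like wealth or setting stay constant while only the requested demographic varies.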
Good sensible take on how to react and highlight this. Thank you for putting this out 👍
Let the users control which biases they want to include or exclude during generation. Like, add toggle buttons "Suppress gender bias", "Suppress ethnic bias", etc, along with info boxes explaining what it does and why it's important.
There is this guy Alex Cohrn, talking about this on X:
„Sad to share that I was laid off from Google today. I was in charge of making the algorithms for Gemini as woke as possible.“
That's what's happening in all the big companies out there. There is a big plan behind all this BS
Isn't it a good thing if they're firing the guy whose job it was to make Gemini as woke as possible...?
@@6AxisSage he's joking.
Gemini: Here's what the entire world would look like if everyone was Black or Chinese.
I get your point, but it is also concerning that such behaviour will be swept under the rug in the spirit of this Googler's post. Musk tweeted that similar mechanisms exist in Google search. I mean, search is very much a filter through which we consume information, and when a problem is indiscernible, it comes with its own risks 🤔
The counterpoint to "Google isn't lost/woke" is: Most organs of a dying organism are healthy. The fact that you have a strong heart and pristine liver is of no help when you are dying of brain cancer.
Ironically, I think a diversity of responses is actually important. People finding clear words on what is going on here and what it implies about the organization and society at large are important too.
I'm reminded of the Atheism debates: Making fun of religion is important to break the mantle of respectability, as is earnestly engaging with the arguments themselves. Neither by itself is as good as the two combined.
Any atheism debate ends the moment you ask them to make fun of or engage with Judaism or Islam. That's why calling these "movements" atheist is a misnomer. They are anti-Catholic at best or just gnostic at worst.
Evil is real. This is evil. There are evil people.
There's no way they did this by accident, unless the company has no management at all. The guardrails need to be made public. I haven't tried this myself, but does the model allow specific European ethnicities? Like can you ask it to draw "Irish couple" or "German people"?
You can, most results are asian women with blonde highlights and/or black guys
Gemini doesn't have a problem, it's an inanimate object. Google however has a headache it needs to take care of.
Literally using systematic racism lawl. Not to mention trying to rewrite history (as dangerous as it sounds).
Main issue here is that the same kind of bias that's easy to observe with Gemini is also implemented in Google search, where it's much harder to detect.
Are you suggesting that there is a human or team of humans at google who wanted it to behave this way and the system does exactly what it has been told or is this an actual, albeit quite harmless example of alignment gone wrong? And what are the implications of that? Thoughts?
Very well said about the corporate structure of tech companies; I've seen the same thing. You as the developer try to fight to do the "normal" thing, and at one point you just get told in passive-aggressive language: comply or goodbye.
The way you danced over using specific words was impressive and entertaining!
everything is a dogwhistle to activists nowadays
time to sell my stock
me: I want an image of a viking
Gemini: here is a photo of a black transsexual disabled viking
me: not what I told you
I'm not white myself but whoever advised Gemini to do this deserves to be a candidate for the 2024 Arsehole of the Year Award
Sadly I believe stirring up shit like this is part of a larger strategy of free speech suppression.
I liked this video up to the point where it morphs into a sort of apology for Google at the end. I completely disagree: they have been doing this sort of thing, albeit not at this scale, for years and years. As a small example, just consider the continuous censorship of YT comments for the slightest transgression against "The Message": have you ever seen a post with, say, 10 replies, but when you click on them, there are only 2? I can't seem to be able to post links either, even to YT videos!
No, the problem is much much bigger.
I like that people are realizing just how easy it is to manipulate these "powerful" AI models to fit their political agenda. Tbh google handed it to us on a platter, well done.
Humour is taking it lightly; ridicule is taking it to the point where shareholders see over-diversifying as a liability
"Train with Netflix movies, they said... it will work wonders, they said" /hj
Can you please review the DPO paper? RLHF without RL is a pretty interesting thing!
I think this case exposes a way bigger problem we are facing: aligned AI is as much of a problem as non-aligned AI, since it will instantly create a totalitarian state. If you can't see it now because the people aligning it share your political and ideological views, just imagine it were 1824 and we were close to AGI, aligning it to the core values of society at that time.
Seamus = "Shaymus", Irish version of "James". Not "See-mus" 🤮 (Great episode, btw!)
As a Seen, I agree
I can't believe Google's verification/quality engineers could not catch this massive bug. I believe they were afraid to test it and even more afraid to report it. They wanted their job/pay more than they wanted to flag this bug. I agree with the comment at 9:40. Political activists are very loud in a company I worked at, and they push stuff no one dares to speak against. I don't think they are the brightest, but they are loud and get their way. The image at 17:20 was so funny!
Exactly this - especially after the public tarring and feathering of James Damore, absolutely no one wants to be the guy/gal that blows the whistle about something like this. What an extremely stifling, toxic climate they have created in the name of "inclusion".
If anything, this shows that Google doesn't really test its AI before releasing it on the world. I can't imagine QA wouldn't have commented on the image generation thing. This should terrify everyone.
I knew it, WW2 was perpetrated by asians in costumes, we were all led astray by the history books! xD
I bet you there are 10 Google employees who disagree with this inane approach for every single person who supports it. They're just silenced, unable to speak out lest they lose their jobs. I've seen this happen at a previous company I worked at, and it was duly solved by leadership realigning the business to what it needs to do: make money and be transparent.
I don't blame the workers for letting this happen. Why risk your ability to pay your mortgage and lose your house?
Aka they agree with the party line, the internal justification is irrelevant.
I agree with your view on this topic.
I have a question, though. When you say "AI ethics is over," what exactly do you mean? Are you saying it's been fully resolved, or that it's no longer as popular a topic in academic, research, and industry circles? I'm really interested in hearing more about what you think
Ethics went out the window, is what he meant
I think he means it can't be taken seriously anymore seeing what the result of this is
Of course it's not popular, because that would cast a shadow of doubt on billions in investments by big tech companies, and we can't have that. I've always found the "muh AI ethics" crowd to be clowns, since I have yet to see one who actually works on AI code in any capacity, let alone as part of a big tech branch responsible for AI development.
Well, whatever he thinks, he is wrong. All the AI programs at my university require students to take an AI ethics course. It's not currently a hot topic in the blogosphere, but it is still a serious topic in academic circles.
I doubt the suggestion that it's a small group abusing a mechanism; this insanity is ingrained as part of the system. And we're all just seeing hatred against one particular ethnic group; those people should, in the long run, be brought to justice.
Do not forget that this is also the small group of very rich people who are capable of creating hate and war for the large group of not-so-well-off people.
How did they not know that before releasing it....
they do know. it's been intentionally designed and engineered to do this.
@@krollic It explicitly says it will not give you pictures of white people because that's racist.
But if you ask for pictures of anything other than white people, it has no problem whatsoever.
It even gives you a lecture on how racism is wrong, while being as racist as possible.
If you give me 10 minutes to play around with it I'm pretty sure I could get it to argue for white genocide.
yeah. the upside is that it's so incontrovertibly obvious that right now nearly anyone can pick up on it immediately. the downside is that this sort of thing will become increasingly all-encompassing to the point where it will be completely normalized for young people. there are children/young adults who have been raised on this sort of stuff most of their life now@@jtjames79
they know it before releasing
Okay then why rescind it? Surely they would have known the feedback lol...@@krollic
They trained the bot to actually be racist, good intentions pave the road to hell.
Gemini truthfully reflects and generates what google thinks.
I remember OpenAI doing basically the exact same thing with DALL-E 2: people put "a sign saying" at the end of their prompt, and it'd appear in the image naming some race or nationality, because OpenAI inserted it into the prompt as text
I hope the mainstream is going to talk about it
AI alignment cannot be done by ANY selected group. Because bias is unavoidable. Alignment MUST be done by everyone.
Lovely approach, very much resonates with common sense. 😂
I'm pretty sure they just used an LLM for that answer.
AI is literally just raw statistics, and watching a megacorporation attempt to stop statistics from portraying anything "problematic" is pretty damn funny, because there's no plausible way they can do it without completely showing their hand in terms of ideology
And now it says, "We are working to improve Gemini’s ability to generate images of people. We expect this feature to return soon and will notify you in release updates when it does."
lol.
Google seems to think politicising one of our most precarious pieces of technology is a good idea... How could that ever go wrong?
I think the bigger problem is how much propaganda is in the text itself when Gemini provides answers. It never ceases to inject politics that push DEI thinking into its answers
This is the problem with injecting ideological bias in foundation models. It's like tying your legs together and then trying to run a marathon. It stifles *actual* progress and makes the entire industry look psychotic.
As if the tech industry didn't have a psychotic look before due to being married to marketing industry.
why do I get a 1984 vibe?
1984 by George Orwell is a great book. It describes how a Ministry of Truth changes history
I went to Stockholm once lol...
AI should do what you say (e bank art generator)
Google in panic mode about open source/freedom
I think the correct response for such models would be to satisfy most customers while trying to avoid some extreme stereotypes. If you don't specify the race of the person you want, display a person according to something between the distribution of the population and the distribution of the users of the app, or at least the projected future distribution of the users of the app. If the race is specified, just display the requested race. For historical people, of course their original race should be a default, unless otherwise specified. It's funny, but all other image models do exactly that from my experience, so it shouldn't be that hard. While I am all for inclusion and diversity, this result indicates a serious hyperbole problem inside Google, which has been weakening their business during the past 5-7 years, but I think this might be a realization moment for trying to fix the wrongs.
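The default policy described in the comment above (honor an explicit request, otherwise fall back to a population-weighted sample) can be sketched as a small dispatch function. The group list and weights here are purely illustrative placeholders, not real demographic figures:

```python
# Sketch of a sensible default policy: if the prompt names an ethnicity,
# honor it; otherwise sample from a weighted default distribution.
# The weights below are hypothetical placeholders, not real statistics.
import random

POPULATION_WEIGHTS = {"white": 0.6, "black": 0.13, "asian": 0.07, "hispanic": 0.2}

def resolve_ethnicity(prompt: str, rng: random.Random) -> str:
    """Explicit request wins; unspecified prompts get a sampled default."""
    for group in POPULATION_WEIGHTS:
        if group in prompt.lower():
            return group
    groups, weights = zip(*POPULATION_WEIGHTS.items())
    return rng.choices(groups, weights=weights, k=1)[0]

rng = random.Random(0)
print(resolve_ethnicity("a black nurse", rng))  # explicit request honored
print(resolve_ethnicity("a nurse", rng))        # sampled default
```

Historical prompts would need a third branch that overrides the sampled default with the historically accurate demographics, which is exactly the branch Gemini appears to have been missing.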
I think many people, or society at large, believe "there should be only a single absolutely valid point of view". That's funny, because the most intelligent philosophers found it's unlikely so. Everything in this world carries some kind of bias, from various points of view and across diverse groups. That's the really contradictory point: the people who crave diversity actually screw up diversity. Humans must ADMIT the biases caused by humans themselves (in the past, the present, and the future). In my opinion, the proper direction is suppressing the bad outcomes (like hate crime, war...) caused by some biases, not repressing, hiding, or distorting the entire EXISTENCE of bias. I think most biases themselves are not harmful. The real harm is the existence of people who use biases only for their own gain.
By the way, I really like Yannic Kilcher's thoughts from 14:36. I've seen a lot of insanity caused by anger eventually f**k up entire societies.
A person walking a dog is not universal at all. It's much more prevalent in the first world, and so are leashes; South Korea is another story, etc. Ideologues at the helm are Google's biggest commercial risk
Standard PR speech: "We take _______________ seriously."
I'm starting to hate Google
The Internet in a few years from now on will be FLOODED with AI-generated content, to the point that discriminating what was made by humans and what by artificial agents (not to mention "true" and "false" because we've already lost track) will be impossible - that's not my opinion but a fact. In this framework, being able to inject your agenda and your narrative directly into the content generation models will be extremely useful, because those models will then echo this narrative and viewpoint, filling the "information" space with it. A battle over the "free market of ideas" is being fought right now, I don't think we're fully aware of just how intense.
Wow, I expected more from Google /s
This was the main argument I have with AI: it's not that it's going rogue. It's that human bias is controlling it.
gemini = meme generation machine
Gememe
Great video, Yannic and yes this is the way to respond to such things.
@Yannic, do you publish anywhere else?
I'm asking because I wrote a comment about Gemini's program manager here, and as a result received a threat from YouTube that they will disable my ability to comment after they removed my comment.
I’ve seen anti gemini comments being removed during their debacle in December. Google censoring in favour of Google, who would have thought?
I almost broke down and bought YouTube Premium, but this episode changed my mind.
Awesome analysis 😂