I'm writing a new, original SOMETHING for Nebula. If you want to be able to see it when it comes out, you can get a discounted Nebula subscription now! go.nebula.tv/philosophytube
They sent me to conversion therapy for being left handed as a kid. After that didn't work, they bought me left-handed scissors and pens with quick drying ink. Weird how they resorted to conversion therapy first when all they needed to do was buy a few special office supplies.
@@user-vw4xp5nt9f Being left-handed and writing 'left to right', you are almost immediately moving the side of your hand against the words you just wrote. This often leads to smudging and a stained hand. If the ink dries quicker, this is less likely to be an issue.
@@sarainy9775 My sister is left handed and so is one of my school friends. My sister learned to write with her book/hand basically sideways so her hand is always beneath the writing to avoid smudging. Her handwriting is immaculate. My friend used a weird claw grip that kept his hand almost fully off the paper, which meant no smudging, but also meant his handwriting was atrocious and very slow because he was writing with no support.
@@BambiTrout I unfortunately learnt the 'no support claw grip' and my handwriting is both slow and atrocious. I'm also dyslexic, so the combination makes my handwriting awful... on the plus side it pushed me into typing from a young age, so I've had an 80-100 wpm typing speed for the past 25 years!
@@sarainy9775 I think it's really bad that despite no longer punishing left handed people, there's still very little consideration for teaching handwriting to left handed children. All of the lessons on handwriting and how to hold a pen are still built entirely around right-handedness, and lefties are just expected to figure it out for themselves and somehow keep up.
As a mechanical turk/microtask worker... You hit the nail on the head. The most solidarity we have is that we can warn others that a certain job is bad or scammy.
Machines started out reducing human labor, but then we worsened working conditions and broadened global connectivity to the point where human labor is now cheaper than machine labor.
As an AI engineer with both an academic and professional background, I find this to be an exceptionally good video; sadly I cannot share it with my coworkers and colleagues! There is an ungodly amount of resistance from engineers, scholars and business managers when you point out that people are not datapoints and you can't just take their data. Even if they consented to you having their data, that does not mean they consent to it being used for whatever you want. I have lost promotions and bonuses in my career for pushing back on unethical practices.
Feels, man. I'm internally debating if I should float the idea at work of adding a "not for training AI" clause to our open source licenses (which is technically not OSI-compliant, but forget that). Like, releasing our work as open source is essentially consenting to AI training on it, which isn't why we release it. We want other researchers and engineers to use the code we make, learn from it, and potentially bring us business. Allowing it to train AI does not further any of our goals, and could wind up making it easier for competitors to develop similar technology.
For whatever it's worth, thank you. For having the integrity and compassion to overcome all the battles with cognitive dissonance you must've experienced to arrive at your perspective; carrying the weight of the dissenting opinion is rarely easy or easily achieved. But we reap what we've sown, so there's profound value in those victorious battles in the psyche. The easy path is virtually never the correct one. Good intentions pave treacherous roads _when those intentions are grounded in hubris._ Checking our ego and tempering our desires (via radical self awareness of our psychological shadows (& others')) breeds far better outcomes. 🐢✅ 🐇❎
@@kimberleemodel7182 This is a very interesting observation. Mining copyleft, libre open source-licensed work is an opportunity to launder violations of the license by corporations who use "AI".
26:58 I'm glad I took the time to read this source fully. Sarah Andersen sounded familiar, and I recognized her wholesome and relatable comics as soon as I googled it, and seeing her response to AI knockoffs of her content was heartbreaking: " *I felt violated.* The way I draw is the complex culmination of my education, the comics I devoured as a child and the many small choices that make up the sum of my life. The details are often more personal than people realize - the striped shirt my character wears, for instance, is a direct nod to the protagonist of “Calvin and Hobbes,” my favorite newspaper comic. Even when a person copies me, the many variations and nuances in things like line weight make exact reproductions difficult. Humans cannot help bringing their own humanity into art. Art is deeply personal, and A.I. had just erased the humanity from it by reducing my life’s work to an algorithm." -Sarah Andersen
The Kelly Slaughter segment, about being at a business conference on lethal autonomous weapons, looking around the room, and realizing "oh my God, I'm the only woman here," is pure comedic gold. Such a succinct description of girl boss feminism.
@@josephrittenhouse5839 Reminded me of that ED-209 scene: an executive gets gunned down in front of the entire boardroom and all the CEO has to say is "I'm very disappointed." lmao
1: once again, I absolutely love the subtitles. 2: I had never really considered the human aspect of big computing 3: the absolute power move of doing the segment about getting AI generated adult content made of you in basically nothing is amazing.
As a Filipino, thank you. The continued economic exploitation of my countrymen is something that I do not see mentioned very often. It ranges from dangerous seafaring work, to continued land theft from indigenous tribes by big business cronies, to staggeringly low pay for digital creative work outsourced to us out of profit motive (my personal experience), and, at the very worst, assassinations of activists who speak out for the working class, orchestrated by politicians who are in the pockets of corporations. I can only see things getting worse for me and my peers in the creative industry as we are currently experiencing a drought in job opportunities. But videos like yours give me hope that we are not forgotten, and that maybe, even if not in my lifetime, countries like mine will be given what is properly due, and a proper global realization will be made that all the fruits of labor should belong to the ones who do it.
This makes me so sad to read. We in the West need to do a MUCH better job of recognizing the effects of our consumption and how our imperialism and colonialism have impacted your country and others like it. It's such bullshit, but know there are people all around the world hoping, praying, and taking action to fight for your liberation. Stay strong and much love.
I'm a writer and I work at a hotel overnights to make ends meet. One of my coworkers is studying computer science. We got onto the subject of AI, and she straight up said to me (I'm paraphrasing) that she could see AI taking art and writing, things that people want to do, but not automate "real jobs." It was insulting, but it highlights the "us vs. them" mentality that STEM-centric and art-centric people can find themselves in.
Be aware that you can already use AI to generate database code from a schema illustration, so it's coming for the 'real jobs' too; at this stage, just the more mundane parts of the task list.
That's the thing that gets me as a former computer science major. Everyone is looking to AI to automate the arts, but what AI is really good at is the mundane tasks that business majors see as "real jobs". We could have an economy where half the people are artists collaborating on the next big marketing push while the machines are crunching numbers in the background, but the bosses want a million pencil pushers and one lone artist who is doing the web design, banner art, and music score for some reason all for a nickel.
@rexs.5188 Artists, or creatives of any kind, have the uncanny ability to expose that which those in power want hidden. Remember that the Shakespearean fool is the one speaking the truth and exposing the ridiculousness of the situation.
Not enough comp sci programs mandate an ethics course. Mine did, and it was very eye-opening. What was more eye-opening, though, was to see the reactions of my classmates: most would never have even considered the ethical concerns.
god, the penis-detection-machine trauma is so real. i have had to talk to so many tsa agents, shame in my voice, telling them "yes, i am transgender. yes, i have a penis." and there's ALWAYS a fifty-fifty chance of them just being so outright disgusted with me for existing. when i was a teenager, one of them immediately grimaced, turned to their coworker, and said "i don't want to touch this thing, will you take it?" i've never had a single non-horrible experience with that damned machine.
My gods...I'm so incredibly sorry. That is a horrific and abhorrent way to treat another Human Being. I at once cannot believe that was allowed to happen, to you or anyone, but also struggle to believe how we can reach a place where that will never be allowed to happen... Take care, and I wish you enough in all you need to live and thrive in this world.
I love how the distance of the hammer from Abi in different shots correlates with how critical that segment is of AI and its uses and implications, especially when it's entirely absent in the Kelly Slaughter segment.
At first i thought it was gonna be a reference to the "hammerman" thing from the transhumanism video. Nope, just a good old capitalism smashing hammer.
Frank Herbert hit the nail on the head when he said, " the problem with machines is that they increase the number of things we can do without thinking..."
I was a postdoctoral researcher last year and my project was on ethical AI from a gender-sociological perspective. My project involved an industry stay at the biggest telco company in Spain. I was there for 4 months and my work comprised observing the implementation of an ethical AI pilot at the company and advising them on better practices given my sociological background. I was consistently dismissed. The people working on the pilot were marketing staff and engineering staff: no social scientists, no sociologists, no anthropologists, no philosophers, no ethics experts, nothing. I was told the social sciences were not really science and were biased. Any time I gave them my opinion I was ridiculed and pushed to the side. They were developing problematic AI for gender and race recognition purposes. I pointed out that it was not ethical, and the twists they took to reframe it as ethical... I felt gaslit. Ethics washing at its worst. I wrote a comprehensive report with my advice and plenty of literature on the topic to support my arguments. It was embargoed and I was banned from publishing. I decided to abandon the investigation. There is no hope with big telco companies doing shit like this.
Does make me think back to the Philosophytube video where she recalls being invited to speak about ethics, I think it was regarding climate change, and part of her response was "if you're asking me about ethics, the first thing you should do is _resign_ , you didn't even pay me!" Part of me thought that story was exaggerated or a joke but I'm definitely seeing the pattern of "We only really brought you in so we could pretend we give a shit."
I think there are still ways to expose such things with anonymity and plausible deniability. Not sure how much energy you have for this, but there's definitely some article that could be written and published by public media.
@@Ermude10 Unfortunately I signed a contract with them prior to my research stay which stipulates that I could be sued were I ever to disclose any information related to the company without their signed agreement...
Honestly, "Large Scale Computing" is a much better term than "AI. As I've often explained to people, what Chat GPT does is basically the same thing your cell phone does when you just hit the middle option on autofill. It just does it bigger. There are some points I would disagree with about the data flattening, but overall, I think this is one of the best videos on AI I've seen.
Lots of large scale computing is completely unrelated to AI or machine learning though. If you render the CGI for a movie, that's large scale but not AI/ML - the computers are just a tool controlled by the 3D artists. If you run a climate simulation, that's large scale but not AI/ML - the computers are just exploring the mathematical consequences of scientific facts. If you mine bitcoin, that's large scale but not AI/ML - the computers are just wasting energy in performing computations that the game called blockchain demands.
@@AbeYousef "There really is no conclusion to this video at all, and it doesn't get into why AI systems have to be developed." Probably because the creator doesn't believe AI systems HAVE to be developed. It's likely they WILL be developed. But the video isn't about why we should or should not develop AI, it's about the dangers of seeing AI as separate from the people who create what goes into it, and how it obfuscates the role of the people who actually create what goes into it. "If you're already into "workers of the world unite" type stuff then sure, you can get the dog whistles, but as a video on AI itself it is extremely short-sighted." There's nothing dog whistley here, it's very explicitly coming from a socialist perspective, from an explicitly socialist creator. And you say there's nothing actionable but... Understanding the dynamics between capital and labor, understanding which side of the divide you are on, and advocating for that side, that's actionable. If you're looking for something actionable on AI that isn't actionable on something else, then yes, there's nothing actionable on AI specifically... Because the point of the video is to address the way that AI is often presented as separate from everything around it, and that AIs are not simply another product of human society.
As an artist, thank you and the crew so much for doing this episode. "There is no ethical computation under capitalism" is a potent way of conceptualizing the problems with AI once you consider all of the exploitation that goes into it.
Philosophy Tube is an idiot, who either exemplifies why modern philosophy is so useless, or doesn't warrant the name. Water is often said to be a human right. Yet, the access to water requires monumental manpower and complexity. Artists would happily tout that they are entitled to water rights, correct? Yet, by contrast, access to AI Generation could be categorized as a human right under article 19 and 27 (Freedom of expression, and Right of cultural participation), yet the same artists touting their right to water are the same first ones wanting to ban access to AI Generation for the very workers who provide artists the comfort and luxury of being artists. As such, "There is no ethical access to water under any system" might be more precise.
@@justaweeb14688 Do you wanna point out to me just one example of a communist country making it common practice to assign jobs to people rather than letting people decide what job they want? Or of a serious socialist thinker advocating for that? Also, like, do you think that people can just be an artist now, without taking on a day job or side hustle? Because the starving artist is a meme for a reason.
I’m a bee farmer. It occurred to me as I listened along that an AI would struggle differentiating bees and wasps the same way it does gender. Like people know bumblebees are bees, but I take people on beekeeping experiences as a little extra income and now have a segment 5 minutes in where I pause with “now is the time I ask you how many of you were surprised that honey comes from these and not big fluffy bees? Who looked at these and thought they were wasps?” The number of people who sheepishly raise their hands and confess they thought honeybees were wasps their whole life until now is significant. Like an AI, the sophistication of their parameters by which they define things is inadequate. The difference is, no one then argues with me. I tell them the new information and they quietly assimilate it. So, am I as an apiarist more respected than medical and gender specialists? Or is the objection to gender science less material and more ideological than people like to pretend?
@@1SophieDEF1 it doesn’t help that most labels on honey will have a picture of a stylised bee with far more in common with a bumble than a honeybee. It’s just better marketing. There are consequences though. There’s an Attractive Lady Of TikTok that is constantly on my feed who has tattoos of a bumble bee on honeycomb and bumbles don’t make comb like that. I am constantly reminding myself “she’s committed to the tattoos now there’s no value in being an entomological pedant”.
The joke about diversity, equity, and inclusion seminars was absolute gold. I am required to plan one and they kinda make me feel sick. The people above me assume that if I make an event that says "hey, people are different, and we need to understand and accept that," then suddenly the racism, homophobia, misogyny, etc. that they get complaints about will disappear in a puff of smoke. It's all talk, and everything goes right back to the way it was once the seminar ends. There is no actual action taken to deal with the systemic problems. Don't get me wrong, we NEED to talk about stuff like this, but we need to back it up with actual change.
"welcome to the seminar. Today we'll be talking about social network analysis, dynamical systems, community ecology in the wild, and what you can do about sexism"
@@lancewalker2595 Why ask them what they actually want done if you're just going to ignore it? Like someone says that hiring, raises, and advancement should be done anonymously so it's as close to purely meritocratic as possible, and you just respond that actually they don't want the thing they just said they want, because you've got assumptions about what they REALLY think. Ignoring the fact that it's not even the same person.
@@lancewalker2595 I'd suggest asking the people who are directly affected by the problems where they think the issues lie, but you're hardly an honest interlocutor the way you carry on from this point. "Positive discrimination"? Risible.
I am not transgender, but as a fat person, the airport scanner also detects a lot of random lumps on my body as well. I had a breakdown in public a few years ago when I had to have a full body patdown because I dared to have a body shape outside the preprogrammed "norm".
It's also a scam in itself. As far as we know, that huge infringement on our privacy and dignity has not prevented a single planned attack! It's security theater.
I was wondering how this would affect fat people. Like, for a fat man with "extra chest padding," so to speak, would the machine register him as having hidden something on his chest? It's so messed up, and there seem to be so many obvious flaws with the system that it's a wonder people thought it useful in the first place.
I think this is one of the best PhilosophyTube videos yet. The flow, the visuals, it's so compelling. I work in VFX and I know there are certain people out there delighted by the idea of removing us from the equation with AI/ML, which is frustrating because it can absolutely be applied in our line of work in ways that enhance the artists rather than replacing them. That conclusion makes it all seem a little less hopeless.
I'm a cis male and I hate the penis detection machine because I have gross lymphedema in the, er, shagging area, and go through the same experience of awkward questions, getting groped, etc. I imagine that anyone else with unexpected lumps, bumps and artificial limbs has the same issue. One thing that I have noticed is that, as my condition got worse, airport security started directing me to the old-school metal detector instead. This is an accommodation that could be extended to trans people, but it's a complex problem when the aim is to treat someone according to their acquired gender. Having said that, it wouldn't kill them to add transfem and transmasc options to the penis detection machine, and maybe even give the subject the opportunity to make that choice themselves. Related: I appreciated your essay on the crisis in British healthcare because I have been waiting five years for treatment and it is *miserable*.
The problem is, if everyone walking through the penis detection machine makes their own selection, the very point of the machine -- preventing people smuggling dangerous items on board airplanes -- becomes moot. At that point, you might as well chuck out the machine. Then what are we back to? Racial profiling in airport security? This is the "tradeoffs" problem Abigail explicates with her hypothetical about the college admissions AI: to make the machine truly fair to people, you must eliminate or at least undermine its fitness for its intended purpose.
@@joshualavender I also had thought about that and was reaching the same conclusion as you. But then it sounds like the trans-panic on sports or gender quotas, as in "what if a cis man just declares he is a trans woman?" Maybe some kind of pre-registration integrated with some government system so the line at the airport gets your fingerprint and then draws the relevant information (like name changes, surgeries, etc.)...
@@joshualavender TSA and other airport security systems fail the vast, vast majority of the time at discovering terrorists in the first place. Airport "security" is for the most part security theatre that does not actually do anything to stop attacks.
@@L83467 That is an unfortunately impossible question to properly answer due to a lack of data publicly available. The best info I can personally find, very low quality and light on the ground though it is, suggests a few dozen drug mules are caught using these methods worldwide per day.
trans man here; i had heard that the airport scanners were called transphobia machines before; but i had always just kinda naively assumed that was a former problem that got fixed; until i was flying home for x-mas and got asked what was in my shirt. thankfully when i answered 'binder' the agent understood; but that moment stuck with me. it is so incredibly dumb that we have to out ourselves just to fly; and trans women especially get treated so poorly.
Not so fun fact about Frantz Fanon: he was not just spending some time in France; he was a French citizen by birth, as he was born in Martinique, an island that was a French colony back then and is now a French department (comparable to a region or federal state). One could arguably assume that his experiences with white French people shattered his belief in national identity as something that transcends "racial" markers and makes everybody equal, which led him to become one of the pioneering thinkers of postcolonial theory.
@@greywolf7577 Probably not. Although I'd argue that it is also not necessarily worse. Both, "race" and nationality, are social constructs and therefore pretty arbitrary categories. So who's to say which category is a better basis for an individual's identity? Also, to be fair, as a member of an ethnic minority, you don't decide what your "racial identity" is or how important this identity is in your interactions with your environment, the world around you decides that for you.
I'm a trans girl, but I've lived quite a sheltered and supportive life. Obviously it hasn't been perfect but I can't ever say I've felt in danger because I'm trans. Awkward, uncomfortable out in public, especially during the early stages absolutely. But everyone has always either been oblivious or very kind to me. That penis-detection machine segment was really kinda eye-opening, because, I don't know I suppose I never really connected that those genuinely scary and humiliating moments could happen so ordinarily and suddenly in my normal life. Scaryy :((
Minor correction about companies “not recognizing” that flattening is at best a gross violation: they know. They recognize it. Heck, they even recognize that it’s illegal. Either that or Microsoft made a “totally disconnected” definitely-not-subsidiary in Germany specifically for scraping data for lols.
Honestly, I wouldn't be surprised if the CEOs of those companies are just so out of touch that they never considered that us peons don't like having stuff stolen from us, but it's probably a mix of both.
While the penis detection machine segment touched on it, there is also the issue of how AI is being used in the medical field. It's being marketed as a way to avoid human error: an AI program looking at an X-ray, having learned from millions of X-rays, may catch anomalies a human might miss. But I really worry about it being used harmfully. Thinking back to the problems of the gender identity clinics and the NHS episode, imagine if they go "well, the bottleneck is there aren't enough people to ask these outdated, invasive questions," so they make it a questionnaire and get an AI to look at people's answers, which, as Abi pointed out in that video, people lie on to try to get the healthcare they need. So you have an AI learning from lies, but since you can't know how an AI makes its decisions, who's to say it won't go "oh, these answers are too perfect, they must be lying, deny"? Really scary, but something it is easy to imagine being implemented.
AIs are not (currently) being considered as any form of replacement for Doctors. A real Doctor would still view anything medically relevant. The issue is indeed humans miss things! So AIs can be used to supplement them, not replace them. "What if people use AI to do this thing" is not really a point against AI when there is no plan to get AI to do such a thing. AI won't replace doctors, except maybe for simple cases where medical care probably isn't needed, as long as the AI is demonstrated to be on par with or better than human doctors for the task. Why worry that an AI might make a mistake when a human doctor would've been more likely to make one? Also, in the US doctors are expensive. People miss chronic diseases all the time because they don't want to go to a Doctor, either because they think it's not worth the cost or because they don't want to inconvenience said doctors. Imagine being able to just get your phone out, snap a picture of whatever lump or bruise you have, and the AI tells you if it thinks you need a doctor's visit. "But what if it misses something?" Well, you weren't going to go anyway so the AI won't have harmed you. The question is what if it doesn't? Then it's helped you.
honestly you made an AMAZING point, because you just summarized not only the problem with standard questionnaires, you also added what we saw in her video about police and computer crime models: bad data in, bad data out
oh, it's already a very distinct issue in existing experimental models, where the AI is much worse at detecting issues in people of color, because of the biased training data and worse healthcare for poc
Last year, in my high school, an employee who works on ChatGPT came in and gave a quick seminar/Q&A. One of my friends asked, "Do you feel ChatGPT and AI is ethical?" and the programmer replied, "Listen, man, I just get paid." And my peers gave him a round of applause.
These are some really good ideas and thoughtful discussions about this rising technology. Utterly useless, but good and thoughtful. Trying to stop, regulate, slow down or even challenge this thing will be akin to that lady trying to stop her car sliding on the ice by opening the door and putting her leg out. This tech will only get more powerful and harder to distinguish from human-made materials, and it's doubtful we can possibly predict how thoroughly this is going to take over every aspect of our lives.
@@themachine5647 yeh, for me it's just another innovation revolution whose consequences we can't foresee. like the industrial revolution, or plastic. :(
"Ethics" ARE the guide rails for the most skilled, talented (formidable 🤔) individuals in society. Indeed, Ethics are what defines a "Profession." So if someone claims to be a "marketing professional" then they SHOULD have an Ethical Code governing their conduct.
Due to a hormone imbalance, even though I was born male and identify as male, many people (and these algorithms) can misidentify me even though I am not trans. So this is not even just a trans issue; it can affect others, possibly even amounting to medical malpractice.
Yeah, there are a lot of ways this can misidentify cis people too, depending on what metrics the algorithm uses. Is it basing it on height? Cool, computer says short men and tall women don't exist! Is it basing it on the shape of the chest area? Cool, computer says the woman who got a mastectomy to treat her cancer is a man! Is it basing it on face shape? Cool, computer is a phrenologist now!
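A toy sketch of the first failure mode above, with a made-up height threshold: any single proxy a classifier leans on will confidently misclassify the people the proxy doesn't describe.

```python
# Hypothetical proxy-driven classifier: the threshold is invented for
# illustration, but this is effectively what "gender from height" would do.
HEIGHT_THRESHOLD_CM = 170  # arbitrary cutoff a naive model might learn

def guess_gender_from_height(height_cm):
    """Tall -> 'male', short -> 'female': the proxy, not the person."""
    return "male" if height_cm >= HEIGHT_THRESHOLD_CM else "female"

people = [
    ("cis man, 165 cm", 165),    # "computer says short men don't exist"
    ("cis woman, 182 cm", 182),  # "computer says tall women don't exist"
]
for label, height_cm in people:
    print(label, "->", guess_gender_from_height(height_cm))
```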
27:00 this is a thought i've had for a while now - even just the idea of using AI to replace actual writers goes to show how much "providing more content to consume" has taken priority over "making good things"
A very early draft of the first Matrix film had Morpheus explain to Neo that the humans enslaved by the machines were being exploited for their processing power. In other words, each mind connected to the Matrix was being used as one of many individual and interconnected servers. Data labeling makes it sound like that nightmare scenario dreamt up by Lily and Lana is already here!
Except it's greedy capitalists doing this. I think I read this in "The Age of Surveillance Capitalism" or in "Weapons of Math Destruction" by Cathy O'Neil.
The thing people tend to ignore about Sci Fi is that it isn't coming true because "oh wow, this person was a prophet or a genius or something!" but because "oh yeah, actually that's a pretty obvious exaggeration or allegory of what is and was already happening, maybe we should do something about that". Sci Fi has been described as a modern form of philosophical thought experiment - you take an idea and you exaggerate it to the fullest extent you can imagine, and your story then revolves around the discussion, dissection and debate around that idea which becomes more apparent from the exaggeration. Sci Fi about androids becomes discussion about the human body, brain and experience and how we currently view human bodies and experiences - this will usually come with discussion on how we view the _differences_ between human bodies and experiences and you naturally end up with discussions of race, disability, gender, age etc. The ones that end up "predicting" the future are usually the ones that stop to really engage with their topic rather than going for the obvious and unexamined take so there's more room for smouldering actors and cool tech causing cool explosions (which can be fun and isn't always mutually exclusive)
It probably helps retention. I'm certainly not an expert, but the double encoding of "hot damn, that's an outfit" and "hot damn, that's an argument" at least makes me remember the show a bit more.
The quality of your content just keeps getting better. Sidenote: I'm a cis woman who is frequently mistaken for a man due to my haircut and baggy clothes. I've been pulled aside for a "groin check" more than once when going through airport security recently and was generally confused as to why. I hadn't even considered that. I wonder if that explains it.
Girl same! Go through the PDM, no peen detected (and also no tiddies I'm nonbinary), machine is like ??tf then they get the male agent to pat me down lol 🤦
Aw man, a Philosophytube video about my field of study!! Awesome!! I'd love to add something about the correctness of counterfactuals if anyone's interested :) Proving counterfactuals is actually pretty straightforward! Let's say the input of the AI is the application & resume and the output is "NO". Finding a counterfactual is a whole ordeal, but once you've found one, like "if your resume had just 1 more month of experience in X, the AI would've said yes!", you can simply use the application & modified resume as input, and the output should then be "YES". A far bigger problem with AI counterfactual accuracy is that most AIs are constantly learning and adapting. So if we tell the applicant they need 1 more month of experience, which is true at that time, but they come back one month later with the new experience, the counterfactual might no longer be valid because the AI might have become stricter. Interestingly, there are (flawed) ways to restrict the learning of an AI so that it works within the bounds of the counterfactuals it has given! So that it can adapt and learn, while promising not to do so in a way that would reject that one case + a month of experience :)
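A minimal sketch of that verification step. The model and feature names below are invented for illustration (the comment doesn't describe a specific system); the point is only that a counterfactual explanation can be checked by re-running the same model on the modified input and confirming the decision flips.

```python
def model_predict(applicant):
    """Stand-in for the trained model: accept if experience_months >= 24."""
    return "YES" if applicant["experience_months"] >= 24 else "NO"

def verify_counterfactual(predict, original, changes):
    """True if the proposed change really flips the rejection into an acceptance."""
    modified = {**original, **changes}
    return predict(original) == "NO" and predict(modified) == "YES"

applicant = {"experience_months": 23, "degree": "BSc"}
counterfactual = {"experience_months": 24}  # "just 1 more month of experience"
print(verify_counterfactual(model_predict, applicant, counterfactual))  # True

# The caveat above still applies: if the deployed model keeps learning,
# re-running this check a month later may fail, because model_predict
# itself will have changed by then.
```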
The segment about airport security scanners reminds me of the game "Papers, Please" and how these exact scenarios sometimes come up. You'll get travelers who are gender ambiguous or gender nonconforming, and their presentation will not match the sex marker on their ID. Eventually you get a scanner, which is actually more sophisticated than the scanners we have today and can readily distinguish contraband, even though the game is set in 1982. Not only that, but legal provisions are also put in place to account for these individuals. In other words, the system in this fictional dystopian authoritarian setting is better equipped for these scenarios than the system in modern western countries IRL. That should tell you something.
I don't know too much about how things are done irl but I had assumed it was implied that the "scanner" is just taking photos of them undressed, hence the contraband showing up and the optional nudity. I also don't remember there being any special condition for them? I thought that you were penalised for letting them through with a "mismatched" gender marker
@@spameron7575 It doesn't look like they're made to undress, as they're fully clothed immediately before and after the scanner activates. Otherwise the player would just tell them to strip, and there would be no need for the curtain.
@@FrozEnbyWolf150 I had assumed the curtain was so there was a degree of modesty, and that the violation of having those pictures taken was just part of the border. Also the fact that the pictures are handed to you as polaroids had me thinking it. Would it be ionising radiation used in scanners? It's the first other thing that comes to mind
This is a very powerful video. I'm currently studying graphic design and AI has changed the landscape so much, just in 1 year. I am constantly flabbergasted by how these people who are thoughtful, insightful, smart and educated talk about data as if it just exists in a vacuum. If you begin to talk about human labour and exploitation they immediately shut it down with "but everyone does it" and "we have to work with it or it will replace us". I feel like we're in a hostage situation and everyone is denying it, talking about "new possibilities," as if those aren't built on the backs of people who worked hard and dedicated their lives to their craft.
Well that sucks. Am a designer for 5+ years and I can't believe it's already being talked about as a normal accepted thing when schooling's core purpose should be to teach and reinforce the fundamentals of design first. Y'know, teach the rules well before y'all go breakin em once you graduate. Where's the development of ideas (more than just the first 3 that come to mind) when a student can just ask AI to generate 20 versions of a logo? Will they even be equipped with the knowledge and skills to pick out the good ones ??? Geez... this is depressing to hear tbh.
@@elucified To be fair, we are encouraged to use it more as an idea visualizer than an idea generator, but I still feel shitty about it, like I'm betraying the people whose lifeblood went into this. During my bachelor's, my teacher had real reverence and love for the art of typography and taught it with passion. Here, sketches and illustrations are trashy and kitschy while impersonal AI-generated stuff is "visionary" and "inspiring." It sucks so bad :(
I totally agree, but I also believe that the 'genie is out of the bottle' and we have to learn to live and work with AI. It's not going away. If anything, more and more of the software I use has it integrated now. The expert graphic designer knows what good design looks like and how to communicate information; they provide value beyond the AI. But with the ubiquity of AI, everyone can claim to do it themselves, and there will be floods of awful art and design coming from it :-( Smartphone cameras have done a similar thing to the old-school skill of photography.
When I was in my final year of graphic design school, someone used AI in most of their final design, and they were heralded as one of the best on the course. It feels so ironic we had an academic project on efficiency vs ethical design in second year and then a 180 in thought happens right before employment
There is plenty of AI that is helpful to creatives and their processes, i use a lot of it every single day as a motion designer. But this AI "boom" over the last few years has really disheartened me as a graphic designer because I think we're truly seeing how most people view creative works. They take creative professionals for granted CONSTANTLY, and so when we speak out about our works being used to train AI to essentially steal the small work that allows us our livelihood, we're often met with indifference from the majority of people. Because they do not appreciate the human element of our work. They only appreciate the final product. And if they can get a fuckin Temu version of my artwork for a fraction of the price, that's good enough for them.
22:42 i love how excited you get about propping up your coworkers and the people who help make these videos happen. Also, that NOVEL of show notes is a testament to how much you care about your work and the way you hope it impacts the world. Thank you for giving me a little more hope as an NB (and absolute philosophy nerd) new to the space.
Dayum Abby. In one video you've basically changed my view of AI from "alien and potentially hostile form of intelligence that's exemplifying the worst of capitalism" to "reflexively parasitic crystallization of the worst of capitalism". The part about people doing subemployment to act as little more than neurons in this strange excuse for a brain was really eye-opening. "Once upon a time, men gave their thinking over to machines in the belief that it would make them free. But it only allowed other men with machines to enslave them." - Frank Herbert, _Dune_
I used to work in one of those data annotation offices and we could only guess what it was we were training, probably a bunch of things at once for different companies. Tasks differed, but at one point my colleagues "mined" YouTube for footage of gun and knife violence and I almost had a meltdown thinking I'd be on that team next. Thankfully I could be moved elsewhere. Just a whole office of minimum wage workers pushing buttons for the machines. I always think about the people who do these jobs when people talk about AI as if it's teaching itself. I could never do my job fast enough for the management and even got RSI from it. I don't think my injury was worth it at all. I'm Ukrainian btw.
"there's no ethical computation under capitalism" is essentially how i feel about my work as a dev. we're not hired to make ethical and green code, we're hired to make a tool to unemploy someone else, as quickly as possible
I don't think unemploying someone else is exactly the problem being pointed out here. It's more the distribution of wealth itself. The main issue with anything unemploying others is that it siphons money back into the select few who have the most power. I don't think we'd want to go back in time and stop trains from being invented just because the people who previously transported goods would lose their jobs.
Capitalism is the only ethical system. Without private property there is no morality. If you can't own anything then you can't make any ethical decision. There's a reason why humans do better the more markets and private property are embraced.
We should unemploy as many people as possible, so that people can stop working, or work in actually socially valuable jobs, instead of artificially employing people even though we have an automated solution.
@@AbogadodeAsmus The point is not to find a way to employ everyone with other menial tasks that existing technology already does. Menial tasks that no one wants to do in the first place. The point is to free people from wage labor. Give them the freedom to choose how they spend their time and labor without forcing them into servitude under the threat of homelessness, destitution and starvation. Make technology work for us and not the other way around.
@@ret2pop This will require implementing a Universal Basic Income _first_, because otherwise those people can't actually stop working or they'll starve, leading to de facto slavery at best. But even if we manage to get UBI through, it will be constantly under attack from the right wing, since it's fundamentally against the idea of a stratified society, which is the core of right-wing ideology. It's capitalism itself, and even more generally the idea of hierarchy - specifically, the division between the rich, who are allowed to profit without actually doing anything useful, and the poor, who have to earn every single penny through hard work and, if such work is unavailable, artificially produced busywork - that's the problem, and it's quickly becoming a fatal one. I wonder if that's the actual Great Filter: a primitive society becomes hierarchical because that's an efficient way to organize military power, so any tribe which does that forces its neighbours to do so as well, and by the time technology advances to the point where society has to give up hierarchy to survive, it's too deeply entrenched and the whole thing collapses into a few remaining plutocrats ruling over dead but automated ruins, dying off one by one.
hey!!! i’m doing a research project on essentially the entire second portion of this video (ai, surveillance, and trans identity) and i found that simone browne’s book dark matters and toby beauchamps’s book going stealth were really helpful in my understanding of surveillance and how it’s used against minority groups (if anyone wanted to do any further reading) :D
I've seen 'Going Stealth.' It assumes that it's oppressive for institutions to expect someone to self-identify as a person requiring an accommodation (as opposed to the institution anticipating your need). That's a really silly premise to write a book about.
@@bambooblinds Ummm I think you may have misunderstood the premise, or at the very least are misrepresenting it. Trans people should not require "accommodation" - we should simply be able to BE, in the same way that cis people are. The idea that we pose a unique challenge to society is simply a result of our society doing a lot of discrimination based on colonialist and misogynistic views of gender. Given that the book also discusses the use of surveillance to enable differential treatment of people of different races and nationalities - entirely constructed categories - I think that it is misleading to present the book as just some whining about "uwu the state didn't anticipate my needs; I'm oppressed!". It's a deconstruction of how the state uses surveillance as a specific tool to both enable and justify oppression.
@@BambiTrout nope, i'm not misinterpreting anything. i'm just not buying the arguments that being trans is something other than a disability. really, i don't think anyone actually buys into that, although many go along to be polite or avoid making waves. you're right that comparisons are made to race and nationality, but those are bad analogies. those are cases where you're debating efficacy in having profilers designate signs of a security risk (or sometimes partial descriptions of a suspect) and how that should be balanced against unfairly discriminating against innocent people who belong to that category. the appropriate analogy for trans people would be more like someone with mobility issues needing special assistance or someone with an implant that will trigger metal detectors. it's a case of needing accommodations, like i originally said - and it's the responsibility of the individual to ask for it.
We can make ethical AI. But no marketing or sales department in the world wants one. They want an AI that maximizes profit for one party regardless of the interests of another party.
I would love to see a full episode on the subject of subemployment. I don't know if there's enough material there, but as someone who's recently become too physically disabled to hold even a steady part-time job, it's an issue that's very important to me.
Same! It feels like it's adjacent to gig work and under-employment (e.g. not enough hours to be fully employed and get benefits or overtime, so you have to work multiple jobs).
@@emilyrln Maybe an episode talking about those issues as well? That would certainly provide a lot of material. Still no idea if it'd work but I'd for sure be interested.
There kind of is a lot, but not necessarily material to build an episode around, since a lot of the discussion is economics rather than addressing the human element. There's a reason half of the poverty rights activists I interact with are wonks: they have to be, to dissect what's going on in any meaningful manner. And the political discussions rarely address the human element adequately, despite efforts to force pols and the economists they listen to to do so, especially since some of the stuff pols rely on is outright lies.
There's a great study by Harvard Business School called Hidden Workers: Untapped Talent. It goes into how applicant tracking system software causes at least 27 million Americans to be unemployed or subemployed. It disproportionately impacts us disabled people, even though there are almost always accommodations available for the jobs we apply for. 90% of the time it's stupid formatting errors, like having multiple columns, that cause your resume to not parse correctly.
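A hedged illustration of why multi-column resumes trip up parsers (no specific ATS product is being modeled here): a naive extractor reads the page line by line, left to right, which interleaves the two columns and garbles the work history that a column-aware reader would recover intact.

```python
# Each tuple is one visual line of a two-column resume: (left column, right column).
two_column_layout = [
    ("EXPERIENCE",              "SKILLS"),
    ("Data Analyst, 2021-2024", "Python, SQL"),
    ("Cashier, 2018-2021",      "Customer service"),
]

# Naive line-by-line extraction: the columns get interleaved, so the
# experience section no longer reads as a coherent block.
naive = " ".join(f"{left} {right}" for left, right in two_column_layout)
print(naive)

# Column-aware extraction: what the resume actually says.
columns = [" ".join(column) for column in zip(*two_column_layout)]
print(" | ".join(columns))
```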
as someone transmasc, I knew exactly what you meant with the body scanner section. Multiple times I have been scanned and there's the much less humorous equivalent of "BRO you're MISSING SOMETHING"
This was excellent and I feel like so much of this desire to push writers out of a living wage is an extension of the idea that our work isn't "real" or "valuable" because it isn't inherently profitable. That whole lol at the liberal arts/humanities majors chickens coming home :(
I think it also confronts our human desire to feel special, that our creativity isn't bestowed by gods. Other avenues of story creation are just as valid as the ones our minds create. I think the biggest issue is the destruction of capitalism, and the concept that we have to slave our lives away in order to feel purpose.
I am a data science teacher, and in our bootcamp we do have a lesson (unfortunately only 1 in a 3-month course) on data ethics. I really liked your video; it explains the problems with AI pretty well. I will recommend it to my students. As a teacher, I feel responsibility towards those issues.
I remember going through airport security on a class trip. Only two people got pulled aside: me (who, it turns out, had multiple strikes all related to my transness) and a classmate (who is cis but was GNC). We found out about the genetic sex buttons and it was... frustrating. I've never gone through airport security without being patted down. That's why I laugh when people say the machines are unbiased.
They're not 'genetic sex' buttons--the TSA schmuck at the screen is not blessed with the ability to karyotype people at a glance. It's a measure solely of how the schmuck in the seat with two buttons categorizes the person in front of them.
You could make a claim that the machines could be unbiased. However, the people behind the programming insert biases and the people pushing the button rely on and insert biases.
@kevinbarbard355 The programming is what runs the decision-making in the machines. Sure, machines aren't inherently biased, but what people are talking about here isn't the inherent nature of a machine, it's the decision-making capability. And that capability is given to machines by biased humans. Ergo, the machines are biased because humans are biased.
14:30 I think Papers, Please has a very good example of this. In the game you play as a border security guard and have to make decisions about who to let through the border into the country. You also have a body scanner to check for contraband, as well as whether the gender on the passport matches the traveler's body, and there's one specific instance I recall of a character passing through the checkpoint with the exact set of circumstances described: their passport doesn't match their body, but when you question them about it they simply respond with "yes, I'm aware of that." You then have three options: letting them through, which gets you a citation; sending them to the detention facility, which gets you a bonus thanks to an arrangement with one of the guards; or simply turning them away and telling them to correct their documents. It's an interesting ethical dilemma which also shows how easily someone put in that border guard position can make decisions that are inherently judgemental, even if they themselves don't harbor any personal resentment towards the person passing through the checkpoint.
What a great point. And in this it becomes clear that at the end of the day, it's not the individual, little people we need to blame - it's neither humane nor productive to blame them. It's the oppressive systems we have in place, that's what we have to focus our efforts on and change.
As someone who is trained in both political philosophy and tech this is a fascinating segment. Your humor makes it even better. Your stylists are amazing.
I've been working backstage for Google AI Summit events. Thank you so so so much for giving me the language I need to express why I'm so hesitant about all of the advancements they're breathlessly announcing.
Nebula-head here: this is such an incredible video essay. It feels like a return to Season 2 in some ways. The video is a deep exploration of how large-scale computing does not solve issues with art and labor but instead exacerbates them. (I enjoy the Kelly Slaughter bit - a great Season 3 addition overall, but notably in this video - as something of an exercise for the viewer in dissecting what is said and, more importantly, the assumptions left unsaid.) There are so many excellent resources - not just the texts but also the TrashFuture episodes connecting the Nate Bethea Extended Universe - which will find their way into my To-Be-Read list. There has been much to digest since I first saw the video. It is incredibly timely, well-researched, and insightful on the issue.
As an AI safety researcher, seeing my favorite philosophy channel cover the topic was amazing! Excellent work detailing the tradeoffs between fairness and accuracy. I'm taking a class covering the subject right now, and buried beneath all of the complex math there are some really startling realizations that you explained beautifully. We need to seriously consider whether we WANT to live in a world where every decision is made by some unknowable black-box algorithm. I worry that regardless of what people want, the military applications of AI mean that the technology is going to continue to be pushed forward at breakneck pace. While it's true that too many people are only focused on Terminator-style risks of the alignment problem and thus ignore many of the already present issues, I do think there is some serious risk we should be aware of as we continue to get closer and closer to human-intelligent systems. My overall opinion about AI is that we are playing with something that has the potential to fundamentally reshape society with little understanding of how it works. Our current capitalist organization of the economy is one of the worst-case scenarios to be doing this in, and we are set on a serious path towards dystopia or extinction if we don't rein these companies in. Maybe someday the post-labor utopia promised by AI visionaries will be possible, but rushing to shove AI into every facet of society as quickly as we can is a sure way to make sure we never reach that vision.
Within that, what do you think about the point that AI may be seriously stunted by the climate crisis? Can militaries keep pumping money into it if it can't function due to physical realities? (I know this flattens a very complex question into a single paragraph 😅 but even something as simple as the workers behind the scenes being unable to function due to heat stroke could hugely affect it.)
There is a serious arms race happening with AI. I think it is hard to avoid, but I do think we need to figure out how to handle living in such a world. I honestly do not think we can stop it. (And the benefits might actually be worth it too, if we are being honest.) But like pretty much all technology, it is a double-edged sword. We had better figure the thing out, develop new policy, set up new norms. Honestly, we are still working out the norms for IT society at large. Most people in the developed world did not start coming into contact with computers until the 80s, and even then most did not own one. In the 90s, people started to go on the internet. Social media as we know it did not start to get traction until after the millennium shift. So yes, we have not set up norms for how to do this, how to build a fair and safe IT society. If anything, exploitation is returning to what it was when industrialization was new and we had no norms. I just hope we can adapt in time. In many ways I do not envy you AI safety researchers, since even if you do make a safe AI, can it really be safe when it is in the wrong hands? Like it is often today.
In a lot of conversations about AI, I notice we over and under estimate the competence of humans and AI (current and future) on different axes. Humans are also black boxes full of biases. Humans also launder their positions. Our brains do so much post-hoc justification of decisions which do not originate from reason. One interesting thing about AI is that we can measure and (to some extent) tune its biases. The tuning (training) is harder with humans. Having been the victim of human bias in healthcare, I think that in the narrow case of my personal medical journey, a well-tuned AI would likely have done better at diagnosing and treating me for my rarely-diagnosed condition. I would like to live in a world where AI was part of the diagnostic process. I would not like to live in a world where it was the only thing in the process, as you say. I share your concerns about AI in general. Neural networks with billions of parameters can find themselves in an absurd number of different states. Those states are not entirely reducible and there’s not enough time in the universe to play through every state and input and validate the output isn’t catastrophic. I don’t know how we can possibly have confidence that they will be well-behaved given that. We definitely don’t know how to be confident yet. Is it theoretically possible? Perhaps I’m not expert enough to understand what’s possible, but I also haven’t found an expert with a clear case for why it might be possible. Of course humans have the same problem. But humans are also slow and mortal. My concern with AI isn’t that it might be misaligned- humans are misaligned. My concern is that it will be misaligned and vastly more powerful. It can move faster than us, integrate its thinking with tools in ways we will never be able to do, hold more in working memory…
@@lkyuvsad For me, the black box issue is not a huge one. Like you point out, humans are in many ways also black boxes. Even when you ask them to explain their reasoning, they might lie, or more often not actually know what their reasoning really was. For me it's more about how they can be used; that they are that powerful a tool. And it is often more a case that a human could do the same, it just takes a lot longer for a human to do it.
I think "AI extinction risk is a distraction" *is a distraction*. People want to see this unprecedented and horrifying problem through the lens of problems they already understand. The problems liberalism is designed to solve. AGI risk is not like that. These other problems are big and horrifying and almost intractable. They are not going to kill literally everyone on earth. Training data IS NOT ALWAYS PRODUCED BY HUMANS. I need to disabuse people of this misconception on a daily basis. Take AlphaGo Zero for example: it was given only the rules of Go and it trained itself with training data it produced itself until it produced strategies superior to the best ones the top human experts have come up with over the past thousand years or so - and it did it in about a day. It beat the best humans - after having only ever played itself. AGI on the cloud would not be dependent on a capitalist system to kill everyone. It need only hire a few task rabbits - or get access to improperly secured robotics in a lab somewhere, bootstrapping technologies including nanotech rapidly and bypassing physical constraints in ways the cleverest humans could not even have imagined. Say what you will about those who believe that these systems are so profoundly risky that we need to shut them down in a global moratorium. Go ahead and claim that shutting down training runs for systems larger than ChatGPT-4 is a distraction from the real problems. What you would be missing is that a moratorium *WOULD SOLVE OTHER PROBLEMS TOO*! Sam Altman is playing the role of someone concerned about AI safety - to get the government on his side. To allow him to keep doing what he's doing. As a strategy for regulatory capture. As Yudkowski and others point out, if he actually realized the extreme danger involved here he would be taking this problem with vastly more seriousness than he is. This frame of philosophy tube is saying: Look at how unaligned Capitalism is. Capitalism is the real problem. AI risk research is just AI "Doomerism" is just the shadow side of the AI utopianism which is just AI hype based on marketing from crypto scam artists. Except it's fucking not. Because like it or not artificial general intelligence is almost certain to be profoundly powerful, almost certain to come (I suspect in the next few decades) sooner or later, and almost certain to have goals misaligned with any human individual if we don't solve this fucking problem. We aren't ready for this, we don't know what we're doing, and we are all most likely going to die because of this. This is terrifying. I've spent most of my life learning about biotech and synthetic biology and have learned some pretty terrifying things. I've studied global factory farming and the conditions of slaughterhouses. I've studied the red market in China. This is far, far worse. There's a powerful temptation: think about other things, distract yourself with other problems that are easier, project onto others - as if they are the ones not prioritizing correctly, only look at things that can be well-described by the cognitive schema you're already used to using. Let me state this directly: Abigail is wrong. She is not seeing the real danger. She is not properly understanding the scale and severity of the problem and she is dismissing it as hype because it is too hard and alien and painful for her to accept. It's too extreme to be real. Absolutely everyone dying unless we solve this insanely hard problem? Where the fuck did that come from? Has the cosmos has gone mad? 
How can anything be so unimaginably dangerous? When the fuck did this become the universe I live in: one in which so profoundly little hope can possibly be justified? Except that's exactly where we are: a universe in which we all just die like weeds in a blast radius unless we get our shit together fucking NOW. So blame the billionaires, blame the economy, blame capitalism, blame technology, blame marketing, blame the politicians, blame the fucking safety researchers for not blaming the right people and things: it won't make a goddamn bit of difference unless we Solve The Fucking Problem. Ok? You want to wear extremely expensive bondage gear and complain about hype while you do it? Fantastic! But DO SOMETHING. I fear for the lives of everyone I have ever known. I fear the universe that comes. I know how bizarrely cruel and indifferent it can be. How fucking impossible its challenges can be. How utterly unforgiving and steep and hard and cold its costs. We ALL need to work on this now, now, now - for the sake of unborn persons and living persons, and against the possibility of the death of humanity for all of coming time. NOW!
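The AlphaGo Zero point in the comment above, that training data can be generated entirely by machine self-play rather than by human labour, can be illustrated with a toy example. This is only a sketch under loose assumptions: it plays tic-tac-toe with a random move policy instead of AlphaGo Zero's Monte Carlo tree search and neural network, and every name in it is invented for illustration. It just shows where machine-generated (state, outcome) training pairs come from.

```python
# Toy illustration of self-play data generation (NOT AlphaGo Zero's actual algorithm):
# the machine plays tic-tac-toe against itself with a random policy and labels every
# position it saw by the eventual result, producing training data with no human input.
import random

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game():
    """Play one random-vs-random game; return the positions seen and the final result."""
    board, player, history = ["."] * 9, "X", []
    while True:
        history.append(("".join(board), player))
        empty = [i for i, v in enumerate(board) if v == "."]
        board[random.choice(empty)] = player
        w = winner(board)
        if w or "." not in board:
            return history, (w or "draw")
        player = "O" if player == "X" else "X"

# Generate labelled examples with zero human annotation.
dataset = []
for _ in range(1000):
    states, result = self_play_game()
    for state, to_move in states:
        label = "draw" if result == "draw" else ("win" if result == to_move else "loss")
        dataset.append((state, to_move, label))

print(len(dataset), dataset[0])  # several thousand examples; the first is the empty board
```

AlphaGo Zero replaces the random policy with a neural network guided by tree search and keeps retraining on the data it generates, but the basic loop sketched here, play yourself, record positions, label them with outcomes, train, is the same shape.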
I can't recall if I've ever remarked on it previously, but I quite adore your closed caption descriptions of your musical inclusions. I feel like, given the gravitas of the subjects you tend to tackle, there's a decent chance that if I've commented at all it was a more substantial interaction than a passing compliment, so I'm dropping this one IN ADVANCE of all the brilliant and interesting things you're about to say.
bless who does the captions they r so expressive and engaging and fun and descriptive and i like always need subtitles to understand things and they're always just so 'rock music in backround, wordforwordwhatwassaidwithnocontextandsilencenotfilledin' bless u
@PhilosophyTube hehe what a sweet thing to put so much time and thought into ! looking forward to more onomatopoeia and typographical expression ! know it is rad as hell and ur appreciated :)
I have tsa precheck, so I get directed through the metal detectors nowadays, but when I was traveling on a choir trip as a fourteen-year-old they put me through the body scanner and then pulled me aside to do a light patdown of my chest. I'm sure that my chest binder and the shape of my body underneath it set off something there; the agent who did it was polite but I was standing there afraid that someone from my group would notice and I'd have to explain myself--not a single person on that trip knew I was trans (I hadn't come out to them). And, y'know, I feel like the Pull Teenagers Aside For Patdowns In Sensitive Areas machine is maybe, perhaps, something that we as a society should reconsider there.
The only solution then is to just pat down absolutely everyone again. These scans exist for a reason, and we don't need another avenue for another war to start; there are already enough of those going on.
my sister was recently fired from her job at walmart due to a decision made by an AI. she was the sole breadwinner of her family, her husband is disabled and they've both had to pay for a lot of medical issues both relating to and not relating to that. they live in a very small town basically in the middle of nowhere because its what they can afford. im glad my parents have been able to help them out a lot because i dont think they could get by otherwise, and of course i worry about their kids too sometimes
@@jokehu7115"they live in a very small town basically in the middle of nowhere because its what they can afford." I think it may be that they are in the middle of nowhere and can't afford much
I love your shout-out to your crew! And I love that you respected their request to not appear on screen. As someone who works/worked behind the scenes, I have been included in promotional content for shows without previous notification or without being asked for my consent and it is really uncomfortable. Also - the newspaper outfit was fantastic!
absolutely noticed that too! it's great to be given some perspective on the sheer mass of work that goes into every aspect of production, but having it done while maintaining the privacy of those who want it kept makes it all the more enjoyable
Thank you for this amazing video! I've noticed AI researchers tend to downplay philosophy's perspective and contributions to the field, but it's so important now more than ever when dealing with ethics.
As both a Computer Science major and an artist, this episode has been such a blessing. I've been struggling to put all my thoughts on this video into words, but it really opened my eyes to issues that I noticed were happening but couldn't quite pinpoint. Still, going on a little tangent I guess, I wanted to share a perspective that I think might get overlooked when talking about this subject. The thing is that writing code, at least for coders, is seen as a sort of art. I mean, I've even had many professors refer to it as a combination of science and art, and I really don't know how to explain why, but I get it. There is a lot of artistry in what you do, from the things you decide to build, the technologies you decide to use, and the way you approach solving problems, all the way to how you write the actual code; there is a lot of expression and will to create. And it really saddens me that AI, which can be a tool used by artists as a way to do interesting things with their art (last year, for example, I had the opportunity to see the work of a painter who built his own AI model and trained it with his art so that he could do crazy interesting stuff with it), is being appropriated by big companies to replace the work of artists, instead of being used to create actual value for the world, and that it is fed mindlessly with people's data without their consent. As engineers, we should know better at this point. We often build things without malicious intent, but I think we should ask ourselves more often how the technology we create could be used with malicious intent. I doubt that the scientists who made advances in generative images using AI were plotting how to build a machine that could create non-consensual porn; rather, they were fascinated by the fact that they could create images as if by magic, because that is often what working in science feels like.
Yes! this is a phenomenon known as function creep: "In an AI context, the deployment of AI beyond its originally specified, explicit and legitimate purposes can lead to function creep as well as exacerbate security incidents. For example, AI systems intended for specific crime prevention goals might gradually be repurposed for unwarranted surveillance activities not originally considered." (from a 2020 paper by Stefano Fantin and Plixavra Vogiatzoglou). You get stuff like anxiety-moderating systems being repurposed into shit lie detectors, etc.
i'm in the same place. sometimes i can't help but feel like humanity can't be trusted with technology AT ALL, and that the only "ethical" thing to do at this point is just refuse to participate in society and run screaming into the woods 😬
It's that old meme. "We built a machine that tightens bolts twice as well!" "Cool, do our hours get halved? Or do we make products that are twice as good?" "Half of you are fired to double our profit margins. The other half, get back to work."
This was fantastic! Great job to the whole Philosophy Tube team! I'm a molecular scientist specializing in human cytogenetics. The big thing I do is a test called FISH which is useful for detection of neoplasm at the chromosomal level. Cancer. I look for cancer. I do the analysis manually. I run the assay (which is a 2 day process) and then look at each slide myself using a fluorescence microscope. Each slide can take 30-60 minutes to analyze. I have to look at hundreds of nuclei and there's a lot of variation to account for when assessing if something is a problem or not. A lot of large scale labs don't do FISH manually anymore; they use software. The slide is scanned by a machine, which spits out the analysis. Some are pretty good, others are not. I keep logs for all of the results including the ones that are sent out to reference labs that use software (the sendouts are done for insurance reasons; their insurance will only pay for the computer FISH). My accuracy rate has been rated many times as incredibly high. Sometimes a patient's doctor will foot the bill and just ask me to do it because they know I am much less likely to send them a piece of paper that says, "insufficient cellularity for analysis." Is the software faster and cheaper? Absolutely. But it's not nearly as accurate. Large-scale computing is just not an adequate replacement for certain tasks. Sometimes you need a person who has the education and training to sit at a microscope in a dark room all day looking for potential abnormalities in thousands of nuclei.
The debate on ethical résumé screening by humans should also be considered. I used to have a female friend who worked in HR at one of the biggest banks (BBVA), and she was ordered not to even take interviews seriously if the applicant was overweight, a pregnant woman, or even hinted at anything related to a genetic condition, and this happens in HR more than you would think.
I'm a software engineer and I really love programming, but it makes me feel like I'm on the wrong side of history. Especially because software engineers don't unionize, at least not in my country. We are exploited, just like everybody else, but since it pays better than a lot of other jobs, programmers don't seem to feel the need to organize themselves, and well, maybe large scale computing is the push we need. If there are any Chilean developers out there wanting to organize something, let me know 😂
I mostly agree, but I think we need to get away from the "right and wrong side of history" thinking because it implies linear historical progression which isn't true.
Mexican-American Electrical Engineering student here! I don't know about the "right side of history", but I don't want to use my skills in ways that will hurt people.
I was honestly trying to pay attention during part 3, but your outfit just had me thinking about my own body. I'm also a trans gal, and your level of beauty felt so much more attainable when I saw that we have a very similar silhouette (to say it as non-grossly as I can think to) So, thank you for that. I know there's probably some other reason behind the costuming decision for that scene, but it did make me honestly feel like maybe, one day, I could be as pretty as you are.
My wife was similar early in her HRT days and it's rather shocking what 3 years of HRT can do for you. Assuming you'll go down the same route a lot of it will just take time. My wife looks very different from back then and it just took time.
Guessing Abigail originally had a section on this, but it was cut for time and/or flow reasons: companies de-risk data labellers by comparing their entries to those given by other data labellers. This alone means that they are incentivized not to label correctly, but to label according to what they believe others would pick. Add to that the need to complete many thousands of labelling exercises a day in order to earn anything, and data labelling ends up nowhere near as useful as it should be. Data labellers no longer label according to quote-unquote reality, but according instead to "what would other data labellers pick within less than 5 seconds" (a sketch of this consensus scoring follows below).
Yep. I tried doing the data-labelling thing for a little bit, with the sincere desire of wanting to improve the algorithms. Instead I was constantly pinged for being "incorrect" in my labelling and denied pay as a result. Whenever I tried to appeal the decision I was ignored or hit with the brick wall of "aggregate data indicates" I had made a mistake. I have a grad degree in physics.
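A minimal sketch of the consensus scoring the two comments above describe, assuming a hypothetical platform that grades each worker against the majority vote of the other workers on the same item (all names and data here are invented, not any real platform's code). Note how a worker can be marked "incorrect", and have pay withheld, simply for disagreeing with a rushed majority:

```python
# Hypothetical consensus-based "quality control" for crowd labelling: a worker is
# scored by agreement with the other workers' majority vote, so a correct answer
# held by a minority still counts as an error.
from collections import Counter

def majority_label(labels):
    """Most common answer in a list of worker answers."""
    return Counter(labels).most_common(1)[0][0]

def agreement_rate(worker_id, answers_by_item):
    """Fraction of items where worker_id matched the majority of the *other* workers."""
    agreed = total = 0
    for answers in answers_by_item.values():
        if worker_id not in answers:
            continue
        others = [a for w, a in answers.items() if w != worker_id]
        agreed += answers[worker_id] == majority_label(others)
        total += 1
    return agreed / total if total else 0.0

# Invented example: the image really shows a wasp, but three rushed raters said "bee".
answers_by_item = {
    "img_42": {"alice": "wasp", "bob": "bee", "carol": "bee", "dave": "bee"},
}
print(agreement_rate("alice", answers_by_item))  # 0.0 -> flagged "incorrect", pay docked
```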
I keep going back to this video whenever I discuss AI with friends. Even when the topics of application are far from the video's examples (AI in video identification, AI in war...), the framing is really something that's often missing from debates, and it easily breaks down what is important to take into account.
OMG thank you for talking about that stupid airport machine. It's unreal the sheer number of times I've had to explain that to cis people as a blatant display of systematic discrimination. And the massive number of embarrassing moments of people turning around to see why I got stopped, just to see a massive red blob on my crotch area on the screen, is the worst!!!
It can even be hard for a liberal-minded male to conceive of. I've met so many females who identify as female who have as much breast tissue as males, and males who identify as male who say they know they have the Seinfeld "moobs" (male boobs)… I hope Gen Z is bigger than this. We millennials missed the mark.
So, I'm trans and autistic. I'm also left handed. Growing up, I was an outcast everywhere I went. I was either so uncomfortable in my body because I had to present as male, or I had issues with neurotypical people, so I could never fit in. I took this to heart early on, noticing everyone else in kindergarten was writing with their right hand. I thought two things: am I writing wrong? Or am I just too weird to write like everyone else? I only recovered this memory recently, after my left wrist started hurting from how much I rely on it for most of my daily life. Ostracization pushes a person far, especially from a young age. If I weren't so afraid of being isolated any more than I was, I might have had a better childhood.
It's always humbling, in a sad way, to learn about axes of oppression that I as a cis person can pass through without a second thought, like the airport scanner. To recognize that there are ways other people experience the world that I have the privilege to not have even had to think about before, let alone contend with. Sometimes I've got to remind myself that even though I'm a leftie, and I've worked on myself a lot since I was a shitty teenager, there's always more to learn.
Yeah, airport security is one of the clearest examples of this. If you're Muslim you get "randomly stopped"; if you don't fall into the "right body type" you get searched. It's security theater, so it focuses on things the establishment thinks are scary, without, you know, any evidence that those things actually work or any tangible results. To this day the TSA has never claimed to have stopped a terrorist attack. There is no direct link we can draw between any of these practices and actual safety.
Abigail didn't mention that it gets worse than groping, you may be ordered to disrobe to prove it's just a penis and not a... honestly idk what they think it might be, a really weird bomb I guess?
Your disclosures in the end-credit captions are what got me to sign up for Nebula finally (along with my sister off-handedly mentioning that they also have a Nebula subscription now, thanks sis). Looking forward to getting in on more PhilosophyTube and related content; thank you Abigail. Glad you and your team are doing alright amidst the everything.
i had an exam on this topic today. not one of the "deeper" points was discussed or even suggested as further reading. the blindspots of the course are insane; definitely going to look into this for my term paper
Oh thank god I'm not the only one who thinks about the lithium. The fact that we're wasting it with planned obsolescence too. People are dying in mines for an iPhone that will be thrown out in a year. I love the internet and tech and I cannot deny how important and useful it is, particularly in how it spreads information (I grew up with the internet, and the idea that I might have questions I will never be able to google an answer for is mind-boggling to me), but it's absolutely drenched in blood.
While I agree lithium is very dirty to produce, and it is squandered on unmaintainable, unrepairable, devices; I do not understand what that has to do with AI or the internet, neither of which are built or powered by lithium batteries.
@@almishti And what physical devices are those exactly, which are required to create, power, and operate AI and the internet? Lithium is mostly used for batteries when it comes to tech, and mass parallel processing farms, switching and routing units, and server parks are all plugged in and do not use lithium batteries. No offence, but I am not willing to pay for technical services from people who think mobile phones and laptops are what powers AI or the internet.
There are other commodities too: tantalum, a metal used in capacitors that make a phone much more compact, is a conflict mineral. The working conditions in many parts of the supply chain are very poor.
Thank you for making this video. So many people I talk to just think about Science Fiction AI when the topic of ethical AI is brought up, it pains me to see that many of them don't realize it's a much more complicated subject. You did a great job covering many of the ethical issues in regards to AI development.
Me 4 years ago: "Philosophy Tube will never have more unsettling and terrifying character than The Arsonist." Me today: "Abby really needs to add a jump-scare warning when Kelly Slaughter is going to show up."
Something that a friend of mine said which continues to stick with me: Corporations are like an AGI that uses people as its hardware and profit as its goal function. The things that an AGI would kill us with are the things that corporations are _already_ killing us with. The paperclipper is coming from inside the house, and if you could file those TPS reports for it, that'd be greeaaaaat.
I'm an art student at a primarily engineering/stem college where a lot of people are pursuing jobs in the AI space. Conversations about this topic are draining but inevitable. Having this video helps me put my feelings into words so I really appreciate it
i'm an Artist but i am currently training in STEM and oh boi does the conversation annoy me. Like it's so hard to get people to understand that it's not a logical issue, it's an ethical and emotional issue.
@@StarPichu12 people in STEM fields are so coddled and overpraised by society that they have no idea how much they don't understand. It's sort of sad, but they genuinely think they're the heroes but really they're the ones that folded first.
I've had several conversations about how so many problems could be solved if there was some kind of compulsory "intro to sociology" component to education in STEM. Like, if tech bros actually did a 101 class in university-level sociology, philosophy, and history, maybe they wouldn't be constantly falling into intellectual traps that the humanities have long since addressed, all while ardently believing they're geniuses beyond reproach simply because their particular skillset is overvalued by a capitalist economy. STEM people sometimes just love to feel superior and be dumb in ways that preclude them from seeing how dumb they are, in ways someone else's literal entire academic career has already debunked (also it's often willful ignorance bc *sniff* ideology, but hey, it couldn't hurt)
Genuinely might be some of your best work yet! My favorite is the last section, about how the physical components of Big Tech are actually mined and transformed into products in the imperial core-- it really cuts to the heart of all this. Maybe it's just that I'm the child of an engineer, or that I'm into crafts and stuff, but I'm starting to think that reconnecting with the physical world is one of the most effective ways of like... idk, leftist awakening? And also connecting/communicating with other people who aren't plugged in to the Online Discourse(TM)? It really strips the AI marketing of a lot of its power. Like, it's easy enough to fall for techbro babble when they talk about "the future of computing!" and "new things being invented every day!" but... when you confront someone with the reality of us as physical creatures using the materials of the earth as tools... and the reality of all the WORK that goes into something as "simple" as a digital image... it just kind of lays bare all the violent systems that are being obscured with a phrase like "data-mining." Because it sounds kinda videogamey, doesn't it? We're sending our data-miners to the data-cave and they'll come back out with some data-ore that we can smelt in our little ovens to make a data-bar. It's all just little conversions in code! But no-- what the system does is send some people to drill into the earth, disrupting ecosystems and brutalizing the people closest to the drills and the planet around it; then more of the world is burned and paved and polluted to move that Stuff to where it can become Art; and then yet more is burned to fuel the creative workers who use their own time and labor and even their own flesh to make things that are then STOLEN en masse to make... "data." And allllll of this is marketed as more "efficient" ways of "generating" "content." It's just exploitation, violence, and colonization all the way down. Anyway, sorry for the ramble! This one is such a spicy meatball of philosophy and I can tell I'll be chewing on it for a while. Thank you to Abi and crew for all the work that went into it! ♥
I massively disagree; it's happening largely everywhere that more urban areas foster more collective values due to shared resources and space. A more urban America would be a more liberal one, because the counterpoint is that conservatism is most popular in rural areas and declines with population density. A more nature-y America produces more hunter-gatherer mentalities.
A more urban America is a more Liberal one, but "liberal" is not the same thing as being socialist or progressive. Liberalism is just a softer version of republican ideology that has a kinder image to market it.
@@480darkshadow Sorry, I guess I got a little flowery in my speech there and wasn't clear-- I'm not trying to say that nature creates leftists or anything! And I also don't think urban living is opposed to environmentalism. (Or right wing ideologies, sadly.) What I meant was more general, actually. I think when we're talking to people who fall for AI marketing and things like it, they're very often people who have been completely alienated from labor-- both their own and, like, all the work that makes the world around them. And I think that learning about these things can be a valuable step toward class consciousness.
It honestly stuns me how you just continue to kill it video after video. I've been watching the channel for a long time and it has been such a pleasure to watch you flourish creatively and grow by leaps and bounds in your art year in and year out. You're an absolute treasure, Abby. Thanks for all the work you do.
from an AI laborer, from India - wonderful presentation! --> It's the dress code that caught my eye, totally unusual. --> The meaning behind them is well presented in the script, esp. on data scraping & flattening. --> The climax is what I liked the most, with the segment on US nomads! --> As a student keen on storytelling structure, it's a must watch. I enjoyed the mixing, ----> to catch the eye: studio set, dress material ----> to catch the ears: stories interwoven ----> to study: many references flashing every minute! ----> to ponder: deep content on societal power imbalance! -------------------- I am scared esp. with the Indian caste structure, withholding 1000s of years of hierarchical power imbalance, backed and blessed by stories of karma & fanciful gods of the so-called epics! World forums are yet to call a spade♠️ a spade♠️ - because of its efficient camouflaging with softer outer shell layers of yoga, vegetarianism, mysticism, spiritualism, tolerance, non-violence and Gandhi! Already a torn, unthinking society, slowly turning the unemployed and senior citizens into zombies, and the sub-employed into propaganda machines! With the wine of AI mixed in - it's gonna be devastating! -------------------- Thank you for making this video! ❤
I just want to say this is a brilliant video! It covers the topic of AI from a different perspective than I’ve seen before and it was really informative. Thank you very much to Abby and the rest of the team for making it. :)
Just want to say, I absolutely loved the work you put into the subtitles on this video, I have an information processing disorder, so it can sometimes be difficult for me to parse voices, those subtitles helped me understand this video a lot better. Also the detailed and dramatic description of the music was also very funny.
You make some of the most important content on the entirety of the Internet. I truly hope you find time to continue with Philosophy Tube alongside your amazing, successful growing acting career! You're awesome.
I am working in tech and also studying at university, in the faculty of humanities. I am still searching for a way to connect those two, and it seems that this could be the way. Next week we have our first student meeting about AI, and this gave me a lot of information and topics that I could bring there. Thank you 😊
Excellent production. Just one tiny thing: in the middle, there was talk about environmental impact and it seemed to be focused highly on lithium use. That's really more of a cellphone/laptop thing. Data centres tend to use lead-acid batteries, as the weight penalty is next to meaningless in stationary installations. And there's significant recycling of lead-acid batteries.
They'll also probably start recycling lithium if and when it gets cheaper to do that than getting it out of the ground. One big problem is that lithium batteries also need cadmium for the connectors (I don't know why). At New Scientist Live a few weeks ago there was a demonstration pointing out that sodium is very nearly as energy dense as the most commonly used lithium batteries, and can use aluminium as a connector rather than cadmium, so that may turn out to be a moot point. AI does use a lot of power; that doesn't necessarily mean CO2, and as time goes by the proportion that does produce CO2 will almost certainly drop. AI may cause any number of existential crises but I doubt environmental catastrophe will be one of them.
The environmental impact of data centers is mostly due to the mining of the raw minerals, e.g., silicon, aluminium, etc., the manufacturing of the chips and the rest of the hardware, and its subsequent exploitation for years in various data centers, which might or might not use clean energy (usually not). Talking about lithium when it comes to data centers is meaningless; lithium is nowadays used largely for energy storage and electric vehicles/transport. Might as well talk about the danger of geese getting into the servers.
Thank you for this video. I consider myself primarily a scholarly artist, whose main aspiration is to critique computation from a media theory and computer science lens, but I work as a software engineer at one of the big tech companies to pay my bills. As someone who holds similar views to yours, working in a tech company is incredibly painful. I originally thought that tech company employees were just there for the pay, but after I joined I found out that most of them really believe the whole shtick, and critical examination of technology is non-existent within Silicon Valley tech companies. It is suffocating. Working feels like being an undercover cop having to sell illegal drugs and aid prostitution, which goes 180 degrees against my own philosophy and beliefs. I even sank into a bad depression and had to start taking antidepressants just to function day to day with my coworkers. It is such a relief to see a big-name YouTuber advocating anything beyond the old "alignment! alignment!" AI doomerism. It is so good to see more nuanced and thorough criticism of AI. Thank you, and I hope more and more people start to recognize the essence of current big-data-powered AI as exploitation of labour and violation of the concept of "private property" -- and finally -- recognize that "there is no ethical AI under capitalism". I don't even hope that there will be real change soon; I just hope that people can recognize these things.
Watching this episode reminds me of one of the conclusions from the Dune universe: it doesn't matter whether it's large-scale computing, Turing-strong AI, or a spice-enhanced socioeconomic structure - in either case, humanity may have spread throughout the cosmos, but the inequality and the oppression always remained. Trying to address that requires something more fundamental than throwing new tech at the problem and hoping it works well enough that you don't have to worry more than you already do. One also hopes that you don't need to go to unexplored depths of being unethical to establish a more ethical system in the end.
I *love* that you're treating this topic, Abs! And thank you for the awesome behind the scenes photos and teasers... keep up the outstanding work. ☺️👍💚
@@prestonbruchmiller497 👋😁 and I love how often I see my own audience appreciating other creators whom I love and showing up / supporting them in their comments, too! ☺️👍💚
@@computer_trash It might be possible to say the odds are a long shot, but then when we consider how many of us appreciate the same people and hold the same values maybe it makes sense that we like a lot of the same content... Either way, it's so wonderful to hear that you enjoyed a little something I put out there. ☺️👍
I’m a PhD researcher in AI right now - loved the “data flattening” description; making me think about what data I’m using to train my models… Fantastic content as always!
can I ask how you got into this field? Currently hoping to do the same myself, I am finishing an undergrad in philosophy and moving on to a masters in computer science. Any advice for the future?
Ooh, I did my undergrad in a CS/PHIL joint! So very similar! I think relationships with professors matter the most. The hardest thing about being a grad student (imo) is generally getting funded in a way that lets you research what you're interested in. Try to find professors that you like and that like you at other universities and then apply there. Doesn't hurt to reach out early to talk with them about the program and to have a little name recognition :) @@imogengreig2860
While I do feel that a lot of the allegations pointed here at the AI industry can in some part easily be pointed at a bunch of other industries (e.g. sub-employment in industries like tech and retail), I am glad to see that the interrogations and ethical questions previously leveled at those industries are finally being directed at the AI industry! Amazing work here as always!
I’ve been working on a deep dive project on how various creatives can use certain tools to help with their work flow… but as an artist it’s been disheartening to see artists like myself put out of work. Last holiday when the ai portraits became very popular, I lost all my income as a portrait artist. It’s been a process.
I don't want to downplay your losses. Being out of work always sucks. But, and there's always a but, isn't there? AI is a tool at the end of the day, and people will use that tool however they can to benefit themselves. Commissioning artwork is EXPENSIVE, as I'm sure you're well aware, and it only makes sense that a large number of people will opt for AI-generated as opposed to human-made. AI is here, it's not going away, and it's up to the creatives to learn how to roll with those punches. Like they did when painters no longer had to mix their own pigments. Or with digital artwork. Or with photoshop. Or autotune.
@@ookami38 I really don't think many people who want specific art are gonna go to an AI. If they're trying to make a series, the AI would lack consistency; the only people who use AI are the people with absolutely zero taste (also, as a side note, I hate how much people call algorithms "AI", it's so deliberately deceptive). Plus, AI being a tool would be fine if it wasn't proactively robbing people, and no, this isn't an "every artist takes inspiration" thing, this is a "this artist traced 8 different images and then put the original artists out of jobs" thing. It is quite literally stealing from people and putting them out of jobs. Eventually the issue is gonna become: if the majority of art is AI art, the AI is gonna start training off other AIs, amplifying their flaws tenfold.
@@eventhorizon2264 it really sucked because the holidays are the biggest time for commissions, and my mother had just died, so I was using that money to get her ashes. And even though they sign a contract, PayPal just locks you out of your account and gives their money back, and if you don't have the money for a lawyer, as a seller you're screwed. Most of them don't care if the AI quality is shit when they get 200 for 7 dollars or whatever. Now the AI can even send you a print on canvas. Another issue is our art was stolen, data flattening, so the machine copies our styles. It even sometimes spits out artists' signatures on the generated images.
@@ookami38 The biggest issues with AI from an artist standpoint atm (and I have worked as an artist in the industry since 2009) are specifically the fact that the AIs have been trained without our consent on OUR work. It is theft in its current form. I am not completely against AI as an artist. I am, however, against it in its current form, which IS exploiting artists. Both musicians and writers have had a lot more luck being heard on this front while visual artists continually get stepped all over. If I were to look at this from a pro-AI standpoint, AI work is not currently able to be copyrighted, so it is very volatile for companies to use at the moment. If the companies who are training on our work offered an opt-in option, I'd be far more okay with the inevitability of future technology. The issue is we like to step on others to get ahead. Art has always been a luxury item. So yes, of course it is expensive, because of the years and time it takes an artist to both build skill and then translate that into an image for you. AI is simply taking our work and years of time building our skill without any consent on our part as artists and just merging images to make what is essentially a Frankenstein of our work. If AI was not available for commercial use, I'd be much happier allowing my work to be part of the dataset, to be there just as an inspirational tool or a personal-use-only tool. Part of the ethical issue that goes beyond us being put into the dataset without our consent is the use by many proponents of AI of individual artists' names. They use our names to try to force the AI to generate work in our style, to the extent of occasionally just pulling up a very slightly changed version of one of our existing works. The problem is that, by using our names, our businesses and personal work are further harmed and eroded, and the shitty AI companies are not handling this situation with the import it deserves. In some ways, by using our names, it quite literally places a fog over our work and within google searches, making it hard for someone else looking for our work to find us and what is really ours. It is extremely exploitative. I think humans as a collective need to push for more ethical restrictions on these technologies so as to cause less harm to those they are currently just straight up taking from. I am a realist. I know automation will come for most industries. It sucks to lose work. What sucks more, for me anyway, is the idea of having our work taken from us and our names and who we are and what we create, for all intents and purposes, stolen and marred beyond recognition.
Excellent video - as someone engrossed in tech, it’s so easy to see these fantastic machines as abstract objects. It was eye-opening to remember that I am holding a refined piece of earth that was mined, shipped, and processed by human labor. Also how are you not freezing
The Penis Detection Machine bit hit me in a way I wasn't ready for. I always called it the Gropetron9000 since I (without fail) would be groped on my stomach, upper thigh, and penis... every. single. time. I have never felt more seen. This might be TMI: I think for me it's because I got weight loss surgery and it always pings on excess skin and things that hang.
Just a random commenter here, but thank you for sharing that-- this makes me think that there's surely more people harmed by the Gropetron9000 (great name btw!) than we even realize. Different body shapes that it wasn't programmed for, medical devices, all sorts of things... it's kind of staggering. I've even run afoul of metal detectors as a cis woman with a non-standard bra size, I never realized how horrible those security checkpoints could be for other people. Like, I knew it was bad. But somehow I failed to imagine HOW bad. I'm so sorry you've been put through that.
I don't know how, but your videos keep getting better and better. Didn't know that would be possible. So glad for you and your teams work. It's just a blessing to have you as a creator. 🥰
As a computer scientist who is a bit tired of hearing all the lay understandings around AI, I was not looking forward to this video, actually. I should have known that this work would not represent a lay understanding, but rather a deep and expertly researched take on the subject. Excellent work as usual! I don't understand why people are drawing non-consensual images of you when you give them all they need in these spicy outfits :P Excellent outfits as always!
You might run into issues if you sexualize her in these videos. Honestly I found her argument rather lacking on the topic. I assume that comes from her fury on the matter. But yes, generally speaking most of her arguments trend in the right direction; however, it's all stuff I found out pretty quickly, so I didn't really hear anything new.
Look at the bloody citations and references in the vid bro! That's prolly more books and publications than you ever read mate! But seriously, some people undermine even these basic points, so it's very useful in debates to catch your opponents off guard just by how much data and proof there is against their position.
Ms Thorn, and I mean this with no sarcasm: thank you for giving this video a conclusion that makes it unmistakable that it is a conclusion, with a definitive closing statement that is even emphasized with visual and audio changes before the Nebula ad. It helps my brain process the information so much better❤
I'm writing a new, original SOMETHING for Nebula. If you want to be able to see it when it comes out, you can get a discounted Nebula subscription now! go.nebula.tv/philosophytube
How is this comment made 3 days ago when the video premiered like an hour ago
@@a-person_0 click the link and you'll find out :o
@@a-person_0 it was probably on Patreon first
AI is also made by humans. Just sayin. Happy to pay some coders for decades of work.
@@iamjackspyramidshapedhelmet ok
They sent me to conversion therapy for being left handed as a kid. After that didn't work, they bought me left-handed scissors and pens with quick drying ink. Weird how they resorted to conversion therapy first when all they needed to do was buy a few special office supplies.
why the quick drying ink? (i don't have a "natural" writing style despite being a rightie so i'm just curious why it's needed)
@@user-vw4xp5nt9f being left handed and writing 'left to right' you are almost immediately moving the side of your hand against the words you just wrote. This often leads to smudging, and a stained hand. If the ink dries quicker this is less likely to be an issue.
@@sarainy9775 My sister is left handed and so is one of my school friends. My sister learned to write with her book/hand basically sideways so her hand is always beneath the writing to avoid smudging. Her handwriting is immaculate. My friend used a weird claw grip that kept his hand almost fully off the paper, which meant no smudging, but also meant his handwriting was atrocious and very slow because he was writing with no support.
@@BambiTrout I unfortunately learnt the 'no support claw grip' and my handwriting is both slow and atrocious. I'm also dyslexic, so the combination makes my handwriting awful... on the plus side it pushed me into typing from a young age so I've had an 80-100 word typing speed for the past 25 years!
@@sarainy9775 I think it's really bad that despite no longer punishing left handed people, there's still very little consideration for teaching handwriting to left handed children. All of the lessons on handwriting and how to hold a pen are still built entirely around right-handedness, and lefties are just expected to figure it out for themselves and somehow keep up.
As a mechanical turk/microtask worker... You hit the nail on the head. The most solidarity we have is that we can warn others that a certain job is bad or scammy.
Mechanical turk... That puts the job conditions into perspective
Yeah, I did it for a while since the dollar is so valuable where I live, but it's absolutely grueling
Machines started reducing human labor, but then we started worsening work conditions and broadening world connectivity to the point where human labor is cheaper than machine labor.
wow i didn’t know we were at the point where we had these workers yet? like do you mean mocap or are you operating it
God, this channel is a blessing
u rite
Hey go back to making a video about an obscure ps3 game you rascal
that's such a noodle thing to say 🎉
@@i_smoke_ghostsI love noodle 🌈
your god is a cheap bastard, consider another one
As an AI engineer with both an academic and professional background, I find this to be an exceptionally good video; sadly I cannot share it with my coworkers and colleagues! There is an ungodly amount of resistance from engineers, scholars and business managers when you point out that people are not datapoints and you can't just take their data. Even if they consented to you having their data, it does not mean they consent to it being used for whatever you want. I have lost promotions and bonuses in my career for pushing back on unethical practices.
Bro, share it with them anonymously if you can- people like that need to hear this
Feels man. I'm internally debating if I should float the idea at work of adding to our open source licenses a "not for training AI" clause (which is technically not OSI correct but forget that). Like, releasing our work as open source is essentially consenting to AI training from it, which isn't why we release it. We want other researchers and engineers to use the code we make, learn from it, and potentially bring us business. Allowing it to train AI does not further any of our goals, and could wind up making it easier for competitors to develop similar technology.
For whatever it's worth, thank you. For having the integrity and compassion to overcome all the battles with cognitive dissonance you must've experienced to arrive at your perspective; carrying the weight of the dissenting opinion is rarely easy or easily achieved.
But we reap what we've sown, so there's profound value in those victorious battles in the psyche. The easy path is virtually never the correct one. Good intentions pave treacherous roads _when those intentions are grounded in hubris._ Checking our ego and tempering our desires (via radical self awareness of our psychological shadows (& other's)) breeds far better outcomes. 🐢✅ 🐇❎
I am so sorry and sad to read this.
@@kimberleemodel7182 This is a very interesting observation. Mining copyleft, libre, open-source-licensed work is an opportunity for corporations who use "AI" to launder violations of the license.
26:58 I'm glad I took the time to read this source fully. Sarah Andersen sounded familiar, and I recognized her wholesome and relatable comics as soon as I googled it, and seeing her response to AI abuse knockoffs of her content was heartbreaking:
" *I felt violated.* The way I draw is the complex culmination of my education, the comics I devoured as a child and the many small choices that make up the sum of my life. The details are often more personal than people realize - the striped shirt my character wears, for instance, is a direct nod to the protagonist of “Calvin and Hobbes,” my favorite newspaper comic. Even when a person copies me, the many variations and nuances in things like line weight make exact reproductions difficult. Humans cannot help bringing their own humanity into art. Art is deeply personal, and A.I. had just erased the humanity from it by reducing my life’s work to an algorithm." -Sarah Andersen
o.o
The Kelly Slaughter segment about being at a business conference about lethal autonomous weapons, looking around the room, and realizing "oh my God, I'm the only woman here" is pure comedic gold. Such a succinct description of girlboss feminism.
Also a robocop reference right? Or I just missed the joke lol
Wait....
You're right, I'm not sure Abigail meant it...but there is a robocop adjacent point there😂😂😂
@@josephrittenhouse5839 Reminded me of that ED-209 scene where the executive gets gunned down in front of the entire boardroom and all the CEO has to say is "I'm very disappointed." lmao
when was that?
Girl, slay.
1: once again, I absolutely love the subtitles.
2: I had never really considered the human aspect of big computing
3: the absolute power move of doing the segment about getting AI generated adult content made of you in basically nothing is amazing.
It says there are no subtitles:(((
@Karin-fj3eu what do you mean? There are english, german and portuguese captions?
As a Filipino, thank you. The continued economic exploitation of my countrymen is something that I do not see mentioned very often. It ranges from the dangerous seafaring work, continued land theft from indigenous tribes by big business cronies, and staggeringly low pay for digital creative work outsourced to us out of profit motive (my personal experience), to, at the very worst, assassinations of activists who speak out for the working class, orchestrated by politicians who are in the pockets of corporations. I can only see things getting worse for me and my peers in the creative industry as we are currently experiencing a drought in job opportunities. But videos like yours give me hope that we are not forgotten, and that maybe, even if not in my lifetime, countries like mine will be given what is properly due, and a proper global realization will be made that all the fruits of labor should belong to the ones who do it.
This makes me so sad to read. We in the West need to do a MUCH better job of recognizing the effects of our consumption and how our imperialism and colonialism have impacted your country and others like it. It's such bullshit, but know there are people all around the world hoping, praying, and taking action to fight for your liberation. Stay strong and much love
I'm a writer and I work at a hotel overnights to make ends meet. One of my coworkers is studying computer science. We got onto the subject of AI, and she straight up said to me (I'm paraphrasing) that she could see AI taking art and writing, things that people want to do, but not automate "real jobs."
It was insulting, but it highlights the "us vs. them" mentality that STEM-centric and art-centric people can find themselves in.
Be aware that you can already use AI to generate database code from a schema illustration, so it's taken the 'real jobs' too. Just at this stage the more mundane parts of the task list.
That's the thing that gets me as a former computer science major. Everyone is looking to AI to automate the arts, but what AI is really good at is the mundane tasks that business majors see as "real jobs". We could have an economy where half the people are artists collaborating on the next big marketing push while the machines are crunching numbers in the background, but the bosses want a million pencil pushers and one lone artist who is doing the web design, banner art, and music score for some reason all for a nickel.
@rexs.5188 Artists or any creative have the uncanny ability to expose that which those in power want to be hidden. Remember that the Shakespearean muse is the one saying the truth and exposing the ridiculousness of the situation.
Not enough comp sci programs mandate an ethics course. Mine did, and it was very eye opening. What was more eye opening though, was to see the reactions of my classmates- most would have never even considered the ethical concerns.
@@Theroha AI should automate both
god, the penis-detection-machine trauma is so real. i have had to talk to so many tsa agents, shame in my voice, telling them "yes, i am transgender. yes, i have a penis." and there's ALWAYS a fifty-fifty chance of them just being so outright disgusted with me for existing. when i was a teenager, one of them immediately grimaced, turned to their coworker, and said "i don't want to touch this thing, will you take it?" i've never had a single non-horrible experience with that damned machine.
That's fucked up. 😢
IS THIS REALITY WHAT THE FUCK
My gods...I'm so incredibly sorry. That is a horrific and abhorrent way to treat another Human Being.
I at once cannot believe that was allowed to happen, to you or anyone, but also struggle to believe how we can reach a place where that will never be allowed to happen...
Take care, and I wish you enough in all you need to live and thrive in this world.
I'm so sorry. I can't imagine how awful someone has to be as a person to just....treat people this way. Horrible.
gosh i feel that but from the other trans direction (i'm ftm). got questioned weirdly and had my chest groped several times :(
I love how the distance of the hammer from Abi in different shots correlates with how critical that segment is of AI and its uses and implications. Especially when it's entirely absent in the Kelly Slaughter segment
Omg I’m gonna rewatch the video on a whole new level, thank you so much for pointing this out
@@laurenschirduan9080 fr i dont notice these things wtf its just funny hammer
Cue Sam Reich: "It's been there the whole time"
At first i thought it was gonna be a reference to the "hammerman" thing from the transhumanism video.
Nope, just a good old capitalism smashing hammer.
Ah, I was wondering why the hammer was missing from that part or if I had simply failed to notice it.
Frank Herbert hit the nail on the head when he said, "the problem with machines is that they increase the number of things we can do without thinking..."
Oh wow that's profound actually
Yeah let's just eat worm poop and get blitzed
@@BinarySecondsounds alright to me
Such as comment. Only joking.
And we don't have the benefit of a "Golden Path" that will ensure the survival of our species.
I was a postdoctoral researcher last year and my project was on ethical AI from a gender-sociological perspective. My project involved an industry stay at the biggest telco company in Spain. I was there for 4 months and my work comprised observing the implementation of an ethical AI pilot at the company and advising them on better practices given my sociological background. I was consistently dismissed. The people working on the pilot were marketing staff and engineering staff; no social scientists, no sociologists, no anthropologists... no philosophers, no ethics experts, nothing. I was told the social sciences were not really science and were biased. Any time I gave them my opinion I was ridiculed and pushed to the side. They were developing problematic AI for gender and race recognition purposes. I pointed out that it was not ethical, and the twists they took to reframe it as ethical... I felt gaslit. Ethics washing at its worst. I wrote a comprehensive report with my advice and plenty of literature on the topic to support my arguments. It was embargoed and I was banned from publishing. I decided to abandon the investigation. There is no hope with big telco companies doing shit like this.
Looks like they really wanted you there just so they get to say that a sociology and ethics specialist oversaw and approved what they made.
Does make me think back to the Philosophytube video where she recalls being invited to speak about ethics, I think it was regarding climate change, and part of her response was "if you're asking me about ethics, the first thing you should do is _resign_ , you didn't even pay me!" Part of me thought that story was exaggerated or a joke but I'm definitely seeing the pattern of "We only really brought you in so we could pretend we give a shit."
Wouldn't the media be interested in that report instead?
I think there are still ways to expose such things with anonymity and plausible deniability. Not sure how much energy you have for this, but there's definitely some article that could be written and published by public media.
@@Ermude10 Unfortunately I signed a contract with them prior to my research stay in which it is stipulated that I could be sued were I ever to disclose any information related to the company without their signed agreement...
Honestly, "Large Scale Computing" is a much better term than "AI. As I've often explained to people, what Chat GPT does is basically the same thing your cell phone does when you just hit the middle option on autofill. It just does it bigger.
There are some points I would disagree with about the data flattening, but overall, I think this is one of the best videos on AI I've seen.
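To make the autofill analogy in the comment above concrete, here is a toy next-word predictor built from bigram counts. This is a deliberately crude sketch: real large language models use neural networks over vastly more data and context, not a lookup table, but the basic move, picking a likely continuation given what came before, is the same idea done at enormous scale. The corpus and function names are invented for illustration.

```python
# Toy "middle autofill button": suggest the next word from bigram counts.
# A large language model does something loosely analogous with far more context,
# data and parameters; this lookup table only illustrates the analogy, not GPT itself.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def autofill(word):
    """Return the word most often seen after `word` in the corpus."""
    if word not in bigrams:
        return "?"
    return bigrams[word].most_common(1)[0][0]

print(autofill("the"))  # 'cat' -- the statistically likeliest continuation
print(autofill("cat"))  # 'sat' or 'ate' are tied; Counter keeps first-seen order, so 'sat'
```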
Lots of large scale computing is completely unrelated to AI or machine learning though. If you render the CGI for a movie, that's large scale but not AI/ML - the computers are just a tool controlled by the 3D artists. If you run a climate simulation, that's large scale but not AI/ML - the computers are just exploring the mathematical consequences of scientific facts. If you mine bitcoin, that's large scale but not AI/ML - the computers are just wasting energy in performing computations that the game called blockchain demands.
You forget about agent AI tho
@@AbeYousef
"There really is no conclusion to this video at all, and it doesn't get into why AI systems have to be developed."
Probably because the creator doesn't believe AI systems HAVE to be developed. It's likely they WILL be developed. But the video isn't about why we should or should not develop AI; it's about the dangers of seeing AI as separate from the people who create what goes into it, and how that framing obscures those people's role.
"If you're already into "workers of the world unite" type stuff then sure, you can get the dog whistles, but as a video on AI itself it is extremely short-sighted."
There's nothing dog whistley here, it's very explicitly coming from a socialist perspective, from an explicitly socialist creator. And you say there's nothing actionable but... Understanding the dynamics between capital and labor, understanding which side of the divide you are on, and advocating for that side, that's actionable. If you're looking for something actionable on AI that isn't actionable on something else, then yes, there's nothing actionable on AI specifically... Because the point of the video is to address the way that AI is often presented as separate from everything around it, as if it were not simply another product of human society.
As an artist, thank you and the crew so much for doing this episode. "There is no ethical computation under capitalism" is a potent way of conceptualizing the problems with AI once you consider all of the exploitation that goes into it.
Philosophy Tube is an idiot, who either exemplifies why modern philosophy is so useless, or doesn't warrant the name.
Water is often said to be a human right. Yet, the access to water requires monumental manpower and complexity.
Artists would happily tout that they are entitled to water rights, correct?
Yet, by contrast, access to AI Generation could be categorized as a human right under articles 19 and 27 (Freedom of expression, and Right of cultural participation), yet the same artists touting their right to water are the first ones wanting to ban access to AI Generation for the very workers who provide artists the comfort and luxury of being artists.
As such, "There is no ethical access to water under any system" might be more precise.
muh capitalism!!! communism so goood!!!! i cant wait to be an artist under communism! what do you mean i have to work in the sewers???
@@justaweeb14688 communism is when sewers
Then the problem is with capitalism, not AI
@@justaweeb14688 do you wanna point out to me just one example of a communist country making it common practice to assign jobs to people rather than letting people decide what job they want? Or of a serious socialist thinker advocating for that?
Also, like, do you think that people can just be an artist now, without taking on a day job or side hustle? Because the starving artist is a meme for a reason.
I’m a bee farmer. It occurred to me as I listened along that an AI would struggle differentiating bees and wasps the same way it does gender. Like people know bumblebees are bees, but I take people on beekeeping experiences as a little extra income and now have a segment 5 minutes in where I pause with “now is the time I ask you how many of you were surprised that honey comes from these and not big fluffy bees? Who looked at these and thought they were wasps?” The number of people who sheepishly raise their hands and confess they thought honeybees were wasps their whole life until now is significant. Like an AI, the sophistication of their parameters by which they define things is inadequate.
The difference is, no one then argues with me. I tell them the new information and they quietly assimilate it.
So, am I as an apiarist more respected than medical and gender specialists? Or is the objection to gender science less material and more ideological than people like to pretend?
man i love you for this reply. hitting the nail on the head imo
This is a fascinating insight and interesting question.
Some peons would need to sit and click "this is a bee, this is a wasp" on images for an AI to learn on.
Incredible comment damn. Also sidenote I assumed all bees look like bumblebees, thank you for educating me!
@@1SophieDEF1 it doesn’t help that most labels on honey will have a picture of a stylised bee with far more in common with a bumble than a honeybee. It’s just better marketing.
There are consequences though. There’s an Attractive Lady Of TikTok that is constantly on my feed who has tattoos of a bumble bee on honeycomb and bumbles don’t make comb like that. I am constantly reminding myself “she’s committed to the tattoos now there’s no value in being an entomological pedant”.
The joke about diversity, equity, and inclusion seminars was absolute gold. I am required to plan one and they kinda make me feel sick. The people above me assume that if I make an event that says "hey, people are different, and we need to understand and accept that" then suddenly the racism, homophobia, misogyny, etc. that they get complaints about will disappear in a puff of smoke. It's all talk and everything goes right back to the way it was once the seminar ends. There is no actual action taken to deal with the systemic problems. Don't get me wrong, we NEED to talk about stuff like this, but we need to back it up with actual change.
can you find a way to weave that in?
"welcome to the seminar. Today we'll be talking about social network analysis, dynamical systems, community ecology in the wild, and what you can do about sexism"
@@lancewalker2595 Hiring and examinations of work quality for raises and advancement should be done anonymously.
@@lancewalker2595 Why ask them what they actually want done if you're just going to ignore it? Like someone says that hiring, raises, and advancement should be done anonymously so it's as close to purely meritocratic as possible, and you just respond that actually they don't want the thing they just said they want, because you've got assumptions about what they REALLY think. Ignoring the fact that it's not even the same person.
@@lancewalker2595 I'd suggest asking the people who are directly affected by the problems where they think the issues lie, but you're hardly an honest interlocutor the way you carry on from this point. "Positive discrimination"? Risible.
I am not transgender, but as a fat person, the airport scanner also detects a lot of random lumps on my body as well. I had a breakdown in public a few years ago when I had to have a full body patdown because I dared to have a body shape outside the preprogrammed "norm".
It's also a scam by itself
That huge infringement on our privacy and dignity has, as far as we know, not prevented a single planned attack!
It's security theater
Yeah.. those body scanners f*cking suck
Holy intrusions into privacy and bodily autonomy for imagined flight security
I was wondering how this would affect fat people. Like, for a fat man with "extra chest padding" so to speak, would the machine register him as having hidden something on his chest?
It's so messed up, and there seems to be so many obvious flaws with the system that it's a wonder people thought it useful in the first place.
That’s fucking funny lmfao
So many critically important points made here, thanks to everyone who worked on this, stunning job.
I think this is one of the best PhilosophyTube videos yet. The flow, the visuals, it's so compelling.
I work in VFX and I know there are certain people out there delighted by the idea of removing us from the equation with AI/ML, which is frustrating because it can absolutely be applied in our line of work in ways that enhance the artists rather than replacing them. That conclusion makes it all seem a little less hopeless.
Thank you, I'm really glad you like it :)
It should replace artists.
@@charlesc3734 Imagine being this cringe
@@charlesc3734 I hope they replace you first
@@charlesc3734 AI should remove the breath from your lungs
I'm cis male and I hate the penis detection machine because I have gross lymphedema in the, er, shagging area, and go through the same experience of awkward questions, getting groped, etc. I imagine that anyone else with unexpected lumps, bumps and artificial limbs has the same issue. One thing that I have noticed is that, as my condition got worse, airport security started directing me to the old school metal detector instead. This is an accommodation that could be extended to trans people, but it's a complex problem when the aim is to treat someone according to their acquired gender. Having said that, it wouldn't kill them to add transfem and transmasc options to the penis detection machine, and maybe even give the subject the opportunity to make that choice themselves.
Related: I appreciated your essay on the crisis in British healthcare because I have been waiting five years for treatment and it is *miserable*.
The problem is, if everyone walking through the penis detection machine makes their own selection, the very point of the machine -- preventing people smuggling dangerous items on board airplanes -- becomes moot. At that point, you might as well chuck out the machine. Then what are we back to? Racial profiling in airport security? This is the "tradeoffs" problem Abigail explicates with her hypothetical about the college admissions AI: to make the machine truly fair to people, you must eliminate or at least undermine its fitness for its intended purpose.
@@joshualavender well, how likely is it that people smuggle things onto airplanes in the first place?
@@joshualavender I also had thought about that and was reaching the same conclusion as you.
But then it sounds like the trans-panic on sports or gender quotas, as in "what if a cis man just declares he is a trans woman?"
Maybe some kind of pre-registration integrated with some government system so the line at the airport gets your fingerprint and then draws the relevant information (like name changes, surgeries, etc.)...
TSA and other airport security systems fail the vast, vast majority of the time in discovering terrorists in the first place. Airport "security" for the most part is security theatre that does not actually do anything to stop attacks. @@joshualavender
@@L83467 That is an unfortunately impossible question to properly answer due to a lack of data publicly available. The best info I can personally find, very low quality and light on the ground though it is, suggests a few dozen drug mules are caught using these methods worldwide per day.
trans man here; i had heard that the airport scanners were called transphobia machines before, but i had always just kinda naively assumed that was a former problem that got fixed, until i was flying home for x-mas and got asked what was in my shirt. thankfully when i answered 'binder' the agent understood, but that moment stuck with me. it is so incredibly dumb that we have to out ourselves just to fly, and trans women especially get treated so poorly.
Im sorry :( 💛
Binder?
Oh the troubles... 🥱
So sorry you had to go through that 😓
@@mememdetame Binders are items of clothing designed for flattening the chest (which helps with dysphoria)
Not so fun fact about Frantz Fanon: he was not just spending some time in France, he was a French citizen by birth, as he was born in Martinique, an island that was a French colony back then and that is now a French department (comparable to a region or federal state). One could arguably assume that his experiences with white French people shattered his belief in national identity as something that transcends "racial" markers and makes everybody equal. Which led him to become one of the pioneering thinkers of postcolonial theory.
Though making one's identity based on race rather than nationality doesn't sound like an improvement.
@@greywolf7577 Probably not. Although I'd argue that it is also not necessarily worse. Both, "race" and nationality, are social constructs and therefore pretty arbitrary categories. So who's to say which category is a better basis for an individual's identity? Also, to be fair, as a member of an ethnic minority, you don't decide what your "racial identity" is or how important this identity is in your interactions with your environment, the world around you decides that for you.
I'm a trans girl, but I've lived quite a sheltered and supportive life. Obviously it hasn't been perfect but I can't ever say I've felt in danger because I'm trans. Awkward, uncomfortable out in public, especially during the early stages absolutely. But everyone has always either been oblivious or very kind to me. That penis-detection machine segment was really kinda eye-opening, because, I don't know I suppose I never really connected that those genuinely scary and humiliating moments could happen so ordinarily and suddenly in my normal life. Scaryy :((
Minor correction about companies “not recognizing” that flattening is at best a gross violation: they know. They recognize it. Heck, they even recognize that it’s illegal.
Either that or Microsoft made a “totally disconnected” definitely-not-subsidiary in Germany specifically for scraping data for lols.
Honestly I wouldn't be surprised if the CEOs of those companies are just so out of touch that they never considered that us peons don't like having stuff stolen from us, but it's probably a mix of both
While the penis detection machine segment touched on it, there is also the issue of how AI is being used in the medical field. It's being marketed as a way to avoid human error: an AI program looking at an X-ray, having learned from millions of X-rays, may catch anomalies a human might miss. But I really worry about it being used harmfully. Thinking back to the problems of the gender identity clinics and the NHS episode, imagine if they go "well, the bottleneck is there aren't enough people to ask these outdated invasive questions", so they make it a questionnaire and get an AI to look at people's answers, which, as Abi pointed out in that video, people lie on to try and get the healthcare they need. So you have an AI learning from lies, but then, when you can't know how an AI makes its decisions, who's to say it won't go "oh, these answers are too perfect, they must be lying, deny". Really scary, but something it is easy to imagine being implemented.
AIs are not (currently) being considered as any form of replacement for Doctors.
A real Doctor would still view anything medically relevant. The issue is indeed humans miss things! So AIs can be used to supplement them, not replace them.
"What if people use AI to do this thing" is not really a point against AI when there is no plan to get AI to do such a thing.
AI won't replace doctors, except maybe for simple cases where medical care probably isn't needed, as long as the AI is demonstrated to be on par with or better than human doctors for the task. Why worry that an AI might make a mistake when a human doctor would've been more likely to make one? Also, in the US doctors are expensive. People miss chronic diseases all the time because they don't want to go to a Doctor, either because they think it's not worth the cost or because they don't want to inconvenience said doctors. Imagine being able to just get your phone out, snap a picture of whatever lump or bruise you have, and the AI tells you if it thinks you need a doctor's visit. "But what if it misses something?" Well, you weren't going to go anyway so the AI won't have harmed you. The question is what if it doesn't? Then it's helped you.
honestly you made an AMAZING point because you not only summarized the problem with standard questionnaires, you also added what we saw in her video about police and computer crime models
bad data in, bad data out
sweet! man made horrors beyond comprehension!
oh, it's already a very distinct issue in existing experimental models, where the AI is much worse at detecting issues in people of color, because of the biased training data and worse healthcare for poc
except, ya know, people using WebMD and believing whatever the hell it tells them. . .
'where CEOs of all genders are guillotined for their crimes' 😅 i love it. Delivered perfectly, too.
I fkn choked on that line and had to rewind to catch what I'd missed after!
And the way she delivered it like a throwaway line! 💀
She’s an actress first and foremost!
"Unexpected item in shagging area" is your greatest line ever, and I've watched basically all of your videos. 😂😂😂
Last year, in my high school, an employee for ChatGPT came in and gave a quick seminar/QnA. One of my friends asked, "Do you feel ChatGPT and AI is ethical?" and the programmer replied, "Listen, man, I just get paid." And my peers gave him a round of applause.
aaaa
This makes me laugh and cry of despair simultaneously
press X to doubt
Vibes. My ai ethics course is like this where the students legit don’t care about ethics, they just need the grade
Really cursed 💀
"it's not ethics, it's marketing" Our new world order.
These are some really good ideas and thoughtful discussions about this rising technology. Utterly useless, but good and thoughtful. Trying to stop, regulate, slow down or even challenge this thing will be akin to that lady trying to stop her car sliding on the ice by opening the door and putting her leg out. This tech will only get more powerful and harder to distinguish from human-made materials, and it's doubtful we can possibly predict how thoroughly this is going to take over every aspect of our lives.
@@themachine5647 yeh, for me it's just another innovation revolution that we can't foresee the consequences of. like the industrial revolution, or plastic. :(
"Ethics" ARE the guide rails for the most skilled, talented (formidable 🤔) individuals in society. Indeed, Ethics are what defines a "Profession." So if someone claims to be a "marketing professional" then they SHOULD have an Ethical Code governing their conduct.
And that's why I'm gonna major in it yeaaaah 🎉🎉🎉😎😎😎😎😎🤘🤘🤘🤘🤘🤙🤙🤙🤙🤙🎉🎉🎉🎉🎉🎉
@@djvarangian that's morals. Ethics is personal.
Due to a hormone imbalance, even though I was born male and identify as male, many people (and these algorithms) can misidentify me even though I am not trans. So this is not even just a trans issue, it can affect others, possibly becoming medical malpractice.
hit the gym bro
@@carlosandleon That's not how this works buddy.
@@nnnik3595 hahahahahahahahahahahaha. that is literally how it works. you are all delusional.
"i've tried 4 years now to claim medical malpractice and no one take me seriously :("
Yeah, there are a lot of ways this can misidentify cis people too, depending on what metrics the algorithm uses. Is it basing it on height? Cool, computer says short men and tall women don't exist! Is it basing it on the shape of the chest area? Cool, computer says the woman who got a mastectomy to treat her cancer is a man! Is it basing it on face shape? Cool, computer is a phrenologist now!
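As a toy illustration of the point above, here is a deliberately crude, hypothetical classifier that leans on a single proxy feature; the 170 cm threshold, the function name, and the example people are all invented for illustration, not taken from any real scanner or product:

```python
# Hypothetical single-feature "gender classifier". The threshold is made up;
# the point is that any crude proxy guarantees misclassifications of real people.
def naive_classifier(height_cm: float) -> str:
    return "male" if height_cm > 170 else "female"

people = [
    ("tall woman", 183, "female"),
    ("short man", 165, "male"),
    ("woman of average height", 163, "female"),
]

for description, height, actual in people:
    guess = naive_classifier(height)
    verdict = "ok" if guess == actual else "MISCLASSIFIED"
    print(f"{description}: guessed {guess}, actually {actual} -> {verdict}")
```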
27:00 this is a thought i've had for a while now - even just the idea of using AI to replace actual writers goes to show how much "providing more content to consume" has taken priority over "making good things"
A very early draft of the first Matrix film had Morpheus explain to Neo that the humans enslaved by the machines were being exploited for their processing power. In other words, each mind connected to the Matrix was being used as one of many individual and interconnected servers. Data labeling makes it sound like that nightmare scenario dreamt up by Lily and Lana is already here!
Except it's greedy capitalists doing this. I think I read this in "The Age of Surveillance Capitalism" or "Weapons of Math Destruction" by Cathy O'Neill.
you're awesome for donating $500
@@eleanorsherry4620 Do you mean $5.00? $500 would be difficult for me to swing.
@@TheSinisterPorpoise1 Think they were talking to the original commentor - Diana America Rivero shows up often in livestreams and donates
The thing people tend to ignore about Sci Fi is that it isn't coming true because "oh wow, this person was a prophet or a genius or something!" but because "oh yeah, actually that's a pretty obvious exaggeration or allegory of what is and was already happening, maybe we should do something about that".
Sci Fi has been described as a modern form of philosophical thought experiment - you take an idea and you exaggerate it to the fullest extent you can imagine, and your story then revolves around the discussion, dissection and debate around that idea which becomes more apparent from the exaggeration. Sci Fi about androids becomes discussion about the human body, brain and experience and how we currently view human bodies and experiences - this will usually come with discussion on how we view the _differences_ between human bodies and experiences and you naturally end up with discussions of race, disability, gender, age etc. The ones that end up "predicting" the future are usually the ones that stop to really engage with their topic rather than going for the obvious and unexamined take so there's more room for smouldering actors and cool tech causing cool explosions (which can be fun and isn't always mutually exclusive)
I’m convinced philosophy tube is just something Abigail uses to show off how great she looks on literally every outfit imaginable
it probably helps retention, I'm certainly not an expert but the double encoding of "hot damn that's an outfit" and "hot damn that's an argument" at least makes me remember the show a bit more
Great art AND insightful philosophy, what's not to like? :D
You mean he.. it's a man
@@sadscientist9995 If you want to be a bigot, there are plenty of youtube channels for you. Why troll this one?
@@sadscientist9995 bruh just don’t bother right now let’s just focus on the content itself
The quality of your content just keeps getting better.
Sidenote: I'm a cis woman who is frequently mistaken for a man due to my haircut and baggy clothes. I've been pulled aside for a "groin check" more than once when going through airport security recently and was generally confused as to why. I hadn't even considered that. I wonder if that explains it.
Girl same! Go through the PDM, no peen detected (and also no tiddies I'm nonbinary), machine is like ??tf then they get the male agent to pat me down lol 🤦
It could be the computer or they just felt like groping you, airport security does whatever they want
Aw man, a Philosophytube video about my field of study!! awesome!! I'd love to add something about the correctness of counterfactuals if anyone's interested:)
proving counterfactuals is actually pretty straightforward! let's say the input of the AI is the application & resume and the output is "NO". finding a counterfactual is a whole ordeal, but once you've found one, like "if your resume had just 1 more month of experience in X, the AI would've said yes!", you can simply use application & modified resume as input, and the output should then be "YES".
A far bigger problem with AI counterfactual accuracy is that most AIs are constantly learning and adapting. So if we tell the applicant they need 1 more month of experience, which is true at that time, but they come back one month later with the new experience, the counterfactual might no longer be valid because the AI might have become stricter.
interestingly, there are (flawed) ways to restrict the learning of an AI to work within the bounds of the counterfactuals it has given! So that it can adapt and learn, while promising to not do so in a way that would reject that one case + a month of experience :)
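A minimal sketch of the verification step described above, assuming a hypothetical hiring model wrapped in a predict() function; the feature name months_experience_x, the 24-month cutoff, and the inputs are invented stand-ins for the real black box:

```python
# Hypothetical hiring model: accepts if a (made-up) experience feature reaches 24 months.
def predict(application: dict) -> str:
    return "YES" if application["months_experience_x"] >= 24 else "NO"

original = {"months_experience_x": 23}
assert predict(original) == "NO"  # the applicant was rejected

# Proposed counterfactual explanation: "with 1 more month of experience you'd be accepted."
counterfactual = dict(original, months_experience_x=original["months_experience_x"] + 1)

# Checking the explanation is just a second forward pass with the modified input.
print(predict(counterfactual))  # "YES" -> the counterfactual holds for this model version
```

If the model is later retrained, the same check has to be rerun against the new version, which is exactly the drift problem described above.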
Ooo, interesting! Thank you!
The segment about airport security scanners reminds me of the game "Papers, Please" and how these exact scenarios sometimes come up. You'll get travelers who are gender ambiguous or gender nonconforming, and their presentation will not match the sex marker on their ID. Eventually you get a scanner, which is actually more sophisticated than the scanners we have today and can readily distinguish contraband, even though the game is set in 1982. Not only that, but legal provisions are also put in place to account for these individuals. In other words, the system in this fictional dystopian authoritarian setting is better equipped for these scenarios than the system in modern western countries IRL. That should tell you something.
All the more since Arstotzka is portrayed as a Soviet-inspired Ruritania.
I don't know too much about how things are done irl but I had assumed it was implied that the "scanner" is just taking photos of them undressed, hence the contraband showing up and the optional nudity. I also don't remember there being any special condition for them? I thought that you were penalised for letting them through with a "mismatched" gender marker
@@spameron7575 It doesn't look like they're made to undress, as they're fully clothed immediately before and after the scanner activates. Otherwise the player would just tell them to strip, and there would be no need for the curtain.
@@FrozEnbyWolf150 I had assumed the curtain was so there was a degree of modesty, and that the violation of having those pictures taken was just part of the border. Also the fact that the pictures are handed to you as polaroids had me thinking it. Would it be ionising radiation used in scanners? It's the first other thing that comes to mind
Glory to Arstotzka!
This is a very powerful video.
I'm currently studying graphic design and AI has changed the landscape so much, just in 1 year. I am constantly flabbergasted by how these people who are thoughtful, insightful, smart and educated talk about data as if it just exists in a vacuum. If you begin to talk about human labour and exploitation they immediately shut it down with "but everyone does it" and "we have to work with it or it will replace us". I feel like we're in a hostage situation and everyone is denying it, talking about "new possibilities", as if those aren't built on the backs of people who worked hard and dedicated their lives to their craft.
Well that sucks. Am a designer for 5+ years and I can't believe it's already being talked about as a normal accepted thing when schooling's core purpose should be to teach and reinforce the fundamentals of design first. Y'know, teach the rules well before y'all go breakin em once you graduate. Where's the development of ideas (more than just the first 3 that come to mind) when a student can just ask AI to generate 20 versions of a logo? Will they even be equipped with the knowledge and skills to pick out the good ones ??? Geez... this is depressing to hear tbh.
@@elucified to be fair, we are encouraged to use it more as an idea visualizer than idea generator, but i still feel shitty about it and like I'm betraying the people whose lifeblood went into this. at my bachelors my teacher had real reverence and love for the art of typography and taught it with a passion. here, sketches and illustrations are trashy and kitschy while impersonal ai generated stuff is "visionary" and "inspiring". it sucks so bad :(
I totally agree, but I also believe that the 'genie is out of the bottle' and we have to learn to live and work with AI. It's not going away. If anything, more and more of the software I use has it integrated now. The expert graphic designer knows what good design looks like and how to communicate information; they provide a value add beyond the AI. But with the ubiquity of AI everyone can claim to do it themselves and there will be floods of awful art and design coming from it :-( Smart phone cameras have done a similar thing to the old school skill of photographer.
When I was in my final year of graphic design school, someone used AI in most of their final design, and they were heralded as one of the best on the course. It feels so ironic we had an academic project on efficiency vs ethical design in second year and then a 180 in thought happens right before employment
There is plenty of AI that is helpful to creatives and their processes, i use a lot of it every single day as a motion designer. But this AI "boom" over the last few years has really disheartened me as a graphic designer because I think we're truly seeing how most people view creative works. They take creative professionals for granted CONSTANTLY, and so when we speak out about our works being used to train AI to essentially steal the small work that allows us our livelihood, we're often met with indifference from the majority of people. Because they do not appreciate the human element of our work. They only appreciate the final product. And if they can get a fuckin Temu version of my artwork for a fraction of the price, that's good enough for them.
22:42 i love how excited you get about propping up your coworkers and the people who help make these videos happen. Also, that NOVEL of show notes is a testament to how much you care about your work and the way you hope it impacts the world.
Thank you for giving me a little more hope as an NB (and absolute philosophy nerd) new to the space.
Dayum Abby. In one video you've basically changed my view of AI from "alien and potentially hostile form of intelligence that's exemplifying the worst of capitalism" to "reflexively parasitic crystallization of the worst of capitalism". The part about people doing subemployment to act as little more than neurons in this strange excuse for a brain was really eye-opening.
"Once upon a time, men gave their thinking over to machines in the belief that it would make them free. But it only allowed other men with machines to enslave them." - Frank Herbert, _Dune_
I used to work in one of those data annotation offices and we could only guess what it was we were training, probably a bunch of things at once for different companies. Tasks differed but at one point my colleagues "mined" youtube for footage of gun and knife violence and i almost had a meltdown thinking i'll be on that team next. Thankfully i could be moved elsewhere. Just a whole office of minimum wage workers pushing buttons for the machines. I always think about the people who do these jobs when people talk about AI as if it's teaching itself.
I could never do my job fast enough for the management and even got RSI from it. I don't think my injury was worth it at all.
I'm ukrainian btw
"there's no ethical computation under capitalism" is essentially how i feel about my work as a dev. we're not hired to make ethical and green code, we're hired to make a tool to unemploy someone else, as quickly as possible
I don't think unemploying someone else is exactly the problem being pointed out here. It's more the distribution of wealth itself.
The main issue being taken with really anything unemploying others is that it siphons money back into the select few who have the most power.
I don't think we'd want to go back in time and stop trains from being invented just because the previous people who transported goods will lose their jobs.
Capitalism is the only ethical system. Without private property there is no morality. If you can't own anything then you can't make any ethical decision. There's a reason why humans do better the more markets and private property are embraced.
we should unemploy as many people as possible, so that people can stop working, or work in actual socially valuable jobs instead of artificially employing people even though we have an automated solution.
@@AbogadodeAsmus The point is not to find a way to employ everyone with other menial tasks that existing technology already does. Menial tasks that no one wants to do in the first place. The point is to free people from wage labor. Give them the freedom to choose how they spend their time and labor without forcing them into servitude under the threat of homelessness, destitution and starvation. Make technology work for us and not the other way around.
@@ret2pop This will require implementing a Universal Basic Income _first_, because otherwise those people can't actually stop working or they'll starve, leading to de facto slavery at best. But even if we manage to get UBI through, it will be constantly under attack from the right wing, since it's fundamentally against the idea of a stratified society, which is the core of right-wing ideology.
It's capitalism itself and even more generally the idea of hierarchy - specifically, the division between the rich, who are allowed to profit without actually doing anything useful, and the poor, who have to earn every single penny through hard work and, if such work is unavailable, artificially produced busywork - that's the problem, and quickly becoming a fatal one. I wonder if that's the actual Great Filter: a primitive society becomes hierarchical because that's an efficient way to organize military power, so any tribe which does that forces its neighbours to do so as well, and by the time technology advances to the point where society has to give up hierarchy to survive, it's too deeply entrenched and the whole thing collapses into a few remaining plutocrats ruling over dead but automated ruins, dying off one by one.
hey!!! i’m doing a research project on essentially the entire second portion of this video (ai, surveillance, and trans identity) and i found that simone browne’s book dark matters and toby beauchamps’s book going stealth were really helpful in my understanding of surveillance and how it’s used against minority groups (if anyone wanted to do any further reading) :D
I've seen 'Going Stealth.' It assumes that it's oppressive for institutions to expect someone to self-identify as a person requiring an accommodation (as opposed to the institution anticipating your need). That's a really silly premise to write a book about.
Thank you for the recommendation
Going stealth was a very informative and eye-opening book delivered in an easily digestible format 11/10 would recommend
@@bambooblinds Ummm I think you may have misunderstood the premise, or at the very least are misrepresenting it.
Trans people should not require "accommodation" - we should simply be able to BE, in the same way that cis people are. The idea that we pose a unique challenge to society is simply a result of our society doing a lot of discrimination based on colonialist and misogynistic views of gender.
Given that the book also discusses the use of surveillance to enable differential treatment of people of different races and nationalities - entirely constructed categories - I think that it is misleading to present the book as just some whining about "uwu the state didn't anticipate my needs; I'm oppressed!". It's a deconstruction of how the state uses surveillance as a specific tool to both enable and justify oppression.
@@BambiTrout nope, i'm not misinterpreting anything. i'm just not buying the arguments that being trans is something other than a disability. really, i don't think anyone actually buys into that, although many go along to be polite or avoid making waves. you're right that comparisons are made to race and nationality, but those are bad analogies. those are cases where you're debating efficacy in having profilers designate signs of a security risk (or sometimes partial descriptions of a suspect) and how that should be balanced against unfairly discriminating against innocent people who belong to that category. the appropriate analogy for trans people would be more like someone with mobility issues needing special assistance or someone with an implant that will trigger metal detectors.
it's a case of needing accommodations, like i originally said - and it's the responsibility of the individual to ask for it.
We can make ethical AI. But no marketing or sales department in the world wants one. They want an AI that maximizes profit for one party regardless of the interests of another party.
I would love to see a full episode on the subject of subemployment. I don't know if there's enough material there, but as someone who's recently become too physically disabled to hold even a steady part-time job, it's an issue that's very important to me.
Same! It feels like it's adjacent to gig work and under-employment (e.g. not enough hours to be fully employed and get benefits or overtime, so you have to work multiple jobs).
@@emilyrln Maybe an episode talking about those issues as well? That would certainly provide a lot of material. Still no idea if it'd work but I'd for sure be interested.
There kind of is a lot, but it's not necessarily useful to build an episode around, as a lot of the discussion is economics rather than addressing the human element. There's a reason half of the poverty rights activists I interact with are wonks; they have to be to dissect what's going on in any meaningful manner. And the political discussions rarely address the human element adequately, despite efforts to force pols and the economists they listen to to do so, especially since some of the stuff pols rely on is outright lies.
There's a great study by Harvard Business School called Hidden Workers: Untapped Talent. It goes into how applicant tracking system software causes at least 27 million Americans to be unemployed and subemployed. It disproportionately impacts us disabled people, even though there are almost always accommodations available for the jobs we apply for. 90% of the time it's stupid formatting errors, like having multiple columns, that cause your resume to not parse correctly.
I'd like to see this too.
as someone transmasc, I knew exactly what you meant with the body scanner section. Multiple times I have been scanned and there's the much less humorous equivalent of "BRO you're MISSING SOMETHING"
This was excellent and I feel like so much of this desire to push writers out of a living wage is an extension of the idea that our work isn't "real" or "valuable" because it isn't inherently profitable. That whole "lol at the liberal arts/humanities majors" attitude, and now the chickens are coming home to roost :(
it is real work but a lot of people take it without a plan of what they will do in the future
I think it also confronts our human desire to feel special. That our creativity isn't bestowed by gods. Other avenues of story creation are just as valid as what our minds can create. I think the biggest issue is the destruction of capitalism, and the concept that we have to slave our lives away in order to feel purpose.
I am a data science teacher and in our bootcamp we do have a lesson (unfortunately only 1 in a 3-month course) on data ethics. I really liked your video, it explains the problems with AI pretty well. I will recommend it to my students. As a teacher, I feel responsibility towards those issues.
I remember going through airport security on a class trip. Only two people got pulled aside: me (who, it turns out, had multiple strikes all related to my transness) and a classmate (who is cis but was GNC). We found out about the genetic sex buttons and it was... frustrating.
I've never gone through airport security without being patted down. That's why I laugh when people say the machines are unbiased.
They're not 'genetic sex' buttons--the TSA schmuck at the screen is not blessed with the ability to karyotype people at a glance. It's a measure solely of how the schmuck in the seat with two buttons categorizes the person in front of them.
You could make a claim that the machines could be unbiased. However, the people behind the programming insert biases and the people pushing the button rely on and insert biases.
@kevinbarbard355 The programming is what runs the decision-making in the machines. Sure, machines aren't inherently biased, but what people are talking about here isn't the inherent nature of a machine, it's the decision-making capability. And that capability is given to machines by biased humans. Ergo, the machines are biased because humans are biased.
14:30 I think Papers, Please has a very good example of this. In the game you're playing as a border security guard and have to make decisions on who to let through the border into the country. You also have a body scanner to check for contraband as well as whether the gender on someone's passport matches their body, and there's one specific instance I recall of a character passing through the checkpoint who has the exact set of circumstances described: their passport doesn't match their body, but when you question them about it they simply respond with "yes I'm aware of that". You then have three options: letting them through, which gets you a citation; sending them to the detention facility, which gets you a bonus thanks to an arrangement with one of the guards; or simply turning them away and telling them to correct their documents. It's an interesting ethical dilemma which also shows how easily someone put in that border guard position can make decisions that are inherently judgemental, even if they themselves don't harbor any personal resentment towards the person passing through the checkpoint.
What a great point. And in this it becomes clear that at the end of the day, it's not the individual, little people we need to blame - it's neither humane nor productive to blame them. It's the oppressive systems we have in place, that's what we have to focus our efforts on and change.
@@GoVocaloiderWhat oppression?
@@Hi_times_2 Try to learn to read first, son.
@@ng.tr.s.p.1254 I would really like to know what you mean, I don’t understand?
I mean, this is how fascism works. You do what I say, how I say or get fucked.
As someone who is trained in both political philosophy and tech this is a fascinating segment. Your humor makes it even better. Your stylists are amazing.
I've been working backstage for Google AI Summit events. Thank you so so so much for giving me the language I need to express why I'm so hesitant about all of the advancements they're breathlessly announcing.
Nebula-head here: this is such an incredible video essay. It feels like a return to Season 2 in some ways. The video is a deep exploration of how large-scale computing does not solve issues with art and labor but instead exacerbates them. (I enjoy the Kelly Slaughter bit - a great Season 3 addition overall but notably in this video - as something of an exercise for the viewer to practice dissecting what is said and, more importantly, the assumptions left unsaid.) There are so many excellent resources - not just the texts but also TrashFuture episodes connecting the Nate Bethea Extended Universe - which will find their way into my To-Be-Read list. There has been much to digest since I first saw the video. It is incredibly timely, well-researched, and insightful.
For reference, what would you consider Season 2?
Thx for reminding me that I can already watch this cuz I was waiting for it so I could go to bed when it's done 😂
"There is no ethical computation under capitalism."
This could be my motto going forward.
Another argument on why I should spend my limited money on Nebula. Don't make this so difficult, I will break.. (^^)
) hey you dropped this closing parenthesis
As an AI safety researcher, it was amazing to see my favorite philosophy channel post about the topic! Excellent work detailing the tradeoffs of fairness and accuracy. I'm taking a class covering the subject right now and buried beneath all of the complex math there are some really startling realizations that you explained beautifully. We need to seriously consider whether we WANT to live in a world where every decision is made by some unknowable black box algorithm. I worry that regardless of what people want, the military applications of AI mean that the technology is going to continue to be pushed forward at a breakneck pace. While it's true that too many people are only focused on terminator style risks of the alignment problem and thus ignore many of the already present issues, I do think there is some serious risk as we continue to get closer and closer to human-intelligent systems that we should be aware of. My overall opinion about AI is that we are playing with something that has the potential to fundamentally reshape society with little understanding of how it works. Our current capitalist organization of the economy is one of the worst-case scenarios to be doing this in and we are set on a serious path towards dystopia or extinction if we don't rein these companies in. Maybe someday the post-labor utopia promised by AI visionaries will be possible but rushing to shove AI into every facet of society as quickly as we can is a sure way to make sure we never reach that vision.
Within that, what do you think about the point that AI may be seriously stunted by the climate crisis? Can militaries keep pumping money into it if it can't function due to physical realities? (I know this flattens a very complex question into a single paragraph 😅 but even something as simple as workers behind the scenes being unable to function due to heat stroke could hugely affect it)
There is a serious arms race happening with AI. I think it is hard to avoid. But I do think we need to figure out how to handle living in such a world. I honestly do not think we can stop it. (And the benefits might actually be worth it too, if we are being honest.) But like pretty much all technology, it is a double-edged sword.
We'd better figure this thing out. Develop new policy. Set up new norms. Honestly, we are actually still working out how we should set up the norms for the IT society at large. Most in the developed world did not start coming into contact with computers until the 80s. And even then most did not own one. In the 90s, people started to go on the internet. Social media as we know it did not start to get traction until after the millennium shift. So yes. We have not set up norms for how to do this. How to build a fair and safe IT society. If anything, exploitation is returning to what it was when industrialization was new and we had no norms. I just hope we can adapt in time.
In many ways I do not envy you AI safety researchers, since even if you do make a safe AI, can it really be safe when it is in the wrong hands? Like it is often today.
In a lot of conversations about AI, I notice we over and under estimate the competence of humans and AI (current and future) on different axes.
Humans are also black boxes full of biases. Humans also launder their positions. Our brains do so much post-hoc justification of decisions which do not originate from reason.
One interesting thing about AI is that we can measure and (to some extent) tune its biases. The tuning (training) is harder with humans. (There's a rough sketch of what that measuring can look like just below.)
Having been the victim of human bias in healthcare, I think that in the narrow case of my personal medical journey, a well-tuned AI would likely have done better at diagnosing and treating me for my rarely-diagnosed condition.
I would like to live in a world where AI was part of the diagnostic process. I would not like to live in a world where it was the only thing in the process, as you say.
I share your concerns about AI in general. Neural networks with billions of parameters can find themselves in an absurd number of different states. Those states are not entirely reducible and there’s not enough time in the universe to play through every state and input and validate the output isn’t catastrophic. I don’t know how we can possibly have confidence that they will be well-behaved given that. We definitely don’t know how to be confident yet. Is it theoretically possible? Perhaps I’m not expert enough to understand what’s possible, but I also haven’t found an expert with a clear case for why it might be possible.
Of course humans have the same problem. But humans are also slow and mortal. My concern with AI isn’t that it might be misaligned- humans are misaligned. My concern is that it will be misaligned and vastly more powerful. It can move faster than us, integrate its thinking with tools in ways we will never be able to do, hold more in working memory…
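Picking up the point above about measuring bias, here is a rough sketch of one common measurement, the demographic parity gap; the group labels and 0/1 predictions are invented toy data, not results from any real system:

```python
# Demographic parity gap: difference in the rate of positive outcomes between groups.
# 1 = model said "approve", 0 = model said "deny"; both lists are invented toy data.
def positive_rate(predictions: list[int]) -> float:
    return sum(predictions) / len(predictions)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # approved 75% of the time
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # approved 37.5% of the time

gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity gap: {gap:.2f}")  # a number you can track, report, and try to shrink
```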
@@lkyuvsad For me, the black box issue is not a huge issue. Like you point out, humans are in many ways also a black box. Even when you ask them to tell their reasoning, they might lie, or more often not know what their reasoning really was.
For me it is more about how they can be used. That they are such a powerful tool. And it is often more a case that a human could do the same, it just takes a lot longer for a human to do it.
I think "AI extinction risk is a distraction" *is a distraction*.
People want to see this unprecedented and horrifying problem through the lens of problems they already understand. The problems liberalism is designed to solve.
AGI risk is not like that.
These other problems are big and horrifying and almost intractable. They are not going to kill literally everyone on earth.
Training data IS NOT ALWAYS PRODUCED BY HUMANS. I need to disabuse people of this misconception on a daily basis. Take AlphaGo Zero for example: it was given only the rules of Go and it trained itself with training data it produced itself until it produced strategies superior to the best ones the top human experts have come up with over the past thousand years or so - and it did it in about a day. It beat the best humans - after having only ever played itself.
AGI on the cloud would not be dependent on a capitalist system to kill everyone. It need only hire a few task rabbits - or get access to improperly secured robotics in a lab somewhere, bootstrapping technologies including nanotech rapidly and bypassing physical constraints in ways the cleverest humans could not even have imagined.
Say what you will about those who believe that these systems are so profoundly risky that we need to shut them down in a global moratorium. Go ahead and claim that shutting down training runs for systems larger than ChatGPT-4 is a distraction from the real problems. What you would be missing is that a moratorium *WOULD SOLVE OTHER PROBLEMS TOO*!
Sam Altman is playing the role of someone concerned about AI safety - to get the government on his side. To allow him to keep doing what he's doing. As a strategy for regulatory capture. As Yudkowsky and others point out, if he actually realized the extreme danger involved here he would be taking this problem with vastly more seriousness than he is.
This frame of philosophy tube is saying:
Look at how unaligned Capitalism is. Capitalism is the real problem. AI risk research is just AI "Doomerism" is just the shadow side of the AI utopianism which is just AI hype based on marketing from crypto scam artists.
Except it's fucking not. Because like it or not artificial general intelligence is almost certain to be profoundly powerful, almost certain to come (I suspect in the next few decades) sooner or later, and almost certain to have goals misaligned with any human individual if we don't solve this fucking problem.
We aren't ready for this, we don't know what we're doing, and we are all most likely going to die because of this.
This is terrifying.
I've spent most of my life learning about biotech and synthetic biology and have learned some pretty terrifying things. I've studied global factory farming and the conditions of slaughterhouses. I've studied the red market in China.
This is far, far worse.
There's a powerful temptation: think about other things, distract yourself with other problems that are easier, project onto others - as if they are the ones not prioritizing correctly, only look at things that can be well-described by the cognitive schema you're already used to using.
Let me state this directly: Abigail is wrong. She is not seeing the real danger. She is not properly understanding the scale and severity of the problem and she is dismissing it as hype because it is too hard and alien and painful for her to accept. It's too extreme to be real.
Absolutely everyone dying unless we solve this insanely hard problem? Where the fuck did that come from? Has the cosmos gone mad? How can anything be so unimaginably dangerous? When the fuck did this become the universe I live in: one in which so profoundly little hope can possibly be justified?
Except that's exactly where we are: a universe in which we all just die like weeds in a blast radius unless we get our shit together fucking NOW.
So blame the billionaires, blame the economy, blame capitalism, blame technology, blame marketing, blame the politicians, blame the fucking safety researchers for not blaming the right people and things: it won't make a goddamn bit of difference unless we Solve The Fucking Problem.
Ok?
You want to wear extremely expensive bondage gear and complain about hype while you do it? Fantastic! But DO SOMETHING.
I fear for the lives of everyone I have ever known. I fear the universe that comes. I know how bizarrely cruel and indifferent it can be. How fucking impossible its challenges can be. How utterly unforgiving and steep and hard and cold its costs.
We ALL need to work on this now, now, now - for the sake of unborn persons and living persons and the possibility of the death of humanity for all of coming time. NOW!
I can't recall if I've ever remarked on it previously, but I quite adore your closed caption descriptions of your musical inclusions. Given the gravitas of the subjects you tend to tackle, there's a decent chance that if I've commented at all it was better interaction than a passing compliment, so I'm dropping this one IN ADVANCE of all the brilliant and interesting things you're about to say.
bless who does the captions they r so expressive and engaging and fun and descriptive and i like always need subtitles to understand things and they're always just so 'rock music in backround, wordforwordwhatwassaidwithnocontextandsilencenotfilledin' bless u
That's me haha!
@PhilosophyTube hehe what a sweet thing to put so much time and thought into ! looking forward to more onomatopoeia and typographical expression ! know it is rad as hell and ur appreciated :)
I have tsa precheck, so I get directed through the metal detectors nowadays, but when I was traveling on a choir trip as a fourteen-year-old they put me through the body scanner and then pulled me aside to do a light patdown of my chest. I'm sure that my chest binder and the shape of my body underneath it set off something there; the agent who did it was polite but I was standing there afraid that someone from my group would notice and I'd have to explain myself--not a single person on that trip knew I was trans (I hadn't come out to them).
And, y'know, I feel like the Pull Teenagers Aside For Patdowns In Sensitive Areas machine is maybe, perhaps, something that we as a society should reconsider there.
The only solution then is to just pat down absolutely everyone again, these scans exist for a reason and we don't need another avenue for another war to start, there's already enough of those going on.
my sister was recently fired from her job at walmart due to a decision made by an AI. she was the sole breadwinner of her family, her husband is disabled and they've both had to pay for a lot of medical issues both relating to and not relating to that. they live in a very small town basically in the middle of nowhere because its what they can afford. im glad my parents have been able to help them out a lot because i dont think they could get by otherwise, and of course i worry about their kids too sometimes
She cant find another job?
@@jokehu7115"they live in a very small town basically in the middle of nowhere because its what they can afford."
I think it may be that they are in the middle of nowhere and can't afford much
@@twinnem7075 move to another town and research what earns the most and look for low rent seems fixable
@@twinnem7075 onlyfans?
Let's be real here: If it was Walmart, she was getting fired at some point anyway
I love your shout-out to your crew! And I love that you respected their request to not appear on screen. As someone who works/worked behind the scenes, I have been included in promotional content for shows without previous notification or without being asked for my consent and it is really uncomfortable. Also - the newspaper outfit was fantastic!
absolutely noticed that too! it's great to be given some perspective on the sheer mass of work that goes into every aspect of production, but having it be done with the privacy of those who want it maintained makes it all the more enjoyable
I love how YouTube is becoming an ad for Nebula now. I never thought I would like a long-form ad so much
Thank you for this amazing video! I've noticed AI researchers tend to downplay philosophy's perspective and contributions to the field, but it's more important now than ever when dealing with ethics.
As both a Computer Science major and an artist, this episode has been such a blessing. I've been struggling to put all my thoughts on this video into words, but it really opened my eyes to issues that I noticed were happening but couldn't quite pinpoint. Still, making a little tangent I guess, I wanted to share a perspective that I think might get overlooked when talking about this subject. The thing is that writing code, at least for coders, is seen as a sort of art. I mean, I've even had many professors refer to it as a combination of science and art, and I really don't know how to explain why, but I get it. There is a lot of artistry in what you do, from the things you decide to build, the technologies you decide to use, and the way you approach solving problems, all the way to how you write the actual code; there is a lot of expression and will to create. And it really saddens me that AI, which can be a tool used by artists to do interesting things with their art (last year, for example, I had the opportunity to see the work of a painter who built his own AI model and trained it on his art so that he could do crazy interesting stuff with it), is being appropriated by big companies to replace the work of artists instead of being used to create actual value in the world, and that it is fed mindlessly with people's data without their consent. As engineers, we should know better at this point. We often build things without malicious intent, but I think we should ask ourselves more about how the technology we create can be used with malicious intent. I doubt that the scientists who made advances in generative images using AI were plotting how to build a machine that could create non-consensual porn; they were rather fascinated by the fact that they could create images with the power of magic, because that is often what working in science feels like.
Yes!! Thank you!
Yes! this is a phenomenon known as function creep: "In an AI context, the deployment of AI beyond its originally specified, explicit and legitimate purposes can lead to function creep as well as exacerbate security incidents. For example, AI systems intended for specific crime prevention goals might gradually be repurposed for unwarranted surveillance activities not originally considered." (from a 2020 paper by Stefano Fantin and Plixavra Vogiatzoglou). You get stuff like anxiety-moderating systems being repurposed into shit lie detectors, etc.
i'm in the same place. sometimes i can't help but feel like humanity can't be trusted with technology AT ALL, and that the only "ethical" thing to do at this point is just refuse to participate in society and run screaming into the woods 😬
@@imlxh7126 Reject modernity, embrace screaming in the woods
It's that old meme.
"We built a machine that tightens bolts twice as well!"
"Cool, do our hours get halved? Or do we make products that are twice as good?"
"Half of you are fired to double our profit margins. The other half, get back to work."
This was fantastic! Great job to the whole Philosophy Tube team!
I'm a molecular scientist specializing in human cytogenetics. The big thing I do is a test called FISH which is useful for detection of neoplasm at the chromosomal level.
Cancer. I look for cancer.
I do the analysis manually. I run the assay (which is a 2 day process) and then look at each slide myself using a fluorescence microscope. Each slide can take 30-60 minutes to analyze. I have to look at hundreds of nuclei and there's a lot of variation to account for when assessing if something is a problem or not.
A lot of large-scale labs don't do FISH manually anymore; they use software. The slide is scanned by a machine, which spits out the analysis. Some are pretty good, others are not. I keep logs for all of the results, including the ones that are sent out to reference labs that use software (the sendouts are done for insurance reasons; their insurance will only pay for the computer FISH).
My accuracy rate has been rated many times as incredibly high. Sometimes a patient's doctor will foot the bill and just ask me to do it because they know I am much less likely to send them a piece of paper that says, "insufficient cellularity for analysis."
Is the software faster and cheaper? Absolutely. But it's not nearly as accurate. Large-scale computing is just not an adequate replacement for certain tasks. Sometimes you need a person who has the education and training to sit at a microscope in a dark room all day looking for potential abnormalities in thousands of nuclei.
The ethics of how humans screen resumes should also be considered. I used to have a female friend who worked in HR at one of the biggest banks (BBVA), and she was ordered not to even take interviews seriously if the applicant was overweight, was pregnant, or even hinted at anything related to a genetic condition, and this happens in HR more than you would think.
I'm a software engineer and I really love programming, but it makes me feel like I'm on the wrong side of history. Especially because software engineers don't unionize, at least not in my country. We are exploited, just like everybody else, but since it pays better than a lot of other jobs, programmers don't seem to feel the need to organize themselves. Well, maybe large-scale computing is the push we need.
If there's some Chilean developers out there wanting to organize something, let me know 😂
Big up Chile.
It is slowly changing
I mostly agree, but I think we need to get away from the "right and wrong side of history" thinking because it implies linear historical progression which isn't true.
Mexican-American Electrical Engineering student here! I don't know about the "right side of history", but I don't want to use my skills in ways that will hurt people.
That reminds me of Project Cybersyn
I was honestly trying to pay attention during part 3, but your outfit just had me thinking about my own body.
I'm also a trans gal, and your level of beauty felt so much more attainable when I saw that we have a very similar silhouette (to say it as non-grossly as I can think to)
So, thank you for that. I know there's probably some other reason behind the costuming decision for that scene, but it did make me honestly feel like maybe, one day, I could be as pretty as you are.
My wife was similar early in her HRT days, and it's rather shocking what 3 years of HRT can do for you. Assuming you go down the same route, a lot of it will just take time. My wife looks very different from back then.
Guessing Abigail originally had a section on this, but it was cut for time and/or flow reasons: companies de-risk data labelling by comparing each labeller's entries to those given by other labellers. This alone means they have an incentive not to label correctly, but to label according to what they believe others would pick. Add to that the need to complete many thousands of labelling exercises a day in order to earn anything, and data labelling ends up far less useful than it should be. Data labellers no longer label according to quote-unquote reality, but according to "what would other data labellers pick within less than 5 seconds".
Yep. I tried working doing the data-labelling thing for a little bit, with the sincere desire of wanting to improve the algorithms. Instead I was constantly pinged for being "incorrect" in my labelling and denied pay as a result. Whenever I tried to appeal the decision I was ignored or hit with the brick wall of "aggregate data indicates" I had made a mistake. I have a grad degree in physics.
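To make the mechanism in these two comments concrete, here is a minimal sketch in Python of how an aggregate-agreement check might work; the function names, the simple majority-vote rule, and the example labels are illustrative assumptions, not any platform's actual code. The point it shows: workers are scored against the crowd's consensus rather than against reality, so a careful labeller who deviates from the majority still gets flagged as "incorrect".

# Minimal sketch (hypothetical, not any platform's real code) of an
# aggregate-agreement check: each item is labelled by several workers,
# the majority label is treated as "ground truth", and workers who
# disagree with it too often are flagged or denied pay.
from collections import Counter

def majority_label(labels):
    # labels: list of label strings from different workers for one item
    return Counter(labels).most_common(1)[0][0]

def score_workers(item_labels):
    # item_labels: dict item_id -> dict worker_id -> label
    agreement = {}
    for item, by_worker in item_labels.items():
        consensus = majority_label(list(by_worker.values()))
        for worker, label in by_worker.items():
            hits, total = agreement.get(worker, (0, 0))
            agreement[worker] = (hits + (label == consensus), total + 1)
    # a worker counts as "incorrect" whenever they deviate from consensus,
    # even if the consensus itself is wrong
    return {w: hits / total for w, (hits, total) in agreement.items()}

# Example: worker C labels carefully but against the crowd, so their
# "accuracy" score (and pay) suffers regardless of what is actually true.
labels = {
    "img_1": {"A": "cat", "B": "cat", "C": "lynx"},
    "img_2": {"A": "dog", "B": "dog", "C": "dog"},
}
print(score_workers(labels))   # {'A': 1.0, 'B': 1.0, 'C': 0.5}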
I keep going back to this video whenever I approach AI with friends. Even when the topics of application are far from the video's examples (AI in video identification, AI in war...), the framing is really something that's often missing from debates, and it easily breaks down what is important to take into account.
OMG thank you for talking about that stupid airport machine. It's great, given the sheer number of times I've had to explain it to cis people as a blatant display of systemic discrimination. And the massive number of embarrassing moments of people turning around to see why I got stopped, just to see a massive red blob on my crotch area on the screen, is the worst!!!
It can be hard even for a liberal-minded male to conceive of. I've met so many females who identify as female who have as much breast tissue as males, and males who identify as male who say they know they have the Seinfeld "moobs" (male boobs)... I hope Gen Z is bigger than this. We millennials missed the mark.
So, I'm trans and autistic. I'm also left-handed. Growing up, I was an outcast everywhere I went. I was either so uncomfortable in my body because I had to present as male, or having issues with neurotypical people, that I could never fit in. I took this to heart early on, noticing everyone else in kindergarten was writing with their right hand. I thought two things: am I writing wrong? Or am I just too weird to write like everyone else?
I only recovered this memory recently, after my left wrist started hurting from how much I rely on it for most of my daily life. Ostracization pushes a person far, especially from a young age. If I weren't so afraid of being isolated any more than I already was, I might have had a better childhood.
It's always humbling, in a sad way, to learn about axes of oppression that I as a cis person can pass through without a second thought, like the airport scanner. To recognize that there are ways other people experience the world that I have the privilege to not even have had to think about before, let alone contend with. Sometimes I've got to remind myself that even though I'm a leftie, and I've worked on myself a lot since I was a shitty teenager, there's always more to learn.
It doesn't just do it to trans people, either. Try being fat in an airport and you'll get pulled out for the patdown every single time.
Yeah, airport security is one of the shiniest examples of this.
If you're Muslim you're "randomly stopped"; if you don't fall into the "right body type" you're searched.
It's security theater, so it focuses on things the establishment thinks are scary, without, y'know, any evidence that those things actually work or any tangible results.
To this day the TSA has never claimed to have stopped a terrorist attack. There is no direct link we can draw between any of these practices and actual safety.
Abigail didn't mention that it gets worse than groping, you may be ordered to disrobe to prove it's just a penis and not a... honestly idk what they think it might be, a really weird bomb I guess?
Your disclosures in the end-credit captions are what got me to sign up for Nebula finally (along with my sister off-handedly mentioning that they also have a Nebula subscription now, thanks sis). Looking forward to getting in on more PhilosophyTube and related content; thank you Abigail. Glad you and your team are doing alright amidst the everything.
i had an exam on this topic today. not one of the "deeper" points was discussed or even suggested as further reading. the blind spots of the course are insane; definitely going to look into this for my term paper
I like to think of most technology as created by people throwing darts at a very precise target with no actual awareness of what is behind it.
Can't say I have had the same experience.
Oh thank god I'm not the only one who thinks about the lithium. The fact that we're wasting it with planned obsolescence too. People are dying in mines for an iPhone that will be thrown out in a year. I love the internet and tech, and I cannot deny how important and useful it is, particularly in how it spreads information (I grew up with the internet, and the idea that I might have questions I will never be able to google an answer for is mind-boggling to me), but it's absolutely drenched in blood.
While I agree lithium is very dirty to produce, and it is squandered on unmaintainable, unrepairable, devices; I do not understand what that has to do with AI or the internet, neither of which are built or powered by lithium batteries.
@@xericicity AI and the internet both require physical devices in order to exist at all. BOOM, there's your connection. I'll bill you later.
@@almishti And what physical devices are those exactly? Which are required to create, power, and operate AI and the internet. Lithium is mostly used for batteries when it comes to tech, and mass parallel processing farms, switching and routing units, server parks, they are all plugged in and do not use lithium batteries. No offence, but I am not willing to pay for technical services from people that think mobile phones and laptops are what powers AI or the internet.
There are other commodities too. Tantalum metal, used in the capacitors that make a phone much more compact, is a conflict mineral. The working conditions in many parts of the supply chain are very poor.
@@xericicity 39:00 is where lithium is relevant to this video; Abi talks about it there.
Thank you for making this video. So many people I talk to just think about Science Fiction AI when the topic of ethical AI is brought up, it pains me to see that many of them don't realize it's a much more complicated subject. You did a great job covering many of the ethical issues in regards to AI development.
Me 4 years ago: "Philosophy Tube will never have a more unsettling and terrifying character than The Arsonist."
Me today: "Abby really needs to add a jump-scare warning when Kelly Slaughter is going to show up."
Something that a friend of mine said which continues to stick with me: Corporations are like an AGI that uses people as its hardware and profit as its goal function.
The things that an AGI would kill us with are the things that corporations are _already_ killing us with. The paperclipper is coming from inside the house, and if you could file those TPS reports for it, that'd be greeaaaaat.
I like the Office Space reference. One of the best '99 movies.
I'm an art student at a primarily engineering/stem college where a lot of people are pursuing jobs in the AI space. Conversations about this topic are draining but inevitable. Having this video helps me put my feelings into words so I really appreciate it
I'm an artist but I am currently training in STEM, and oh boi does the conversation annoy me. Like, it's so hard to get people to understand that it's not a logical issue, it's an ethical and emotional issue.
@@StarPichu12 people in STEM fields are so coddled and overpraised by society that they have no idea how much they don't understand. It's sort of sad, but they genuinely think they're the heroes but really they're the ones that folded first.
I've had several conversations about how so many problems could be solved if there were some kind of compulsory "intro to sociology" component to education in STEM. Like, if tech bros actually did a 101 class in university-level sociology, philosophy, and history, maybe they wouldn't be constantly falling into intellectual traps that the humanities have long since addressed, all while ardently believing they're geniuses beyond reproach simply because their particular skillset is overvalued by a capitalist economy.
STEM people sometimes just love to feel superior and be dumb in ways that preclude them from seeing how dumb they are, about things someone else's literal entire academic career has already debunked (also it's often willful ignorance bc sniff ideology, but hey, it couldn't hurt)
Genuinely might be some of your best work yet! My favorite is the last section, about how the physical components of Big Tech are actually mined and transformed into products in the imperial core-- it really cuts to the heart of all this.
Maybe it's just that I'm the child of an engineer, or that I'm into crafts and stuff, but I'm starting to think that reconnecting with the physical world is one of the most effective ways of like... idk, leftist awakening? And also connecting/communicating with other people who aren't plugged in to the Online Discourse(TM)? It really strips the AI marketing of a lot of its power.
Like, it's easy enough to fall for techbro babble when they talk about "the future of computing!" and "new things being invented every day!" but... when you confront someone with the reality of us as physical creatures using the materials of the earth as tools... and the reality of all the WORK that goes into something as "simple" as a digital image... it just kind of lays bare all the violent systems that are being obscured with a phrase like "data-mining."
Because it sounds kinda videogamey, doesn't it? We're sending our data-miners to the data-cave and they'll come back out with some data-ore that we can smelt in our little ovens to make a data-bar. It's all just little conversions in code!
But no-- what the system does is send some people to drill into the earth, disrupting ecosystems and brutalizing the people closest to the drills and the planet around it; then more of the world is burned and paved and polluted to move that Stuff to where it can become Art; and then yet more is burned to fuel the creative workers who use their own time and labor and even their own flesh to make things that are then STOLEN en masse to make... "data." And allllll of this is marketed as more "efficient" ways of "generating" "content."
It's just exploitation, violence, and colonization all the way down.
Anyway, sorry for the ramble! This one is such a spicy meatball of philosophy and I can tell I'll be chewing on it for a while. Thank you to Abi and crew for all the work that went into it! ♥
I massively disagree; it's largely the case everywhere that more urban areas foster more collective values due to shared resources and space. A more urban America would be a more liberal one, because the counterpoint is that conservatism is most popular in rural areas and declines with population density. A more nature-y America produces more hunter-gatherer mentalities.
A more urban America is a more Liberal one, but "liberal" is not the same thing as being socialist or progressive. Liberalism is just a softer version of republican ideology that has a kinder image to market it.
@@480darkshadow Sorry, I guess I got a little flowery in my speech there and wasn't clear-- I'm not trying to say that nature creates leftists or anything! And I also don't think urban living is opposed to environmentalism. (Or right wing ideologies, sadly.)
What I meant was more general, actually. I think when we're talking to people who fall for AI marketing and things like it, they're very often people who have been completely alienated from labor-- both their own and, like, all the work that makes the world around them. And I think that learning about these things can be a valuable step toward class consciousness.
It honestly stuns me how you just continue to kill it video after video. I've been watching the channel for a long time, and it has been such a pleasure to watch you flourish creatively and grow by leaps and bounds in your art year in and year out. You're an absolute treasure, Abby. Thanks for all the work you do.
from a labourer in AI, from India - wonderful presentation!
--> It’s the dress code that caught my eye, totally unusual.
--> The meaning behind them is well presented in the script, esp. on data scraping & flattening.
--> The climax is what I liked the most, with the proceedings of the US nomads!
--> As a student keen on storytelling structure, it's a must watch. I enjoyed the mixing,
----> to catch the eye: studio set, dress material
----> to catch the ears: stories interwoven
----> to study: many references flashing every minute!
----> to ponder: deep content on societal power imbalance!
--------------------
I am scared esp. with the Indian caste structure, holding 1000s of years of hierarchical power imbalance, backed and blessed by stories of karma & the fanciful gods of the so-called epics!
World forums have yet to call a spade♠️ a spade♠️ - because of its efficient camouflaging with softer outer shell layers of yoga, vegetarianism, mysticism, spiritualism, tolerance, non-violence and Gandhi!
An already torn, unthinking society is slowly turning the unemployed and senior citizens into zombies, and the sub-employed into propaganda machines!
With the wine of AI mixed in, it's gonna be devastating!
--------------------
Thank you for making this video! ❤
I just want to say this is a brilliant video! It covers the topic of AI from a different perspective than I’ve seen before and it was really informative. Thank you very much to Abby and the rest of the team for making it. :)
Really glad to see this topic broached by you. Wayyy too many people, genuinely or otherwise, have the wrong concerns about ai
Just want to say, I absolutely loved the work you put into the subtitles on this video, I have an information processing disorder, so it can sometimes be difficult for me to parse voices, those subtitles helped me understand this video a lot better. Also the detailed and dramatic description of the music was also very funny.
You make some of the most important content on the entirety of the Internet. I truly hope you find time to continue with Philosophy Tube alongside your amazing, successful growing acting career! You're awesome.
I am working in tech and also studying at university, in the faculty of humanities. I am still searching for a way to connect those two, and it seems that this could be the way. Next week we have our first student meeting about AI, and this gave me a lot of information and topics that I could bring there. Thank you 😊
Excellent production. Just one tiny thing: in the middle, there was talk about environmental impact and it seemed to be focused highly on lithium use. That's really more of a cellphone/laptop thing. Data centres tend to use lead-acid batteries, as the weight penalty is next to meaningless in stationary installations. And there's significant recycling of lead-acid batteries.
They'll also probably start recycling lithium if and when it gets cheaper to do that than getting it out of the ground. One big problem is that lithium batteries also need cadmium for the connectors (I don't know why). At New Scientist Live a few weeks ago was a demonstration pointing out that sodium is very nearly as energy dense as the most commonly used lithium batteries, and can use aluminium as a connector rather than cadmium, so that may turn out to be a moot point. AI does use a lot of power; that doesn't necessarily mean CO2, and as time goes by the proportion that does use CO2 will almost certainly drop. AI may cause any number of existential crises but I doubt environmental catastrophe will be one of them.
The environmental impact of data centers is mostly due to the mining of the raw minerals, e.g., silicon, aluminium, etc., the manufacturing of the chips and the rest of the hardware, and its subsequent exploitation for years in various data centers, which might or might not use clean energy (usually not). Talking about lithium when it comes to data centers is meaningless; lithium is nowadays used largely for energy storage and electric vehicles/transport. Might as well talk about the danger of geese getting into the servers.
Thank you for this video. I consider myself primarily a scholarly artist, whose main aspiration is to critique computation from a media theory and computer science lens, but I work as a software engineer at one of the big tech companies to pay my bills. As someone who holds similar views to yours, working at a tech company is incredibly painful. I originally thought that tech company employees are just there for the pay, but after I entered the company I found out that most of them really believe the whole shtick, and critical examination of technology is non-existent within the Silicon Valley tech companies. It is suffocating. Working feels like being an undercover cop having to sell illegal drugs and aid prostitution, which goes 180 degrees against my own philosophy and beliefs. I even sank into bad depression and had to start taking antidepressants just to function day to day with my coworkers. It is such a relief to see a big-name YouTuber advocating anything beyond the old talk of "alignment! alignment!" AI doomism. It is so good to see nuanced and more thorough criticism of AI. Thank you, and I hope more and more people start to recognize the essence of current big-data-powered AI as exploitation of labour and violation of the concept of "private property" -- and finally -- recognize that "there is no ethical AI under capitalism". I don't even hope that there will be real change soon; I just hope that people can recognize these things.
Watching this episode reminds me of one of the conclusions from the Dune universe: it doesn't matter whether it's large-scale computing, Turing-strong AI, or a spice-enhanced socioeconomic structure: in either case, humanity may have spread throughout the cosmos, but the inequality and the oppression always remained. Trying to address that requires something more fundamental than throwing new tech at the problem and hoping it works well enough you don't have to worry more than you already do. One also hopes that you don't need to go to unexplored depths of being unethical to establish a more ethical system in the end.
I *love* that you're treating this topic, Abs! And thank you for the awesome behind the scenes photos and teasers... keep up the outstanding work. ☺️👍💚
I love how often I see you commenting on videos and creators I enjoy.
@@prestonbruchmiller497 👋😁 and I love how often I see my own audience appreciating other creators whom I love and showing up / supporting them in their comments, too! ☺️👍💚
oh hey, i was just watching your backpack video from a few days ago lol, what are the odds
@@computer_trash It might be possible to say the odds are a long shot, but then when we consider how many of us appreciate the same people and hold the same values maybe it makes sense that we like a lot of the same content... Either way, it's so wonderful to hear that you enjoyed a little something I put out there. ☺️👍
oh shit, hi!! Love your videos
I’m a PhD researcher in AI right now - loved the “data flattening” description; making me think about what data I’m using to train my models…
Fantastic content as always!
can I ask how you got into this field? Currently hoping to do the same myself, I am finishing an undergrad in philosophy and moving on to a masters in computer science. Any advice for the future?
Ooh, I did my undergrad in a CS/PHIL joint! So very similar! I think relationships with professors matter the most. The hardest thing about being a grad student (imo) is generally getting funded in a way that lets you research what you're interested in. Try to find professors that you like and that like you at other universities and then apply there. Doesn't hurt to reach out early to talk with them about the program and to have a little name recognition :) @@imogengreig2860
While I do feel that a lot of the allegations pointed here at the AI industry could just as easily be pointed, in part, at a bunch of other industries (e.g. sub-employment in industries like tech and retail), I am glad to see that the interrogations and ethical questions previously levelled at those industries are finally being directed at the AI industry! Amazing work here as always!
I've been working on a deep-dive project on how various creatives can use certain tools to help with their workflow... but as an artist it's been disheartening to see artists like myself put out of work. Last holiday season, when the AI portraits became very popular, I lost all my income as a portrait artist. It's been a process.
I don't want to downplay your losses. Being out of work always sucks. But, and there's always a but, isn't there? AI is a tool at the end of the day, and people will use that tool however they can to benefit themselves. Commissioning artwork is EXPENSIVE, as I'm sure you're well aware, and it only makes sense that a large number of people will opt for AI-generated as opposed to human-made. AI is here, it's not going away, and it's up to the creatives to learn how to roll with those punches. Like they did when painters no longer had to mix their own pigments. Or with digital artwork. Or with Photoshop. Or Auto-Tune.
@@ookami38 I really don't think many people who want specific art are gonna go to an AI. If they're trying to make a series, the AI would lack consistency; the only people who use AI are the people with absolutely zero taste (also, as a side note, I hate how much people call algorithms "AI", it's so deliberately deceptive). Plus, AI being a tool would be fine if it wasn't proactively robbing people, and no, this isn't an "every artist takes inspiration" thing, this is a "this artist traced 8 different images and then put the original artists out of jobs" thing. It is quite literally stealing from people and putting them out of jobs. Eventually the issue is gonna become: if the majority of art is AI art, the AI is gonna start training off other AIs, amplifying their flaws tenfold.
I know it's a struggle; that's why I don't generate AI art at all. Y'all worked your asses off to get where you are, skill-wise and stuff.
@@eventhorizon2264 it really sucked because the holidays are the biggest time for commissions, and my mother had just died, so I was using that money to get her ashes. And even though they sign a contract, PayPal just locks you out of your account and gives their money back, and if you don't have the money for a lawyer, as a seller you're screwed. Most of them don't care if the AI quality is shit when they get 200 for 7 dollars or whatever. Now the AI can even send you a print on canvas. Another issue is that our art was stolen (data flattening), so the machine copies our styles. It even sometimes spits out artists' signatures on the generated images.
@@ookami38 The biggest issues with AI from an artist's standpoint atm (and I have worked as an artist in the industry since 2009) are specifically that the AIs have been trained without our consent on OUR work. It is theft in its current form. I am not completely against AI as an artist. I am, however, against it in its current form, which IS exploiting artists. Both musicians and writers have had a lot more luck being heard on this front, while visual artists continually get stepped all over. If I were to look at this from a pro-AI standpoint, AI work is not currently able to be copyrighted, so it is very volatile for companies to use at the moment. If the companies who are training on our work offered an opt-in option, I'd be far more okay with the inevitability of future technology. The issue is we like to step on others to get ahead.
Art has always been a luxury item. So yes, of course it is expensive, because of the years and time it takes an artist both to build skill and then translate that into an image for you. AI is simply taking our work and our years of skill-building without any consent on our part as artists and just merging images to make what is essentially a Frankenstein of our work. If AI were not available for commercial use, I'd be much happier allowing my work to be part of the dataset, just there as an inspirational tool or a personal-use-only tool.
Part of the ethical issue that goes beyond us being put into the dataset without our consent is the use, by many proponents of AI, of individual artists' names. They use our names to try to force the AI to generate work in our style, to the extent of occasionally just pulling up a very slightly changed version of one of our existing works. The problem is that, by using our names, our businesses and personal work are further harmed and eroded, and the shitty AI companies are not handling this situation with the import it deserves. In some ways, by using our names, it quite literally places a fog over our work and within Google searches, making it hard for someone else looking for our work to find us and what is really ours. It is extremely exploitative. I think humans as a collective need to push for more ethical restrictions on these technologies so as to cause less harm to those they are currently just straight up taking from. I am a realist. I know automation will come for most industries. It sucks to lose work. What sucks more, for me anyway, is the idea of having our work taken from us, and our names and who we are and what we create, for all intents and purposes, stolen and marred beyond recognition.
Excellent video - as someone engrossed in tech, it’s so easy to see these fantastic machines as abstract objects. It was eye-opening to remember that I am holding a refined piece of earth that was mined, shipped, and processed by human labor.
Also how are you not freezing
Always keep in mind, the silicon only approximates a computer. A single-event upset could occur at any time.
Because she's hot.
You just have to turn up the heat a tad, possibly sitting on a heated pad would help too.
The Penis Detection Machine bit hit me in a way I wasn't ready for. I always called it the Gropetron9000 since I (without fail) would be groped on my stomach, upper thigh, and penis... every. single. time. I have never felt more seen
This might be TMI: I think for me it's because I got weight loss surgery and it always pings on excess skin and things that hang.
Just a random commenter here, but thank you for sharing that-- this makes me think that there's surely more people harmed by the Gropetron9000 (great name btw!) than we even realize. Different body shapes that it wasn't programmed for, medical devices, all sorts of things... it's kind of staggering. I've even run afoul of metal detectors as a cis woman with a non-standard bra size, I never realized how horrible those security checkpoints could be for other people.
Like, I knew it was bad. But somehow I failed to imagine HOW bad. I'm so sorry you've been put through that.
I don't know how, but your videos keep getting better and better. I didn't know that would be possible. So glad for you and your team's work. It's just a blessing to have you as a creator. 🥰
Every episode is so thought provoking. Thank you and your team for all the hard work you do.
I'm so happy philosophy tube exists. Genuinely a gem of a channel
As a computer scientist who is a bit tired of hearing all the lay understandings around AI, I was not looking forward to this video, actually. I should have known that this work would not represent a lay understanding, but rather a deep and expertly researched take on the subject. Excellent work as usual! I don't understand why people are drawing non-consensual images of you when you give them all they need in these spicy outfits :P
Excellent outfits as always!
You might run into issues if you sexualize her in these videos. Honestly I found her argument rather lacking on the topic. I assume that comes from her fury on the matter.
But yes, generally speaking most of her arguments trend in the right direction; however, it's all stuff I found out pretty quickly, so I didn't really hear anything new.
I disagree with you on every point you've made here. I also didn't hear much new, but the arguments are valid and well thought through.
Arguments do not need to be new to be important, or valuable.
@cesarandrade1987 I think they do tho in this case. This feels like a summary of other YT vids
Look at the bloody citations and references in the vid bro! That's prolly more books and publications than you ever read mate!
But seriously, some people undermine even these basic points, so it's very useful in debates to catch your opponents off guard just by how much data and proof there is against their position.
Ms Thorn, and I mean this with no sarcasm: thank you for giving this video a conclusion that makes it clear it IS a conclusion, with a definitive closing statement that is even emphasized with visual and audio changes before the Nebula ad. It helps my brain process the information so much better ❤
I get it. There are so many video essays that just... end.