Another thought I had recently: a good way to describe AI is as a web search engine. You type in a prompt and you get a number of results. They are presented in this sophisticated "diffused" form, but it's still just a number of search results. You can have these results, you can learn from them, but you have no business getting rights to them.
Funny thing about this: normal search engine results are becoming largely AI images too (it's so bad when you try to look for specific images on Google).
Some people point out that it's impossible to know how much a given training image was involved in a particular prompt. I think it comes down to the fact that we don't have to use a technology that is this underdeveloped, until licensing/royalties are possible. They can keep working on it and let people use it for free and only for noncommercial use. Again, similar to web search. The question of enforcing it is a big one, but it's the same as with classical image theft/plagiarism.
@@BoroCG It's like saying not to use the copy-paste function. One day you will have to admit it won't work in practice; that's just how it goes.
@@BoroCG Good point! It makes a lot of sense, especially now that I'm learning to program.
Well, at the moment my personal experience with AI is that it makes my job less creative and more about cleaning up unusable images to make them somewhat usable. It's a hype and people are eager to use it, but it can't do specific things. I'm working at a company that creates slot games, and the pipeline starts with the designers creating the images based on the game design; then these images are separated and given to my team to animate and implement in the game. Basically, we are creating the game. I can tell you it's HELL when it's done with AI. Some of the designers love to use it because it's quick. But it's not created with thought for its application, and what happens is that we at the end of the pipeline are the ones who have to redo it to make it usable, losing the precious little time we have to actually animate and be creative. So AI is doing what I was afraid of: taking away the creativity and enjoyment of creative work.
Sounds like whining to me, to be honest. In the end AI is just a tool; the design person can also create some usable iterations instead of garbage, pick something that is more usable, or create dogshit without AI with the same outcome.
My biggest gripe with AI ‘art’ is how it always floods whatever platform it’s on. Plus, it all looks the same in style. It has no heart or soul; it's just created as fast as possible and vomited out in the hope of making money.
There are accounts with 4,000 posts in a month; how is a real artist supposed to compete with that spam?
@@irek1394 Dead internet theory is real, honestly. Once the internet becomes bloated with AI garbage, more and more people will get tired of the samey generated content and stop using the internet.
Yeah, it gets kinda annoying if you're someone like me who is trying to sell their art on sites like Redbubble or Displate to make a few bucks on the side. Something like 80% of the art on these sites seems to be AI generated, and another 10% I'm pretty sure is stolen or edits of other people's work.
@@advladart Or they'll seek out the art that isn't the same overabundant generated slop. If people would quit something entirely (and I mean en masse and generalized, not just a few individuals), then TV would've been dropped ages ago, news stations would've ceased to exist after the war scares were gone, no one would stay in a job for more than a few months, people would've stopped watching UA-cam and other such algorithm-driven platforms, etc.
People are more disposed to just take what comes their way than you might think, so while I do think that people will eventually get fed up with all of the AI slop that those without any kind of artistic understanding/talent turn out, I don't think they'll just quit the entirety of the internet. Hell, people are too addicted to social media these days to completely quit the internet, though I'm not saying that people only using the internet for SM is a good outcome either.
Or AI will die because there isn't enough data to train it.
Funny story.
A friend of mine works in the mobile gaming industry.
The art department generated a picture of a frog for one of their marketing campaigns. Just one frog. She asked for different matching frogs in different costumes. No dice: there was one frog. Only one frog. AI can't make matching frogs. Any additional frogs were going to look nothing like the first frog. She was forced to use the one frog for the entire campaign, even though it wasn't even a cute or interesting frog, and she had to go out of her way to photoshop it to death to make it match anything they already had.
You can certainly use AI to create one-off images that are entirely unrelated to each other; you can probably use them in place of any other forgettable image that you might use for a one-off situation. However, if you're looking to create a brand identity, use it for interfacing with your customers, use it to represent your company, create an interesting product... AI literally can't do this. You'll be able to generate an incredible amount of images in a short amount of time, but they'll all look the same, and they won't help your company stand out against its competitors. It creates forgettable, homogeneous experiences. That's it.
"Contaminating with illegality" - I love the idea!
I'm finishing my master's degree in graphics now, and honestly, I'm so tired of it being everywhere. It's like this disgusting thing creeping up everywhere (so many book covers are now based on AI-generated images! people just don't understand ☹️) It's theft in the way that it's made right now 🙃 It's sometimes discouraging, but then I just remember that creative itch and look at my watercolors and colored pencils and then my tablet, and yeah, I think people might still value human-made art ❤️
Yeah, I think the worst part is that it's discouraging. But also, I think it's reminding people of that quality that only true artists provide. The meaningfulness and the care. Maybe.
Oh boy, you are going to have a bad time if you think this will ever change. All copy-paste is theft...
I wonder how much companies would care about the copyright, at least at the moment without any legislation around AI. It'll quickly become a game of who's best at not getting caught, and the better the tech gets, the harder it'll become to find out. Heck, you could even just use any generated images, and by the time they'd get taken down by whatever externally enforced action, they've already done their work in the case of marketing, for example. If any fine is lower than paying for it, it'll be worth it for companies.
I like these video responses to the comments! Cool way to interact. You make a great point here. I hope the legal departments of the world come up with ethical AI standards soon, before it starts spitting out good-quality shit.
I really like you sharing your perspective, and I do think you have a valid and nuanced view on the topic.
If there are images within a 'legal' database like Firefly that were originally partially created with AI, then whether or not that causes a catastrophe only comes down to how immediately recognizable it is. The mere idea that an image 'might have been created using AI' doesn't in and of itself create any issue; it becomes an entirely hypothetical, almost semantic argument, so theoretical that it completely bypasses how things are realistically used. This might sound brutal, but this is a classic case of the saying "there's no crime if you don't get caught." As morally dubious as this might sound, it is nevertheless true in a practical sense, whether we like it or not. If a piece in Firefly was generated using another image that was in turn partially generated by AI, then both realistically and legally there will be no means of tracking that unless it is entirely obvious that it was made by an AI. (This itself opens another massive can of worms.)
And the same practice follows with the use of things like Stable Diffusion and Midjourney as well. Whether they are 'unusable' for professional work really doesn't matter as long as the artist in question is good enough at hiding whatever percentage of the total image was originally created with AI. Again, there's no crime if you don't get caught. And professional artists would have no issue whatsoever creating a base using AI (say, something like 25% of the total image) to save time and then obviously finishing that image in their own style, saving themselves a substantial amount of time while maintaining their own style as they build upon the AI-generated base. I don't think there are any artists who would use AI and let it override their style entirely, hence tweaking the output is not only necessary for hiding the use of AI, but also for the artist to naturally maintain the style they have worked so hard to attain. I don't think it is particularly controversial for an artist to wish to save time but still retain their own style by ensuring the AI only does the dirty work in the beginning. The legality of this is also essentially thrown out the window, as there's no direct traceability of the original AI's impact on the piece itself.
And I wish to raise the question: perhaps that's for the better? I can think of very few things worse than seeing artists scrutinized by AI witch-hunters in an environment so toxic that it removes any and all joy from a creative endeavour that is supposed to be welcoming and cherished. Far too many times artists have been criticized for using AI, almost as if the burden of proof were entirely reversed, with the artists themselves having to show proof of *not* using AI, when in any reasonable system the burden of proof lies on the accuser and not the accused.
I like that idea you had of AI trained on AI images from models that were trained on people's images. If the worry is that AI will be able to replicate your style and art exactly, I think having these layers would help. You'd no longer have to worry about someone typing an artist's name into a prompt, because the art would be diluted with all the other artists in the same general style.
One of my main problems with AI is that it waters down search results on big platforms such as Google.
In the past I could search for an artist's name and get the website/social media of said artist, and if you went via image search you got the images made by that artist. So the artist could be found by future clients or other people who want to appreciate and share their art.
Now, with some well-known artists, if you search them via Google Images you also get AI-generated images in the style of that artist. That hurts creators, of course, because it means the first piece a potential client sees might be an ugly AI image with faulty lighting and shitty composition instead of an image that's actually representative of their work, and they might decide not to purchase from them.
The people using AI not only take the results of their creative labour but also (ab)use the names of companies and individual artists by using them as prompts to generate images. That should not be allowed. People should have rights to their name and brand. So all names of companies, brands and people, no matter if alive or dead, should just be blocked by default, and if the software wants to use those names as prompts, it has to pay for the rights and get the consent of the person/brand. Sure, people will find ways around the name by using other prompts, but then at least you can still find the original creator and their work without it getting mixed with AI images.
Sounds like a Google issue, not an AI issue...
@@sierraecho884 Is it though? You can't talk about AI without talking about it in the context of the internet.
@@eggi4443 AI doesn't need the internet. You get AI on military gear to follow terrain and such without the net. Google just sucks lately; you can't find anything useful anymore. AI content is just one more issue.
The future you want for AI art is actually impossible to achieve. You can't train AI only on a small set of images and get decent results, and this fact will likely never change. If you ask the generator for a picture of a dog, it has to have seen thousands of pictures of dogs to approximate what you're talking about.
You can use your portfolio of a small number of images to train a LoRA right now, and use it to make images in your specific style. But it only works because the base Stable Diffusion model, trained on billions of images, lets it know what you're talking about when you ask.
The alternative, paying artists every time their image is used to generate images, is also impossible. This is because it's impossible to tell how a given image in training data impacted the output. You can think of the model as a machine with billions or trillions of little knobs, and each image it's trained on moves a seemingly random collection of these knobs a little bit. In the end, you put text into the machine and the machine spits out an image, based on how those billions of knobs were adjusted in training. But it is technically impossible to tell how any single image in the training set adjusted the knobs, and it's impossible to tell how a specific knob contributed to the image output. So it's impossible to pay an artist for contributing to the creation of the image, because it's fundamentally impossible to know which images in the training data contributed to the image and how much they did.
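A rough, hypothetical illustration of that "knobs" analogy, using a toy PyTorch model invented for this note rather than any real image generator: a single training example's gradient step nudges nearly every parameter at once, and later examples move the same knobs again, so per-image attribution can't simply be read back out of the weights.

```python
# Toy sketch of the "billions of knobs" point: one training example's update touches
# almost every parameter, leaving no per-image ledger to read attribution from later.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 64))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

before = [p.detach().clone() for p in model.parameters()]

# One hypothetical "training image" flattened to a vector, with a dummy target.
x = torch.randn(1, 64)
target = torch.randn(1, 64)

loss = nn.functional.mse_loss(model(x), target)
loss.backward()
opt.step()

changed = sum(((p.detach() - b).abs() > 0).sum().item()
              for p, b in zip(model.parameters(), before))
total = sum(p.numel() for p in model.parameters())
print(f"{changed}/{total} parameters changed after one example")
# Every later update from other images moves the same knobs again, overwriting any
# trace of which image moved which knob and by how much.
```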
"The alternative, paying artists every time their image is used to generate images, is also impossible. This is because it's impossible to tell how a given image in training data impacted the output. " - it's the AI companies' job to make this possible. If they can't do it, they can't make AI
@@BoroCG You may feel that way, but that's not really what the law says. But even if, for the sake of argument, we passed a law that said you can only use a work to train AI if you have explicit permission, it wouldn't solve most of the issues you have with the tech. The Midjourney team is like 11 people. You don't really need a large corporation to make an image generation model if you have enough resources. A sufficiently motivated person could make their own AI image generator from scratch without any help. Well, except the help of the millions of artists whose images they use.
And it's not really possible to tell if an image was made with AI in a concrete way. You can often tell because of common mistakes AI makes, but it doesn't always make those mistakes. A lot of times even trained art judges can't tell. So even if it was explicitly illegal to make art this way, enforcement would be almost impossible. Let's say someone makes a book cover using their personal AI image creator (or one someone shared on the internet). An artist sees the cover, thinks it looks awfully similar to some of their paintings and that it was made by AI, and tries to sue for copyright infringement. Definitively proving to the court that an AI image generator was used at all would be a challenge, and likely wouldn't be possible unless the AI user admitted it themselves. Even if they established that an AI generator was used, proving that the artist's paintings were in the training data at all would be fundamentally impossible unless the person who made it kept a record of what was in the training data. And even if they established that an AI generator was used AND some of the artist's work was in the training data, proving for certain that those images in the training data meaningfully contributed to the book cover would be basically impossible even if we wanted to.
So even if we explicitly passed a law outlawing AI art unless the works in the training data were given with explicit permission, it wouldn't put the cat back in the bag on this one. AI art would still be generated with unauthorized copyrighted works, and that AI art would still get monetized. This cat ain't going back in the bag no matter what we do, so I think it's more productive to think about how we should deal with the cat rather than trying to find ways to shove it back in.
Amen 🙏 Personally I would like a truly ethical AI to help me with some parts of drawing that are tedious and difficult, but it's just not possible. I've seen AI models promised to be "ethical", like Firefly, and they tend to look much lower quality because the pool of data they have for making exactly what you want, without it outputting disgusting mush, is much smaller, AND they turn out to already be using a bit of illegal Midjourney data too! The only AI that looks good and usable is the kind that steps on the rights of other artists. The mass layoffs and sale/commission/revenue losses artists are going through after AI got shilled everywhere are also pretty concerning. It's not worth the additional "help" in your project.
Either they find a way to change this or it's never legal to use AI 🤷♀ How would you feel if someone took what you made without your permission and used it to train a robot to be able to make the exact same thing and make money from it? If it's impossible to make the models ethical, then they should not be used for commercial purposes.
That's such a nice video. I think you caring about the comments and feelings of your audience is awesome. Appreciate your work as always but love seeing you interact with us like this. :)
Hate to bug you again, but I'm still stuck on RVT. I used your material function but I'm not too sure where to plug it in. No rush, but I was seeing if I could get some more insight (I'm dumb)
The idea of having an AI trained only on stuff I decided on is something that I find nice, exactly because of what Boro said: I could train it on my own drawings and royalty-free images and generate stuff. That would be a way to do it. I thought LeonardoAi did that, but no.
It would be easy to set the training data to a pre-2021 dataset only and add new training data (photo/art/AI-generated) only from vetted sources. If it ever gets really heavily regulated, though, that means only big companies will be able to do it.
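A hypothetical sketch of the kind of filtering that comment describes, with invented field names and source labels; the hard part in practice is the vetting itself, not the filter:

```python
from datetime import date

# Invented labels standing in for explicitly vetted sources.
VETTED_SOURCES = {"licensed_stock_partner", "public_domain_archive"}

def keep(item: dict) -> bool:
    # Keep anything crawled before 2021, or newer items from a vetted source.
    return item["crawled"] < date(2021, 1, 1) or item["source"] in VETTED_SOURCES

manifest = [
    {"url": "a.jpg", "crawled": date(2019, 6, 1), "source": "web_scrape"},
    {"url": "b.jpg", "crawled": date(2023, 2, 3), "source": "web_scrape"},
    {"url": "c.jpg", "crawled": date(2023, 2, 3), "source": "public_domain_archive"},
]
print([item["url"] for item in manifest if keep(item)])  # ['a.jpg', 'c.jpg']
```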
I think it should be looked at like DJ remixes: if it is mixed and changed enough, meaning AI-inspired rather than AI-made, then it is new art, but the pure output should never be enough to be a product on its own.
Yeah, because DJs don't license the original source material, right? It's fine at a party, etc., but as soon as they officially release their remixes, they have to license the original songs.
@@aaronorelup4024 They used images from a show they don't own the rights to as training data, then proceeded to make money from the final product. As cool as the result is, they should not be able to confidently say they own it, because it's still using someone else's intellectual property to produce that for-profit product. If they had commissioned some artists to make original images that are just inspired by the show, maybe it would be a different story, because then they would actually own the source images they used.
I can't agree with the perspective that it'd be cool to have an ethical model (or "empty", as you describe it) that you can fine-tune with your art to "clone" yourself and generate a ton of stuff quickly. To me that still opposes the purpose and spirit of art. With enough money to be an ethical employer, I could "clone" my labor as a parent by outsourcing it to a bunch of nannies, but that defeats the purpose of having a connection with my child. Ultimately it's more of a "we could do it, but should we do it?" I'll stick with photobashing and thumbnailing. Art is not just images, but the problem solving, experience, and intentional choices behind every step of the process. I am also an aspiring solo game dev, and I definitely understand the temptation to automate away some of the tedium, to get placeholder assets in place for quick prototyping because of the overwhelming amount of work, but I am not willing to sacrifice my integrity as a person or artist to get there. Sometimes we have to acknowledge the reality and limits of humanity. Tech we use to augment and enhance it should not come at the cost of our culture (or legal rights). Glad to hear your channel will take the tone of not promoting it due to the very real current legal considerations, if not the moral ones.
There are also sketch-to-image generators like Vizcom, or the new Firefly feature, structure reference. With those there is less risk of breaking copyright, but it's still a gray area and you can't take full ownership of such a generated image.
Copyright in general is a minefield; look at what has happened in the music industry over the years. People have been sued for something like just 4 notes that barely have any similarity with someone else's creation...
@BoroCG What about the millions and millions of non-artist-made images and non-hand-drawn pictures? All the pictures from random cameras around the world, everything that's ever been photographed: chairs, rocks, buildings, cars. Those are what really make up the bulk of all that training data. Think about it: the LAION dataset has 6 billion images in it, and those are not all drawings. And hand-drawn art is the only thing we should care about, since even professionally made photos are just real-world objects, and only the composition (and manufactured clothes and similar hand-made items, if you really want to stretch the argument) is creative.
My big gripe with the anti-AI argument is that the importance of artist-made art is overstated for most models. If you really spend time using the core models, you realise that even when you don't specify an art style, you can still create almost every object that exists on earth, then mix every item with every other item (a dog that is a car, a chair made of rock, every possible combination), and then mix in an art style. That's the main use of AI, and that's perfectly fine.
The main problem is fine-tuning models on small subsets of images: checkpoints, LoRAs, DreamBooth, etc., since those are what give pretty much all of those super artistic images. And the problem is you can't really remove the fundamental idea of fine-tuning; it's just how models work.
So yeah, basically my biggest problem with your video is that when you say it's "all completely illegal", which sounds silly, you are forgetting that a model trained on data extremely curated to avoid artist-made images simply wouldn't output results very different from what we can get today from Stable Diffusion. Most of the data is not artist-made, so in the perfect possible model most of the data would... be the same we use today.
Great clarification/follow-up to the previous video. I disagree only at the very end, because a lot of the "promising" aspects are currently way overblown by companies, but I can't and won't blame you for being excited about AI done right. I'm looking forward to the next video!
IMHO there are two big problems with fighting AI via legal means: 1) One can argue that using "publicly available" images to train AI models is not illegal; just like real humans (artists) get to see a lot of images and drawings throughout their life and don't need to pay for anything, the same logic can be extended to AI models. 2) Fixing the "legality" aspect of AI art doesn't fix everything. For one, you can't really prove or disprove what was used for the training, and you can't physically stop a malicious user from doing so. For another, we now already have all possible sorts of scammers spawned from this, e.g. the Discord "artist looking for a job" scammers. Oh, and last but not least, the problem with fixing anything via legal means is the fact that the people who create the laws don't understand what they are talking about and can be easily persuaded either way (I've been following Louis Rossmann and his fight for the right to repair for long enough to figure that one out).
You will hinder the further development of AI with those kinds of legal concerns, and others, like China, will develop it faster and further and in the end replace domestic software. In the end it's of no use to fight it. Artists are simply pissed because they are becoming obsolete for many tasks. Instead you should focus on what AI can't do, or adopt AI into your personal workflow, since it's just a tool. Just like the copy-paste function didn't kill all creativity, AI won't either. All I can read is a looooooot of bitching by artists, which is funny to me, because as a mechanical engineer the same stuff applies to me. An AI can create mechanical parts, so why does somebody need me, right? Well, I will have to adapt or I will drown bitching.
@@sierraecho884 Comparing AI with copy-paste is just absurd. We've had "copy paste" in real life for centuries (it's called a printing press), but we've never had anything like AI image generation before. It's an unprecedented thing, and we need to be careful about how we implement it into society. What if China does it faster? Is that an excuse to try to catch up by doing it unethically? If someone else uses a shady and illegal approach to do something faster, that doesn't give you an excuse to do the same.
@@dorum358 We never had copy-paste in computer form before either. I am not saying we should disregard ethics, but this has little to do with ethics and more with fear, the fear of being replaced by a machine. He is right, you know: who needs the artist if you can just use all his data instead? And China will do it faster anyway. You can either learn to live with it or you don't.
@@sierraecho884 I am saying lots of shortcuts in computer form are derived from shortcuts that existed in reality. AI is not like that; it's a completely new thing. What if someone told YOU that we don't need you, we just want your data? It's absurd, isn't it? There would be no data without you. Artists will just stop posting their stuff without huge watermarks or alterations if it negatively affects their livelihood, or if they lose their income they will just stop posting altogether. AI needs artists for new training data, so is it so absurd to just pay them for it? Have you seen the garbage that comes out when you try to train a model fully on other AI-generated images? There is no future for AI without artists, so why not think about their livelihood at least a little bit?
The internet was born from borrowed and repurposed code. AI is doing the same. The very platforms you post to take derivative rights to your work in perpetuity. No one owns the arrangement of X next to Y in space. Work you generate using AI is not illegal, and anyone using it as a photocopier is missing the entire brilliance of it. If you aren't considerably better in your field through using it, then you don't understand it yet. Derivation is the bedrock of growth; the internet was built upon it and changed the world (for good or ill). AI will do the same. Embrace it or get left behind.
It's really absurd to me that companies even DARE to use AI this way. My girlfriend works for an advertising company and is basically forced to use only AI in her social media posts because all the illustrators and graphic designers were laid off. And she has so much work to do that she can't even edit the images a little to put some soul and work back into them. It really breaks my artist heart and baffles me that companies actually dare to operate in this legal grey zone.
This is probably the most level-headed take I've heard about AI. I really hope that the future of this technology will turn out the way you explain it here, with artists getting paid way more for their work that is used as training data. With how many images are required to train it, maybe it will actually open up new job opportunities for us.
Unfortunately, I've already been confronted with AI in many advertising projects. I'm in a country where labor is much cheaper, so a lot of the advertising companies now make their artists use AI like Firefly or Midjourney. As a freelancer, I've had a few CGI and animation projects where people on the teams formed for those projects used AI and I had to just accept it. I think you're right that it's important to talk about it, and in the future if I get other projects like that I'll try to convince them not to use it, or I'll back out of the project if they insist on it being the final on-screen version of the visual element it was used on. I feel like AI is going to be a big issue in these lower-income countries where business people don't care about the legal stuff as much.
What's clear is that the people responsible for developing a system where artists are compensated fairly for their work used in training data are the same people developing the current models, and we all know what kind of people they are (key word is greedy haha)... That's why I think it's important to be loud about these issues as artists if we have an audience, because we can influence people's opinions and force these companies to change, as they will avoid doing it on their own.
About paying the artists whose images were used to train it: I just want to be able to generate images for fun. And I can right now. The idea of paying for every image at full price, as if someone drew it for me, sounds ridiculous to me for my use case. I wouldn't want to pay anything more than a few cents per generation. I'd rather not even pay that, because it's open source and I run it locally. Although, I'd be happy to tip if I used an artist's name in the prompt.
@@aaronorelup4024 Sorry, I did not express this clearly. I was talking about ownership and commercial use. It's still morally wrong right now to generate stuff with the current AI systems because they're made with stolen art, but if it ever gets to the point where the models are not made with stolen art, I also agree you should be able to generate what you want with no pay, and if you want ownership of a certain generated image then you should pay for it. Even right now with real art, people usually offer non-commercial rights to their work for cheaper and commercial rights for a lot more. Also, the part about using the artist's name in the prompt can be bad too. A big part of an artist's personality is their style, so using their name in a prompt without consent is akin to someone putting your name into an AI model and generating pictures of you without your knowledge. I know it's not an apples-to-apples comparison, but it's quite creepy if you think about it like this. It can be flattering for some artists, but if you start showing those images off without crediting them and without asking for consent, it's wrong. About the open-source aspect, it's a bit more complicated in reality; here's a great video from Steven Zapata where he talks about that among other stuff related to the topic: ua-cam.com/video/tjSxFAGP9Ss/v-deo.htmlsi=CFU7RtZ8urysfsJz
I'm so unbelievably tired of the witch-hunts around current emerging technologies. I'm not saying that "anything goes" or that "you should just let it happen". Personally I want to be informed about what's happening in the world around me, without having to buy an Adobe subscription and trying to figure it all out myself. All this new tech is prohibitively expensive for someone like me who's not an artist, nor using it for any sort of work. Pandora's box has opened, and I loathe it when, out of fear, people attempt to avert the eyes of everyone else and act as though it either doesn't exist or hasn't been opened. I've always appreciated your head-on approach to new tech, and I hope you'll continue to do what YOU think is right, even if it's buckling to a disgruntled portion of the populace.
It's like the invention of "copy/paste". In theory there are rules about what you can copy and what not (movies, music, etc.); in practice it does not really matter. You will never be safe from AI as an artist, engineer or whatever. I would even argue nor should you be. Yes, it will just recreate all the art somebody worked 20 years to come up with, but it will also create technical systems, etc. Not only artists but everybody who creates anything will be influenced by this tech; it's just how it goes. So you can argue and try to ban your data from being used all you want, but in the end it's just how it goes and you will have to deal with it. Why should I hire a person for who knows how much to come up with a logo if I can simply use AI for next to nothing to perform the same task faster and probably better? Same goes for a doctor: why consult a specialist if AI can look through your data and find that cancer at a better rate than a real-life doctor?
The solution where licensing costs increase might be good for some artists, but it leads to a situation where only big established companies control the AI market. AI already controls the content we consume, so indirectly it controls the democratic process. Giving this power to the select few who can afford to pay millions of people is a dangerous idea. Art might be important, but I am ready to sacrifice it so that I don't have to live in a dystopian corporate autocracy.
But it's not the companies (big or small) that have to pay the artists - it's the users of the trained model. The companies are just supposed to provide the tool (AI) that connects clients with artists. Or more specifically, a client with a cluster of artists, splitting the revenue
@@BoroCG I misunderstood your idea then. In that case I agree: if a user of a model paid a markup directly to the artists, that would circumvent this problem.
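A hypothetical sketch of what that revenue split could look like in code, assuming, purely for illustration, that per-artist contribution weights for a generation were somehow available, which is exactly the part nobody can compute today:

```python
# Hypothetical: split a generation fee across contributing artists, given made-up
# attribution weights. Producing those weights is the unsolved part.
def split_generation_fee(fee_cents: int, weights: dict[str, float],
                         platform_cut: float = 0.30) -> dict[str, int]:
    payable = fee_cents * (1.0 - platform_cut)          # what's left after the tool's cut
    total_weight = sum(weights.values())
    return {artist: round(payable * w / total_weight)   # prorated payout in cents
            for artist, w in weights.items()}

# Example with invented numbers: a 50-cent generation, three contributing artists.
print(split_generation_fee(50, {"artist_a": 0.5, "artist_b": 0.3, "artist_c": 0.2}))
```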
A lot of times in this video you say something is "illegal", but really what you're saying is "you feel it should be illegal". I can only speak to US copyright law (which is basically the only one that matters, as these are almost all made by US companies). But it's actually incredibly unclear whether or not this content is illegal. It would actually be unsurprising if the current court cases find that this content is 100% legal (albeit only copyrightable if a human puts significant effort into the final product). This is because US precedent already says that scraping and using content from the internet is legal (Microsoft lost the lawsuit setting this precedent). And US copyright allows for fair use, where you can use works if the use is transformative. And AI image generators are fundamentally transformative in nature. If I use Midjourney to make an image of Goku fighting Barack Obama in the style of Vincent Van Gogh, you'd be hard pressed to argue that the Van Gogh works used to inform the model on how to make the image were not sufficiently transformed. Even though you might feel that I should pay the Van Gogh estate some royalties if I wanted to use this image, the law and legal precedent are pretty clear that I don't have to.
Oof. I wish I had watched this before commenting on your last video. But either way, I think the ultimate outcome of all of this will simply be that people won't stop making unethical AI models, they will only get better at hiding the fact that they used them. Ultimately, AI is just a mass-scale torrent. Is it illegal for copyrighted works? Yeah. Can you prove someone got that content from that torrent (AI, in this analogy)? Not if they know what they're doing. I think it is important to talk about the ethics of AI, but unfortunately I don't think this topic is going to clear up, because the models being produced that are actually useful are made by people who do not now, and never will, care about the people who create the raw data they use to train their models. Because to them, it's just data, not work. A truly clean, crowd-sourced AI model is never going to happen, just on the grounds that the temptation for a troll to taint the data would be too great.
I always see people getting this wrong about generative AI. When you talk about compensation for the artists whose work was used to generate a given image, there's no such thing. ALL artists and images were used to get to that point. There's no particular influence of one image over the others, and if there were, there's no way to know it.
The thing is that generative AI doesn't look at the images and then, based on them, create a "similar" one, or a "collage". The very complex mechanism behind all these models basically puts those images on a map, arranged by some unknown criteria; in fact there are more than 3.5 billion criteria for a model like DALL-E 2. Then you give the system some words, and with those words it will try to put you on a point on that map that aligns with them. Maybe you land near a very well-known point, like the Mona Lisa, so all images that land near that point will look very similar. However, most of the time you will land on a point where no one has landed before, where there are no "strong influence images" or "over-represented images" nearby. This is where the generative part of the mechanism flourishes. Similar to a function in maths, you can "estimate" or "predict" what should be at that point, and that's how you come up with an image never seen before. The images used in training are just there to plot the map.
You could argue that a specific image looks very similar in style to a specific artist, or that people use the specific name of an artist to get a specific result. But that's just a "shortcut" word used to get to that location, and if that specific artist hadn't been there to begin with, maybe another artist would have taken their place. So my point is that you can't associate a generated image with a specific artist. So the legal and compensation aspects of this topic are even more complicated than we think...
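A toy sketch of that "map and estimate" intuition, with everything invented for illustration (tiny 2-D coordinates standing in for billions of learned criteria): the training items shape one set of fitted parameters, the data is then thrown away, and querying a new point returns a prediction rather than any stored image.

```python
# Toy illustration of the "map" idea: fit one function from map coordinates to
# outputs, discard the training data, then query a point no one has landed on.
import numpy as np

rng = np.random.default_rng(0)

coords = rng.normal(size=(1000, 2))    # stand-in for prompt/latent coordinates
pixels = rng.normal(size=(1000, 16))   # stand-in for the training images

# Least-squares fit: one set of parameters jointly shaped by all 1000 items at once.
params, *_ = np.linalg.lstsq(coords, pixels, rcond=None)

del coords, pixels                     # the "model" keeps only the parameters

new_point = np.array([0.3, -1.2])      # a spot on the map with no training item on it
estimate = new_point @ params          # an estimate for that spot, not a stored image
print(estimate.shape)                  # (16,)
```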
Well, most image generation software still uses written prompts, and often there's something like "in the style of [insert artist or company here]". A good start would be to ghost-block any registered brand names and personal or otherwise protected names from being used as prompts. This would of course also include names like "artstation", because that's also trademarked. That way the images would still be used without consent, but at least the artist's name wouldn't be as easily tainted by being muddled with AI-generated images when you search for them.
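A minimal sketch of what that ghost-blocking could look like on the tool side, assuming a hypothetical hard-coded blocklist; the names and function here are invented for illustration, and real services, where they do this at all, do it very differently:

```python
import re

# Hypothetical blocklist; a real system would pull this from a registry of trademarks
# and opted-out artists rather than hard-coding a handful of placeholder entries.
BLOCKED_NAMES = {"artstation", "disney", "some famous artist"}

def scrub_prompt(prompt: str) -> str:
    """Silently strip blocked names from a prompt (the "ghost-blocking" described above)."""
    cleaned = prompt
    for name in BLOCKED_NAMES:
        # Whole-phrase, case-insensitive match so "ArtStation" is caught as well.
        cleaned = re.sub(rf"\b{re.escape(name)}\b", "", cleaned, flags=re.IGNORECASE)
    return re.sub(r"\s{2,}", " ", cleaned).strip()

# The protected names are removed before the prompt ever reaches the model.
print(scrub_prompt("castle at dusk, trending on ArtStation, by Some Famous Artist"))
```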
@@mathilda6763 OpenAI already does that with DALL-E 3 through ChatGPT. But it's mostly a workaround to prevent big companies like Disney or Nintendo from suing them...
The AI companies made such a marvel of a technology. What a shame they can't create a system that would estimate the percentage of involvement of any particular piece of data they used without asking. I guess AI isn't finished enough to be usable yet.
I think artists overestimate how much the developers of AI and their users value the artwork that it is trained on. It's just the data that is important, not the art style of each individual artist. No one really cares who the artist is or how good the art is. I think artists are mainly just scared that society won't value artists as much anymore, though I think the opposite is true.
Well I don't really care if AI companies and users value my artwork or not - it's not free. If it's just data - then they should find different data, and see what model that makes. Without art there won't be artistic AI models. If they want to keep evolving their models, they need to make sure artists keep getting paid
@@BoroCG Yeah, I think so, but I was referring to the Adobe library, where Adobe already bought the rights to the art. Because of that, why should the original artist get compensated for AI images made using those photos as if the new photos were made by those artists? That's what I was trying to say.
@@aaronorelup4024 They had the rights before AI was a thing, so the artists didn't agree to its use for training when they sold their rights. If it had been an opt-in process it would have been better. It's like making a contract only for it to be changed later without your consent; you wouldn't like that now, would you?
@@aaronorelup4024 I'm not arguing against that. I'm saying it's just a huge asshole move that will hurt the market more than help it. If artists risk losing their jobs because they make stock images, they will do something else and nobody will make stock images anymore; then the companies won't have new stuff to train their AI with, and it will just remain stagnant and stop evolving. I don't get why we're supporting multibillion-dollar corporations taking advantage of people :)) Adobe has the money to make it right, they just choose to be greedy, cause that's what evil corporations do if you don't keep them in check.
Where is the difference between an artist looking at millions of paintings in galleries all over the world and then applying his observations to his own art, and an AI being trained on these images and then generating a new image that is not just a copy of a single image but "inspired" by all of them? As far as I know, only real things (including digital goods, of course) like a painting or a piece of music are copyright protected. You cannot protect something like a certain painting method or style. So as long as the AI is generating new pixels for a new image and not just copy-pasting from different images (which at some degree of composition would also qualify as a new piece of art), I don't see a big legal problem. Of course I understand that artists might feel ripped off.
When artists get inspired by work, it's not a perfect process. No human would be able to look at a painting and replicate every "brushstroke" in it. AI can do that; it's superhuman, it's pretty much like cheating, and it's nothing like inspiration. It's like saying that a superhuman robot should be able to compete in professional boxing because it also looked at all the top boxers to perfectly train its fighting style. Also, people don't just "look at a million paintings" in a few days and perfectly remember them. It takes years to form a strong foundation of inspiration for your work. Artists' works are also not just inspired by images or things they saw; they are influenced by their mood, their life experience, their personality.
I read through many comments and I can say it's 99.99% salty creative people, because they are being replaced by software. Many say AI is dogshit, soulless and can't create anything new. Well, in that case don't bother, because it won't replace you, right? In other cases people say AI is good enough and will replace them; well, then focus on something AI can't do. Focus on using it as a tool. With new tech, certain trades go obsolete. I am sure there was the same debate when books could be printed en masse: "Noooo, this will devalue my work, I am an artist, a book will lose its soul." Yeah, well, we know how that went. You will not get rid of AI or close it up or make it ethical or whatever, just like you can't get rid of the "copy/paste" function. These things will always happen, and if you insist and outlaw it or certain aspects of it, others will develop it and replace you from somewhere else, say China for instance. They don't give a fuck about you; they will simply come up with a better tool, and those companies and entities will replace you then... Either you adapt or you drown, simple as that. Photoshop will just implement a cloud function where it will just analyze 90% of all art and simply use that or something xD You don't have to use it, just like you don't have to use a computer with a copy-paste function... good luck.
So your logic is: when someone uses GPT to create a text which is then published, everybody whose online postings were used to train the LLM should be compensated?
Sounds like a plan. Considering AI will create the newest and biggest class of useless people, those people will need compensation. Eventually, most essential processes in the world will work on their own, and people will simply get a universal basic income.
I think you are very naive if you think you can vet any project that a studio or company is going to do with AI. I mean, who can tell whether this or that project has the rights or not? Who can tell exactly how much AI is or isn't in a big project? I think it's sad, but in the future we're going to see tons of production content made with AI, and nobody will care, not even the law.
"You actually own the rights to your image, and nobody will ever be able to take that away from you." _* The government that is the ones who get to decide what rights anyone has: *_ "Allow us to introduce ourselves!"
It makes no sense to have to pay every artist a human was inspired by; and with the number of sources involved, each individual artist would be getting paid for less than one pixel per image...
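Rough back-of-the-envelope arithmetic behind that "less than one pixel" point, using illustrative numbers (a hypothetical per-generation fee and the rough dataset scale quoted elsewhere in this thread):

```python
# If one generation's fee were split evenly across a web-scale training set,
# each image's share would be vanishingly small (all figures illustrative).
fee_per_generation_usd = 0.04
training_images = 5_000_000_000
per_image_payout = fee_per_generation_usd / training_images
print(f"{per_image_payout:.2e} USD per training image")  # on the order of 8e-12 USD
```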
You can't compensate every artist whose work has been used to generate an AI image, because AI doesn't work that way. AI doesn't take single images and mash them up together to give you a result. Please review how latent diffusion works: how the noising and denoising process is done, how it's actually trained, not just how you think it works. There are a lot of misconceptions about AI to figure out before we can state who should compensate whom.
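For readers who want the gist of the noising process being referred to, here is a minimal numpy sketch of the standard DDPM-style forward (noising) step, with an arbitrary schedule and image size chosen for illustration; the denoiser is then trained to predict the added noise, not to store or collage training images.

```python
# Minimal sketch of the DDPM-style forward (noising) process:
# x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)             # noise schedule (illustrative values)
alphas_bar = np.cumprod(1.0 - betas)           # cumulative signal-retention factors

x0 = rng.uniform(-1.0, 1.0, size=(64, 64, 3))  # stand-in for a normalized training image

def noise_image(x0: np.ndarray, t: int) -> np.ndarray:
    """Return x_t, the image after t steps of noising (closed form)."""
    eps = rng.normal(size=x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

# Early steps keep most of the signal; by the final step the image is almost pure noise.
print(float(np.sqrt(alphas_bar[10])), float(np.sqrt(alphas_bar[T - 1])))
```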
Thanks, I am aware it's not clear how images are involved in any particular prompt. My main point is that a few years ago image generation was impossible; now paying for training is impossible. AI companies have to figure this one out as well if they want to make things work. It's an unfinished technology that we don't exactly need, so we don't have to use it while it can't be legal yet.
@@BoroCG You can actually train your own model from zero if you like; the resources, training code, instructions and papers are out there. It's just too much work, and a few thousand images of your own won't work, or will give you worse results. That's what happens with Adobe Firefly: allegedly they don't use any existing dataset like LAION, so they have to start from scratch if they want to be clean. Now imagine a single artist putting in all that effort to make it work; it's not efficient, but technically you can. You keep saying it's not legal, but it's not illegal either. I wouldn't care about legality yet unless you are really trying to replicate a specific existing artwork, like the Mona Lisa, and call it your own; but if I can prompt the Mona Lisa in a furry suit, then you are not committing any illegality. Tools aren't illegal, and neither is having a gun; it's the stuff you do with it that will get you in trouble. I know it's hard to understand that machines can actually learn from images and not just memorize them, but that's how it really is.
@@Alarios711 I'm pretty sure Boro has a very prolific personal collection of his own art, but even if you make one artwork per day for 10 years, that's only about 3,650 images; it won't work for training a personal model from scratch. That's my point.
@@ianalexanderreyes5890 Exactly, which makes AI art not viable, thank you. You either hire or compensate artists for their work to feed your model, or you create the source material yourself. In both cases it's too big an effort compared to just doing the art. Oh, that's your argument as to why AI companies should not have to compensate artists? Well, sucks to suck.
If it's expensive, that makes it useless. The whole point is to generate infinite free art without having to go through a person. This is just a massive cope. You don't have to pay a horseshoer every time you buy a car. The job of an "artist" is just gone. No one will stop you from painting for your own pleasure.
A horseshoer didn't design a car. Without art, there's no artistic AI model, like at all. If you want AI to keep evolving, you have to pay the people whose work trains it.
@@BoroCG We don't pay reparations to the natives either. We took the art. It belongs to the AI companies now. They beat you. It's a cope to think we'll choose the path of more resistance; that's literally never happened. In the Star Trek future, they don't send reparations to wheat farmers every time they generate a meal out of thin air. You're just mad because you need to earn a living, and you won't be able to. We need to accelerate this to a point where like 90% of people don't need to work to live. This is the desirable outcome. This is the only positive outlook for the human future: a world without work.
You know horse riding is still an industry, right? Also, cars didn't need billions of horseshoes to generate their design. Your argument is the actual cope, bro. Instead of explaining why you're right with a real argument, you resort to an analogy, because you don't actually know what's right. What if someone told YOU today that your job will become obsolete? I think you'd find that to be unfair. And the way AI is going, who knows, maybe in a few years your job will also be "obsolete", as you say. People like you think it's only a good thing when it doesn't affect them negatively. If you're not an artist, you shouldn't have a say in how artists' work is used. If a model needs real art in order to exist, then that art should be paid for. If you had to melt down a thousand horseshoes to make a car, then those horseshoes should not have been stolen; they should have been paid for (that's how your argument would actually work in reality).
Another thought I had recently: a good way to describe AI is a web search engine. You type in a prompt - you get a number of results. They are presented in this sophisticated "diffused" form, but it's still just a number of search results. You can have these results, you can learn from them, but you have no business getting rights to these results
Funny thing about this is, that normal search engines become largely ai images too (It's so bad when you try to look for specific images on Google)
Some people point out it's impossible to know how much a training image was involved in a particular prompt. I think it comes down to the fact that we don't have to use the technology that is underdeveloped this way, until licensing/royalties are possible. They can keep working on it and let people use it for free and only for noncommercial use. Again, similar to web search. The question of enforcing it is a big one, but it's the same as with classical image theft/plagiarism
@@BoroCG It´s like saying not to use the copy paste function. One day you will have to admit it´s won´t work in practicality, that´s just how it goes.
@@BoroCG Good point! It makes pretty sense, specially now that I'm learning to programme.
Well at the moment my personal experience with AI is that it makes my job less creative and more cleaning unusable image to make it somewhat usable. It's a hype and people are eager to use it but it can't do specific things. I'm working in a company that creates slot games and the pipeline starts with the designers creating the images based on game design and then this images are separated and given to my team to animate and implement in the game, basically we are creating the game. I can tell you its HELL when its done with AI. Some of the designers love to use it because it's quick. But it's not created with thought for it's application and what happens is that we at the end of the pipeline are the ones who have to redo it to make it usable and losing the precious little time we have to actually animate and be creative. So Ai is doing what I was afraid of- taking away creativity and enjoyment of creative work.
Sounds like whining to me to be honest. In the end AI is just a tool, the design person can also create some usable iterations, instead of garbage or pick something that is more usable or create dogshit without ai with the same outcome.
My biggest gripe with AI ‘art’ is how it always floods whatever platform it’s on.
Plus it all looks the same in style.
It has no heart or soul just created as fast as possible and vomited out in the hopes to make money.
there are accounts with 4000 posts in a month like how is a real artist going to compete with that spam
@@irek1394 dead internet theory is real honestly. Once the internet becomes bloated with just AI garbage more and more people will tired of the samey content generated and stop using the internet.
Yeah, it gets kinda annoying if you're someone like me who is trying to sell there art on sites like Redbubble, or displate to make a few bucks on the side. Something like 80% of the art on these sites seems to be AI generated, and another 10% I'm pretty sure is stolen or edits of other peoples work.
@@advladart Or they'll seek out the art that isn't the same generated over abundant slop. If people would quit something entirely (and I mean in mass and generalized, not just a few individuals) then TV would've been dropped ages ago, news stations would've ceased to exist after the war scares were gone, no one would stay in a job for more than a few months, people would've stopped watching UA-cam and other such algorithmitized platforms, etc.
People are more dispositioned to just taking what comes their way than you might think, so while I do think that people will eventually get fed up with all of the AI slop that those without any kind of Artistic understanding/talent turn out, I don't think they'll just quit the entirety of the internet. Hell... Poeple are too addicted to social media these days to be able to completely quit the internet, though I'm not saying that people only using the internet for SM is a good outcome either.
Or AI will die because there isn't enough data to train it
Funny story.
A friend of mine works in the mobile gaming industry.
The art department generated a picture of a frog for one of their marketing campaigns. Just one frog. She asked for different matching frogs in different costumes. No dice - there was one frog. Only one frog. AI can't make matching frogs. Any additional frogs were going to look nothing like the first frog. She was forced to use the one frog for the entire campaign, even though it wasn't even a cute or interesting frog and she had to go out of her way to photoshop it to death to make it match anything they already had.
You can certainly use AI to create one off images that are entirely unrelated from each other - you can probably use them in the place of any other forgettable image that you might use for a one off situation. However, if you're looking to create a brand identity, use it for interfacing with your customers, use it to represent your company, create an interesting product - AI literally can't do this. You'll be able to generate an incredible amount of images in a short amount of time, but they'll all look the same, and they won't help your company stand out against it's competitors. It creates forgettable, homogeneous experiences. That's it.
"Contaminating with illegality" I love the idea!
I'm finishing my master's degree in graphics now, and honestly, I'm so tired of it being everywhere. It's like this disgusting thing creeping up everywhere (so many book covers are now based on ai generated images! people just dont understand ☹️)
It's theft in the way that it was made right now 🙃 it's sometimes discouraging, but then I just remeber that creative itch and look at my watercolors and colored pencils and then my tablet and yeah, I think people might still value human-made art ❤️
Yeah, I think the worst part is that its discouraging. But also, I think its reminding people of that quality that only true artists provide. The meaningfulness and the care. Maybe
Oh boy you are going to have a bad time if you think this will ever change. all copy paste is theft.....
I wonder how much companies would care about the copyright, at least at the moment without any legislation around AI. It’ll quickly become a game of who’ll be best at not getting caught, and the better the tech gets the harder it’ll become to find out.
Heck, you could even just use any generated images and by the time it would get taken down by whatever external enforced action, it’s already done its work in case of marketing e.g. If any fine is lower than paying for it, it’ll be worth it for companies.
I like these video responses to the comments! Cool way to interact. You make a great point here. I hope the legal departments of the world come up with ethical AI standards soon. Before it starts spitting out good quality shit
i really like you sharing your perspective and do think you have a valid and nuanced view on the topic.
If there are images within a 'legal' database like Firefly that originally were partially created with AI, then the issue of whether or not that causes a catastrophe only comes down to how immediately recognizable it is. Just the idea that an image 'might have been created using AI' in of itself doesn't really create any issue, as it becomes an entirely hypothetical almost semantic argument - so theoretical that it completely bypasses how things realistically are used. This might sound brutal, but this is a classic case of the saying "there's no crime if you don't get caught." As morally dubious as this might sound, it is nevertheless true in a practical sense, whether we want to like it or not. If a piece in Firefly was generated using another image that was in turn partially generated by AI, then both realistically and legally there will be no means of tracking that unless it is entirely obvious that it was made by an AI. (This itself opens another massive can of worms)
And the same practice follows with the use of things like StableDiffusion and MidJourney as well. Whether they are 'unusable' for professional work really doesn't matter as long as the artist in question is sufficient enough at hiding however much % of the total image was originally created with AI. Again, there's no crime if you don't get caught. And for professional artists, they would have no issue whatsoever in creating a base using AI (let's say something akin to 25% of the total image) to save time and then obviously making that image in their own style, thus saving themselves a substantial amount of time while maintaining their own style in the process as they build upon the AI-generated base. I don't think there are any artists who would use AI and have the AI override their style entirely, hence the need to tweak the output is not only necessary for hiding the use of AI, but also for the artist to naturally maintain the style they have worked so hard to attain. I don't think it is particularly controversial for an artist to wish to save time but still retain his own style by ensuring the AI only does the dirty work in the beginning. The legality of this is also essentially thrown out the window as there's no direct tracability of the original AI's impact on the piece itself.
And I wish to raise the question: perhaps that's for the better? I can think of very few things worse than seeing artists scrutinized by AI witch-hunters in an environment so toxic that it removes any and all joy from a creative endeavour which is supposed to be welcoming and cherished. Far too many times, artists have been criticized for using AI - almost as if the burden of proof were entirely reversed - and the artists themselves have to show proof that they are *not* using AI, when in any other reasonable system the burden of proof lies on the accuser and not the accused.
I like that idea you had of AI trained on AI images that were trained on people's images. If the worry is that AI will be able to replicate your style and art exactly, I think having these layers would help. You'd no longer have to worry about someone typing an artist's name into a prompt, because the art will be diluted with all the other artists in the same general style
One of my main problems with AI is that it waters down search results on big platforms such as google.
In the past, I could search for an artist's name and get the website/social media of said artist, and if you went via image search you got the images made by that artist. So the artist could be found by future clients or other people who want to appreciate and share their art.
Now, with some well-known artists, if you search them via google images you also get AI-generated images in the style of that artist.
That hurts creators, of course, because it means the first piece a potential client sees might be an ugly AI image with faulty lighting and shitty composition instead of an image that's actually representative of their work, and they might decide not to purchase from them.
The people using AI not only take the results of their creative labour but also (ab)use the names of companies and individual artists by using them as prompts to generate images.
That should not be allowed.
People should have rights to their name and brand.
So all names of companies, brands and people, no matter if alive or dead, should just be blocked by default, and then if the software wants to use those names as prompts, it has to pay for the rights and get the consent of the person/brand.
Sure, they will find ways around the name by using other prompts, but then at least you can still find the original creator and their work without it getting mixed with AI images.
Sounds like a Google issue, not an AI issue...
@@sierraecho884 Is it though? You can't talk about AI without talking about it in the context of the internet.
@@eggi4443 AI doesn't need the internet. You get AI on military gear to follow terrain and such without the net. Google just sucks lately; you can't find anything useful anymore. AI content is just one more issue.
The future you want for AI art is actually impossible to achieve. You can't only train ai on a small set of images and get decent results, and this fact will likely never change. If you ask the generator for a picture of a dog, it will have to have seen thousands of pictures of dogs to approximate what you're talking about.
You can use your portfolio of a small number of images to train a LoRA right now, and use it to make images in your specific style. But it will only work because the base Stable Diffusion model, which was trained on billions of images, lets it know what you're talking about when you ask.
The alternative, paying artists every time their image is used to generate images, is also impossible. This is because it's impossible to tell how a given image in training data impacted the output.
You can think of the model as a machine with billions or trillions of little knobs, and each image it's trained on moves a completely random collection of these knobs a little bit.
In the end, you put text in the machine and the machine spits out an image, based on how those billions of knobs were adjusted in training.
But it is technically impossible to tell how any single image in the training set adjusted the knobs, and it's impossible to tell how a specific knob contributed to the image output.
So it's impossible to pay an artist for contributing to the creation of the image, because it's fundamentally impossible to know which images in the training data contributed to the image and how much they did.
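(A deliberately tiny, made-up sketch of that point, for anyone who wants to see it rather than take it on faith: nothing below is a real image model, and the "gradients" are random stand-ins, but the structure is the same — every training example nudges the same shared parameters, so there is no per-image ledger to read back afterwards.)
```python
# Toy illustration only: every training example nudges one shared parameter
# vector, so the finished model holds no per-example record of which image
# moved which "knob" by how much.
import numpy as np

n_params = 10_000                         # stand-in for billions of knobs
params = np.zeros(n_params)

def fake_gradient(example_id: int) -> np.ndarray:
    """Pretend gradient computed from one training image (random here)."""
    g = np.random.default_rng(example_id).normal(size=n_params)
    return g / np.linalg.norm(g)

learning_rate = 1e-3
for example_id in range(1_000):           # 1,000 "training images"
    params -= learning_rate * fake_gradient(example_id)

# `params` is now the sum of a thousand overlapping nudges. Recovering how
# much image #437 contributed to any one parameter, let alone to a later
# generated picture, would require replaying the entire training history.
print(params[:5])
```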
"The alternative, paying artists every time their image is used to generate images, is also impossible. This is because it's impossible to tell how a given image in training data impacted the output. "
- it's the AI companies' job to make this possible. If they can't do it, they can't make AI
@@BoroCG You may feel that way, but that's not really what the law says. But even if, for the sake of argument, we passed a law that said you can only use a work to train AI if you have explicit permission, it wouldn't solve most of the issues you have with the tech.
The Midjourney team is like 11 people. You don't really need a large corporation to make an image generation model if you have enough resources. A sufficiently motivated person could make their own AI image generator from scratch without any help. Well, except the help of the millions of artists whose images they use.
And it's not really possible to tell if an image was made with AI in a concrete way. You can oftentimes tell because of common mistakes AI makes, but it doesn't always make those mistakes. A lot of times even trained art judges can't tell.
So even if it was explicitly illegal to make art this way, enforcement would be almost impossible.
Let's say someone makes a book cover using their personal AI image creator (or one someone shared on the internet). And an artist sees the cover and thinks it looks awfully similar to some of their paintings and that it was made by AI, so they try to sue for copyright infringement.
Definitively proving to the court that an AI image generator was used at all would be a challenge, and likely wouldn't be possible unless the AI user admitted it themselves.
And even if they established an AI generator was used, proving that the artists' paintings were in the training data at all will be fundamentally impossible unless the person who made it kept a record of what was in the training data.
And even if they established that an AI generator was used AND some of the artists' work was in the training data, proving for certain that those images in the training data meaningfully contributed to the book cover would be basically impossible even if we wanted to.
So even if we explicitly passed a law outlawing AI art unless the works in the training data were given with explicit permission, it wouldn't put the cat back in the bag on this one. AI art would still be generated with unauthorized copyrighted works, and that AI art would still get monetized.
This cat ain't going back in the bag no matter what we do, so I think it's more productive to think about how we should deal with the cat rather than trying to find ways to shove it back in the bag.
Amen 🙏 Personally, I would like a truly ethical AI to help me with some parts of drawing that are tedious and difficult, but it's just not possible.
I've seen AI models promoted as "ethical", like Firefly, and they tend to look much lower quality, because the pool of data they can use to make exactly what you want without outputting a disgusting mush is much smaller, AND they turn out to already be using a bit of illegal Midjourney data too! The only AI that looks good and usable is that which steps on the rights of other artists.
The mass layoffs and sale/commission/revenue losses artists are going through after AI got shilled everywhere are also pretty concerning. It's not worth the additional "help" in your project.
Either they find a way to change this or it's never legal to use AI 🤷♀
How would you feel if someone took what you made without your permission and used it to train a robot to be able to make the exact same thing and make money from it? If it's impossible to make the models ethical, then they should not be used for commercial purposes.
Interesting seeing this take from you after following your AI journey over the last 2 years
That's such a nice video. I think you caring about the comments and feelings of your audience is awesome. Appreciate your work as always but love seeing you interact with us like this. :)
Thank you so much
Hate to bug you again, but I'm still stuck on RVT. I used your material function but I'm not too sure where to plug it in. No rush, but I was seeing if I could get some more insight (I'm dumb)
Good vid, thanks.
The idea of having an AI trained only on stuff I decide on is something I find nice, exactly because of what Boro said: I could train it with my own drawings and royalty-free images and generate stuff. That would be a way. I thought LeonardoAi did that, but it doesn't.
The thumbnail is genius.
It would be easy to restrict the training data to a pre-2021 dataset only, and to add new training data (photos/art/AI-generated images) only from vetted sources.
If it ever gets really heavily regulated, though, that would mean only big companies could do it.
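(As a rough sketch of the kind of curation that comment describes, assuming a hypothetical manifest.jsonl where each line records an image's date, source and license; the field names and the vetted-source labels are invented for illustration.)
```python
# Minimal sketch of filtering a dataset manifest to pre-2021, vetted sources.
import json
from datetime import date

CUTOFF = date(2021, 1, 1)                      # pre-2021 only
VETTED_SOURCES = {"own_archive", "licensed_stock", "public_domain"}

def keep(record: dict) -> bool:
    """Keep an image only if it predates the cutoff and comes from a vetted source."""
    created = date.fromisoformat(record["date"])
    return created < CUTOFF and record["source"] in VETTED_SOURCES

with open("manifest.jsonl") as src, open("training_manifest.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)
        if keep(record):
            dst.write(line)
```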
Using Nightshade to poison images in the AI dataset is, I think, a really important way of fighting AI and making sure it's less accurate in the future.
At this point the genie is out of the bottle.
I think it should be looked at like DJ remixes: if it is mixed and changed enough, meaning AI-inspired, then it is new art, but the pure output should never be enough to be a product on its own.
Yeah, because DJs don't license the original source materials, right?
It's fine at a party, etc. But as soon as they actually release their remixes, they have to license the original songs.
I agree, like the Corridor Crew rock paper scissors anime. I think they should be able to confidently say that they own that.
@@She_Asked Good point, but bands like Daft Punk made that their whole career, and I'm fairly certain they had to pay royalties for the samples they used.
@@aaronorelup4024 they used images as training data from a show they don't own the rights for, then proceeded to make money from the final product. As cool as the result is, they should not be able to confidently say they own it because it's still using someone else's intellectual property to produce that for-profit product. If they commissioned some artists to make original images that are just inspired by the show maybe it would be a different story, because then they would actually own the source images they used.
I can't agree with the perspective that it'd be cool to have an ethical model (or "empty" as you describe it) that you can fine tune with your art to "clone" yourself and generate a ton of stuff quickly. To me that still opposes the purpose and spirit of art. With enough money to be an ethical employer, I could "clone" my labor as a parent by outsourcing it to a bunch of nannies, but that defeats the purpose of having a connection with my child. Ultimately more of a, "we could do it, but should we do it?" I'll stick with photobashing and thumbnailing.
Art is not just images, but the problem solving, experience, and intentional choices behind every step of the process. I am also an aspiring solo game dev and I definitely understand the temptation to automate away some of the tedium, to get placeholder assets in place for quick prototyping because of the overwhelming amount of work, but I am not willing to sacrifice my integrity as a person or artist to get there. Sometimes we have to acknowledge the reality and limits of humanity. Tech we use to augment and enhance it should not come at the cost of our culture (or legal rights).
Glad to hear your channel will take the tone of not promoting it due to the very real current legal considerations, if not the moral ones.
Cool video. I think you might have it quite right!
There are also sketch-to-image generators like Vizcom, or the new Firefly feature, structure reference. With those there is less risk of breaking copyright, but it's still a gray area and you can't take full ownership of such a generated image.
Copyright in general is a minefield; look at what has happened in the music industry over the years: people have been sued for something like just 4 notes that barely have any similarity with someone else's creation...
@BoroCG What about the millions and millions of non-artist-made images and non-hand-drawn pictures? All the pictures from random cameras around the world, everything that's ever been photographed: chairs, rocks, buildings, cars. Those are what really make up the bulk of all that training data. Think about it: the LAION dataset has 6 billion images in it, and those are not all drawings. Hand-drawn art is the only thing we should really care about, since even professionally made pictures are just of real-world objects, and only the composition (and handmade clothes and similar handmade items, if you really want to stretch the argument) is anyone's creative work.
My big gripe with the anti-AI argument is that the importance of artist-made art is overstated for most models. If you really spend time using the core models, you realise that even when you don't specify a style of art, you will still be able to create almost every object that exists on earth, and then combine every item with every other item (a dog that is a car, a chair made of rock, every possible combination). Then you mix in an art style, and that's the main use of AI, and that's perfectly fine.
The main problem is finetuning models on small subsets of images: checkpoints, LoRAs, DreamBooth, etc., since those are what give pretty much all those super artistic images. And the problem is you can't really remove the fundamental idea of finetuning; it's just how models work.
So yeah, basically my biggest problem with your video is when you say that it's "all completely illegal", which sounds silly. You are forgetting that a model trained on data extremely curated to avoid artist-made images simply wouldn't output results that are very different from what we could get today from Stable Diffusion. Most of the data is not artist-made, so in the most perfect possible model, most of the data would... be the same data we use today.
Great clarification/follow up to the previous video, I disagree only at the very end because a lot of the "promising" aspects are currently way overblown by companies, but I can't and won't blame you for being excited for AI done right.
I'm looking forward to the next video!
IMHO there are two big problems with fighting AI via legal means:
1) One can argue that using "publicly available" images to train AI models is not illegal, just like real humans (artists) get to see a lot of images and drawings throughout their lives and don't need to pay for anything; the same logic can be extended to AI models.
2) Fixing the "legality" aspect of AI art doesn't fix everything.
For one, you can't really prove or disprove what was used for the training, and you can't physically stop a malicious user from doing so.
For two, we now already have all possible sorts of scammers spawned from this, e.g. the Discord "artist looking for a job" scammers.
Oh, and last but not least, the problem with fixing anything via legal means is that the people who create the laws don't understand what they are talking about and can be easily persuaded either way (I've been following Louis Rossmann and his fight for the right to repair long enough to figure that one out).
You will hinder the further development of AI with those kinds of legal concerns, and others, like China, will develop it faster and further and in the end replace domestic software. In the end it's of no use to fight it. Artists are simply pissed because they are becoming obsolete for many tasks. Instead you should focus on what AI can't do, or adopt AI into your personal workflow, since it's just a tool. Just like the copy-paste function didn't kill all creativity, AI won't either. All I can read is a looooooot of bitching by artists, which is funny to me because, as a mechanical engineer, the same stuff applies to me. An AI can create mechanical parts, so why does somebody need me, right? Well, I will have to adapt or I will drown bitching.
@@sierraecho884 Comparing AI with copy-paste is just absurd. We've had "copy-paste" in real life for centuries; it's called a printing press. We've never had anything like AI image generation before. It's an unprecedented thing, and we need to be careful about how we implement it into society. What if China does it faster? Is that an excuse to try to catch up by doing it unethically? If someone else uses a shady and illegal approach to do something faster, that doesn't give you an excuse to do the same.
@@dorum358 We never had copy-paste in computer form before either. I am not saying we should disregard ethics, but this has little to do with ethics and more with fear: fear of being replaced by a machine. He is right, you know: who needs the artist if you can just use all his data instead? And China will do it faster anyway. You can either learn to live with it or you don't.
@@sierraecho884 I am saying lots of shortcuts in computer form are derived from shortcuts that existed in reality. AI is not like that, it's a completely new thing.
What if someone told you that we don't need you, we just want your data? It's absurd, isn't it? There would be no data without you. Artists will just start posting their stuff with huge watermarks or alterations if it negatively affects their livelihood, or if they lose their income they will just stop posting altogether. AI needs artists for new training data, so is it so absurd to just pay them for it? Have you seen the garbage that comes out when you try to train a model fully on other AI-generated images? There is no future for AI without artists, so why not think about their livelihood at least a little bit?
Why can't AI be used for GOOD like UV UNWRAPPING or REMESHING other pain-in-the-ass procedures to make artist's workflow EASIER/More EFFICIENT?
The internet was born from borrowed and repurposed code. AI is doing the same. The very platforms you post to take derivative rights to your work in perpetuity. No one owns the arrangement of X next to Y in space. Work you generate using AI is not illegal, and anyone using it as a photocopier is missing the entire brilliance of it. If you aren't considerably better in your field through using it, then you don't understand it yet. Derivation is the bedrock of growth; the internet was built upon it and changed the world (for good or ill). AI will do the same. Embrace it or get left behind.
8:10 This is indeed a very cool idea.
It's really absurd to me that companies even DARE to use AI this way. My girlfriend works for an advertising company and is basically forced to only use AI in her social media posts, because all the illustrators and graphic designers were laid off. And she has so much work to do that she can't even edit the images a little to put some soul and work back into them. It really breaks my artist heart and baffles me that companies actually dare to operate in this legal grey zone.
This is probably the most level headed take I've heard about AI. I really hope that the future of this technology will turn out the way you explain it here, with artists getting paid waay more for their work that is used as training data. With how many images are required to train it, maybe it will actually open up new job opportunities for us.
Unfortunately, I've already been confronted with AI in many advertising projects. I'm in a country where labor is much cheaper so a lot of the advertising companies now make their artists use AI like Firefly or Midjourney. As a freelancer, I've had a few cgi and animation projects where people on the teams that were formed for said projects used AI and I had to just accept it. I think you're right that it's important to talk about it and in the future if I get other projects like that I'll try to convince them not to use it or I'll back out of the project if they insist on it being the final on-screen version of the visual element it was used on.
I feel like AI is going to be a big issue in these lower income countries where business people don't care about legal stuff as much.
What's clear is that the people responsible for developing such a system where artists are compensated fairly for their work used in training data are the same people developing these current models and we all know what kind of people they are (key word is greedy haha)... That's why I think it's important to be loud about these issues as artists if we have an audience, because we can influence the people's opinions and force these companies to change, as they will avoid doing it on their own.
About paying the artists whose images were used to train it: I just want to be able to generate images for fun, and I can right now. The idea of paying for every image at full price, as if someone drew it for me, sounds ridiculous to me for my use case. I wouldn't want to pay anything more than a few cents per generation. I'd rather not even pay that, because it's open source and I run it locally. Although, I'd be happy to tip if I used an artist's name in the prompt
@@aaronorelup4024 Sorry, I did not express this clearly. I was talking about ownership and commercial use. It's still morally wrong right now to generate stuff with the current AI systems because they're made with stolen art, but if it ever gets to the point where the models are not made with stolen art I also agree you should be able to generate what you want with no pay and if you want ownership of a certain image that was generated then you should pay for it. Even right now with real art people usually offer non-commercial rights to their work for cheaper and commercial rights for a lot more.
Also the part about using the artist's name in the prompt can also be bad. A big part of an artist's personality is their style, so using their name in a prompt without consent is akin to someone putting your name in an AI model and generating pictures of you without your knowledge. I know it's not an apples to apples comparison but it's quite creepy if you think about it like this. It can be flattering for some artists, but if you start showing those images off without crediting them and without asking for consent, it's wrong.
About the open source aspect, it's a bit more complicated in reality, here's a great video from Steven Zapata where he talks about that among other stuff related to the topic: ua-cam.com/video/tjSxFAGP9Ss/v-deo.htmlsi=CFU7RtZ8urysfsJz
Spot on analysis of FakeAI Boro. Thank you.
Boro did you get married? I see the ring on your left hand, congrats!
Yeah 3 years ago, thanks!
@@BoroCG Oh shit I had no idea lol, congrats to the both of you!
It's us or them.
I'm so unbelievably tired of the witch-hunts around current-day emerging technologies. I'm not saying that "anything goes" or that "you should just let it happen". Personally, I want to be informed about what's happening in the world around me without having to buy an Adobe subscription and try to figure it all out myself. All this new tech is prohibitively expensive for someone like me who's not an artist, nor using it for any sort of work. Pandora's box has opened, and I loathe it when, out of fear, people attempt to avert the eyes of everyone else and act as though it either doesn't exist or hasn't been opened. I've always appreciated your head-on approach to new tech, and I hope you'll continue to do what YOU think is right, even if it means buckling to a disgruntled portion of the populace.
It's like the invention of "copy/paste". In theory there are rules about what you can copy and what you can't (movies, music, etc.). In practice it does not really matter. You will never be safe from AI as an artist, engineer or whatever; I would even argue nor should you be. Yes, it will just create all the art somebody worked 20 years to come up with, but it will also create technical systems, etc. Not only artists but everybody who creates anything will be influenced by this tech; it's just how it goes. So you can argue and try to ban your data from being used all you want, but in the end it's just how it goes and you will have to deal with it. Why should I hire a person for I-don't-know-how-much to come up with a logo if I can simply use AI for next to nothing to perform the same task faster and probably better? The same goes for doctors: why consult a specialist if AI can look through your data and find that cancer at a better rate than a real-life doctor?
The solution where licensing costs increase might be good for some artists, but it leads to a situation where only big established companies control the AI market. AI already controls the content we consume, so indirectly it controls the democratic process. Giving this power to the select few who can afford to pay millions of people is a dangerous idea. Art might be important, but I am ready to sacrifice it so that I don't have to live in a dystopian corporate autocracy.
But it's not the companies (big or small) that have to pay the artists - it's the users of the trained model. The companies are just supposed to provide the tool (AI) that connects clients with artists. Or more specifically, a client with a cluster of artists, splitting the revenue
@@BoroCG I misunderstood your idea then. Then I agree, if a user of a model would pay a markup directly to the artists then that would circumvent this problem.
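(A back-of-the-envelope sketch of that per-generation split, assuming something that does not exist today: reliable attribution weights saying how much each artist's work contributed to a given output. The artist names, fee and platform cut below are invented for illustration.)
```python
# Hypothetical revenue split between a platform and contributing artists.
def split_generation_fee(fee: float, attribution: dict[str, float],
                         platform_cut: float = 0.30) -> dict[str, float]:
    """Split a user's per-image fee: platform keeps its cut, the rest is
    divided among artists in proportion to their (hypothetical) attribution."""
    total = sum(attribution.values())
    pool = fee * (1.0 - platform_cut)
    return {artist: pool * weight / total for artist, weight in attribution.items()}

# Example: a $0.50 generation with three made-up contributing artists.
payouts = split_generation_fee(0.50, {"artist_a": 0.6, "artist_b": 0.3, "artist_c": 0.1})
print(payouts)   # {'artist_a': 0.21, 'artist_b': 0.105, 'artist_c': 0.035}
```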
A lot of times in this video you say something is "illegal", but really what you're saying is "you feel it should be illegal". I can only speak to US copyright law (which is basically the only one that matters, as these are almost all made by US companies). But it's actually incredibly unclear whether or not this content is illegal.
It would actually be unsurprising if the current court cases find that this content is 100% legal (albeit only copyrightable if a human puts in significant effort in the final product).
This is because US precedent already says that scraping and using content from the internet is legal(Microsoft lost the lawsuit setting this precedent). And US copyright allows for fair use, where you can use works if it is transformative. And AI image generators are fundamentally transformative in nature.
If I use Midjourney to make an image of Goku fighting Barack Obama in the style of Vincent Van Gogh, you'd be hard pressed to argue that the Van Gogh works used to inform the model on how to make the image were not sufficiently transformed.
Even though you might feel that I should pay the Van Gogh estate some royalties if I wanted to use this image, the law and legal precedent are pretty clear that I don't have to
hi
Oof. I wish I had watched this before commenting on your last video. But either way, I think the ultimate outcome of all of this will simply be that people won't stop making unethical AI models, they will only get better at hiding the fact that they used them. Ultimately, AI is just a mass-scale torrent. Is it illegal for copyrighted works? Yeah. Can you prove you got that content from that torrent (AI, in this analogy)? Not if you know what you're doing.
I think it is important to talk about the ethics of AI, but unfortunately I don't think this topic is going to clear up, because the models being produced that are actually useful are made by people who do not now, and never will, care about the people who create the raw data used to train their models. Because to them, it's just data, not work. A truly clean, crowd-sourced AI model is never going to happen, just on the grounds that the temptation for a troll to taint the data would be too great.
I always see people getting this wrong about generative AI. When you talk about compensation for the artists whose work was used to generate an image: there's no such thing. ALL artists and images were used to get to that point. There's no particular influence of one image over the others, and if there were, there's no way to know.
The thing is that generative AI doesn't look at the images and then, based on them, create a "similar" one, or a "collage". The very complex mechanism behind all these models basically puts those images on a map, arranged by some unknown criteria; in fact there are more than 3.5 billion criteria for a model like Dall-e 2. Then you give the system some words, and with those words it will try to put you on a point on that map that aligns with them. Maybe you land near a very well-known point, like the Mona Lisa, so all images that land near that point will look very similar. However, most of the time you will land on a point where no one has landed before, where there are no other "strong influence images" or "over-represented images" nearby. This is where the generative part of the mechanism flourishes. Similar to a function in maths, you can "estimate" or "predict" what should be at that point, and that's how you come up with an image never seen before. The images used in training are just there to plot the map. You could argue that a specific image looks very similar in style to a specific artist, or that people use the specific name of an artist to get a specific result. But that's just a "shortcut" word used to get to that location, and if that specific artist wasn't there to begin with, maybe another artist would have taken their place.
So my point is that you can't associate a generated image with a specific artist, so the legal and compensation aspects of this topic are even more complicated than we think...
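(A toy numerical version of that "map" analogy: everything below, from the 2-D space to the random "training images" and the hash-based stand-in for a text encoder, is invented purely to illustrate how diluted any single image's influence is at a point nobody has landed on before.)
```python
# Toy "latent map": training images become points, a prompt lands on a new
# point, and the output is estimated from the whole neighbourhood rather than
# copied from any single image.
import hashlib
import numpy as np

rng = np.random.default_rng(42)
training_points = rng.normal(size=(5_000, 2))   # pretend each image is a 2-D point

def prompt_to_point(prompt: str) -> np.ndarray:
    """Stand-in for a text encoder: map the prompt to a deterministic point."""
    seed = int(hashlib.sha256(prompt.encode()).hexdigest(), 16) % (2**32)
    return np.random.default_rng(seed).normal(size=2)

query = prompt_to_point("a chair made of rock")

# Weight every training point by how close it is to where the prompt landed.
distances = np.linalg.norm(training_points - query, axis=1)
weights = np.exp(-distances**2)

largest_single_influence = weights.max() / weights.sum()
print(f"Largest single-image share of influence: {largest_single_influence:.4%}")
```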
Well, most image generation software still uses written prompts, and often there's something like "in the style of [insert artist or company here]". A good start would be to ghost-block any registered brand names and personal or otherwise protected names from being used as prompts. This of course would also include names like "artstation", because that's also trademarked. That way the images would still be used without consent, but at least the artist's name wouldn't be as easily tainted by being muddled with AI-generated images when you search for them.
@@mathilda6763 OpenAI already does that with DALL-E 3 through ChatGPT. But it's mostly a workaround to prevent big companies like Disney or Nintendo from suing them...
The AI companies made such a marvel of a technology. What a shame they can't create a system that would estimate the percentage of involvement of any particular data they used without asking. I guess AI isn't finished enough to be usable yet.
Our comments are very similar, and I swear I didn't read yours before. When you actually get informed on how AI works, you come to these same conclusions.
I think artists overestimate how much developers of AI and their users value the artwork it is trained on. It's just the data that is important, not the art style of each individual artist.
No one really cares who the artist is or how good the art is. I think artists are mainly just scared that society won't value artists as much anymore, when I think the opposite is true.
Well I don't really care if AI companies and users value my artwork or not - it's not free. If it's just data - then they should find different data, and see what model that makes.
Without art there won't be artistic AI models. If they want to keep evolving their models, they need to make sure artists keep getting paid
@@BoroCG Yeah, I think so, but I was referring to the Adobe library, where Adobe already bought the rights to the art. Because of that, why should the original artist get compensated for AI images made using those photos, as if the new photos were made by those artists? That's what I was trying to say.
@@aaronorelup4024 they had the rights before AI was a thing, so the artists didn't agree to use it for training when they sold their rights. If it was an opt-in process it would have been better. It's like making a contract only for it to be changed later without your consent, you wouldn't like that now would you?
@@dorum358 They sold it. They can use it however they want
@@aaronorelup4024 I’m not arguing against that. I’m saying it’s just a huge asshole move that will hurt the market more than help it. If artists risk losing their jobs because they make stock images then they will do something else and nobody will make stock images anymore, then they won’t have new stuff to train their AI with and it will just remain stagnant and stop evolving. I don’t get why we’re supporting multibillion dollar corporations taking advantage of people :))
Adobe has the money to make it right, they just choose to be greedy cause that’s what evil corporations do if you don’t keep them in check.
Where is the difference between an artist looking at millions of paintings in galleries all over the world and then applying his observations to his own art, and an AI being trained on these images and then generating a new image that is not just a copy of a single image but "inspired" by all these images?
As far as I know, only real things (including digital goods, of course) like a painting or a piece of music are copyright protected. You cannot protect something like a certain painting method or style. So as long as the AI is generating new pixels for a new image, and not just copy-pasting from different images (which at some degree of composition would also qualify as a new piece of art), I don't see a big legal problem.
Of course I understand that artists might feel ripped off.
When artists get inspired by work, it's not a perfect process. No human would be able to look at a painting and replicate every "brushstroke" in it. AI can do that; it's superhuman, it's pretty much like cheating, and it's nothing like inspiration. It's like saying that a superhuman robot should be able to compete in professional boxing because it also looked at all the top boxers to perfectly train its fighting style. Also, people don't just "look at a million paintings" in a few days and perfectly remember them. It takes years to form a strong foundation of inspiration for your work. Artists' works are also not just inspired by images or things they saw; they are influenced by their mood, by their life experience, by their personality.
I think the way you use the word "illegal" is weird, because there hasn't been a verdict yet on what is or isn't legal with training AI on images.
Yeah, I'm voting on it right now
Then why do AI companies avoid questions about *where they get the data for training*?
I read through many comments and I can say it's 99.99% salty creative people, because they are being replaced by software.
Many say AI is dogshit, soulless and can't create anything new. Well, in that case don't bother, because it won't replace you, right?
In other cases people say AI is good enough and will replace them. Well, then focus on something AI can't do. Focus on using it as a tool. With new tech, certain skills go obsolete. I am sure there was the same debate when books could be printed en masse: "Noooo, this will devalue my work, I am an artist, a book will lose its soul." Yeah, well, we know how that went.
You will not get rid of AI or close it up or make it ethical or whatever, just like you can't get rid of the "copy/paste" function. Those things will always happen, and if you insist on outlawing it or certain aspects of it, others will develop it and replace you from somewhere else, say China for instance. They don't give a fuck about you; they will simply come up with a better tool, and those companies and entities will replace you then... Either you adapt or you drown, simple as that.
Photoshop will just implement a cloud function where it analyzes 90% of all art and simply uses that, or something xD You don't have to use that, just like you don't have to use a computer with a copy-paste function... good luck.
@@sierraecho884 can ai sexytime
So your logic is: when someone uses GPT to create a text which is then published, everybody whose online postings were used to train the LLM should be compensated?
Sounds like a plan - considering AI will create the new and biggest class of useless people, those people will need compensation. Eventually, most essential processes in the world would work on their own, and people would simply get universal basic income
@@BoroCG This is a WILD thought. A very interesting one tho...
I think you are very naive if you think you can prove anything about the projects that studios or companies are going to do with AI. I mean, who can tell whether this or that project has the rights or not? Who can tell exactly how much AI is or isn't in a big project? I think it's sad, but in the future we're gonna see tons of production content made by AI and nobody will care, not even the law.
"You actually own the rights to your image, and nobody will ever be able to take that away from you."
_* The government that is the ones who get to decide what rights anyone has: *_ "Allow us to introduce ourselves!"
Makes no sense to have to pay every artist a human was inspired by; and with the number of sources involved, each individual artist would be getting paid for less than one pixel per image...
You can't compensate every artist whose work has been used to generate an AI image, because AI doesn't work that way. AI doesn't take single images and mash them up together to give you a result. Please review how latent diffusion works: how the noising and denoising process works, how it's actually trained, not just how you think it works. There are a lot of misconceptions about AI to clear up before we can state who should compensate whom.
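(For readers who want the concrete picture, here is a minimal sketch of the forward, noising half of that process, using the standard DDPM-style closed form; the schedule values and the random stand-in "image" are arbitrary. The trained network only ever learns to predict the added noise, not to store the training images.)
```python
# Minimal sketch of the forward (noising) process in diffusion training.
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # noise schedule
alphas_cum = np.cumprod(1.0 - betas)      # cumulative signal retention

def noisy_version(image: np.ndarray, t: int) -> tuple[np.ndarray, np.ndarray]:
    """Return the image after t noising steps, plus the noise that was added."""
    noise = rng.normal(size=image.shape)
    noisy = np.sqrt(alphas_cum[t]) * image + np.sqrt(1.0 - alphas_cum[t]) * noise
    return noisy, noise

image = rng.uniform(-1, 1, size=(64, 64, 3))   # stand-in for a training image
x_t, eps = noisy_version(image, t=T - 1)

# A denoiser is trained to recover `eps` from `x_t`. By the last step the
# input is statistically almost pure noise: the signal weight below is
# close to zero, so there is barely any of the original image left to copy.
print(f"Signal weight at t={T - 1}: {np.sqrt(alphas_cum[-1]):.4f}")
```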
Thanks, I am aware it's not clear how images are involved in any particular prompt.
My main point is that a few years ago, image generation was impossible - now, paying for training is impossible. AI companies have to figure this one out as well if they want to make things work. It's unfinished technology that we don't exactly need, so we don't have to use it if it can't be legal yet
@@BoroCG You can actually train your own models from zero if you like; the resources, training code, instructions and papers are all there. It's just too much work, and a few thousand images of your own won't work, or will give you worse results. That's what happens with Adobe Firefly: allegedly they don't use any existing dataset like LAION, so they have to start from scratch if they want to be clean. Now imagine a single artist putting in all that effort to make it work; it's not efficient, but technically you can. You keep saying it's not legal, but it's not illegal either. I wouldn't care about legality yet unless you are really trying to replicate a specific existing artwork, like the Mona Lisa, and call it your own. But if I can prompt the Mona Lisa in a furry suit, then you are not committing any illegality. Tools aren't illegal, and neither is having a gun; it's the stuff you do with it that will get you in trouble. I know it's hard to understand that machines can actually learn from images and not just memorize them, but that's how it really is.
@@ianalexanderreyes5890 No, he can't, because he would need the budget to buy a few thousand pieces of art to feed into his model :).
@@Alarios711 I'm pretty sure Boro has a very prolific personal collection of his own art, but even if you make one artwork per day for 10 years, that's only about 3,650 images; it won't work for training a personal model from scratch. That's my point.
@@ianalexanderreyes5890 Exactly, which makes AI art not viable, thank you. You either hire or compensate artists for their work to feed your model, or you create the source material yourself. In both cases it's too much effort compared to just doing the art.
Oh, that's your argument for why AI companies should not have to compensate artists? Well, sucks to suck.
If it's expensive, that makes it useless. The whole point is to generate infinite free art without having to go through a person. This is just a massive cope. You don't have to pay a horse shoer every time you buy a car. The job of an "Artist" is just gone. No one will stop you from painting for your own pleasure.
A horse shoer didn't design a car. Without art, there's no artistic AI model, like at all. If you want AI to keep evolving, you have to pay people that train it
@@BoroCG We don't pay reparations to the natives either. We took the art. It belongs to the AI companies now. They beat you.
It's a cope thinking we'll choose the path of more resistance. That's literally never happened.
In the Star Trek future: They don't send reparations to wheat farmers every time they generate a meal out of thin air. You're just mad because you need to earn a living, and you won't be able to. We need to accelerate this to a point where like 90% of people don't need to work to live. This is the desirable outcome. This is the only positive outlook for the human future. A world without work.
You know horse-riding is still an industry right?
Also, cars didn't need billions of horseshoes to generate their design. Your argument is the actual cope, bro. Instead of explaining why you're right with a real argument, you resort to an allegory, because you don't actually know what's right.
What if someone told YOU today that your job will become obsolete? I think you'd find that unfair. And the way AI is going, who knows, maybe in a few years your job will also be "obsolete", as you say. People like you think it's only a good thing when it doesn't affect them negatively. If you're not an artist, you shouldn't have a say in how artists' work is used. If a model needs real art in order to exist, then that art should be paid for.
If you had to melt down a thousand horseshoes to make a car then those horseshoes should not have been stolen, they should have been paid for (that's how your argument would actually work in reality).
Nah, artists are doing fine. People who actually appreciate art don't want to consume AI-generated slop.
@@eggi4443 Who appreciates art? Speculators? Money launderers? Rich white liberals?