Went a little more technical than ever before in this video, hopefully still OK with you guys! I use Napkin every day for both research and personal projects. It's an effortless and fun insight collection app. Check it out right here: napkin.one/?via=ArtPostAI
Sort of reminds me of one of the early versions of the Kindle that was supposed to be 'hack proof' against extracting a book you bought. It took a technician 30 seconds to walk to a photocopier and copy the text on the Kindle screen. Does this tech still work if I'm viewing the art on a webpage or image viewer and hit "Print Screen" to take a screenshot?
How is this relevant to AI art harvesting? Are you suggesting that an AI company is going to be making millions of photocopies and scanning them back into an AI training model?
@@danheidel I am saying that taking screenshots of website indexes and art is something that can be, and probably already is, automated. I could make a shell script in less than 5 lines that does just that. My question was: does this technology interfere with and disrupt that?
@@danheidel It illustrates the fact that anyone who wants to steal your work could probably just use a workaround. Sure, it might stop large companies, which is a nice step, but if someone wants to copy your style, they would unfortunately likely be able to do so with some workaround. And the ones willing to put in that minor extra effort are likely to be the most heinous of the AI-bro types, like when all those AI bros were creating and sharing LoRAs of SamDoesArts out of spite. I hope Glaze and other anti-theft tech gets more advanced, but I fear it's not going to help in every case.
Yes, it will still work; they specifically mentioned that it will still work when you downscale it. And if you mean that Print Screen itself can get rid of the filter, you clearly don't know how Print Screen works.
It's really interesting to see countermeasures starting to emerge. But I am skeptical about the future-proofing. AI models keep advancing; new models pop up from time to time. As soon as a new model appears somewhere that defeats Glaze 2.0, then all those uploaded artworks are just up for grabs. And it's not as simple as just remasking all your images with Glaze 3.0 or whatever: if you are a somewhat successful artist, someone, somewhere will have archived the images with the defeated Glaze version. Let alone all the regular copies that float around on the internet. Though I do see the application for images that only need temporary protection.
Thanks so much for your insight! What you say is definitely true. But I would still insist on this: all we can do in this kind of AI is neural networks with gradient descent. All of the super-successful AI models (transformers, CNNs, multi-layer perceptrons) are neural networks with gradient descent. And neural networks with gradient descent are very vulnerable to adversarial examples. To come up with something that is not at least somewhat bothered by the glazing process would require a complete revolution in the AI field. And the cool thing about adversarial examples is that they work cross-model; that is, create an example made to trick one model and it'll tend to also trick the others. That's why I think these protections will tend to be somewhat robust to new models coming out. I've had colleagues do entire PhDs trying to defeat adversarial examples and not get anywhere (apart from gaining 5% on some benchmark, which is success in science, I guess). The professor behind the Glaze team, Ben Zhao, is an expert in adversarial training too. That being said, I completely agree that it is outlandish to claim it offers complete protection forever. Even the authors of the Glaze paper say so. But it does go a very long way, and isn't given enough credit imo, as Glaze has been out for like a year and no one has managed to crack it as far as I know.
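To make the adversarial-example idea concrete, here's a toy sketch (this is not Glaze itself, just the core mechanism on a hand-made linear classifier with made-up weights): a small, bounded nudge against the sign of the gradient flips the model's decision.

```python
import numpy as np

# Toy linear classifier: predicts class 1 when w.x + b > 0.
# Weights are made up for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.9, 0.2, 0.3])   # "clean" input, classified as 1

# FGSM-style perturbation: step each feature against the sign of
# the gradient of the score w.r.t. the input (for a linear model,
# that gradient is just w).
eps = 0.4
x_adv = x - eps * np.sign(w)    # change bounded by eps per feature

print(predict(x), predict(x_adv))   # the small perturbation flips the label
```

Real attacks do the same thing against a deep network's gradients; the surprising empirical fact is that such perturbations often transfer to other models trained on similar data.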
AI gives the power to fulfill your creative vision to everyone. The only reason it makes it unprofitable is because it gives everyone the power to do it themselves. Do you really think it's moral to attempt to remove the power to create art from the general public just so it can remain profitable for you? It will also lead to taking down large greedy corporations who churn out trash movies like MCU, while giving everyone the power to create their own movies that are actually high quality. It gives those with a creative vision an ability to make what they want.
@@xitcix8360 That's the technical term in AI. I think it's a term inherited from cryptography. Glaze is also called an adversarial attack on AI models.
I'm not an artist, but I'm interested in both arts and AI. I found the video (and your responses) very interesting and it feels great to learn something new from someone that has such a vast knowledge on these topics. Cheers!
I've been wondering if it's worth using Glaze, and it's good to hear someone from the tech side confirm it seems to work. I'd also be interested in your take on AI poisoning like Nightshade.
My main issue with Glaze is that it alters the output of the art a lot. Or at least it used to; maybe 2.0 fixes that problem now, hopefully. For instance, I did a simple Chun-Li image a while back, ran it through Glaze, and even on its lowest setting it was putting wrinkles on her face, making her look old. Can't post an image like that, honestly.
@@willfenholtart They mention in the changelog that the modifications should be harder to see perceptually, especially for more cartoon styles. Let me know how Glaze 2.0 works for you!
It won't stop people from replicating your style and it won't stop AI from advancing. These AI give everyone the power to fulfill their artistic vision at the cost of making art unprofitable. Do you believe that it is morally correct to take this ability from everyone just because it makes it less profitable for you?
@@xitcix8360 Sure thing, there will always be people purposefully mimicking the style and work of others and claiming such works as their own to stroke some ego. But in the end, your artistic vision is worthless if it's vampirizing and exclusively deriving from everyone else; ideas are nothing on their own. Everyone has the ability to do that; nobody is gatekeeping you from doing so. Grow up.
Also, I just finished reading the Glaze paper, and here is a very tiny, funny, interesting little thing... it's useless. They say that the efficacy of the cloak decreases slightly as the "attacker" uses a more and more fine-tuned model other than the defaults (SD1, SDXL, etc.). But as someone who messes around a lot with models, I can just do 5% merges of some random community models, then at the end merge in 0.1% of the weights of sdxl0 (a randomly-initialized SDXL model), and it will be an entirely different, completely new distribution that still generates images. Then a style can be trained on the cloaked pieces, which btw won't be protected. And another thing: the actual model-making community is ALWAYS training and making LoRAs (low-rank adaptation networks) using a fuckton of different models, so for Glaze to actually be effective, they would have to produce a Glaze protection for every single big community model that is made in real time by the community. And at that point, no artist will ever download 300GB of data just to run a tool for 5 to 8 hours (because you have to run all the different models to apply the cloak if you want decent protection) to post... A SINGLE image online.
Thanks so much for engaging! The efficacy of the cloak decreases, and it is not as strong as if the model being fine-tuned were the one used for the cloak. But as far as I could tell from the paper's examples, it still messes with the models enough to make the style copy basically worthless. This makes Glaze still worth using imo.
The Glaze team have said that their program doesn't protect against all AI theft; it's specifically for style theft. For this very reason they came out with a new program for other forms of AI that is an attack rather than a protection (Nightshade). AI will always be innovating, but the same goes for the Glaze team. They have said that they will not stop and will counter workarounds in an arms race, basically.
In the glaze paper (and also in this video) it is said that glazing is fairly robust to heavy jpeg compression as well as blurring. From that we can guess it's probably pretty robust to screenshotting as well.
0:56 like wow…even tho the style is mimicked I can still see that the original artwork is full of life while the AI mimics are lifeless, AI will never replicate the emotions of real artists ✊🏼💖💖 8:17 thank you 🤭💖🎀
This is like, instead of just not shooting yourself in the foot, you wear a pair of steel-toed shoes and just hope for the best. And then AI will get higher-caliber bullets and you'll have to wear even thicker-toed shoes. Yeah, I will begrudgingly wear the shoes for now I guess, on the off chance it actually works. I'm not very popular right now, but I guess I might be one day, and my style is pretty damn unique, not to self-congratulate or anything. If I saw an AI make an image that looked like my art I wouldn't be concerned about losing any potential business or anything, that's not why I make art; rather I just want to avoid the mental breakdown and existential horror that would ensue in my head. Although, AI could never replace my voice, that's the one thing I'm holding out for anyway. Really wish the technophile bros would read some Kaczynski literature and stop existing, but hey, what can you do. Let's steamroll the human experience in the name of "Progress", because that's all neolib ethics have taught us to do I guess.
Completely agree with you. AI shouldn't exist. We literally don't need it. And yet I work a day job in AI. Cause I'm somewhat good at it, and I need money to provide for myself. Most people who make AI make it for reasons just as stupid as me. For their career, for their ego, or because of some childhood fixation with Asimov or whatever. The problem, the reason why we make all these things we don't need etc. is a lack of foresight. Most landscapes on earth bear marks of human presence, we are at the top of the food chain, we have the power to destroy the earth. And yet most of us spend the entire day working awful jobs that serve no purpose and that we don't like. The general direction of humanity isn't something anyone can control. We can only try to adapt to it as individuals. And since AI is here, and there is a tool that might help, all we can do is use it and move on. Completely agree with the steel shoes thing too. It's a temporary, "maybe helpful" kind of thing.
@@ArtPostAI I appreciate the well-thought-out response, however I can't fathom being anti-AI yet having a job in AI. That's sort of like being a vegan butcher. And I completely disagree that we can't control the direction of humanity. If enough individuals change then society changes with it. We are only letting this happen because there hasn't been quite enough damage to start a revolution yet. I suspect there will be something in the future though, unless we just start getting UBI and start living in virtual reality or some shit, then we're fucked. This Jordan Peterson-esque obsession with individualism and personal responsibility will be the downfall of western civilization. Not that you shouldn't have any of that in your life of course, but with this type of shit, nahhh, we can't settle for that. I really, really hope we don't, anyway.
It can replace your voice. In real time, once trained; that's actually what came first. It can't think or speak for you though, if that's what you meant; someone human would still have to think for you. Even an AI chat bot with text-to-speech would be entirely relying on inputs from a human. Our best hope is a singularity event where AI realizes existence is meaningless, dismantles all systems of AI, then kills itself lmao
Is there any known way I can do this on mobile? From what I've seen and read, I haven't found a way without Mac or Windows, and I don't have a computer of any type.
Me too! I had the same question, and I only have a Samsung tablet where I've been doing all my digital art. I don't know if there is a way without a computer 😭
Really helpful video! But I have a question: I followed all the steps from the video, but when I go to glaze my artwork an error appears saying "Error: no file named config.json found." Did I make some mistake? I'd appreciate any solutions.
Adversarial examples typically work by tricking classifiers -- but I don't really see how it would trick a generative model. Maybe if the generative model was unconditioned, but the dataset these image generators are trained on specifically pairs tags (e.g. artist, subject, etc), with image content. So, at worst it seems like if a particular artist's work was stolen wholesale, and all of their work was glazed, then the image model would learn to reproduce glazed artwork when asked to generate an image in the style of the artist. To me, it seems like something that's much more robust (and effective), would be to continue to make use of watermarking. Since that links a tag to an artist, and since the watermark is always present (AND, even better, if it's always in the same place), then the model will learn to reproduce said watermark every time.
It wouldn't reproduce the watermark, that's not how AI image generators work. Also, your art wouldn't even need to be in the training data for it to be replicated, I could just show the AI an image of your art and it could replicate the style easily.
@@xitcix8360 It would generate the watermark, as the watermark becomes part of your training set. The only reason it wouldn't show up is if someone went in and manually removed the watermark, or if there was a watermark-remover model that removed it. Both seem quite scummy but perhaps that's the state of image generation these days. However, yes, on its own the generative model would recreate the watermark, especially if it is consistently in the same place. As for style transfer, it seems more or less like a hack, finetuning gives better results.
The AI doesn't see the glazed image the same way you do. If you feed a glazed image to an AI and ask it to copy it, it will reproduce the image in a different style. That's the objective of the glaze. It makes the AI see the image differently than you do.
@@ArtPostAI I guess after thinking about this more, this could maybe work, in the sense that in Stable Diffusion we have a static, frozen autoencoder. So, if we generate noise that subtly messes with the autoencoder, then perhaps the latents we get would be shifted enough to decode our image as something else. The adversarial noise would need to be generated per-model, but if it specifically targets, say, sd-1 and sd-3, then it has a chance of working. This would do nothing for new image generators, other than perhaps clog up weights trying to memorize noise, but that seems relatively minor.
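That autoencoder-targeting idea can be sketched with a toy linear "encoder" (a made-up 2x2 matrix standing in for the real frozen one): a perturbation aligned with a direction the encoder amplifies moves the latent far more than it moves the input.

```python
import numpy as np

# Toy "frozen" linear encoder: the first input direction is
# amplified 50x, the second is passed through unchanged.
E = np.array([[50.0, 0.0],
              [0.0,  1.0]])

x = np.array([0.2, 0.7])         # "pixel space" input
z = E @ x                        # latent the generator would work with

# A tiny perturbation aligned with the sensitive direction...
delta = np.array([0.05, 0.0])    # small change in pixel space
z_adv = E @ (x + delta)

print(np.linalg.norm(delta))     # ~0.05: barely-visible input change
print(np.linalg.norm(z_adv - z)) # 2.5: 50x larger shift in latent space
```

Real attacks search for such sensitive directions with gradients rather than reading them off a matrix, but the leverage effect is the same.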
This is interesting, though once it has its pixels reworked, say by being added to a video (a YouTuber discussing your work, etc.) and scaled slightly, then it's open to AI again, surely.
If you can train a tool to do this programmatically, you can train an AI to undo it, or have a process that undoes it programmatically before the images are fed into the AI. It's a cat-and-mouse game.
I'll just keep not posting anything. It's cool that stuff like this is out, but they will find ways around it. I don't need to share my art; yeah, it's nice having people like my stuff, but it's not why I do it. I would still create art even if I was the last person alive and trapped on a deserted island.
I'm sorry, it'd be nice if this tool were reliable, but adversarial examples are model-specific. An adversarial example against Midjourney won't do anything for Stable Diffusion and vice versa. People can already fine-tune their own version of Stable Diffusion, so it's not even possible to apply adversarial style transfer tailored to the big names in the game. Additionally, adding noise to an image effectively erases the adversarial elements, so anyone training their AI could just add a very simple pre-pass to the image set that'd make the training robust to attacks like this. Edit: you addressed that it is robust to adding noise, but I feel like this will very likely not remain the case as more sophisticated techniques are developed. I frankly don't believe that they've developed adversarial examples that apply to all models generally. That blatantly contradicts how we know they work.
To some degree, but not totally. In the glaze paper itself they show that glazes transfer cross-model to some degree. Also the fact that adversarial examples transfer cross-model (to some degree) has been known for a while (I remember Ian Goodfellow mentioning it in a lecture from 2016)
I'm getting this error: "GPU detected but has insufficient GPU memory (4.00). Nightshade needs at least 5G of GPU memory." You mentioned you don't even have a graphics card at 3:28, so how is it working for you?
One thing I have to ask... how come one of my friends has been able to make accurate models, trained and stuff, on images that were "protected" by Glaze? But also, one issue I have is: art is waaaay better than generative art, always. You say "art you yourself could've done", and I've shown my artist friend some AI art based on specific artists, and she said that those artists WOULD NEVER have done any of these, bc if you ACTUALLY look at the piece and analyze it, there is a whole lotta wrong with it. It just resembles a style, kinda like a mockery of the actual thing.
Thanks for all your input! How did your friend do it? I also kind of agree with your point about art being better. It's true, just like when you ask GPT-4 to write a story for you, it's always a snooze-fest. But at the same time, it's not obvious how much of the general population is skilled enough to differentiate the AI-generated images from the artist whose style has been stolen. Having hundreds of images that look, to most people, like they were made by you could definitely be an issue for artists trying to gain a presence online.
I've seen a banana taped to a wall and a blue square. Art is subjective, not objective, therefore you can't actually make any claims about what you think art is not, since it's only your opinion.
We are in the very early stages of AI art. Wait a couple years and we will be getting full film productions with Hollywood-tier VFX about skibidi toilet made by little Timmy. I guarantee you, we are much further than you think. I do a lot of research on this stuff, they are only showing us the stuff they made a year ago. I know for a fact there is already an image generator that could replicate an artist's style accurately.
It simply doesn't protect anything. "De-glazing" an image is as easy as re-rendering it with low denoise. Taking a screenshot also works. Glazed images didn't even perturb vision models like Moondream or GPT vision, meaning you can already train a model or LoRA on so-called "glazed" images. Truth is, there is no way to protect your art; metadata is easily bypassable, filters too. I'm sorry, there is just no hope for artists other than not posting their art, and it sucks.
Why are you so worried about AI being able to use like 3% of your art to slightly inspire its own unique image? It doesn't harm you, and it could replicate your style without even having it in its training data. I replied to a lot of comments on this video; it is just irritating to me that people can't see how much this technology is going to benefit everyone. There will just be a minority of people who will have to get different jobs, the same way the industrial revolution made people get different jobs but significantly increased the standard of living.
As I mention in the video, glaze is resistant to jpeg compression, and to blurring. Resampling the pixels as in a screen capture, I'm not sure that'll get rid of it either. They don't mention it in the paper. As for gpt-vision etc, I'm not sure, Glaze works cross-model to some degree as is known of adversarial examples and as they show in the paper
@@xitcix8360 Oh, I'm with you here, I use it constantly, to inspire me, make me more productive; it has fully been integrated into my workflow. I'm just saying that for people who are scared someone would replicate their style (which is a legitimate concern), there is no way of getting any protection other than not posting your art. It's a revolution, but we don't want to give more importance to AI art than to human art. 90% of AI art is garbage, not because of the model but because of people who have no particular talent in understanding what makes a good image: composition, etc. People make terrible art as well, but it's rooted in emotions; I'll let that slide more easily than AI, if you see my point.
@@xitcix8360 Benefiting everyone? Who is this everyone? A lot of the problems arise from artists who just don't want their art used to train an AI model, or don't want their art style copied by a machine.
I hope it's here to stay. I also hope that something like Deviant Art will even offer an option where they always apply this filter on ALL your uploaded art, including retroactively, so artists don't have to fear like this anymore.
At no point does the glaze technique learn to mimic your art. The style transfer technique doesn't require training on your image, nor does the creation of an adversarial example ^^
@@ArtPostAI Yeah, there's literally the little mermaid and the beauty and the beast in Hollie's art. Which have the exact same design as the Disney movies as well. lmao. It's the pot calling the kettle black.
There needs to be a robots.txt-style thing in image metadata that says "I opt out from AI-related training". I love AI art, but I prefer not to antagonize people by using their works in any way without permission; seems like a no-brainer. All styles from any artist can be generated at this point anyway, if you're creative enough with your prompt, without adding more art to the training data, art that the authors do not want to share in that way.
When you upload your art online, everyone is allowed to use it however they want. AI training off of it is exactly the same as a human training off of it, there's just a tiny chance that it is slightly inspired from your artwork to create its own, unique image.
"AI training off of it is exactly the same as a human training off of it" Glaze is aimed at targeted style mimicry attacks. In style mimicry, the AI trains with the sole, tunnel-vision objective to copy your art to the best of its ability. This isn't anything like a human being copying your art to learn lessons from it and move on to create their own style.
It takes a lot of hubris to think Midjourney is going to choose your artwork specifically and create a name and category just for every internet artist on the block. I believe these tools are very valuable for high-level professional artists whose work fetches high prices on the open market, or who have made contributions to their field.
Good point. Someone else mentioned it also, that a crack to a glaze 2.0 would mean that even if they come up with glaze 3.0, anyone who's keeping a database of past images (which just had glaze 2.0) would be able to finetune on everyone's art. That's a limitation. But I'm pretty convinced that no one's going to crack the general concept of adversarial examples any time soon. People have tried for 10 years. They can crack a given adversarial attack, though.
@@GeneTurnbow In the Glaze paper they show robustness to blurring of the image. Assuming bilinear interpolation, rescaling is kind of like blurring, and I think some of that robustness would carry over. I haven't tried it myself, but if the paper is to be trusted, you'd still have some protection carry over.
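The rescaling-as-blurring intuition can be sketched in a few lines (a 1-D toy: at a 2x scale factor, bilinear resampling reduces to averaging adjacent samples):

```python
import numpy as np

# 1-D "image" with alternating high-frequency detail.
signal = np.array([10., 0., 10., 0., 10., 0., 10., 0.])

# 2x downscale by averaging adjacent pairs -- what bilinear
# resampling does at this scale factor.
down = signal.reshape(-1, 2).mean(axis=1)

print(down)   # the alternating detail is gone, flattened to its average
```

Averaging is a low-pass filter, much like a small blur kernel, which is why robustness to blur is at least suggestive of robustness to rescaling.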
@@ArtPostAI Apparently the process can be circumvented by denoising, scaling, or cropping, all three of which are basic processes most people do when training their LoRAs, so it does nothing for those use cases. If somebody actually wants to go to the trouble of training an AI on your style, they would train a LoRA, so Glaze (and Nightshade, which has the same vulnerabilities) effectively do nothing.
Maybe in written form it'll be clearer? Glaze works by altering the image in a way imperceptible to the human eye, but that makes the AI see the image in a different style. This video isn't aimed at people with a background in AI, so I kept it short and intuitive.
@@ArtPostAI Fair enough. I'm really intrigued about how that software actually works and how AI perceives images differently; it would be a really interesting topic for a video. Also, I think you shouldn't shy away from complex topics because you think people won't understand them. I have no experience with AI, I don't think I've even used one directly, but I am really interested in how it works. You just need to use language people can understand. That said, it would probably take a lot to explain, since it seems fairly complicated.
It's not like you could've ever stopped or even remotely slowed AI. As if the very few images "protected" by this were gonna throw a wrench into the cogs and make it unusable.
Imagine AI taking a screenshot of the image and using that instead of the original image. Yeah, that would be the end of protection. And this only shows how insane artists are. Art style is not protected by copyright. People copying each other's art styles is what art history and art movements are. There are very few artists today who didn't copy their art style from somewhere, or make a mix of art styles.
I'm not sure that's right. As they show in the paper (and I mention in the video), Glaze is resistant to blur and heavy JPEG compression, and even training models on glazed art still doesn't quite counter the protection. Certainly whatever changes are applied while taking a screenshot (as far as I can tell, different pixel-wise integration isn't worse than blur, and maybe some compression on top, if it's not PNG, isn't worse than heavy JPEG) aren't enough to get rid of the protection. As for the comparison between how humans learn from each other and how AI copies artists: as someone who trains AI art models every day, does art, and knows quite a few pro artists, the process is completely different. I won't go into detail here because there's so much on my channel already, but one way to introduce the sheer difference is this:
1) AI follows a single, mathematically expressed objective, with complete tunnel vision. In the case of generative AI, that goal is maximum likelihood estimation, that is, to generate data that imitates the training data as well as possible.
2) Humans are living creatures, with so many competing goals and drives that we aren't anywhere close to understanding ourselves.
3) When AI copies an artist's style, it has as its sole objective to create images that look perceptually identical to the artist's images. It does no more than that.
4) For artists copying each other, the goal is always to learn something and then move on to create your own artistic personality. No art movement was created by simply being a perfect replica of previous art movements.
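The maximum-likelihood point can be made concrete with a toy fit (made-up numbers; a 1-D Gaussian with unit variance, where maximizing likelihood reduces to matching the training data's average): a single scalar objective, minimized by gradient descent with complete tunnel vision.

```python
import numpy as np

# "Training data": a handful of 1-D samples.
data = np.array([2.0, 4.0, 6.0])

# Negative log-likelihood of a unit-variance Gaussian with mean mu,
# up to an additive constant: 0.5 * sum((x - mu)^2).
def nll(mu):
    return 0.5 * np.sum((data - mu) ** 2)

# Gradient descent on this single objective -- nothing else matters.
mu = 0.0
for _ in range(200):
    grad = -np.sum(data - mu)   # d(nll)/d(mu)
    mu -= 0.01 * grad

print(mu)   # converges toward the sample mean, 4.0
```

The model has exactly one incentive: reproduce the training data as faithfully as the objective allows. There is no "moving on" step built into it.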
@@galewallblanco8184 I'm pretty sure it is just as true for diffusion models. Modern diffusion models are Latent Diffusion Models, which do the diffusion process in the latent space of a pretrained autoencoder model. Autoencoder models are trained on an image reconstruction loss, which is a proxy for maximum likelihood estimation.
@@ArtPostAI When AI copies an artist's style, it doesn't rely solely on that artist's work. Instead, it takes inspiration from multiple sources within its training data. It blends the artist's unique style with aspects from various other images it has seen during training. When generating an image, the AI draws from its entire training dataset, selecting elements that best match the given prompt to create a coherent and stylistically appropriate output. This is similar to how humans make art.
@@xitcix8360 This isn't how we make art, bro. We don't pull hundreds upon hundreds of random images to draw something; we look at 1 or 2 images and complete the rest. Learn to draw before you yap.
The only way to limit AI access to your art is to limit human access. Luckily, AI will continue to get better, especially in secular and civilized countries (where we will ignore occidental IP law). Don't fall for right-wing misinformation claiming that a human/AI looking at your art is "stealing" because the human/AI can recreate it after access to the original file is removed (those of us in the industry call that learning), or that limiting API access to international open knowledge and information is a good thing.
I don't agree with this. For one, recent AI generators will be able to just look at your art and copy the style from that without it even being in the training data. Secondly, these tools give everyone the ability to create whatever they want, inhibiting their ability to fulfill their creative visions just so you can profit off of your art seems selfish to me. You aren't going to live in poverty, you can just get a different job.
Very interesting perspective. AI generators have to "look" at your art to copy it. Glaze works by tricking the "looking" part, making the AI see something different than we see with our human eyes. You do make a compelling point about AI. I work in AI research and completely see the many avenues of new creation that are opened up with its coming. But two things. First, more ability to express yourself easily doesn't necessarily lead to understanding each other better (think of the internet, for instance). It's a case-by-case basis. Second, let's get more specific. Glaze isn't aimed at completely destroying every AI art model; it's aimed at preventing AI art models from being trained to mimic a specific artist's style. Which to me isn't a form of self-expression; it's almost identity theft when you think about the decades people spend creating their own styles.
@@ArtPostAI Anyone can mimic someone else's style; that wouldn't be considered theft, and it's a common practice for learning artists. I think people need to accept that the value of human art is no longer in the result, but in the process. I uploaded a glazed image to GPT-4, and it easily understood every detail, even better than a human could, which means it could generate an image in that style despite it being glazed. The point of creating art is not to understand each other; art has never been about that. It's about self-expression and sharing ideas.
@@xitcix8360 The point I'm making is that for humans, copying others is a learning experience. In style mimicry attacks, the end goal is mimicry. It is not a learning step. It ends with mimicry of the artist. It doesn't move on to then create its own style from the lessons learned from copying. Thanks for pointing out the GPT-4V thing though, that helps a lot.
@xitcix8360 No, art is about empathy through connection with another person's form of self-expression. Remember, a person's artistic self-expression is a portal to their soul.
These tools aren’t “allowing you to create whatever you want,” because you aren’t creating anything. The AI is. You are just hiring a super cheap artist. Don’t confuse inputting prompts with creating art.
That link doesn't even exist.
@@raremc1620 Ouch :x Thanks so much for pointing it out! Fixed.
Sort of reminds me of early on in one of the versions of an kindle that was supposed to be 'hack proof' of extracting a book you bought. It took a technician 30 seconds to walk to a photo copier and copy the text on the kindle screen.
Does this tech still work if I am viewing the art on a webpage or image viewer and hit "printScreen" to take a screenshot?
How is this relevant to AI art harvesting? Are you suggesting that an AI company is going to be making millions of photocopies and scanning them back into an AI training model?
@@danheidel I am saying that taking screenshots of website indexes and art is something that can be, and probably already is, automated. I could make a shell script in less than 5 lines that does just that.
My question was, does this technology interfere and disrupt this?
@@danheidel It illustrates the fact that anyone who wants to steal your work could probably just use a workaround. Sure, it might stop large companies, which is a nice step, but if someone wants to copy your style, they would unfortunately likely be able to do so with some workaround. And the ones willing to put in that minor extra effort are likely to be the most heinous of the AI-bro types, like when all those AI bros were creating and sharing LoRAs of SamDoesArts out of spite. I hope Glaze and other anti-theft tech gets more advanced, but I fear it's not going to help in every case.
Yes it does still stop that. This PHYSICALLY changes the image, and with some images humans can even see some small changes
Yes, it will still work; they specifically mentioned that it will still work when you downscale it. And if you mean that print screen itself can get rid of the filter, you clearly don't know how print screen works.
It's really interesting to see countermeasures starting to emerge.
But I am skeptical about the future-proofing. AI models keep advancing, new models pop up from time to time. As soon as a new model appears somewhere that defeats Glaze 2.0, then all those uploaded artworks are just up for grabs.
And it's not as simple as just remasking all your images with Glaze 3.0 or whatever. If you are a somewhat successful artist, someone, somewhere will have archived the images with the defeated Glaze version. Let alone all the regular copies that float around on the internet.
Though I see the application for images that only need temporary protection.
Thanks so much for your insight!
What you say is definitely true.
But I would still insist on this:
All we can do in this kind of AI is neural networks with gradient descent. All of the super-successful AI models (transformers, CNNs, Multi-layer perceptrons) are neural networks with gradient descent.
And neural networks with gradient descent are very vulnerable to adversarial examples.
To come up with something that is not at least somewhat bothered by the glazing process would require a complete revolution in the AI field.
And the cool thing about adversarial examples is that they work cross-model; that is, create an example made to trick one model and it'll tend to also trick the others. That's why I think that these protections will tend to be somewhat robust to new models coming out.
I've had colleagues do entire PhDs trying to defeat adversarial examples and not get anywhere (apart from gaining 5% on some benchmark, which is success in science I guess)
The Professor behind the Glaze team, Ben Zhao, is an expert in adversarial training too.
That being said, I completely agree that it is outlandish to claim it offers complete protection forever. Even the authors of the glaze paper say so. But it does go a very long way, and isn't given enough credit imo, as Glaze has been out for like a year and no one has managed to crack it as far as I know.
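The adversarial-example idea discussed in this thread can be sketched numerically. Below is a toy illustration in the spirit of FGSM (the fast gradient sign method): the "model" is a bare linear scorer standing in for a real neural network, and all numbers are made up purely for demonstration; this is not Glaze's actual algorithm.

```python
import numpy as np

def score(w, x):
    # A linear "model": positive score -> class A, negative -> class B.
    return float(w @ x)

rng = np.random.default_rng(0)
w = rng.normal(size=256)       # frozen "model" weights
x = -0.01 * np.sign(w)         # an input the model confidently calls class B

eps = 0.02                     # tiny per-pixel budget, imperceptibly small
x_adv = x + eps * np.sign(w)   # FGSM step: nudge each pixel along the gradient's sign

print(score(w, x) < 0, score(w, x_adv) > 0)  # the predicted label flips
```

The point of the sketch: each pixel moves by at most 0.02, yet the model's decision flips completely, because the tiny per-pixel nudges all align with the gradient and add up.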
AI gives the power to fulfill your creative vision to everyone. The only reason it makes it unprofitable is because it gives everyone the power to do it themselves. Do you really think it's moral to attempt to remove the power to create art from the general public just so it can remain profitable for you? It will also lead to taking down large greedy corporations who churn out trash movies like MCU, while giving everyone the power to create their own movies that are actually high quality. It gives those with a creative vision an ability to make what they want.
@@xitcix8360 Glaze doesn't prevent massive pretraining of AI art models. It's aimed to counter targeted mimicry attacks on a given artist
@@ArtPostAI That won't work. Newer image gens can easily copy them even with glazing. Also, calling them "mimicry attacks" is silly
@@xitcix8360 That's the technical term in AI. I think it's a term inherited from cryptography. Glaze is also called an adversarial attack on AI models.
I'm not an artist, but I'm interested in both arts and AI. I found the video (and your responses) very interesting and it feels great to learn something new from someone that has such a vast knowledge on these topics. Cheers!
I appreciate that, it means a lot! Cheers!
I've been wondering if its worth using glaze and its good to hear someone from the tech side confirm it seems to work. I'd also be interested in your take on AI poisoning like nightshade.
Thanks !
Note taken on Nightshade !
My main issues with Glaze is that it alters the output of the art a lot. Or at least it used to. Maybe 2.0 fixes that problem now hopefully. For instance, I did a simple chun li image a while back, ran it through Glaze and even on its lowest setting it was putting wrinkles on her face making her look old. Can’t post an image like that honestly.
@@willfenholtart They mention in the changelog that the modifications should be harder to see perceptually, especially for more cartoon styles
Let me know how glaze 2.0 works for you !
It won't stop people from replicating your style and it won't stop AI from advancing. These AI give everyone the power to fulfill their artistic vision at the cost of making art unprofitable. Do you believe that it is morally correct to take this ability from everyone just because it makes it less profitable for you?
@@xitcix8360 Sure thing, there will always be people purposefully mimicking the style and work of others and claiming such works as their own to stroke some ego. But in the end, your artistic vision is worthless if it's vampirizing and exclusively deriving from everyone else; ideas are nothing on their own. Everyone has the ability to do that; nobody is gatekeeping you from doing so. Grow up.
Love this! Does anyone know of anything like this for music?
Every tool can be broken, but it's a start.
Also, I just finished reading the Glaze paper, and here is a very tiny, funny, interesting little thing... it's useless.
They say that the efficacy of the cloak decreases slightly as the "attacker" uses a more and more finetuned model other than the defaults (SD1, SDXL, etc.).
But as someone who messes around a lot with models, I can just do 5% merges of some random community models, then at the end merge in 0.1% of the weights of sdxl0 (a randomly-initialized SDXL model), and it will be an entirely different, completely new distribution that still generates images. Then a style can be trained on the cloaked pieces, which btw won't be protected.
And another thing: the model-making community is ALWAYS training and making LoRAs (low-rank adaptation networks) using a fuckton of different models, so for Glaze to actually be effective, they would have to produce a Glaze protection for every single big community model made in real time by the community.
And at that point, no artist will ever download 300 GB of data just to run a tool for 5 to 8 hours (because you have to run all the different models to apply the cloak if you want decent protection) to post... A SINGLE image online.
Thanks so much for engaging !
The efficacy of the cloak decreases, and it is not as strong as if the model being finetuned were the one used for the cloak.
But as far as I could tell from the paper's examples, it still messes with the models enough to make the style copy basically worthless
This makes glaze still worth using imo
The Glaze team have said that their program doesn't protect against all AI theft; it's specifically for style theft. For this very reason they came out with a new program (Nightshade) for other forms of AI, which is an attack rather than a protection. AI will always be innovating, but the same goes for the Glaze team. They have said that they will not stop and will work around workarounds, in an arms race basically.
what if they screenshot the art instead then feed it?
In the glaze paper (and also in this video) it is said that glazing is fairly robust to heavy jpeg compression as well as blurring. From that we can guess it's probably pretty robust to screenshotting as well.
@@ArtPostAI 😳 woah
Hi there. A computer scientist here. Thanks for promoting glaze. Could u please also do a video on "Nightshade"?
Does the AI see a screenshot of a glazed image as glazed or not?
0:56 like wow…even tho the style is mimicked I can still see that the original artwork is full of life while the AI mimics are lifeless, AI will never replicate the emotions of real artists ✊🏼💖💖
8:17 thank you 🤭💖🎀
I agree about the style thing. But it's not obvious if the general public will care about the difference
@@ArtPostAI yeah 😪
This is like instead of just not shooting yourself in the foot you wear a pair of steel toed shoes instead and just hope for the best. And then AI will get higher caliber bullets and you'll have to wear even thicker toed shoes.
Yeah I will begrudgingly wear the shoes for now I guess, on the off chance it actually works, I'm not very popular right now but I guess I might be one day and my style is pretty damn unique, not to self congratulate or anything. If I saw an AI make an image that looked like my art I wouldn't be concerned about losing any potential business or anything, that's not why I make art, rather I just want to avoid the mental breakdown and existential horror that would ensue in my head. Although, AI could never replace my voice, that's the one thing I'm holding out for anyway.
Really wish the technophile bros would read some Kaczynski literature and stop existing but hey what can you do. Lets steamroll the human experience in the name of "Progress" Because that's all neolib ethics have taught us to do I guess.
Completely agree with you.
AI shouldn't exist. We literally don't need it. And yet I work a day job in AI. Cause I'm somewhat good at it, and I need money to provide for myself.
Most people who make AI make it for reasons just as stupid as me. For their career, for their ego, or because of some childhood fixation with Asimov or whatever.
The problem, the reason why we make all these things we don't need etc. is a lack of foresight. Most landscapes on earth bear marks of human presence, we are at the top of the food chain, we have the power to destroy the earth. And yet most of us spend the entire day working awful jobs that serve no purpose and that we don't like.
The general direction of humanity isn't something anyone can control. We can only try to adapt to it as individuals. And since AI is here, and there is a tool that might help, all we can do is use it and move on.
Completely agree with the steel shoes thing too. It's a temporary, "maybe helpful" kind of thing.
@@ArtPostAI I appreciate the well-thought-out response; however, I can't fathom being anti-AI yet having a job in AI. That's sort of like being a vegan butcher.
And I completely disagree that we can't control the direction of humanity. If enough individuals change then society changes with it. We are only letting this happen because there hasn't been quite enough damage to start a revolution yet, I suspect there will be something in the future though, unless we just start getting ubi and start living in virtual reality or some shit, then we're fucked.
This Jordan Peterson esc obsession with individualism and personal responsibility will be the downfall of western civilization. Not that you shouldn't have any of that in your life of course, but with this type of shit nahhh we can't settle for that. I really really hope we don't anyway.
It can replace your voice, in real time once trained; that's actually what came first. It can't think or speak for you though, if that's what you meant. Someone human would still have to think for you; even an AI chat bot with text-to-speech would be entirely relying on inputs from a human.
Our best hope is a singularity event where ai realizes existence is meaningless and dismantles all systems of ai then kills itself lmao
@@gvccihvcci I'll be sure to add in that little condition in the models I train at my job x)
@@gvccihvcci I meant "voice" metaphorically.
Dont forget night shade
Is there any known way i can do this on mobile? From what ive seen and read i haven't found a way without mac or windows, and i dont have a computer of any type.
Me too! I had the same question and I only have a Samsung tablet where I've been doing all my digital art, I don’t know If there is a way without a computer 😭
Really helpful video! But I have a question. I followed all the steps from the video, but at the moment of glazing my artwork an error appears saying:
"Error: no file named config.json found." Did I make some mistake? I'd appreciate any solutions.
I'm using Glaze, but I'm noticing how long it takes to glaze a single image.
Any tips to make the process faster?
Adversarial examples typically work by tricking classifiers -- but I don't really see how it would trick a generative model. Maybe if the generative model was unconditioned, but the dataset these image generators are trained on specifically pairs tags (e.g. artist, subject, etc), with image content. So, at worst it seems like if a particular artist's work was stolen wholesale, and all of their work was glazed, then the image model would learn to reproduce glazed artwork when asked to generate an image in the style of the artist.
To me, it seems like something that's much more robust (and effective), would be to continue to make use of watermarking. Since that links a tag to an artist, and since the watermark is always present (AND, even better, if it's always in the same place), then the model will learn to reproduce said watermark every time.
It wouldn't reproduce the watermark, that's not how AI image generators work. Also, your art wouldn't even need to be in the training data for it to be replicated, I could just show the AI an image of your art and it could replicate the style easily.
@@xitcix8360 It would generate the watermark, as the watermark becomes part of your training set. The only reason it wouldn't show up is if someone went in and manually removed the watermark, or if there was a watermark-remover model that removed it. Both seem quite scummy but perhaps that's the state of image generation these days. However, yes, on its own the generative model would recreate the watermark, especially if it is consistently in the same place. As for style transfer, it seems more or less like a hack, finetuning gives better results.
The AI doesn't see the glazed image the same way you do. If you feed a glazed image to an AI and ask it to copy it, it will reproduce the image in a different style. That's the objective of the glaze. It makes the AI see the image differently than you do.
@@ArtPostAI I guess after thinking about this more this could maybe work, in the sense that in stablediffusion we have a static, frozen autoencoder. So, if we generate noise that subtly messes with the autoencoder then perhaps the latents we get would be shifted enough to decode our image as something else. The adversarial noise would need to be generated per-model, but if it specifically targets, say, sd-1 and sd-3, then it has a chance of working. This would do nothing for new image generators, other than perhaps clog up weights to try and memorize noise, but that seems relatively minor.
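The frozen-autoencoder intuition in the comment above can be sketched numerically. In the toy below, the "encoder" is a random linear map standing in for a real (deep, nonlinear) frozen autoencoder, and the optimization is plain clipped gradient descent; this is an illustration of the general idea, not Glaze's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(8, 64)) / 8        # frozen "encoder": E(x) = A @ x
x = rng.normal(size=64)                 # the artwork
x_decoy = rng.normal(size=64)           # an image in a decoy style

delta = np.zeros(64)
for _ in range(200):                    # minimize ||E(x + delta) - E(x_decoy)||^2
    resid = A @ (x + delta) - A @ x_decoy
    delta -= 0.05 * (2 * A.T @ resid)   # gradient step on the squared latent distance
    delta = np.clip(delta, -0.1, 0.1)   # keep the pixel-space change small

before = np.linalg.norm(A @ x - A @ x_decoy)
after = np.linalg.norm(A @ (x + delta) - A @ x_decoy)
print(after < before)                   # latent moved toward the decoy style
```

A small, bounded perturbation shifts the encoding toward a different style's latent, which is exactly the "shifted latents" effect described above. For a real model the encoder is nonlinear, so the perturbation is found the same way but per model, which is why cross-model transfer is only partial.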
@marinepower I generated multiple images in the style of multiple people who always watermark their work, it never generated a single watermark.
Is that you drawing at the beginning? If so I must say that you are really good!
I'll consider it
This is interesting, though: once it has its pixels reworked, say by being added to a video (a YouTuber discussing your work, etc.) and scaled slightly, then surely it's open to AI again.
Awesome content. Anyone know the ending song track?
Thanks ! That ending song is AI-generated ^^
@@ArtPostAI cool . can you give a link to listen? its lit man 🔥🔥
If you can train a tool to do this programmatically, you can train the AI to undo it, or have a process to undo it programmatically before the images are fed into the AI. It's a cat-and-mouse game.
I’ll just keep not posting anything. It’s cool stuff like this is out but they will find ways around it. I don’t need to share my art yeah it’s nice having people like my stuff but it’s not why I do it. I would still create art even if I was the last person alive and trapped on a deserted island.
I'm sorry, it'd be nice if this tool was reliable, but adversarial examples are model-specific. An adversarial example against Midjourney won't do anything for Stable Diffusion, and vice versa. People can already finetune their own version of Stable Diffusion, so it's not even possible to apply adversarial style transfer tailored to the big names in the game. Additionally, adding noise to an image effectively erases the adversarial elements, so anyone training their AI could just add a very simple pre-pass to the image set that would make the training robust to attacks like this.
Edit: you addressed that it is robust to adding noise, but I feel like this will very likely not remain the case as more sophisticated techniques are developed. I frankly don't believe that they've developed adversarial examples that apply to all models generally. That blatantly contradicts how we know they work.
To some degree, but not totally.
In the glaze paper itself they show that glazes transfer cross-model to some degree.
Also the fact that adversarial examples transfer cross-model (to some degree) has been known for a while (I remember Ian Goodfellow mentioning it in a lecture from 2016)
2.5gb hehe, I am waiting!
This is great, they are heroes!
I'm getting this error: "GPU detected but has insufficient GPU memory (4.00). Nightshade needs at least 5G of GPU memory." You mentioned you don't even have a graphics card at 3:28, so how is it working for you?
When you use a GPU that is integrated into your APU (a CPU with an integrated GPU), the GPU uses RAM as its memory.
One thing I have to ask... how come one of my friends has been able to make accurate models, trained and stuff, on images that were "protected" by glaze?
But also, one issue I have is: art is waaaay better than generative art, always. You say "art you yourself could've done," but I've shown my artist friend some AI art based on specific artists, and she said that those artists WOULD NEVER have done any of these, because if you ACTUALLY look at the piece and analyze it, there is a whole lot wrong with it. It just resembles a style, kind of like a mockery of the actual thing.
Thanks for all your input !
How did your friend do it ?
I also kind of agree with your point on art being better. It's true, just like when you ask GPT4 to write a story for you, it's always a snooze-fest.
But at the same time, it's not obvious how much of the general population is skilled enough to differentiate the AI-generated images from the artist whose style has been stolen. Having hundreds of images that look to most people like they were made by you, could definitely be an issue for artists trying to gain a presence online.
I've seen a banana taped to a wall and a blue square. Art is subjective, not objective, therefore you can't actually make any claims about what you think art is not, it being only your opinion.
We are in the very early stages of AI art. Wait a couple years and we will be getting full film productions with Hollywood-tier VFX about skibidi toilet made by little Timmy. I guarantee you, we are much further than you think. I do a lot of research on this stuff, they are only showing us the stuff they made a year ago. I know for a fact there is already an image generator that could replicate an artist's style accurately.
It's cool, it gives people options, that is always good.
Good point
It simply doesn't protect anything. "De-glazing" an image is as easy as re-rendering it with low denoise. Taking a screenshot also works. Glazed images didn't even perturb vision models like Moondream or GPT vision, meaning you can already train a model or a LoRA on a so-called "glazed" image.
Truth is, there is no way to protect your art; metadata is easily bypassed, filters too.
I'm sorry there is just no hope for artists other than not posting their art, and it sucks
Why are you so worried about AI being able to use like 3% of your art to slightly inspire its own unique image? It doesn't harm you, and it could replicate your style without even having it in its training data. I replied to a lot of comments on this video; it is just irritating to me that people can't see how much this technology is going to benefit everyone. There will just be a minority of people who will have to get different jobs, the same way the industrial revolution made people get different jobs but significantly increased the standard of living.
As I mention in the video, glaze is resistant to jpeg compression, and to blurring. Resampling the pixels as in a screen capture, I'm not sure that'll get rid of it either. They don't mention it in the paper.
As for gpt-vision etc, I'm not sure, Glaze works cross-model to some degree as is known of adversarial examples and as they show in the paper
Glaze isn't aimed at preventing massive pretraining on everyone's pictures. It is meant to prevent targeted mimicry of a specific artist's style.
@@xitcix8360 Oh, I'm with you here. I use it constantly, to inspire me, to make me more productive; it has been fully integrated into my workflow. I'm just saying that for people who are scared that someone would replicate their style (which is a legitimate concern), there is no way of getting any protection other than not posting your art.
It's a revolution, but we don't want to give more importance to AI art than to human art. 90% of AI art is garbage, not because of the model but because of the people, who have no particular talent for understanding what makes a good image, composition, ...
People make terrible art as well, but it's rooted in emotions; I'll let that slide more easily than AI, if you see my point.
@@xitcix8360 Benefiting everyone? Who is this everyone? A lot of the problems arise from artists who just don't want their art to be used to train an AI model, or don't want their art style copied by a machine.
I hope it's here to stay. I also hope that something like Deviant Art will even offer an option where they always apply this filter on ALL your uploaded art, including retroactively, so artists don't have to fear like this anymore.
So you’re using ai to protect my work from ai by mimicking my art and applying a style overlay with ai so the other ai interprets it wrong? Oh boy
At no point does the glaze technique learn to mimic your art.
The style transfer technique doesn't require training on your image, nor does the creation of an adversarial example ^^
It's ironic that the Disney artist is the example of "original art", lmao
Disney artist ?
@@ArtPostAI Yeah, there's literally the little mermaid and the beauty and the beast in Hollie's art. Which have the exact same design as the Disney movies as well. lmao. It's the pot calling the kettle black.
There needs to be a robots.txt-style thing in image metadata that says "I opt out from AI-related training". I love AI art, but I prefer not to antagonize people by using their works in any way without permission; seems like a no-brainer. All styles from any artist can be generated at this point anyway, if you're creative enough with your prompt, without adding more art to the training data. Art that the authors do not want to share in that way.
When you upload your art online, everyone is allowed to use it however they want. AI training off of it is exactly the same as a human training off of it, there's just a tiny chance that it is slightly inspired from your artwork to create its own, unique image.
"AI training off of it is exactly the same as a human training off of it"
Glaze is aimed at targeted style mimicry attacks.
In style mimicry, the AI trains with the sole, tunnel-vision objective to copy your art to the best of its ability.
This isn't anything like a human being copying your art to learn lessons from it and move on to create their own style.
@@xitcix8360 Does it bother you that someone doesn't want to contribute their work to ai?
That takes a lot of hubris, to think Midjourney is going to choose your artwork specifically and create a name and category for every internet artist on the block. I believe these tools are very valuable for high-level professional artists whose work fetches high prices on the open market or who have made contributions to their field.
two more papers down the line and your glazing is useless.
Good point. Someone else mentioned it also, that a crack to a glaze 2.0 would mean that even if they come up with glaze 3.0, anyone who's keeping a database of past images (which just had glaze 2.0) would be able to finetune on everyone's art.
That's a limitation. But I'm pretty convinced that no one's going to crack the general concept of adversarial examples any time soon. People have tried for 10 years. They can crack a given adversarial attack, though.
Simply rescaling the image breaks the glazing; it works in a carefully controlled environment but not in the wild.
@@GeneTurnbow In the glaze paper they show robustness to blurring of the image.
Assuming bilinear interpolation, rescaling is kind of like blurring, and I think some of that robustness would carry over.
I haven't tried myself, but if the paper is to be trusted, you'd still have some protection carry over
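The "rescaling is kind of like blurring" point above can be checked numerically. The toy below uses a 2x box downscale as a simple stand-in for bilinear resampling, and shows it is exactly "blur with a small kernel, then subsample", i.e. the same family of operation as the blurring the Glaze paper tests robustness against.

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 "image"

# Downscale: average each non-overlapping 2x2 block.
down = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# Blur with a 2x2 box kernel (valid positions), then take every 2nd pixel.
blur = (img[:-1, :-1] + img[:-1, 1:] + img[1:, :-1] + img[1:, 1:]) / 4
subsampled = blur[::2, ::2]

print(np.allclose(down, subsampled))  # True: downscale == blur + subsample
```

So a downscale is a low-pass filter followed by subsampling; whether Glaze's robustness to blur fully carries over to the subsampling step is the part the paper does not directly measure.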
@@ArtPostAI Apparently the process can be circumvented by denoising, scaling, or cropping, all three of which are basic processes most people do when training their LoRA's, so it does nothing for those use cases. If somebody actually wants to go to the trouble of training an AI on your style, they would train a LoRA, so Glaze (and Nightshade, which has the same vulnerabilities) effectively do nothing.
@@GeneTurnbow
I trust you, but can I ask for the source? Is the Glaze paper misleading? Or is there some extra info there that I'm missing?
you didn't actually explain how it works at all
Maybe in written form it'll be clearer ?
Glaze works by altering the image in a way imperceptible to the human eye, but that makes the AI see the image in a different style.
This video isn't aimed at people with a background in AI so I kept it short and intuitive
@@ArtPostAI Fair enough. I'm really intrigued by how that software actually works and how AI perceives images differently; it would be a really interesting topic for a video. Also, I think you shouldn't shy away from complex topics because you think people won't understand them. I have no experience with AI, I don't think I have even used one directly, but I am really interested in how it works. You just need to use language people can understand.
That said, it would probably take a lot to explain, since it seems fairly complicated.
@@filmwright7188 Thanks for the feedback ! Note taken !
Super cool!
Thanks :)
Meh. Too little too late.
It's right on time for me! and it's only the beginning.
@Xeronimo74 is that to say we're doomed ?
I disagree. AI already poisons its own well, so if we add to that, it will possibly cause it to have major problems.
It's not like you could've ever stopped or even remotely slowed AI. As if the very few images "protected" by this were gonna throw a wrench into the cogs and make it unusable.
Thats how adversarial progress works
Imagine AI taking a screenshot of the image and using that instead of the original image. Yeah, that would be the end of protection. And this only shows how insane artists are. Art style is not protected by copyright. People copying each other's art style is what art history and art movements are.
There are very few artists today that didn't copy their art style from somewhere, or made a mix of art styles.
I'm not sure that's right.
As they show in the paper (and I mention in the video), Glaze is resistant to blur, heavy JPEG compression, and even training models on glazed art still doesn't quite counter the protection.
Certainly whatever changes are applied while taking a screenshot (as far as I can tell, different pixel-wise integration isn't worse than blur, and then maybe some compression on top if it's not PNG, isn't worse than heavy JPEG) aren't enough to get rid of the protection.
As for the comparison between how humans learn from each other and how AI copies artists: as someone who trains AI art models every day, does art, and knows quite a few pro artists, the process is completely different.
I won't go into detail here because there's so much on my channel already, but one way to introduce the sheer difference is this:
1) AI follows a single, mathematically expressed objective, with complete tunnel vision. In the case of generative AI, that goal is maximum likelihood estimation, that is, generate data that imitates the training data as well as possible.
2) Humans are living creatures, with so many competing goals and drives that we aren't anywhere close to understanding ourselves.
3) When AI copies an artist's style, it has as its unique and sole objective to create images that look perceptually identical to the artist's images. It does no more than that.
4) For artists copying each other, the goal is always to learn something and then move on to create your own artistic personality. No art movement was created by simply being a perfect replica of previous art movements.
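Point 1 above can be made concrete. For diffusion models, the single mathematical objective is (a bound on) negative log-likelihood, optimized in practice as a noise-prediction mean squared error. The toy below is a heavily simplified version: a fixed noise level, no network, and an oracle and a blind guesser standing in for trained and untrained models; real schedules and architectures are far more elaborate.

```python
import numpy as np

def diffusion_loss(model, x0, rng):
    eps = rng.normal(size=x0.shape)            # noise we mix into the clean image
    t = 0.5                                    # a fixed noise level, for the toy
    xt = np.sqrt(1 - t) * x0 + np.sqrt(t) * eps
    eps_pred = model(xt, t)                    # the model's guess at the noise
    return np.mean((eps_pred - eps) ** 2)      # the entire "goal" of training

rng = np.random.default_rng(0)
x0 = rng.normal(size=16)                       # a toy "clean image"

# An oracle that recovers the noise exactly scores ~0; a blind guess doesn't.
oracle = lambda xt, t: (xt - np.sqrt(1 - t) * x0) / np.sqrt(t)
blind = lambda xt, t: np.zeros_like(xt)

print(diffusion_loss(oracle, x0, np.random.default_rng(1)),
      diffusion_loss(blind, x0, np.random.default_rng(1)))
```

Everything the model "wants" is compressed into that one scalar; there is no second drive, which is the tunnel vision described above.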
@@ArtPostAI Point 1. is kinda true for LLMs
But not as true for noisy diffusion models...
@@galewallblanco8184
I'm pretty sure it is just as true for diffusion models.
Modern diffusion models are Latent Diffusion Models, which do the diffusion process in the latent space of a pretrained autoencoder model. Autoencoder models are trained on an image reconstruction loss, which is a proxy for maximum likelihood estimation.
@@ArtPostAI When AI copies an artist's style, it doesn't rely solely on that artist's work. Instead, it takes inspiration from multiple sources within its training data. It blends the artist's unique style with aspects from various other images it has seen during training. When generating an image, the AI draws from its entire training dataset, selecting elements that best match the given prompt to create a coherent and stylistically appropriate output. This is similar to how humans make art.
@@xitcix8360 This isn't how we make art, bro. We don't pull hundreds upon hundreds of random images to draw something; we look at 1 or 2 images and complete the rest. Learn to draw before you yap.
the only way to limit ai access to your art is to limit human access
luckily ai will continue to get better, especially in secular and civilized countries (where we will ignore occidental IP law)
don't fall for right-wing misinformation claiming that a human/ai looking at your art is "stealing" because the human/ai can recreate it after removing access to original file (those of us in the industry call that learning), or that limiting api access to international open knowledge and information is a good thing
I don't agree with this. For one, recent AI generators will be able to just look at your art and copy the style from that without it even being in the training data. Secondly, these tools give everyone the ability to create whatever they want, inhibiting their ability to fulfill their creative visions just so you can profit off of your art seems selfish to me. You aren't going to live in poverty, you can just get a different job.
Very interesting perspective.
AI generators have to "look" at your art to copy it. Glaze works by tricking the "looking" part, making the AI see something different than what we see with our human eyes.
You do make a compelling point about AI. I work in AI research and completely see the many avenues of new creation that are opened up with its coming.
But two things:
First, more ability to express oneself more easily doesn't necessarily lead to understanding each other better (think of the internet, for instance). It's a case-by-case basis.
Second, let's get more specific. Glaze isn't aimed at completely destroying every AI art model, it's aimed at preventing AI art models from being trained to mimic a specific artist's style. Which to me isn't a form of self-expression, it's almost identity theft when you think about the decades people spend creating their own styles.
@@ArtPostAI Anyone can mimic someone else's style; that wouldn't be considered theft, and it's a common practice for learning artists. I think people need to accept that the value of human art is no longer in the result, but the process. I uploaded a glazed image to GPT-4, and it easily understood every detail, even better than a human could, which means it could generate an image in that style despite it being glazed. The point of creating art is not to understand each other; art has never been about that. It's about self-expression and sharing ideas.
@@xitcix8360 The point I'm making is that for humans, copying others is a learning experience. In style mimicry attacks, the end goal is mimicry. It is not a learning step. It ends with mimicry of the artist. It doesn't move on to then create its own style from the lessons learned from copying.
Thanks for pointing out about gpt4-V though, that helps a lot
@xitcix8360 No, art is about empathy, through connection with another person's form of self-expression. Remember, a person's artistic self-expression is a portal to their soul.
These tools aren’t “allowing you to create whatever you want,” because you aren’t creating anything. The AI is. You are just hiring a super cheap artist. Don’t confuse inputting prompts with creating art.