COLOR CONTROL! - NEW ControlNET Method for A1111
- Published 26 Feb 2023
- Use ControlNET to change any Color and Background perfectly. In Automatic 1111 for Stable Diffusion you have full control over the colors in your images. Use a Color Map to set any color you want. Then use the ControlNET Canny, Depth and MLSD Method to bring the original Details AND new Background Patterns into your image.
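For readers who like to script A1111, here is a minimal sketch of the same workflow driven through the web API instead of the UI. This is an illustration under assumptions, not the exact method shown in the video: it assumes the WebUI is running locally with the --api flag and the ControlNet extension installed, and the file names and model name are placeholders.

```python
# Sketch: img2img on a hand-painted color map, with a ControlNet Canny unit
# carrying the detail of the original photo. Assumes A1111 with --api and the
# ControlNet extension (dict-style unit args need a reasonably recent version).
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("color_map.png")],  # the painted color map (placeholder file)
    "prompt": "photo of a woman, red shirt, green hair",  # placeholder prompt
    "denoising_strength": 0.75,  # high enough to repaint, low enough to keep the colors
    "alwayson_scripts": {
        "controlnet": {
            "args": [{
                "input_image": b64("original.png"),  # source of the preserved details
                "module": "canny",
                "model": "control_sd15_canny",       # placeholder model name
                "weight": 1.0,
            }]
        }
    },
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("out.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```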
#### Links from the Video ####
OpenPose and ControlNET Explained: ua-cam.com/video/ci7NfTsifd0/v-deo.html
Multi-ControlNET: ua-cam.com/video/aY39egKfEfQ/v-deo.html
Change Light with ControlNET: ua-cam.com/video/_xHC3bT5GBU/v-deo.html
Unsplash Background: unsplash.com/photos/m_7p45JfXQo
Support my Channel:
/ @oliviosarikas
Subscribe to my Newsletter for FREE: oliviotutorials.podia.com/new...
How to get started with Midjourney: • Midjourney AI - FIRST ...
Midjourney Settings explained: • Midjourney Settings Ex...
Best Midjourney Resources: • 😍 Midjourney BEST Reso...
Make better Midjourney Prompts: • Make BETTER Prompts - ...
My Facebook PHOTOGRAPHY group: / oliviotutorials.superfan
My Affinity Photo Creative Packs: gumroad.com/sarikasat
My Patreon Page: / sarikas
All my Social Media Accounts: linktr.ee/oliviotutorials
how do i know u are not an ai?
@@ReyZar666 I am an AI = Awesome Individual 😅
how to do this in ComfyUI...
Very cool way to evolve what I did with lights! Thanks for the shoutout my friend 😊🌟 Here's a dad joke for your viewers since you did it with colours: I just found out I'm color blind. The diagnosis came completely out of the purple.
You are welcome! Nice Joke! I would give you the green light for more jokes, but you wouldn't see that 😂
@@OlivioSarikas lmao this is the best part of the internet
@@OlivioSarikas Glad to see that two of my fav AI content creators also have a talent to become a comedy duo 🤣
yes totally. we are Discord buddies too. Collab video coming some time soon
@@OlivioSarikas YES! my 2 fav buddies in a collab. Can't wait
Thats such a neat little trick!! Awesome!!
Thanks. I thought I'd mention that this works perfectly for me with only one ControlNet, i.e. by doing everything up to the 7-minute mark of the video and ignoring the rest. I don't know what the rest does, because I didn't watch it once I got the results I wanted without it!
Nice, you can also do this effect pretty easily without AI by changing the blending mode of the new layers to "color"
Right? We've been doing this in Photoshop for years, way easier than this. Lol
Yeah, there's a certain amount you can do in Photoshop, but sometimes the effect is slightly different when regenerating the AI image. For example, when he adds the two white stripes acting as lights, notice how the hair gets backlit as if that were an actual light source. Pretty cool. Yes, you could do that in Photoshop too, but it might take a bit more work, and once you have the AI set up, depending on your denoise level you can get a bit of variation whilst still having some control, which is an aspect I really like.
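For anyone curious how far the pure image-editor route gets you, here is a rough Python equivalent of the "color" blending mode mentioned above, as a sketch: hue and saturation come from a flat-color overlay, lightness stays from the base photo. File names are placeholders, and the per-pixel colorsys loop is slow but keeps the idea readable.

```python
# "Color" blend mode sketch: H and S from the overlay, L from the base image.
import colorsys
from PIL import Image

def color_blend(base: Image.Image, overlay: Image.Image) -> Image.Image:
    base = base.convert("RGB")
    overlay = overlay.convert("RGB").resize(base.size)
    out = Image.new("RGB", base.size)
    for x in range(base.width):
        for y in range(base.height):
            br, bg, bb = (v / 255 for v in base.getpixel((x, y)))
            orr, og, ob = (v / 255 for v in overlay.getpixel((x, y)))
            _, light, _ = colorsys.rgb_to_hls(br, bg, bb)   # keep base lightness
            h, _, s = colorsys.rgb_to_hls(orr, og, ob)      # take overlay hue/sat
            rgb = colorsys.hls_to_rgb(h, light, s)
            out.putpixel((x, y), tuple(int(c * 255) for c in rgb))
    return out

color_blend(Image.open("photo.png"), Image.open("flat_color.png")).save("recolored.png")
```

That gets you the recolor, but not the regenerated lighting the reply above describes; that part is what the AI pass adds.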
Thank you for your work!
Thank you for the video!
I wish we had an Affinity Photo plugin for Stable Diffusion, like the plugins that already exist for Photoshop, Krita, GIMP, Blender and others.
That grin at the end "fyck yeah, that looks great!"
the fashion industry is going to blow up with this.
now all we need is a 3D printer for clothing ;)
@@OlivioSarikas it will happen for those folks.
Hi Olivio. Thank you for your tutorials. In a previous video, you said you wished that you could have a live update of the ControlNet preprocessed image. The "Preview Annotator Result" button allows you to render the preprocessed image next to the ControlNet reference image without iterating on the prompt. You may have noticed that option after your previous video, but I thought I would share in case it helps any reader.
ohhhh, cool! I did not know that
Thanks! It's a pity that there is no color-highlighting function in HakuImg. I am glad that SD is becoming a full-fledged creative studio. Also, after the update, I found that the ControlNet-M2M script for working with video has appeared.
Very cool 👍 Do you need a black and white image for Canny? It doesn't carry any colors over to the processed map; does black and white make the Canny detection more accurate?
Props for using Affinity!
Thank you
I have a fun suggestion for you. Try to make a run animation sprite sheet or a sword swing (2D side scroller style). I’ve been fiddling with it but I haven’t been able to get anything decent.
Would be huge for indie gamers
Love your Videos! Are you going to cover Control Net in Deforum? That would be Awesome!
that was actually the video I wanted to do today, but it seems like it's still in early development and I would kind of like it to actually work well for a tutorial. I will probably cover it very soon, though
Hi there Olivio, very nice and useful tutorials, but I have a technical question... :( Why doesn't my depth leres work? It is the only model that doesn't work in Automatic1111. I am on macOS and I always get the error: RuntimeError: Input type (MPSFloatType) and weight type (torch.FloatTensor) should be the same
Great video as always. So well presented and explained. In ControlNet just wondering if it's possible to use the same original background each time, without the AI changing it, and then have different figure poses in front of it using OpenPose e.g. producing a set of images of the same person doing different poses - such as in a photographer's studio. Thanks in advance.
Hi, how are you?
I was wondering if you have done a video on how to change a child's sketch, or any sketch (kids' sketches would be better as they're more complicated), with ControlNet? It has been updated since other people made their tutorials.
As usual Olivio, you have a completely different take that really works wonders. Thanks for the tutorial.
Hi!
I'm not sure I see the advantage of using AI for this task... A color overlay layer in any image editor will do the trick a lot more easily!
I was thinking the same thing. If you want the exact same image but re-colored, image editors have supported this for a very long time, and any good one will do it very well. AI image generators should be for generating new images, not re-coloring existing ones.
Agreed. This seems to overcomplicate things since AI doesn't really add much in this case?
Damn, that's awesome. Bring more tutorials like this.
Thank you. Keep sharing it. The more views this gets, the more often I can do stuff like this 😍
Btw, you should try to compose images with the segmentation model in Affinity Photo (same method as here: you select a different color for each object). Segmentation uses a specific color for each kind of object; a chair, for example, is represented by a very specific blue. Then you use the segmentation model with the preprocessor set to "none".
I watched a video about it yesterday... I can't find it now. The guy had a complete list of each color associated with each object/animal/person.
I still need to look into that
I was thinking the same. Use segmentation. Also while the video is informative and instructive, based on all these manual steps in an image editor, in practice I'd probably just stay in the image editor and use the color changing feature native there :) Still a neat video, though.
@@OlivioSarikas I can't post the link, but there is a Google Sheet with all the colors and the names of the objects associated with them in a post on the Stable Diffusion subreddit (with examples)
@@MikeHowles
Changing the colors of clothes is pretty difficult, especially making it look "natural", and even more so on black and white clothes, or just darker colors in general.
With segmentation, you can place couch/flowers/trees/water/road, etc where you want (with each of their color), and you'll get a very consistent scene.
I just tested it now. I took a reference image, drew a (specific) shade of red on each framed picture, drew on the couch... so just fr
I added it in ControlNet (preprocessor "none" + segmentation model), and with a prompt like "a room with framed pictures of dogs with a blue couch" you get a different image each time, with the framed pictures in the same spot, same with the couch.
You can even do it without any prompt if you select "guess mode". What's on the segmentation image will always be at the same place.
@@MikeHowles The only difference I see is SD may fill in gradients more randomly or naturally than a color shift in the image editor.
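A minimal sketch of the segmentation idea from this thread, using Pillow: paint class-colored shapes onto a blank canvas, then feed the result to the seg model with the preprocessor set to "none". The RGB values below are placeholders, not verified ADE20K palette entries; look them up in the color/label table mentioned above before using this.

```python
# Sketch: hand-built segmentation map for ControlNet seg (preprocessor "none").
# WARNING: the colors below are placeholders, NOT verified ADE20K class colors.
from PIL import Image, ImageDraw

WALL  = (120, 120, 120)  # placeholder "wall" color
COUCH = (255, 5, 153)    # placeholder "couch" color
FRAME = (8, 255, 51)     # placeholder "framed picture" color

img = Image.new("RGB", (512, 512), WALL)
draw = ImageDraw.Draw(img)
draw.rectangle([60, 310, 450, 470], fill=COUCH)   # couch where you want it
draw.rectangle([100, 60, 220, 180], fill=FRAME)   # framed picture on the wall
draw.rectangle([280, 60, 400, 180], fill=FRAME)   # second framed picture
img.save("seg_map.png")
# Load seg_map.png into ControlNet, preprocessor "none", segmentation model;
# objects should then stay in place across generations, as described above.
```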
I couldn't understand where you got the image you edited. I see the result of the prompt is different from the one you edited and re-fed into img2img.
I don't get it. This doesn't solve the problem of SD not being able to get colors right *while* rendering images. If you're going to select everything anyway, you didn't accomplish anything you couldn't have just done in Affinity.
I was thinking the other day about whether there is a ControlNet mode that could take a color palette as input.
Is that still useful for animals or other things?
I am not able to see the ControlNet image tab in img2img, please help me.
Dude, I created a full-body character with a white background, and I'm trying to add different backgrounds in Stable Diffusion with ControlNet, but I can't at all. Is there any way you can make a tutorial on this? It doesn't cut out right, even when messing with the contrast, and when I tell it to change the background, whether I set the mask to normal or inverted, it messes with the character.
Have you tried Depth-leres with the background removal slider at around 80, like I do in this video?
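For anyone driving this from the API instead of the UI, a hedged sketch of that setting: the background-removal slider corresponds to one of the preprocessor threshold fields on the ControlNet unit. Which of threshold_a/threshold_b maps to "Remove Background %" can differ between extension versions, so verify against your UI; the model name is a placeholder.

```python
# Sketch of a depth_leres ControlNet unit with background removal around 80,
# as suggested in the reply above. Threshold mapping is an assumption - check it.
import base64

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

controlnet_unit = {
    "input_image": b64("character.png"),
    "module": "depth_leres",
    "model": "control_sd15_depth",  # placeholder model name
    "weight": 1.0,
    "threshold_a": 0,    # assumed: "Remove Near %" slider
    "threshold_b": 80,   # assumed: "Remove Background %" slider
}
# Pass this dict in payload["alwayson_scripts"]["controlnet"]["args"] of an
# sdapi/v1/img2img request, as in the API sketch under the video description.
```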
Watch the Python console window... sometimes ControlNet hangs and doesn't work anymore.
Could you create a guide on how to use the toyxyz blender file to generate poses including depth passes for the hands? I cannot figure out blender for the life of me.
looks good. but why not use Daz3D, like I showed in one of my other videos?
ohhhh, I see. That's actually pretty cool!
Controlnet is the most powerful feature for stable diffusion.
I feel like ControlNET has become Stable Diffusion 😂
I tried to ask for help relating to ControlNet on the Facebook group, but my post was removed by an admin. Do you know why, Olivio?
Cool. I still can't get ControlNet or Auto1111 to work.
The yaml file doesn't exist.
But.. The model changed >,
I’m loving AI!😊
Same video on ComfyUI, please 🥺
Big brother, that's amazing!
I want to use that with environments
great idea. Should work the same way
First !
hey! Winner, Winner, Chicken Dinner!
@@OlivioSarikas i have another method for your colors, and I guess it will be simpler: you can simply change the Hue/Saturation of the clothes (hair) you want to change, and then replace it with the correct prompt in SD! Have you got a DeviantArt, btw?
good idea, but changing the HUE will change all the colors in that area, and not in the same way, so you end up with some strange color combinations. But it's surely worth a try
@@OlivioSarikas Just select the area with the intelligent/smart lasso tool or the magnetic tool in Photoshop, then create a mask from it: fill the selected part with black and the rest with white, upload it in SD with the upload-mask option, and choose the color. You can change the hairstyle while maintaining the color :) (also works for the hair) :)
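A sketch of that masked-recolor idea through the img2img API, for completeness: paint the region to recolor white on a black mask (note the comment above uses the opposite convention; A1111 repaints the white area by default), upload both, and let the prompt drive the new color. Assumes a local A1111 instance with --api; file names and the prompt are placeholders.

```python
# Sketch: inpaint only the masked region (white = repaint) to change its color.
import base64
import requests

def b64(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

payload = {
    "init_images": [b64("portrait.png")],
    "mask": b64("hair_mask.png"),  # white = repaint, black = keep
    "inpainting_mask_invert": 0,   # set 1 if your mask uses the opposite convention
    "inpaint_full_res": True,      # work at full resolution inside the mask
    "denoising_strength": 0.6,
    "prompt": "woman with curly red hair",  # placeholder prompt
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
r.raise_for_status()
with open("inpainted.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0]))
```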
I got lost along the way. You generated an image, then completely ignored it and used a different image. Why? That made no sense to me. If you wanted to use a different image, why not just start in the Img2Img tab? It also seems like you were maybe one or two steps away in your editor from having set the colors the way you wanted, so what advantage was there in passing everything back to the AI?
I guess there was too much "here are the steps" and not enough "here are the reasons" for me.
I tried using your shirt as a template and it banned me.
banned? from where?
@@OlivioSarikas Sir, I believe he is attacking your sense of fashion. I however support you, because you remind me of Standartenführer Hans Landa from Inglourious Basterds :P
WTF 😂
🤔🤔
Yeah, no. I think you are better off just editing this in Photoshop; this needs control.
+1 for the Affinity suite... won't ever go back to Adobe again...
best alternative out there :)
Haha, by the halfway point of the video I had already changed the colors by hand. Too complex to be using an AI.
Have you also changed the background by hand, in the art style of the render? You seem to be missing the point. Also, changing the hue of something isn't the same: for example, different hair colors reflect light in different ways.
This video is weird. It doesn't save any steps by using SD; it just makes things more complex.
Just inpaint her shirt and hair. So much easier than all this.
Step one.. buy something. No thanks😢
This is better to use than Photoshop, it's just insane.