COLOR CONTROL! - NEW ControlNET Method for A1111

  • Published 26 Feb 2023
  • Use ControlNET to change any color and background perfectly. In Automatic1111 for Stable Diffusion you have full control over the colors in your images. Use a color map to set any color you want, then use the ControlNET Canny, Depth and MLSD methods to bring the original details AND new background patterns into your image.
    #### Links from the Video ####
    OpenPose and ControlNET Explained: • ControlNET Stable Diff...
    Multi-ControlNET: • ControlNET MULTI-MODE ...
    Change Light with ControlNET: • Control Light in AI Im...
    Unsplash Background: unsplash.com/photos/m_7p45JfXQo
    Support my Channel:
    / @oliviosarikas
    Subscribe to my Newsletter for FREE: oliviotutorials.podia.com/new...
    How to get started with Midjourney: • Midjourney AI - FIRST ...
    Midjourney Settings explained: • Midjourney Settings Ex...
    Best Midjourney Resources: • 😍 Midjourney BEST Reso...
    Make better Midjourney Prompts: • Make BETTER Prompts - ...
    My Facebook PHOTOGRAPHY group: / oliviotutorials.superfan
    My Affinity Photo Creative Packs: gumroad.com/sarikasat
    My Patreon Page: / sarikas
    All my Social Media Accounts: linktr.ee/oliviotutorials
  • Howto & Style

COMMENTS • 91

  • @OlivioSarikas
    @OlivioSarikas  1 year ago +3

    #### Links from the Video ####
    OpenPose and ControlNET Explained: ua-cam.com/video/ci7NfTsifd0/v-deo.html
    Multi-ControlNET: ua-cam.com/video/aY39egKfEfQ/v-deo.html
    Change Light with ControlNET: ua-cam.com/video/_xHC3bT5GBU/v-deo.html
    Unsplash Background: unsplash.com/photos/m_7p45JfXQo

    • @ReyZar666
      @ReyZar666 1 year ago

      how do i know u are not an ai?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      @@ReyZar666 I am an AI = Awesome Individual 😅

    • @godofdream9112
      @godofdream9112 7 months ago

      How to do this in ComfyUI...

  • @sebastiankamph
    @sebastiankamph 1 year ago +56

    Very cool way to evolve what I did with lights! Thanks for the shoutout my friend 😊🌟 Here's a dad joke for your viewers since you did it with colours: I just found out I'm color blind. The diagnosis came completely out of the purple.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +17

      You are welcome! Nice Joke! I would give you the green light for more jokes, but you wouldn't see that 😂

    • @p_p
      @p_p 1 year ago +3

      @@OlivioSarikas lmao this is the best part of the internet

    • @Lyntur
      @Lyntur 1 year ago +6

      @@OlivioSarikas Glad to see that two of my fav AI content creators also have a talent to become a comedy duo 🤣

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +7

      Yes, totally. We are Discord buddies too. Collab video coming some time soon.

    • @p_p
      @p_p 1 year ago +1

      @@OlivioSarikas YES! my 2 fav buddies in a collab. Can't wait

  • @swannschilling474
    @swannschilling474 1 year ago

    That's such a neat little trick!! Awesome!!

  • @Bob3D2000
    @Bob3D2000 6 months ago

    Thanks. I thought I'd mention that this works perfectly for me with only one ControlNet, i.e. by doing everything up to the 7-minute mark of the video and ignoring the rest. I don't know what the rest does, because I didn't watch it: I got the results I wanted without it!

  • @timeless3d858
    @timeless3d858 1 year ago +9

    Nice, you can also do this effect pretty easily without AI by changing the blending mode of the new layers to "color"

    • @cwdoby
      @cwdoby 1 year ago +8

      Right? We've been doing this in Photoshop for years, way easier than this. Lol

    • @D4rrenCr4bb
      @D4rrenCr4bb 1 year ago

      Yeah, there's a certain amount you can do in Photoshop, but sometimes the effect is slightly different when regenerating the AI image. For example, when he adds the two white stripes acting as lights, notice how the hair gets backlit as if that were an actual light source. Pretty cool. And yes, you could do that in Photoshop too, but it might take a bit more work; once you have the AI set up, depending on your denoise level you can get a bit of variation whilst still having some control, which is an aspect I really like.
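
The "color" blend mode discussed in this thread keeps the base pixel's lightness and takes hue and saturation from the overlay. A minimal per-pixel sketch using only Python's standard library (illustrative only; real editors apply this across whole layers and may use a slightly different color model):

```python
import colorsys

def color_blend(base_rgb, overlay_rgb):
    """'Color' blend mode: take hue and saturation from the overlay
    pixel, keep the lightness of the base pixel. RGB values in 0..1."""
    overlay_h, _, overlay_s = colorsys.rgb_to_hls(*overlay_rgb)
    _, base_l, _ = colorsys.rgb_to_hls(*base_rgb)
    return colorsys.hls_to_rgb(overlay_h, base_l, overlay_s)

# Recoloring a mid-gray pixel with a pure red overlay keeps the
# gray's lightness but adopts red's hue and full saturation.
print(color_blend((0.5, 0.5, 0.5), (1.0, 0.0, 0.0)))  # (1.0, 0.0, 0.0)
```

Mapping this function over every pixel of a layer pair reproduces the recoloring effect the commenters describe, without any diffusion step.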

  • @LikoKZN
    @LikoKZN 1 year ago

    Thank you for your work!

  • @olegdragora2557
    @olegdragora2557 1 year ago

    Thank you for the video!
    I wish we had an Affinity Photo plugin for Stable Diffusion,
    like there already are plugins for Photoshop, Krita, GIMP, Blender and others.

  • @FollowMarcos
    @FollowMarcos 1 year ago

    That grin at the end: "f*ck yeah, that looks great!"

  • @synthoelectro
    @synthoelectro 1 year ago

    The fashion industry is going to blow up with this.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      now all we need is a 3D printer for clothing ;)

    • @synthoelectro
      @synthoelectro 1 year ago

      @@OlivioSarikas it will happen for those folks.

  • @zephyra6248
    @zephyra6248 1 year ago +3

    Hi Olivio. Thank you for your tutorials. In a previous video, you said you wished that you could have a live update of the ControlNet preprocessed image. The "Preview Annotator Result" button allows you to render the preprocessed image next to the ControlNet reference image without iterating on the prompt. You may have noticed that option after your previous video, but I thought I would share in case it helps any reader.

  • @onzo7977
    @onzo7977 1 year ago +1

    Thanks! It's a pity that there is no color-highlighting function in HakuImg. I am glad that SD is becoming a full-fledged creative studio. Also, after the update, I found that the ControlNet-M2M script for working with video has appeared.

  • @mxroot
    @mxroot 1 year ago

    Very cool 👍 Do you need to have a black and white image for Canny? It doesn't carry any colors into the processed map, so does black and white make the Canny more accurate?
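
On the question above: Canny operates on luminance, so the preprocessor grayscales the input itself before detecting edges; a black-and-white input isn't required. A rough stdlib-only sketch of the grayscale conversion and the gradient step at the core of Canny (illustrative, not ControlNet's actual implementation, which also does smoothing, non-maximum suppression and hysteresis):

```python
def luminance(r, g, b):
    # Standard Rec. 601 weights for converting RGB to grayscale.
    return 0.299 * r + 0.587 * g + 0.114 * b

def sobel_magnitude(gray):
    """Gradient magnitude via 3x3 Sobel kernels; `gray` is a list of
    rows of floats. Border pixels are left at zero for simplicity."""
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (gray[y-1][x+1] + 2*gray[y][x+1] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y][x-1] - gray[y+1][x-1])
            gy = (gray[y+1][x-1] + 2*gray[y+1][x] + gray[y+1][x+1]
                  - gray[y-1][x-1] - 2*gray[y-1][x] - gray[y-1][x+1])
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the gradient is strong at the boundary and zero
# in the flat regions, regardless of which colors produced the grays.
img = [[0.0, 0.0, 1.0, 1.0, 1.0] for _ in range(5)]
```

Because the edge map only sees these gray values, two differently colored regions with the same brightness can produce a weak edge, which is the one case where pre-converting to high-contrast black and white can help.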

  • @KrakenCMT
    @KrakenCMT 1 year ago

    Props for using Affinity!

  • @AIWarper
    @AIWarper 1 year ago

    I have a fun suggestion for you. Try to make a run animation sprite sheet or a sword swing (2D side scroller style). I’ve been fiddling with it but I haven’t been able to get anything decent.
    Would be huge for indie gamers

  • @Meta-Gnome
    @Meta-Gnome 1 year ago

    Love your videos! Are you going to cover ControlNet in Deforum? That would be awesome!

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      That was actually the video I wanted to do today, but it seems like it's still in early development, and I'd kind of like it to actually work well for a tutorial. I will probably cover it very soon though.

  • @chinico68
    @chinico68 1 year ago

    Hi there Olivio, very nice and useful tutorials, but I have a technical question... :( Why doesn't my depth_leres work? It's the only model that doesn't work in Automatic1111. I am on macOS and I always get the error: RuntimeError: Input type (MPSFloatType) and weight type (torch.FloatTensor) should be the same

  • @theamazonclub
    @theamazonclub 1 year ago

    Great video as always. So well presented and explained. In ControlNet just wondering if it's possible to use the same original background each time, without the AI changing it, and then have different figure poses in front of it using OpenPose e.g. producing a set of images of the same person doing different poses - such as in a photographer's studio. Thanks in advance.

  • @Nutronic
    @Nutronic 1 year ago

    Hi, how are you?
    I was wondering if you have done a video on how to change a child's sketch, or any sketch (kids' would be better as it's more complicated), with ControlNet? As it's been updated since other people made their tutorials.

  • @DandelionStrawart
    @DandelionStrawart 1 year ago

    As usual Olivio, you have a completely different take that really works wonders. Thanks for the tutorial.

  • @zephilde
    @zephilde 1 year ago +4

    Hi!
    I'm not sure I see the advantage of using AI for this task... A color overlay layer in any image editor will do the trick a lot more easily!

    • @junehanabi1756
      @junehanabi1756 1 year ago +2

      Was thinking the same thing: if you're wanting the same exact image but re-colored, image editors have supported this for a very long time, and any good one will do it very well. AI image generators should be for generating new images, not re-coloring existing ones.

    • @Xeronimo74
      @Xeronimo74 1 year ago +1

      Agreed. This seems to overcomplicate things since AI doesn't really add much in this case?

  • @clouds2593
    @clouds2593 1 year ago +2

    Damn, that's awesome. Bring more tutorials like this.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      Thank you. Keep sharing it. The more views this gets, the more often I can do stuff like this 😍

  • @TransformXRED
    @TransformXRED 1 year ago +6

    Btw, you should try to compose images with the segmentation model, in Affinity Photo (same method as here: you select a different color for each object... segmentation uses a specific color for each specific object; a chair, for example, is represented by a very specific blue). Then you use the segmentation model with the preprocessor set to "none".
    I watched a video about it yesterday... I can't find it now. The guy had a complete list of each color associated with each object/animal/person.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      I still need to look into that

    • @MikeHowles
      @MikeHowles 1 year ago +2

      I was thinking the same: use segmentation. Also, while the video is informative and instructive, given all these manual steps in an image editor, in practice I'd probably just stay in the image editor and use its native color-changing features :) Still a neat video, though.

    • @TransformXRED
      @TransformXRED 1 year ago

      @@OlivioSarikas I can't post the link, but there is a Google Sheets doc with all the colors and the names of the objects associated with them, in a post on the Stable Diffusion subreddit (with examples).

    • @TransformXRED
      @TransformXRED 1 year ago

      @@MikeHowles
      Changing the colors of clothes is pretty difficult, especially making it look "natural", and even more so on black and white clothes, or just darker colors in general.
      With segmentation, you can place couch/flowers/trees/water/road, etc. where you want (with each of their colors), and you'll get a very consistent scene.
      I just tested it now. I took a reference image, drew a (specific) color of red on each framed picture, drew on the couch... so just fr
      I added it in ControlNet (none + segmentation), and with a prompt like "a room with framed pictures of dogs with a blue couch" you get a different image each time with the framed pictures at the same spot, same with the couch.
      You can even do it without any prompt if you select "guess mode". What's on the segmentation image will always be in the same place.
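
The workflow in this thread (a flat-color map fed to the seg model with the preprocessor set to "none") amounts to painting rectangles of class colors. A rough sketch of building such a control map; note that the palette entries below are made-up placeholders, not the real ADE20K values the seg model expects — those need to be looked up in the color sheet mentioned above:

```python
# Hypothetical palette: the real seg model expects exact ADE20K colors,
# which are NOT these values -- look them up before using this idea.
PALETTE = {
    "couch": (40, 40, 200),
    "picture": (200, 30, 30),
}

def make_seg_map(width, height, regions, background=(0, 0, 0)):
    """Build a segmentation control image as a grid of RGB tuples.
    `regions` is a list of (label, x0, y0, x1, y1) rectangles."""
    grid = [[background] * width for _ in range(height)]
    for label, x0, y0, x1, y1 in regions:
        color = PALETTE[label]
        for y in range(y0, y1):
            for x in range(x0, x1):
                grid[y][x] = color
    return grid

# Place a couch in the lower half and a framed picture top-left; each
# generation should then keep those objects in those spots.
seg = make_seg_map(64, 48, [("couch", 8, 24, 56, 44),
                            ("picture", 4, 4, 20, 16)])
```

Saved as an image and loaded into the ControlNet seg slot, such a map pins object placement while the prompt varies everything else, which is the consistency the commenter describes.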

    • @UnknownUser-eb1lk
      @UnknownUser-eb1lk 1 year ago

      @@MikeHowles The only difference I see is that SD may fill in gradients more randomly or naturally than a color shift in an image editor.

  • @Faris-1900
    @Faris-1900 1 year ago

    I couldn't understand where you got the image you edited. I see the result of the prompt is different from the one you edited and fed into img2img.

  • @the_trevoir
    @the_trevoir 5 months ago

    I don't get it. This doesn't solve the problem of SD not being able to get colours right *while* rendering images. If you're going to select everything anyway, you didn't accomplish anything you couldn't have just done in Affinity?

  • @Renfield286
    @Renfield286 1 year ago

    I was thinking the other day about whether there is a ControlNet mode that could take a colour palette as input.

  • @honyeechua9670
    @honyeechua9670 1 year ago

    Is this still useful for animals or other things?

  • @rahulhanot
    @rahulhanot 6 months ago

    I am not able to see the ControlNet image tab in img2img, please help me.

  • @welton26
    @welton26 1 year ago +1

    Dude, I created a full-body character with a white background, and I'm trying to add different backgrounds in Stable Diffusion with ControlNet, but I can't at all. Is there any way you can make a tutorial on this? It doesn't cut right, even messing with the contrast, and when I put it in to change the background, whether I use it normal or inverted it messes with the character.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Have you tried depth_leres with the background-removal slider at around 80, like I do in this video?

    • @Veruky
      @Veruky 1 year ago

      Watch the Python window... sometimes ControlNet hangs and doesn't work anymore.

  • @mohammedmulazada9488
    @mohammedmulazada9488 1 year ago

    Could you create a guide on how to use the toyxyz Blender file to generate poses, including depth passes for the hands? I cannot figure out Blender for the life of me.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Looks good. But why not use Daz3D like I showed in one of my other videos?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Ohhhh, I see. That's actually pretty cool!

  • @Veruky
    @Veruky 1 year ago

    ControlNet is the most powerful feature for Stable Diffusion.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      I feel like ControlNET has become Stable Diffusion 😂

  • @platonicgamer
    @platonicgamer 1 year ago

    I tried to ask for help relating to ControlNet in the Facebook group, but my post was removed by an admin? Do you know why, Olivio?

  • @Motivationstation82
    @Motivationstation82 1 year ago

    Cool. I still can't get ControlNet or Auto1111 to work.

  • @royaron8101
    @royaron8101 6 months ago +1

    The yaml file doesn't exist.

  • @daryladhityahenry
    @daryladhityahenry 1 year ago

    But.. The model changed >,

  • @dekompose
    @dekompose 1 year ago +1

    I’m loving AI!😊

  • @godofdream9112
    @godofdream9112 7 months ago

    Same video in ComfyUI, please 🥺

  • @123yms
    @123yms 1 year ago

    Big brother, that's amazing!

  • @pn4960
    @pn4960 1 year ago

    I want to use this with environments.

  • @Ekkivok
    @Ekkivok 1 year ago

    First !

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      hey! Winner, Winner, Chicken Dinner!

    • @Ekkivok
      @Ekkivok 1 year ago

      @@OlivioSarikas I have another method for your color, and it might be simpler: you can simply change the Hue/Saturation of the clothes (hair) you want to change, and then replace it with a correct prompt in SD! Do you have a DeviantArt, btw?

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      Good idea, but changing the hue will change all the colors in that area, and not in the same way, so you end up with some strange color combinations. But it's surely worth a try.

    • @Ekkivok
      @Ekkivok 1 year ago

      @@OlivioSarikas Just select the area with the intelligent lasso tool or the magnetic tool in Photoshop, then create a mask from it: fill the selected part with black and the rest with white, then upload it to SD with the upload-mask option and choose the color. You can change the hairstyle while maintaining the color :) (works also for the hair) :)

  • @travissmith5994
    @travissmith5994 1 year ago

    I got lost along the way. You generated an image, then completely ignored it and used a different image. Why? That made no sense to me. If you wanted to use a different image, why not just start in the Img2Img tab? It also seems like you were maybe one or two steps away in your editor from having set the colors the way you wanted, so what advantage was there in passing everything back to the AI?
    I guess there was too much "here are the steps" and not enough "here are the reasons" for me.

  • @jeffwads
    @jeffwads 1 year ago

    I tried using your shirt as a template and it banned me.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      banned? from where?

    • @ashen_kills
      @ashen_kills 1 year ago

      @@OlivioSarikas Sir, I believe he is attacking your sense of fashion. I however support you, because you remind me of Standartenführer Hans Landa from Inglourious Basterds :P

    • @OlivioSarikas
      @OlivioSarikas  1 year ago

      WTF 😂

  • @RhysAndSuns
    @RhysAndSuns 1 year ago

    🤔🤔

  • @gagi121
    @gagi121 4 months ago

    Yeah, no. I think you are better off just editing this in Photoshop. This needs control.

  • @stefanb7015
    @stefanb7015 1 year ago

    +1 for the Affinity suite... won't ever go back to Adobe again...

  • @samsara2024
    @samsara2024 1 year ago +1

    Haha, by the half of the video I had already changed the colors by hand. Too complex to be using an AI.

    • @OlivioSarikas
      @OlivioSarikas  1 year ago +1

      Have you also changed the background by hand, in the art style of the render? You seem to be missing the point. Also, changing the hue of something isn't the same; for example, different hair colors have different ways of reflecting light.

  • @gogoldgoal
    @gogoldgoal 1 year ago

    This video is weird. It doesn't save any steps by using SD; it just makes things more complex.

  • @therookiesplaybook
    @therookiesplaybook 1 year ago

    Just inpaint her shirt and hair. So much easier than all this.

  • @vicioustide
    @vicioustide 1 year ago

    Step one: buy something. No thanks 😢

  • @KurtStaInes
    @KurtStaInes 1 year ago

    This is better to use than Photoshop, it's just insane.