How to change ANYTHING you want in an image with INPAINT ANYTHING+ControlNet A1111 [Tutorial Part2]

  • Published 13 Oct 2024

COMMENTS • 73

  • @DigitalGhost269
    @DigitalGhost269 1 year ago +13

    I appreciate your videos so much! Over my year-long obsession with Stable Diffusion I've watched every single tutorial creator I can find on YouTube, and your stuff hits me right in my brainbox.
    You do a couple of things others don't that really help me learn Stable Diffusion more deeply:
    - you describe not just the action but _why_ you did the action... all the way down to 'I ticked this box because...'
    - you _actually zoom in on what you're doing_, and that's absolutely essential for this kind of tutorial content
    - you have a fun, friendly and 'low stakes' vibe as you narrate
    - you're explaining extensions and content in far greater depth than anyone else imo; perfect for the experience tier I'm at
    Over these last two videos you've taught me:
    - to use 'Inpaint Anything', an extension I initially wrote off as 'meh, not that much more useful than inpainting' and removed
    - incidental explanations of things along the way, like cleaning the artifacts up in this video
    - _what the actual fuck ControlNet Reference is used for_, because nobody seemed to be able to explain it adequately on YouTube, Reddit, or that horribly designed Stable Diffusion Tutorials site
    Thank you so much for your time, labor and knowledge! It's made an appreciable difference in my understanding, workflow, final results and enjoyment of Stable Diffusion.
    If you're looking for ideas to go deeper into, I'd love to learn more about:
    - ROOP, and the difference between and effect of 'use generated face' vs. 'use face restore'
    - whether one can use resources like ROOP to create nonexistent amalgamations of people, then train these not-real 'nobodies' into a LoRA for consistent characters
    - how to actually train _concepts_ into a LoRA; I got people and even my dog working, but it seems to be an entirely different process to get stuff like recurring magical/sci-fi effects, holograms, _a goddamn cigarette_ in someone's mouth/hands or - my holy grail - translucent streaming ethereal ribbons of azure barcode
    - ways to fix or make less frustrating the recurring memory leaks automatic1111 has
    - ways you might be slowing down your automatic1111 without realizing it. Once I figured out - pretty sure you said it, tbh - that installed extensions slow down loading, I removed stuff like the image viewer and things sped up. It's made me wonder: does having an absurd amount of checkpoints or LoRAs in your source directory slow it down? Something I'm beginning to suspect is true.
    But like - you do you, I'll enjoy your wisdom along the way no matter what you pick. I'm not your dad.
    Once again: thank you. You're an asset I truly appreciate 💪

    • @KeyboardAlchemist
      @KeyboardAlchemist  1 year ago +3

      Hello! First of all, thank you very much for taking the time to write such a detailed comment. My initial drive to create SD tutorials was because I wanted to create and share more in-depth tutorials with the community such that newcomers and intermediate users would not struggle to find certain information like I did when I first started out. It feels good to know that my videos made an impact for viewers like you. Plus genuine and kind feedback like yours gives me extra motivation to continue making videos and improve myself. So from the bottom of my heart, thank you!
      Also, thank you for sharing ideas for future videos. Some of your ideas match videos that I'm working on, so you will definitely see a few being explained in future tutorials.
      Again, thank you for taking the time to share your thoughts and feedback. I really appreciate it! Cheers!

    • @TheDocPixel
      @TheDocPixel 1 year ago +2

      I agree 100% with everything you wrote. The other SD channels go from bad to terrible extremely fast, especially since Aitrepreneur decided to stop doing SD tutorials.

  • @ViratxDoodle
    @ViratxDoodle 1 year ago +3

    Hats off to your editing. You're doing much more for the SD community than the flood of walk-through type videos floating around on YouTube.

  • @Tigermania
    @Tigermania 5 months ago +1

    Been using SD1.5 for a year but found some really useful techniques from this video. 👍

  • @TheDocPixel
    @TheDocPixel 1 year ago +2

    Absolutely the best channel for intermediate to advanced SDers! Keep up the great content; I truly enjoy your no-frills yet professionally edited tutorials. There are a lot of time-wasters and narcissists in the SD community who like to see themselves on screen(!). PLEASE don't become one just for views and the algorithm.

  • @cyberprompt
    @cyberprompt 1 year ago +1

    I actually DID like & subscribe during your breakmercial. This is a nice tutorial, as I've been meaning to try to use ControlNet more.

  • @omniscientvillage
    @omniscientvillage 11 months ago +1

    This is huge, man. Thanks for sharing. I've been inpainting the old-fashioned way, and now I have some big-scale images that take forever using that method. I like that you can basically "export" the mask and use the "Inpaint upload" section; handy for using masks from Segment Anything. And the Cleaner function is so huge for me. I'm hoping these new tools can speed up my process for my channel!

    • @KeyboardAlchemist
      @KeyboardAlchemist  11 months ago

      I'm glad you found the video helpful to your workflow! This extension has been invaluable for me in making better images. I hope it will do the same for you and your channel. Cheers!

  • @nenickvu8807
    @nenickvu8807 1 year ago +1

    Thanks for pointing it out. Didn't even see it there.

  • @MonotonousLifeEnjoyer
    @MonotonousLifeEnjoyer 1 year ago +1

    Bro doing god's work 🙏🏻🔥 Keep posting bro!! Your videos are literally so insightful bro!!!

    • @KeyboardAlchemist
      @KeyboardAlchemist  1 year ago

      I'm glad you liked the videos! Thank you for your support!

  • @DrOrion
    @DrOrion 1 year ago +1

    Yes, please do hand correction. Thanks!

  • @rexs2185
    @rexs2185 1 year ago +1

    Excellent content as always. Thank you for the consistent and informative tutorials!

    • @KeyboardAlchemist
      @KeyboardAlchemist  1 year ago

      I'm glad you liked the video! Thank you very much for your support!

  • @PascalMeienberg
    @PascalMeienberg 9 months ago +1

    The settings are amazingly well explained :)

  • @AlexG.O.A.T.
    @AlexG.O.A.T. 2 months ago

    Thank you for this tutorial, it was very useful because you showed things step by step and didn't skip anything.
    The only problem I have is that I can't find anything in the ControlNet models folder other than the inpainting model I put there myself; is that normal?

  • @ChrisadaSookdhis
    @ChrisadaSookdhis 1 year ago +3

    Great tutorial!

  • @yiluwididreaming6732
    @yiluwididreaming6732 1 year ago +1

    Appreciate the tutorial, thank you. It seems you can do a lot of similar stuff to fix images using PS or Krita, and that's probably just as time-consuming, if not quicker... This might have good application for finer details: hair, eyes, fingers maybe...

    • @KeyboardAlchemist
      @KeyboardAlchemist  11 months ago

      I'm glad you liked the video! Thank you for watching and for the comment!

  • @melissie7396
    @melissie7396 1 year ago +1

    Amazing video! I have not found a YT tutorial for intermediate users that is this detailed.
    Quick question: based on Parts 1 and 2 of this video, isn't the ControlNet Inpaint tab in Inpaint Anything the superior method? Why bother with the regular Inpainting tab in Inpaint Anything or the ControlNet Inpaint in Img2Img?
    Looking forward to your other future tutorials, especially how to fix broken hands!

    • @KeyboardAlchemist
      @KeyboardAlchemist  1 year ago +1

      Thank you for your kind feedback! I appreciate it! Regarding your question, yes, I agree with you that the ControlNet Inpainting tab is better than the regular Inpainting tab in the Inpaint Anything extension. However, Img2Img Inpaint in combination with the ControlNet global harmonious preprocessor offers some flexibility if you need to utilize other ControlNet units (i.e., more than one ControlNet unit working together), which is why I wanted to show that method in the video as well.

  • @magic-4-ai
    @magic-4-ai 1 year ago +2

    Hi, great tutorial! But do you have an idea how to use inpainting to switch the clothes to the clothes from another image? Asking because using prompts you'll not get an image of a man or woman with the exact clothing.

    • @magic-4-ai
      @magic-4-ai 1 year ago +1

      I mean, is there a possibility to give Stable Diffusion a flat image of a shirt or another piece of clothing and then try to put it on the person?

    • @KeyboardAlchemist
      @KeyboardAlchemist  1 year ago

      Hi, thank you for watching! I answered a similar question yesterday; the short answer is, you can do it, but it's not very easy. Here is the long answer:
      I have not seen a perfect workflow that will essentially copy a piece of clothing from the reference image to an input image, but the workflow that I showed in this video with Inpainting + ControlNet Reference preprocessor will get you close (you can do this in Img2Img too). Be sure to do the following things to increase your chances of success:
      (1) make sure your reference image and input image are the same size; you will have a much easier time with it,
      (2) don't put any positive prompts in when you are doing inpainting; you never know which keyword is going to mess with your reference clothing's style (you can always add keywords back later),
      (3) make sure your inpaint denoising strength is very high (0.9 - 1.0),
      (4) make sure your Control Weight is very high (greater than 1.5),
      (5) Control Mode = 'ControlNet is more important', and
      (6) you may need to try a few different models/checkpoints, because the impact of the model on this process is very high.
      Finally, you will probably need to generate a bunch of images with a random seed and hopefully get the one that you like.
      I hope this helps you. Cheers!
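      For anyone who would rather script those settings than click through the UI, here is a minimal sketch of the same idea as an API call. It assumes the standard A1111 /sdapi/v1/img2img endpoint with the ControlNet extension installed; the file names are placeholders, and the exact unit field names may vary slightly between ControlNet versions.

        import base64
        import requests

        def b64(path):
            # Encode an image file as base64 for the API payload
            with open(path, "rb") as f:
                return base64.b64encode(f.read()).decode()

        payload = {
            "init_images": [b64("input.png")],        # image whose clothing you want replaced
            "mask": b64("clothing_mask.png"),         # mask covering just the clothing area
            "prompt": "",                             # tip (2): no positive prompt at first
            "denoising_strength": 0.95,               # tip (3): very high denoising, 0.9 - 1.0
            "seed": -1,                               # random seed; generate a batch and pick the best
            "alwayson_scripts": {
                "controlnet": {
                    "args": [{
                        "enabled": True,
                        "module": "reference_only",   # Reference preprocessor; no model file needed
                        "model": "None",
                        "image": b64("reference.png"),  # reference outfit, same size as the input, tip (1)
                        "weight": 1.6,                  # tip (4): Control Weight greater than 1.5
                        "control_mode": "ControlNet is more important",  # tip (5)
                    }]
                }
            },
        }

        requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)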

    • @magic-4-ai
      @magic-4-ai 1 year ago +1

      @@KeyboardAlchemist Thank you a lot, I'll try it, and if it works I'll share a link :)

  • @EddieLF
    @EddieLF 1 year ago +1

    really great video!!

  • @Rasukix
    @Rasukix 1 year ago +1

    Amazing tutorial, but quick question: why not use the Part 1 method and then use the Reference model for just the kimono after? Surely ControlNet only has an impact if it has input data to use (e.g. depth, openpose, canny)?

    • @KeyboardAlchemist
      @KeyboardAlchemist  1 year ago +1

      I'm not sure if I understand the question fully, but I'll take a shot at it. In Part 2, I mainly wanted to use the example to illustrate how to use the different features of the extension, so perhaps some methods are a bit more convoluted. I would say if you have a workflow that works well, then definitely go with it. Cheers!

  • @TheBlackOperations
    @TheBlackOperations 11 months ago +1

    This is WILD!!

  • @vincentmilane
    @vincentmilane 10 months ago +1

    Hello
    Thank you very much for your content
    I tried to reproduce the part with reference.
    There is one problem: I created the mask, but when I run the process it also changes the rest of the picture, not just the masked area as it should.
    Do you have any idea where this comes from?
    Best regards

    • @KeyboardAlchemist
      @KeyboardAlchemist  10 months ago

      You're welcome! Regarding your problem, I found that sometimes, the program will remember your previous mask (this is a bug). It doesn't show this in the mask window, but it's combining your previous mask with the current one, and that might be why you are seeing it change things outside of your current mask. The way to correct this is just to clear everything and re-create the mask. If that doesn't work, you can reload the webUI. I hope this helps.

  • @knoqx79
    @knoqx79 11 months ago +2

    4:15 I don't have any of those models already in my folder, is that normal? Also, when I try to download the .yaml it appears as .txt... what is a .yaml? ^^'

    • @KeyboardAlchemist
      @KeyboardAlchemist  11 months ago

      Actually, not to worry if you don't already have the .yaml files in that folder. After I made this video, a particular ControlNet update got rid of all my existing .yaml files, so I believe those are no longer needed for the models to work. You will just need to download the .pth files from Hugging Face. There's a link in the video description if you need to find the model files for downloading.
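      If you would rather pull a model down from a script than from the browser, a minimal sketch with the huggingface_hub package looks like the following. It assumes "pip install huggingface_hub", the lllyasviel/ControlNet-v1-1 repo mentioned later in this thread, and a default webUI folder layout; the filename is just one example model, and you should point local_dir at wherever your ControlNet models actually live.

        # Download one ControlNet 1.1 model (.pth) into the ControlNet extension's models folder.
        from huggingface_hub import hf_hub_download

        hf_hub_download(
            repo_id="lllyasviel/ControlNet-v1-1",
            filename="control_v11p_sd15_inpaint.pth",
            local_dir="stable-diffusion-webui/extensions/sd-webui-controlnet/models",
        )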

  • @coco71920
    @coco71920 1 year ago +1

    Hey, I have a problem: when I click on "Create mask" the entire image is masked, not just the part I wanted. I already tried reinstalling the extension but it still doesn't work.
    Anyway, nice video :)

  • @AIPixelFusion
    @AIPixelFusion 10 months ago +1

    Top notch content

  • @sossepanter
    @sossepanter 1 year ago +1

    Hi, thanks for the tutorial! I have a few problems which you could maybe help me with. The first is that I can only run the segmentation using my CPU. The second is that a lot of the Segment Anything Model IDs fail to run, with this error: (Inpaint Anything - ERROR - Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU). Do you have any idea why this could be? I have a 7900 XTX, just in case that helps in any way.

    • @KeyboardAlchemist
      @KeyboardAlchemist  1 year ago

      For the first question, there is a checkbox in the Inpaint Anything Settings that says "Run Segment Anything on CPU", make sure this box is not checked. If that is not the source of the problem, you might want to check whether your computer is in fact using your graphics card when generating images or not.
      I'm not sure why the second issue happens; maybe it has something to do with the fact that it is running on your CPU instead of your GPU. You may have to uninstall and reinstall the extension to see if that fixes the issue.
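      If it helps to narrow it down, a quick diagnostic from the webUI's Python environment is sketched below. This is only an illustration: the checkpoint filename is a placeholder for whichever Segment Anything model fails, and on an AMD card like the 7900 XTX, torch.cuda.is_available() will generally only return True with a ROCm build of PyTorch on Linux.

        import torch

        # If this prints False, Inpaint Anything falls back to CPU, which also explains
        # the deserialization error: the SAM checkpoint was saved from a CUDA device.
        print(torch.__version__, torch.cuda.is_available())

        # The workaround named in the error message: remap the stored tensors to CPU on load.
        # "sam_vit_h_4b8939.pth" is a placeholder for whichever SAM checkpoint is failing.
        state = torch.load("sam_vit_h_4b8939.pth", map_location=torch.device("cpu"))
        print(type(state))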

  • @magic-4-ai
    @magic-4-ai 1 year ago

    One more question: have you got an idea how to set up Stable Diffusion on Google Colab and save your work to Google Drive with preset settings?

  • @NguyenDucTung-y4d
    @NguyenDucTung-y4d 1 year ago +1

    Is there any way to keep the exact same clothing from the reference image? Or to change the girl in the input image but keep her clothing?

    • @KeyboardAlchemist
      @KeyboardAlchemist  1 year ago +2

      Okay, so your second question is easier. Just use the inpainting with ControlNet method I showed in this video to change the girl's face. If you need it to be a specific face, then you will probably need to use Roop.
      Your first question is a bit more involved; here is a long answer, but I hope it helps you:
      I have not seen a perfect workflow that will essentially copy a piece of clothing from the reference image to an input image, but the workflow that I showed in this video with Inpainting + ControlNet Reference preprocessor will get you close (you can do this in Img2Img too). Be sure to do the following things to increase your chances of success:
      (1) make sure your reference image and input image are the same size; you will have a much easier time with it,
      (2) don't put any positive prompts in when you are doing inpainting; you never know which keyword is going to mess with your reference clothing's style (you can always add keywords back later),
      (3) make sure your inpaint denoising strength is very high (0.9 - 1.0),
      (4) make sure your Control Weight is very high (greater than 1.5),
      (5) Control Mode = 'ControlNet is more important', and
      (6) you may need to try a few different models/checkpoints, because the impact of the model on this process is very high.
      Finally, you will probably need to generate a bunch of images with a random seed and hopefully get the one that you like.
      Best of luck!

  • @美女热舞吧
    @美女热舞吧 1 year ago +1

    How do you change to different hairstyles?

  • @TheFoxstory
    @TheFoxstory 1 year ago +1

    My sd-webui-controlnet/models folder is empty except for the one model that I put in; how so?

    • @KeyboardAlchemist
      @KeyboardAlchemist  1 year ago

      I just checked my folder and all the .yaml files are gone too. I think it has something to do with the latest v1.1.4 update. If you put the model files in there and everything works, then don't worry about the .yaml files. If you need to download the .yaml files, they are here: huggingface.co/lllyasviel/ControlNet-v1-1/tree/main

  • @knoqx79
    @knoqx79 11 months ago +1

    5:43 I get this error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (1232x2048 and 768x320)
    The image I'm trying to inpaint is 840 (h) x 512 (w).

    • @knoqx79
      @knoqx79 11 months ago +1

      Or this one: AttributeError: 'ControlNet' object has no attribute 'label_emb'
      when I use low VRAM

    • @KeyboardAlchemist
      @KeyboardAlchemist  11 months ago

      @@knoqx79 Hi, thanks for watching the video! Unfortunately, I have never gotten those errors before when inpainting with ControlNet, so I won't be much help. You might want to update your ControlNet extension, in case it's not updated already. I hope you figure out those errors.

  • @kicapanmanis1060
    @kicapanmanis1060 2 months ago

    Tried going to the Hugging Face page but the file is gone. Or at least, if it's there, it's very different from the one you showed.

  • @fortoday04
    @fortoday04 5 months ago

    How are you doing the AI voice?

  • @DoozyyTV
    @DoozyyTV 1 year ago

    Is this available for ComfyUI?

  • @novysingh713
    @novysingh713 8 months ago

    Why does only Inpaint Anything use all of my GPU when I upload any image, and then give an out-of-CUDA-memory error?

  • @Aaisn
    @Aaisn 1 year ago +1

    What is the difference between the Inpainting menu and the ControlNet Inpainting menu?

    • @KeyboardAlchemist
      @KeyboardAlchemist  1 year ago +1

      Good question. Within the Inpaint Anything extension, the Inpainting menu is like a simplified version of the normal Img2Img Inpaint interface. The ControlNet Inpainting menu is like using the Img2Img Inpaint interface plus enabling a ControlNet unit with the inpaint model selected. Hope this helps!

  • @MABtheGAME
    @MABtheGAME 1 year ago +1

    subscribed

  • @artofgarduno
    @artofgarduno 1 year ago

    What's the difference between inpainting in ControlNet vs. inpainting in img2img?

    • @KeyboardAlchemist
      @KeyboardAlchemist  11 months ago

      To clarify, you do the inpainting in img2img, but ControlNet has an inpainting model that supports it and helps make the inpainting result better. Hope this helps. Thanks for watching!

  • @LouisGedo
    @LouisGedo 1 year ago +1

    👋

  • @philipp1960
    @philipp1960 1 year ago +1

    RNGsus - I fell off my chair mate!

  • @corza5647
    @corza5647 11 months ago +1

    My face skin tones don't match. It looks like a bad Photoshop face replacement for some reason. How do you get it to not suck when it isn't working?

    • @KeyboardAlchemist
      @KeyboardAlchemist  11 months ago

      After inpainting, I would do a latent upscale in img2img to get rid of artifacts like the skin tone mismatch. Take a look at my other inpaint video where I explain how to do latent upscaling; it starts at 16:35. I hope this helps you. Cheers!
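      For anyone who wants to run that kind of cleanup pass through the webUI API instead of the UI, one way it might look is sketched below. This is only an illustration and not the exact settings from the referenced video: it assumes the standard A1111 /sdapi/v1/img2img endpoint, that resize_mode 3 corresponds to the 'Just resize (latent upscale)' option, and the sizes and denoising strength are placeholders.

        import base64
        import requests

        def b64(path):
            # Encode an image file as base64 for the API payload
            with open(path, "rb") as f:
                return base64.b64encode(f.read()).decode()

        payload = {
            "init_images": [b64("inpainted_result.png")],
            "prompt": "same prompt you used for the original image",
            "resize_mode": 3,            # assumed index of 'Just resize (latent upscale)'
            "width": 1024,               # illustrative target size, roughly 2x the original
            "height": 1536,
            "denoising_strength": 0.4,   # illustrative; low enough to keep the composition
        }

        requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)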

  • @futurefun3274
    @futurefun3274 8 months ago

    You talk too fast, like you're in a rush to finish this lesson 😂 I don't understand much except the installation. 😂 yeahhh