Guide to Change Image Style and Clothes using IP Adapter in A1111

  • Published 29 Jan 2025

COMMENTS • 81

  • @carloosmartz
    @carloosmartz 11 months ago

    Hello my friend, I want to create a LoRA for clothes. I want to use LoRA weights between -1 and 1, so that when I increase the weight in the prompt the model becomes fatter. Do you know how to use numbers in the dataset .txt files to teach the AI that -1 is skinny, 0 is fit, and 1 is fat when I use the prompt?

  • @dflfd
    @dflfd 1 year ago +1

    Thank you, this is a great tutorial! 🤩 It's really amazing how fast these tools have developed over the last year. IP Adapter is incredible, can't wait to install it.

    • @AI-HowTo
      @AI-HowTo  1 year ago +2

      Indeed, it's too fast; sometimes it feels scary how quickly things are progressing. Hopefully they end up affecting our lives positively in the long run.

  • @BabylonBaller
    @BabylonBaller 1 year ago

    Appreciate the tutorials buddy. I can always count on you!

  • @BabylonBaller
    @BabylonBaller 1 year ago +1

    Brilliant, my friend. IP Adapter is quite powerful, I see.

  • @luciusblackheart
    @luciusblackheart 1 year ago

    Thank you so much, I leveled up thanks to you and your videos. This is an amazing tutorial.

    • @AI-HowTo
      @AI-HowTo  1 year ago

      Great to hear that it was useful; I wish you the best in your learning journey.

  • @damned7583
    @damned7583 9 months ago +1

    Where do I download the ip_adapter_clip_sd15 preprocessor?

    • @AI-HowTo
      @AI-HowTo  9 months ago +1

      I think it is (ip-adapter_sd15.bin)... all the SD 1.5 models are at huggingface.co/h94/IP-Adapter/tree/main/models

    • @damned7583
      @damned7583 9 months ago

      @@AI-HowTo I work with Google Colab, could you tell me which folder to place this file in?

    • @AI-HowTo
      @AI-HowTo  9 months ago

      I think it should be the same as the local installation folder, which is the ControlNet models folder. On my local installation that is stable-diffusion-webui\extensions\sd-webui-controlnet\models, but I think A1111 also looks inside the stable-diffusion-webui\models\ControlNet folder as well.
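
      For reference, a minimal Python sketch of fetching the model and dropping it into that folder; the huggingface_hub approach and the target path are assumptions, so adjust them to your own Colab or local install:

        # fetch ip-adapter_sd15.bin and copy it where the ControlNet extension looks for models
        import os, shutil
        from huggingface_hub import hf_hub_download

        target_dir = "stable-diffusion-webui/extensions/sd-webui-controlnet/models"  # assumed install path
        os.makedirs(target_dir, exist_ok=True)

        cached_path = hf_hub_download(repo_id="h94/IP-Adapter", subfolder="models", filename="ip-adapter_sd15.bin")
        shutil.copy(cached_path, os.path.join(target_dir, "ip-adapter_sd15.bin"))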

    • @kakaoxi
      @kakaoxi 7 months ago

      @@AI-HowTo I downloaded that, but I don't have the preprocessor ip_adapter_clip_sd15

    • @AI-HowTo
      @AI-HowTo  6 months ago

      That is fine; the developer might have removed the CLIP version or renamed it. You can use any other sd15 version and try them out.

  • @b4ngo540
    @b4ngo540 1 year ago

    I really appreciate the amount of information and effort you put into each of your videos.
    I enjoy watching them even though I'm a ComfyUI user; they are still really useful because you explain everything in a simple way, and I'm still able to apply these tips and advice in ComfyUI.
    It would be really interesting to see this quality of tutorial as a new series about ComfyUI,
    and I'm down to join your journey from installing it all the way to the super complicated workflows.
    You don't need all this editing of the videos with added text notes; just hit record and talk into the mic. Everything you say is clear even without the extra notes, though every effort you put in is appreciated (but less editing would mean more videos :D).

    • @AI-HowTo
      @AI-HowTo  1 year ago

      Thank you, I will minimize these texts in future videos. I enjoy making these videos too, they are fun, and I am learning while doing them as well. I was hoping to start ComfyUI videos too, but I am short on time; hopefully next month I will do some. Thanks for the encouragement and the notes.

  • @nicolaseraso162
    @nicolaseraso162 8 months ago

    Hey bro, do you know how to install insightface in Automatic1111 (I use Paperspace) in order to use the Face ID option in IP Adapter?

    • @AI-HowTo
      @AI-HowTo  8 months ago

      Not sure; for me it worked without any problems. I just downloaded the IP Adapter FaceID models into the ControlNet models folder and the FaceID LoRAs into the LoRA folder, made sure ControlNet was up to date, and it automatically downloaded the necessary extra models related to insightface, such as buffalo_l. Not sure why some have trouble with this while others don't.
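
      If the insightface package itself is what's missing, one common fix (an assumption on my part, not something shown in the video) is installing it into the same Python environment A1111 runs in, e.g. from a Paperspace/Colab notebook cell, then restarting the web UI:

        # install insightface and an ONNX runtime so ControlNet's FaceID preprocessors can run;
        # version pins are intentionally omitted, adjust to your environment
        import sys, subprocess
        subprocess.check_call([sys.executable, "-m", "pip", "install", "insightface", "onnxruntime"])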

  • @GabryBSK
    @GabryBSK 1 year ago

    Thank you! I have a beginner question for you: what is the best way today to get consistent characters in terms of face and body physique?

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      No single answer... I think it is a combination of multiple methods and techniques: IP Adapter models are good, ControlNets/segmentation/composition, etc. But for me, using a LoRA model of a face/body together with After Detailer gives the most accurate and lively results compared to all other methods... though it takes a lot of time to train a model that is really good for a certain character.

  • @michaellong5871
    @michaellong5871 5 months ago

    I can't reproduce your video. I guess you are using the ip_adapter_clip_sd15 preprocessor. Where do I download it, and how do I add it to A1111's ControlNet?

    • @AI-HowTo
      @AI-HowTo  5 months ago

      I think I downloaded the preprocessors from the ControlNet page, huggingface.co/lllyasviel/sd_control_collection/tree/main. At that time I also downloaded some models from the official IP Adapter page, huggingface.co/h94/IP-Adapter/tree/main. It is most likely that something has changed since then, deleted or updated.

  • @lilillllii246
    @lilillllii246 11 months ago

    Thanks. A bit of a different question: is there a way to naturally composite the character files I want onto an existing background image file rather than using a text prompt?

    • @AI-HowTo
      @AI-HowTo  11 months ago

      I don't fully understand the question, but for more complicated image synthesis, ComfyUI and segmentation could be the way to go, as they give us more workflows and tools for complex scenes. You can google IP Adapter for ComfyUI or picture segmentation for more details; hopefully that guides you somewhere useful.

  • @awais6044
    @awais6044 1 year ago

    At the production level we don't do a manual inpainting process. What we do: the user uploads an image and enters text, and only the outfit is changed.
    Or we use a face-swap method.
    Thanks

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      I agree that manual inpainting is not practical for real-world apps; it is only fine on a personal level.

  • @AiNomadArt
    @AiNomadArt 10 months ago

    Wondering if it can fix deformed hands by giving it a hand photo

    • @AI-HowTo
      @AI-HowTo  10 months ago +1

      :) I hope so, but of course it doesn't... After Detailer (hand model), ControlNets, etc. are the way to go for hands.

    • @AiNomadArt
      @AiNomadArt 10 months ago

      @@AI-HowTo I have tried the ADetailer hand model with the depth hand refiner, but I keep getting similarly bad hands even when I set the denoise high; it nearly gave me a heart attack....

  • @froilen13
    @froilen13 10 months ago

    Does this work for Forge?

    • @AI-HowTo
      @AI-HowTo  10 months ago

      Yes, all ControlNets work in Forge too.

  • @Valentina-zx1pi
    @Valentina-zx1pi 10 months ago

    Thank you! I have a question... It worked, but the dress looks blurry; how could I solve this? What VAE do I need to download? Does the dress that I put in as input change the quality?
    Also, if you could please help me: it takes 30 minutes to create an image, while for you it takes seconds! Is there anything I can do to solve this?
    Thank you in advance!

    • @AI-HowTo
      @AI-HowTo  10 months ago

      You are welcome.
      1- A VAE is only required if it is not baked into the model... this is mentioned on the model page; when you download it, for example from Civitai, it tells you whether you need a VAE or not.
      2- Blurriness might happen if the inpaint area is small (for example, when you try to inpaint a large image using a small inpaint area, which results in upscaling and losing quality)... it may also sometimes result from using low denoising levels; changing the model may help. The quality of the image used in the IP Adapter doesn't change the output much, because that image is only analyzed.
      3- Generation time depends on your graphics card. I used an RTX 3080 8GB laptop GPU; if you have a low-end graphics card it becomes slow... but 30 minutes means something is wrong. If your graphics card is low-end, it's best to use free online tools for ComfyUI... or install ComfyUI on your computer; it will probably give you better performance, and you will find videos on YouTube about using ComfyUI.
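
      On point 1, for anyone scripting this instead of using the A1111 VAE dropdown, the idea maps roughly onto the following diffusers sketch (the repo names are just common public examples, not the video's setup):

        # load an external VAE only when the checkpoint doesn't bake one in
        import torch
        from diffusers import AutoencoderKL, StableDiffusionPipeline

        vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)
        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5",
            vae=vae,  # drop this argument if the model card says a VAE is already included
            torch_dtype=torch.float16,
        ).to("cuda")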

  • @moulichand9852
    @moulichand9852 9 months ago

    Is there any script available without using the web UI?

    • @AI-HowTo
      @AI-HowTo  9 months ago

      The web UI is built on top of Python scripts, so everything in Stable Diffusion image generation or training is script-based and can be automated, but unfortunately I have not used it that way, so I don't have enough expertise to guide you on that.
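
      For what it's worth, the diffusers library exposes IP Adapter outside any web UI; a minimal sketch (the scale value and file names are placeholders, not settings from the video):

        # IP Adapter with diffusers, no web UI involved
        import torch
        from diffusers import StableDiffusionPipeline
        from diffusers.utils import load_image

        pipe = StableDiffusionPipeline.from_pretrained(
            "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
        ).to("cuda")
        pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
        pipe.set_ip_adapter_scale(0.6)  # rough analogue of the weight slider in the UI

        image = pipe(
            prompt="a woman wearing an elegant dress, studio photo",
            ip_adapter_image=load_image("reference_style.png"),  # placeholder reference image
            num_inference_steps=30,
        ).images[0]
        image.save("output.png")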

  • @eiermann1952
    @eiermann1952 5 months ago

    How can I swap your brilliance onto myself using IP Adapter? Please explain :)

    • @AI-HowTo
      @AI-HowTo  5 months ago

      Experimentation. IP Adapter may not always give the results you hope for, but with more experimentation you might get better results... and in some cases it gives better results than you expect. There is some level of randomness in anything related to Stable Diffusion in general and to the ControlNets.

  • @vishalchouhan07
    @vishalchouhan07 1 year ago +1

    Hi... can you create a tutorial on transferring interior design style from one image to another using IP Adapter? I saw a video where multiple design styles were transferred onto a single source image (for example, a drawing-room picture) to create various design options for the same room (based on the reference design-style images).

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      The same principle applies to architectural styles or any other style; the reference image style will impact the generated image even in architecture... I might be doing something in ComfyUI soon, but I might not have enough time to do so. Not sure yet, but I'll try.

    • @vishalchouhan07
      @vishalchouhan07 1 year ago

      @@AI-HowTo Thanks in advance ☺... I will surely wait for the tutorial.

  • @novysingh713
    @novysingh713 11 months ago

    How can I put the same dress on the model again? Why do the dress style and design change whenever you try to replace the dress on the model?

    • @AI-HowTo
      @AI-HowTo  11 months ago

      Each time you replace the dress you often get a new dress style, unless you use a LoRA of a specific dress inside the detailer prompt, which will redraw the dress in the same style; Stable Diffusion will always create new random elements with each new seed (if I understood you correctly).

  • @odev6764
    @odev6764 1 year ago

    Is it possible to do this in ComfyUI?

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      Yes, it's actually best done in ComfyUI; it gives you more options, and you can achieve better workflows there.

    • @odev6764
      @odev6764 1 year ago

      @@AI-HowTo I'm trying many ControlNets with segmentation, IP Adapter, and OpenPose, starting from a picture and passing any clothes as reference, but it always changes the clothes a little bit, and the face is not so good. Do you know any way to take an image of a person and change just their clothes while keeping all the details of the clothes?

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      I don't think keeping all the details is possible with SD right now... there is currently great research on this subject outside of SD, but unfortunately it is not open source yet, like this one: humanaigc.github.io/outfit-anyone/ . Hopefully we will soon see something like this released as open source; for now SD cannot do that with a high level of accuracy, as far as I know.

    • @odev6764
      @odev6764 1 year ago

      @@AI-HowTo I saw this project, but they didn't make it open source. The only way I believe it could be done is by fine-tuning an SD model to do it, but that requires a huge dataset.

    • @AI-HowTo
      @AI-HowTo  1 year ago

      Yes, I think good results can indeed be achieved with proper training, but it requires lots of resources and trial and error. Hopefully we will soon get something that works out of the box or requires less training and fewer resources.

  • @HeinleinShinobu
    @HeinleinShinobu 1 year ago

    Where can I download the deepfashion2 model for ADetailer?

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      huggingface.co/Bingsu/adetailer/tree/main

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      huggingface.co/Bingsu/adetailer/resolve/main/deepfashion2_yolov8s-seg.pt?download=true

    • @HeinleinShinobu
      @HeinleinShinobu 1 year ago

      @@AI-HowTo thanks!

  • @타오바오-h8l
    @타오바오-h8l 1 year ago

    In the video the outfits are slightly different from the reference; is it possible to make them exactly the same?

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      Currently no, it is not possible in Stable Diffusion using ControlNets or other simple methods, as far as I know; Stable Diffusion will always introduce a certain element of chaos into the generation, which is part of how it is designed. The best clothing match can be achieved by training a LoRA or Dreambooth model as shown in this video ua-cam.com/video/wJX4bBtDr9Y/v-deo.htmlsi=FCafRmzr8675RBuZ , but this process takes a long time, may not work on the first attempt, and even then it may not give a 100% match... 3D tools such as Blender are the only way to get a 100% match of clothes so far.

  • @szw7729
    @szw7729 1 year ago

    Thank you!

  • @omgkhazix9442
    @omgkhazix9442 1 year ago

    Thank you, it's been nothing but a pleasure watching and learning from your videos. Where could I possibly contact you with some additional questions regarding A1111? I would love for you to be my mentor.

    • @AI-HowTo
      @AI-HowTo  1 year ago

      Thank you, it is nice to read that some people find these videos useful, and hopefully they contain some useful info here and there. Unfortunately, at the time being I cannot allocate more time for the channel or do any private or business contacts beyond these comments. I wish I could allocate more time for the channel, and hopefully I will in the future, because making these videos is really fun and I love this topic too.

  • @TanVN-t3z
    @TanVN-t3z 5 months ago

    What happens when I use code? Can you tell me about this?

    • @AI-HowTo
      @AI-HowTo  5 months ago

      I am sorry, I don't have expertise in this. I think you are talking about using Python code for generation instead of the UI; the official guide explains how code is used, which could be important for automation in some cases.

  • @quotesspace1713
    @quotesspace1713 10 months ago

    ComfyUI please please 🙏🙏

  • @tonyibarra1523
    @tonyibarra1523 1 year ago +1

    Hello, thank you for such great videos!
    I watched your videos in order, from oldest to latest, and nothing was working for me. I had been trying to follow along with what you were doing for several months, and I just assumed that back in August you were using SD 1.5 because SDXL was not production ready. So I was simply replacing "1.5" with "XL".
    Now I notice that even your latest videos are still on SD 1.5, and that might be the reason nothing works for me :(
    Can you please explain why you're using 1.5 instead of XL?
    And yes, maybe it could be an important subject that no one has covered so far: why ControlNet works so well in 1.5 while almost nothing seems to work on XL, except for Lineart and a couple more. Even Canny/OpenPose are not working, at least for me.
    Thanks in advance; it's been frustrating to follow along and not get the same results!

    • @AI-HowTo
      @AI-HowTo  1 year ago

      Sorry to hear that; it is not always easy to get good results from Stable Diffusion, and lots of testing is required to get the hang of it. In the videos I try to explain the concepts and the overall settings, which might require some adjusting depending on the prompt/subject/video you are using.
      I still use SD 1.5 because it is lightweight and fast compared to SDXL; if my PC were more powerful I would not use SD 1.5, and I expect results would be better in SDXL for the same videos.
      You should consider downloading the ControlNets for SDXL, which have been updated in recent months and are better suited to SDXL than the older ControlNets: huggingface.co/lllyasviel/sd_control_collection/tree/main
      I also suggest turning on preview mode when using ControlNet, to see the output of the preprocessor and whether it is detecting things correctly... notice, for example, that in my first usage in this video I didn't get the robot in the first test of the ControlNet and had to lower the ControlNet weight down to 0.5... in some cases you might also need to start the ControlNet from 0.25 (Starting Control Step).
      Image sizes in SDXL need to be around 1024x1024 to get good images, unlike SD 1.5 which uses 512x512, 512x768, or 768x1024; these too play a great role... this could have contributed to the problem, not sure.
      Also consider reinstalling A1111 from scratch, in case some conflicts are happening, causing things to get stuck somewhere and producing illogical results.
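
      For reference, the weight and Starting Control Step settings mentioned above map roughly onto these parameters when scripted with diffusers; this is a sketch only, with a public SDXL Canny checkpoint standing in for whichever ControlNet you actually use:

        # SDXL ControlNet at 1024x1024 with weight ~0.5, control starting at 25% of the steps
        import torch
        from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
        from diffusers.utils import load_image

        controlnet = ControlNetModel.from_pretrained(
            "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
        )
        pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
            "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
        ).to("cuda")

        image = pipe(
            prompt="a robot standing in a field",
            image=load_image("canny_edges.png"),  # placeholder: an already-preprocessed Canny edge map
            width=1024, height=1024,              # SDXL wants ~1024x1024, unlike SD 1.5
            controlnet_conditioning_scale=0.5,    # roughly the ControlNet "weight" slider
            control_guidance_start=0.25,          # roughly "Starting Control Step"
        ).images[0]
        image.save("sdxl_controlnet_test.png")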

    • @tonyibarra1523
      @tonyibarra1523 1 year ago

      @@AI-HowTo Thank you so much for your answer. I will address each part:
      - Your videos are great and you explain very well. My frustration is about not being able to replicate what you seem to do so easily :)
      - I have a very old computer, not powerful at all. I run SD on Runpod, where I pay about $0.40 per hour. It's cheaper than getting a new PC/GPU!
      - I have downloaded and installed all those SDXL ControlNet models; I've been working on this for 3 weeks and have made matrices in Excel of what works and what doesn't. Even the most basic OpenPose is not working :(
      - I use preview mode for everything, so I know the preprocessor is doing its job. I will try tweaking the weight some more and see if I get anything useful.
      But even following your first ControlNet video in SDXL doesn't work. Yesterday I tried 1.5 and it works; I switch the same setup to SDXL (model and CNs) and get nothing. A/B testing shows the issue seems to be in the CN models, and I have tried them all: several Canny, OpenPose, Depth...
      - I understand the difference in resolution, and I'm using matching images in CN, 512x512 for 1.5 and 1024x1024 for XL, just to make 100% sure.
      - A1111 is installed from the Runpod template and it works; it's up to date, and I run the updates every day before starting.
      I'd love to share my findings and comparisons with you if that would make any sense, or even give you access to my Runpod account so you can A/B test like I do.
      Do you have a Discord or some other way we could talk about this, so I can send you some screen captures and shots?

    • @AI-HowTo
      @AI-HowTo  1 year ago

      I see; hopefully things work out for you. Other channels may have more info about ControlNet in SDXL that may help... unfortunately I don't have Discord, nor can I allocate more time than I do now for this channel. It is really fun to make these videos and answer questions sometimes, and I was hoping to make more and allocate more time for the channel, but that is unlikely for another year at least given how my life is running now :). I wish you the best of luck; if I find a question here and I know its answer, I will reply here.

  • @thewebstylist
    @thewebstylist 1 year ago

    Oh wow, I wish it wasn't so complex to set up.

  • @ibrahimismaeeldawood
    @ibrahimismaeeldawood 1 year ago

    I did the same, so why was my image not converted? I added an image of a KIA car and in return got a young girl.

    • @AI-HowTo
      @AI-HowTo  1 year ago

      You might have used a light model; try using the plus model. I suggest watching the video carefully. IP Adapter basically just describes the image and affects the output based on an accurate image description and the injection of that description into the generation... try other examples to find where the root cause of the problem is.

  • @erenliify
    @erenliify 10 months ago

    IP Adapter plus face / inpaint is not working for me. I'm trying to inpaint the face and set everything the same as you, but the result is the most robotic, nonsensical face. Not even a face...

    • @AI-HowTo
      @AI-HowTo  10 months ago

      Not sure; make sure you are selecting (Whole picture) for the Inpaint area option instead of (Only masked), as this worked better for me.... usually when I inpaint I select only masked, but with IP Adapter the results look different.

    • @erenliify
      @erenliify 10 months ago

      @@AI-HowTo Every setting is the same as yours, but it's not working. Then I changed the model and reduced the denoising strength to 0.4; now it's better, but still not working properly like yours :(

    • @AI-HowTo
      @AI-HowTo  10 months ago

      But 0.4 denoising strength will not significantly change the style of the face; it will change it, but not enough to make it very different... in general these models don't always work as we hope, and we need to keep trying until we figure out something that works for us.

  • @kallamamran
    @kallamamran 1 year ago

    The LCM LoRA is useless for final results. The quality is NOT good enough.

    • @AI-HowTo
      @AI-HowTo  1 year ago +2

      With Euler a it gives good quality, though indeed not as good as normal generation. I guess this is a compromise between speed and quality, so it has its own use cases for videos or for preparing content quickly, but for quality results the slower normal generation seems inescapable.

  • @अघोर
    @अघोर 1 year ago

    We are living in a very interesting time. We have seen the birth and rise of AI. I do not wish it, but this may be the last era of humans.

    • @AI-HowTo
      @AI-HowTo  1 year ago

      If AI is not properly regulated and oriented to help all people, it will definitely be a big problem in the near future, especially what is now known as Artificial General Intelligence, which ChatGPT seems close to achieving.

  • @masterzed1
    @masterzed1 10 months ago

    You edited too much......