ComfyUI: IP Adapter Clothing Style (Tutorial)

  • Published 21 Dec 2024

COMMENTS • 82

  • @controlaltai  10 months ago  +4

    IP Adapter V2: I have tested the workflow and it works perfectly fine with v2. All that is required is to replace the IP Adapter node with IP Adapter Advanced (v2); all connections remain the same. Set the weight type to ease in-ease out and leave everything else at default.
    At 1:40: Note that the models listing changed after the latest ComfyUI / Manager update. Download both the ViT-H and ViT-bigG models from "Comfy Manager - Install Models - Search clipvision". Here is the chart of IP-Adapter models and their compatible CLIP Vision models (a small lookup sketch follows the chart).
    ip-adapter_sd15 - ViT-H
    ip-adapter_sd15_light - ViT-H
    ip-adapter-plus_sd15 - ViT-H
    ip-adapter-plus-face_sd15 - ViT-H
    ip-adapter-full-face_sd15 - ViT-H
    ip-adapter_sd15_vit-G - ViT-bigG
    ip-adapter_sdxl - ViT-bigG
    ip-adapter_sdxl_vit-h - ViT-H
    ip-adapter-plus_sdxl_vit-h - ViT-H
    ip-adapter-plus-face_sdxl_vit-h - ViT-H
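
    The chart maps one-to-one, so a tiny lookup table can catch a mismatched pair before ComfyUI errors out. A minimal sketch (the dictionary simply restates the chart above; the helper and its names are hypothetical, not part of any ComfyUI API):

        # IP-Adapter model -> required CLIP Vision (image encoder) variant,
        # transcribed from the chart above
        IPADAPTER_TO_CLIPVISION = {
            "ip-adapter_sd15": "ViT-H",
            "ip-adapter_sd15_light": "ViT-H",
            "ip-adapter-plus_sd15": "ViT-H",
            "ip-adapter-plus-face_sd15": "ViT-H",
            "ip-adapter-full-face_sd15": "ViT-H",
            "ip-adapter_sd15_vit-G": "ViT-bigG",
            "ip-adapter_sdxl": "ViT-bigG",
            "ip-adapter_sdxl_vit-h": "ViT-H",
            "ip-adapter-plus_sdxl_vit-h": "ViT-H",
            "ip-adapter-plus-face_sdxl_vit-h": "ViT-H",
        }

        def check_pair(ipadapter_name: str, clipvision_name: str) -> bool:
            """Return True if the chosen CLIP Vision file matches the IP-Adapter model."""
            required = IPADAPTER_TO_CLIPVISION.get(ipadapter_name)
            return required is not None and required in clipvision_name

        # e.g. check_pair("ip-adapter-plus_sd15", "CLIP-ViT-H-14-laion2B-s32B-b79K") -> True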

    • @ramondiaz5796  9 months ago

      Do I have to download all of these files, or only some of them?

    • @controlaltai  9 months ago

      For this workflow, if you are using an SD 1.5 checkpoint: ip-adapter-plus_sd15 (ViT-H).
      If you are using an SDXL checkpoint: ip-adapter-plus_sdxl_vit-h (ViT-H).

    • @ramondiaz5796  9 months ago

      I have these two downloaded; are they the ones you mentioned, or something different?
      CLIP-ViT-H-14-laion2B-s32B-b79K.safetensors [2.5GB] CLIPVision model (needed for IP-Adapter)
      CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors [3.69GB] CLIPVision model (needed for IP-Adapter)

    • @controlaltai  9 months ago

      They are both CLIP Vision models; the first one is the ViT-H. You need the matching IP-Adapter model as well.

    • @marilynlucas5128  8 months ago

      @@controlaltai Please make this workflow available for free to thank new subscribers.

  • @enriqueicm7341  11 months ago  +5

    I believe ComfyUI is the better option for AI image generation; your tutorials on it are what keep me subscribed. Thank you very much, they keep me studying and improving. Your channel is amazing!

    • @controlaltai  11 months ago  +3

      Yup, I agree 👍. I used to use A1111 before. Comfy was a bit hard in the beginning, but once I got used to it, I found it much more comprehensive and flexible. Performance is also way better in comparison to A1111.

    • @rod-me8ey  11 months ago  +1

      @@controlaltai I completely disagree. It is a true shame to see so many people wasting their time creating content for ComfyUI. I watched the development of ComfyUI from the start, and the point was to simplify A1111's workflow; it later turned into an endless mess of nodes and ridiculously large workflows that serve no purpose at all. A1111 plus basic knowledge of Adobe tools gives way more flexibility, simplicity and coherence than ComfyUI ever will. At some point more comprehensive tools will be developed, and ComfyUI won't be one of them, mark my words.

    • @luman1109  8 months ago

      @@rod-me8ey skill issue

  • @minimalfun  10 months ago

    This channel is very good and truly underrated. Malihe is really good and her videos are awesome!
    Thanks a lot for sharing so many great tips.

  • @BrunoBissig  11 months ago  +1

    Hi Malihe, great tutorial, well explained and very useful. Thank you!

  • @rbdesignguy  7 months ago

    This is a great workflow, and if you want to change only the clothing and not the face or background, simply don't upscale at the end, or just use Roop.

  • @ArielTavori  11 months ago

    Lots of great tips and new info, many thanks!

  • @daan3898  11 months ago

    Awesome workflow, great information.

  • @nodswal  10 months ago  +1

    I don't see the CLIP Vision models; any help would be appreciated, or a manual way of getting them. Thank you so much.

    • @controlaltai  10 months ago  +1

      They're available via Install Models (not Install Custom Nodes) in the main Comfy Manager UI; search "clipvision" there. If you cannot find them there, you can download them manually from the official GitHub page github.com/cubiq/ComfyUI_IPAdapter_plus#installation (they are called image encoders); see the sketch below.
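
      For a fully manual route, something like the following should work, assuming the encoders still live in the h94/IP-Adapter repository on Hugging Face as the IPAdapter_plus README describes (the repo id and file paths here are assumptions; check the README if they have moved):

          # a minimal sketch: fetch the two image encoders with huggingface_hub
          from huggingface_hub import hf_hub_download

          # ViT-H encoder (for most SD 1.5 and the *_vit-h SDXL IP-Adapter models)
          hf_hub_download(repo_id="h94/IP-Adapter",
                          filename="models/image_encoder/model.safetensors",  # assumed path
                          local_dir="ComfyUI/models/clip_vision")

          # ViT-bigG encoder (for ip-adapter_sdxl and ip-adapter_sd15_vit-G)
          hf_hub_download(repo_id="h94/IP-Adapter",
                          filename="sdxl_models/image_encoder/model.safetensors",  # assumed path
                          local_dir="ComfyUI/models/clip_vision")

      After downloading, rename the files to something distinguishable (e.g. ViT-H vs ViT-bigG) so you can tell them apart in the CLIP Vision loader.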

  • @dlep9221  11 months ago

    Very detailed job, very clear and efficient, thanks a lot.

  • @10186708  10 months ago

    Great video! Is it possible to keep all the details from the reference image, such as logos, text, patterns, etc.?

    • @controlaltai  10 months ago  +1

      Thanks!! Not accurately. AI is not there yet, but it will be soon.

  • @ysy69  8 months ago

    Hi, just wondering if you're planning to introduce a new version of this video using IP Adapter V2?

    • @controlaltai  8 months ago  +1

      Hi, I will test and post results; it should work with the same IP Adapter at the same level. If you are asking about an improvement in results, I have to test how capable it is. From what I understand it would not be that different, as nothing has changed in the way IP Adapter works in the back end. The way the nodes do things is probably more streamlined, and some new functions were added to the nodes, but the back-end IP Adapter always had those; the nodes were just not taking advantage of them. You can confirm this by going to the original IP Adapter repository and reading their model release changelogs.
      I will look at it and post an update here….

    • @controlaltai  8 months ago

      @l-jn1nt Do one thing: send me your workflow, as I am lost as to why you get black images. I will check and revert. Email: mail @ controlaltai . com (without spaces)

    • @controlaltai  8 months ago  +1

      Hi, I have tested the workflow and it works perfectly fine. All that is required is to replace the IP Adapter node with IP Adapter Advanced (v2); all connections remain the same. Weight type ease in-ease out, everything else default.
      To the person who is getting black images: there is a problem with your workflow, recheck everything. My workflow is working fine.

  • @vaibhavthakor928  10 months ago

    The upscaler changes the face and background too much. Is there any alternative you would suggest for preserving the face, other than a trained LoRA? Or maybe applying the upscaler only in the masked region while keeping the rest the same as before, but at high resolution?

    • @controlaltai  10 months ago

      Set denoising to 0.3 and the hook target to 0.1; this will preserve details. I went higher because of the clothing detail, but this should work. And yes, masking the face and upscaling works, but the setup would be different, using the Set Latent Noise Mask node.

    • @vaibhavthakor928  10 months ago

      @@controlaltai Thanks for the settings; it is much better than before now. Is a similar setup for masked upscaling available in any of your other videos? Or could you please describe how I can set it up here?

    • @controlaltai  10 months ago  +1

      No, that one is not there; it's different. Let me change these settings and create a new workflow. I will post the workflow link in the members' community area. I need some time; I will notify you here once it is posted.

    • @vaibhavthakor928  10 months ago

      @@controlaltai Thank you.

    • @controlaltai  10 months ago

      @@vaibhavthakor928 I found a way to upscale without changing the face using a single node. Can you tell me how much VRAM your system has? This method is excellent: it upscales everything but maintains the face. It is VRAM-intensive.

  • @moviecartoonworld4459  11 months ago

    Hello! I am always grateful for your wonderful lectures! I have one question.
    During the upscale process, an error occurred: "AttributeError: 'NoneType' object has no attribute 'shape'". Is there any solution?

    • @controlaltai  11 months ago

      Thank you!! Yes, I think I may know the issue. Make sure you use a non-inpainting checkpoint for upscaling; the inpainting checkpoint should be used only for the masking and stylized output. If you are already using a non-inpainting checkpoint for upscaling, let me know; I will need some further details.

    • @moviecartoonworld4459  11 months ago

      @@controlaltai Thank you! You are correct. I had made the mistake of using an inpainting model for the upscale-related model, which caused the error. I will study your lectures more carefully in the future. Thank you for letting me know!

  • @fintech1378  11 months ago

    Exactly the thing I'm looking for.

  • @squirrelhallowino29  11 months ago

    This is really cool. I find it very hard to work with anime art styles, though. Do you have any inpainting model ideas for anime? I'd really love for this to work with animated styles.

    • @controlaltai  11 months ago

      Thank you. Can you send me two anime images? I will have a look at them with the workflows and see if any modifications are required.

  • @digitus78  11 months ago

    This Segment Anything does not like CUDA for some reason. Even with xformers and CUDA alloc off it still reports a driver error, and only DINO/SEG triggers the error.

    • @controlaltai  11 months ago

      I ran this on a 4090 with CUDA; there was no mention of any specific version or requirements on the node's GitHub page. What version of CUDA do you have? And what's your VRAM?

  • @DDBM2023  10 months ago

    Hello, just a quick question: why don't you use an SDXL inpainting model? Is it supposed to be better than SD 1.5?

    • @controlaltai  10 months ago

      I was not getting results as good as with Realistic Vision 5.1. Suggest me some SDXL checkpoints and I will try them out.

  • @thebensimon  11 months ago

    Amazing tutorial, thanks! Is there a similar workflow for a video source as well?

    • @controlaltai  11 months ago

      I have not made any for video. I will probably make one in the future.

  • @jeffg4686  11 months ago

    This is cool! Very easy crops.
    Does this type of workflow work well on an M1?
    This is probably my main use case for it: compositing, essentially.

    • @controlaltai  11 months ago

      Thank you! Unfortunately I have no experience with Mac. All I can suggest is that you give it a try. Let me know the result if you do.

    • @jeffg4686  11 months ago

      @@controlaltai Gotcha, will do. I'll probably try it out within the next few days. I'm still coming up to speed, but this is a pretty amazing workflow. Thanks for the share.

    • @jeffg4686  9 months ago

      SAM is apparently a CUDA-only model :(
      I'll have to wait for now.
      Mine is a Mac Mini M1 8GB; it's not really built for AI...
      I'm waiting for the next set of computers with inference chips.

  • @fintech1378  11 months ago

    How can I use an image of clothing as the input? The fashion model should wear that exact garment. Is that possible?

    • @controlaltai  11 months ago

      That is not what the workflow does. The workflow is a style transfer for clothes, not a clothes swap. There are other tools in the fashion industry just for clothes swapping.

    • @fintech1378  11 months ago

      @@controlaltai Any possibility of doing such a video? Many people are asking.

    • @controlaltai  11 months ago

      I did research on this image-to-image clothes swapping. The problem is that it's not possible in ComfyUI unless someone comes up with a node, or a bunch of nodes, where it can be done.
      The simplest solution would be to use a clothing-trained LoRA and generate images from it.
      With third-party software I can expand/contract/change the hands and legs of the human and adjust the clothes.
      In Comfy it is static, so the AI has to take the clothes from a specifically posed model and put them here; however, it's not able to blend them with the current pose.
      Ultimately we are dealing with static images, so what's in the video is the best I could come up with.
      However, I will do more research on clothes swapping. If I get anything, I will make a video.

    • @Danish-x7z  9 months ago

      @@fintech1378 Yes, many people are looking for a clothes swap.
      It would be a superhit if we could make this in ComfyUI.

  • @princeyadav0350  4 months ago

    How much VRAM is needed? I have 16 GB; is it enough?

    • @controlaltai  4 months ago

      More than enough.....for the basic workflow.

  • @ysy69  11 months ago

    Great tutorial, thank you. Would this work for SDXL?

    • @controlaltai  11 months ago  +1

      Thank you & welcome. Yes, it should; however, I found more success with Realistic Vision 5.1 in comparison to XL. I did try some checkpoints and the results were underwhelming; maybe with proper LoRAs it would work. I say give it a shot, the workflow is robust. If the results are not desirable, switch to another checkpoint.

  • @awais6044  11 months ago

    Can we use Automatic1111 for this same work?

    • @controlaltai  11 months ago  +1

      Yes but not exactly like this.

    • @awais6044  11 months ago

      @@controlaltai Thanks for your time. 😊

  • @3dcafe100  10 months ago

    I tried to install the "WAS node pack"; it failed while installing. Then I tried a manual install, which also failed. Now Comfy is not working; it gets the error "ImportError: tokenizers>=0.14,

    • @controlaltai  10 months ago

      Did you delete the WAS node suite from custom nodes? First delete the folder, then start Comfy. If the error is still there, go to "ComfyUI_windows_portable\update" and run the file "update_comfyui_and_python_dependencies". It will tell you:
      "If you just want to update normally, close this and run update_comfyui.bat instead.
      -
      Press any key to continue . . ."
      Press any key and let it run. It takes time. That should fix the issue for you. Let me know.

  • @runebinder  10 months ago

    Great video, really like this workflow. Hopefully I'll be able to use it to create consistent outfits in D&D character pics, and hopefully it can handle armour :)

  • @SLAMINGKICKS  11 months ago

    I have two GPUs. How do I make sure ComfyUI is using the more powerful of the two NVIDIA cards?

    • @controlaltai  11 months ago

      Right-click "run_nvidia_gpu.bat" and open it in Notepad:
      ".\python_embeded\python.exe -s ComfyUI\main.py --cuda-device 0"
      If that's not the correct GPU, change the 0 to 1. The sketch below shows one way to check which index belongs to which card.
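
      If you are unsure which index maps to which card, a minimal sketch using PyTorch (run it with ComfyUI's embedded Python, e.g. python_embeded\python.exe, so it sees the same devices):

          # list CUDA devices as PyTorch numbers them; pass the winning
          # index to --cuda-device in run_nvidia_gpu.bat
          import torch

          for i in range(torch.cuda.device_count()):
              props = torch.cuda.get_device_properties(i)
              print(i, props.name, f"{props.total_memory / 1024**3:.1f} GB")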

    • @SLAMINGKICKS  11 months ago

      Thank you, I will try it. @@controlaltai

  • @matteobozzo4181  11 months ago

    Hi, I'm using a Mac (without a GPU) and I get this error: "Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU"... How can I fix it?

    • @controlaltai  11 months ago  +1

      Hi, I have zero experience with Mac. Do you have any command-line launch arguments to force it to use the CPU only, like --device cpu, or --cpu? The error suggests it is trying to use the GPU instead. There is no information given on the ComfyUI port of Segment Anything. The error message itself points at the fix; see the sketch below.
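
      For anyone hitting the same error outside ComfyUI: it comes from torch.load defaulting to the device the checkpoint was saved on. A minimal sketch of the remap the message asks for (the checkpoint file name here is just a placeholder):

          # load a CUDA-saved checkpoint onto a CPU-only machine
          import torch

          state = torch.load("sam_checkpoint.pth",              # placeholder path
                             map_location=torch.device("cpu"))  # remap CUDA tensors to CPU

      Inside ComfyUI you cannot easily edit the node's own torch.load call, which is why launching with a CPU-only flag is the more practical route.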

  • @goliat2606  11 months ago

    I generated a woman's face that I like very much. Is there any way to apply this face to the rest of the images I generate? ReActor and other face-replace nodes make the face very blurry when the image is bigger than 512x512.

    • @controlaltai  11 months ago

      Do it at 512, then do an image-to-image upscale.

    • @goliat2606  11 months ago

      @@controlaltai I am using the Juggernaut XL checkpoint. I generate a 1024x1024 image, then replace the face using the ReActor node. The face is already blurry at this step, and it is a little different from the source face image; for example, my "source face" is without makeup and the replaced face is with makeup. But if I upscale it using Upscale Latent By, the face reverts to a random face generated by the model.

    • @controlaltai  11 months ago

      This is a problem with the face-swap model. Normally I would suggest that after you generate the 1024 image, you use something like Topaz Photo AI to sharpen and enhance the face. That software is expensive, but it works. Within Comfy alone I don't see a solution, because the problem is not the checkpoint or the method; it is the face-swap model itself.
      Ermm, do one thing: can you email me the 1024 blurred-face image? Maybe I can create something on my end and see if I can sharpen it without adding details or changing the face. No harm in trying.

    • @goliat2606  11 months ago

      @@controlaltai Yes, I will send it to you after work :).

  • @warriortech6852  11 months ago

    Thanks, amazing tutorial. Please create a ComfyUI workflow tutorial on fixing hands in existing AI-generated images.

  • @___x__x_r___xa__x_____f______  11 months ago

    Would love it if you tried your hand at creating a workflow to, say, make a two-person portrait, with each person using a separate LoRA. Maybe regional LoRA using color masking and OpenPose? It seems to be a thing in Auto1111, but I can't get it to work with Impact nodes. Anyway, that would sell me on your sub.

    • @controlaltai  11 months ago  +1

      Ohh, this can be done. I will look into a regional LoRA tutorial and try to use your example in one of the workflows. Just give me some time; I will add it to the pipeline.

  • @___x__x_r___xa__x_____f______  11 months ago

    Also, how do I get in touch with you? Are you on Discord?

    • @controlaltai  11 months ago

      Yes, both of us are on Discord. You can take my ID: g.seth

  • @thienbao27071980  11 months ago

    Google Pay can't process payments in Vietnam.