IPAdapter Update: The Plus nodes should be replaced with the Advanced nodes. Only change the weight value and keep the weight type at Linear or Ease In-Out to get results consistent with the tutorial.
@ 5:57: Note that the models listing has changed after the latest ComfyUI / Manager update. Download both the ViT-H and ViT-bigG models from "Comfy Manager - Install Models - Search clipvision". Here is the chart of each IP-Adapter model with its compatible ClipVision model.
ip-adapter_sd15 - ViT-H
ip-adapter_sd15_light - ViT-H
ip-adapter-plus_sd15 - ViT-H
ip-adapter-plus-face_sd15 - ViT-H
ip-adapter-full-face_sd15 - ViT-H
ip-adapter_sd15_vit-G - ViT-bigG
ip-adapter_sdxl - ViT-bigG
ip-adapter_sdxl_vit-h - ViT-H
ip-adapter-plus_sdxl_vit-h - ViT-H
ip-adapter-plus-face_sdxl_vit-h - ViT-H
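For reference, the pairing chart above can be expressed as a small lookup table. This is only an illustrative sketch: the model names follow the chart, and the helper function name is hypothetical, not part of ComfyUI.

```python
# Hypothetical lookup table pairing each IP-Adapter model with its
# compatible ClipVision encoder, mirroring the chart above.
CLIP_VISION_FOR = {
    "ip-adapter_sd15": "ViT-H",
    "ip-adapter_sd15_light": "ViT-H",
    "ip-adapter-plus_sd15": "ViT-H",
    "ip-adapter-plus-face_sd15": "ViT-H",
    "ip-adapter-full-face_sd15": "ViT-H",
    "ip-adapter_sd15_vit-G": "ViT-bigG",
    "ip-adapter_sdxl": "ViT-bigG",
    "ip-adapter_sdxl_vit-h": "ViT-H",
    "ip-adapter-plus_sdxl_vit-h": "ViT-H",
    "ip-adapter-plus-face_sdxl_vit-h": "ViT-H",
}

def clip_vision_for(model_name: str) -> str:
    """Return the ClipVision model required by a given IP-Adapter model."""
    return CLIP_VISION_FOR[model_name]
```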
This is the best tutorial for the IP adapter which covers every single aspect. Loved it.
Thank you!
And here Ladies and gentlemen, we see the perfect example on how to make a good tutorial, thank you, excellent work!
Good tutorial. At 8:40 you can right-click on the Empty Latent node, convert width and height to inputs, and connect the Resolution node that way.
Thank you! I know, but I did not want to add Comfy Math as an extra requirement. I try to minimize custom node requirements as much as I can. 😃
Fantastic video. Your explanation and demonstration are clear and very helpful. Thank you for your contribution to helping the community learn more!
I know a bunch of comments have said it but, you nailed the hell out of this tutorial. Bravo.
great video. rather than just linking to a workflow you actually explained how and WHY it was set up like it is.
Amazing tutorial. So much value from the time invested to watch this.
Amazing tutorial on masking and the nuances of the different settings!
Thumbs up for showing the install directory
super well explained and also you have so much knowledge. thank you for sharing.
I didn't have an IPAdapter folder, so I created a folder called ipadapter in the models folder to put my downloaded models in. Don't forget to add the line --- folder_names_and_paths["ipadapter"] = ([os.path.join(models_dir, "ipadapter")], supported_pt_extensions) --- into your folder_paths.py file.
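For context, here is a minimal sketch of where that line sits inside ComfyUI's folder_paths.py. The `models_dir`, `supported_pt_extensions`, and `folder_names_and_paths` names already exist in that file; the values shown here are stand-ins for illustration only.

```python
import os

# Stand-in values; in the real folder_paths.py these are already defined.
models_dir = os.path.join("ComfyUI", "models")
supported_pt_extensions = {".ckpt", ".pt", ".bin", ".pth", ".safetensors"}
folder_names_and_paths = {}

# The line the comment above says to add: register the ipadapter folder
# so its model files are picked up by the loader nodes.
folder_names_and_paths["ipadapter"] = (
    [os.path.join(models_dir, "ipadapter")],
    supported_pt_extensions,
)
```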
thank you very much! most helpful video for IPAdapter!
Really good, as usual!
just perfect
thanks for your hard work
I tried attention masking again, similar to what you showed in this video (not the same, because of the IP Adapter update), but when I generated a wide horizontal image with a mask applied to the center, I only got borders on the sides and the background didn't expand to fill the entire image. Has this technique stopped working after an update, or could there be a mistake in my node setup? Would you mind checking this for me? 10:13
Sure email me the workflow, I will have a look. mail @ controlaltai . com (without spaces)
@@controlaltai Sorry, I was using an anime model (anima pencil), which is why it only output images with the background cropped out. When I switched to Juggernaut it worked correctly! Sorry for the hasty comment, and thank you for going out of your way to provide your email and offering to help.
Hi... I have a question. You show 3 images that you combine into one using the colors red, green and blue. What if you have 5 portraits and one background? How do you make that into one image?
Haven't tested this. Three images were difficult enough; these models do have limitations. To answer your question, I really don't know, since I never went beyond three. I would have to actually run tests, which would be time consuming, to check feasibility and consistency.
Can't seem to find that SDXL Resolution node in the Math dropdown, which I also don't seem to have.
Can you give me the timestamp in the video for reference? Then I can tell you how to get it.
Where did you find all those controlnet preprocessors? Specifically the AnimalPosePreprocessor? Great vid btw.
Thanks!! Search for ControlNet Auxiliary Preprocessors in the Manager and install the one from Fannovel16. All the preprocessors come with it by default.
I need help. I get an error when working with an SDXL checkpoint: a RuntimeError at the KSampler.
It shows: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half, key.dtype: float and value.dtype: float instead.
Do you have a GTX 1080? This issue happens when operations are received in different floating-point precisions. Try running Comfy in CPU mode; you will not get the error there. If that is the case, there is some issue with the GPU and Comfy configuration.
@@controlaltai Oh yeah, you are right, it works with CPU but takes very, very long to generate. Damn, you helped me for the second time, I thank you very much :)
@@controlaltai
I want to ask you about some errors I get with ComfyUI. It has nothing to do with this video, but maybe you can help me:
1. Working with Get Sigma (from ComfyUI Essentials) it shows this error:
Error occurred when executing BNK_GetSigma:
'SDXL' object has no attribute 'get_model_object'
2. Working with ReActor I get this:
Error occurred when executing ReActorFaceSwap: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
I can't find the IPADAPTER folder.
ComfyUI\models\ipadapter - if there is none, create it, or install any IP-Adapter model from the Manager and the folder will be created automatically.
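Creating the folder by hand works too. A minimal sketch, assuming you run it from the directory that contains the ComfyUI install (the relative path is an assumption about your layout):

```python
import os

# Create the ipadapter models folder described in the reply above,
# if it does not exist yet; exist_ok makes the call idempotent.
ipadapter_dir = os.path.join("ComfyUI", "models", "ipadapter")
os.makedirs(ipadapter_dir, exist_ok=True)
```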
clip vision isn't showing up in my search within the manager. was it taken down?
Are you searching in Install Custom Nodes or Install Models? It won't show up in Install Custom Nodes.
I bet that's it! Thank you. @@controlaltai
Thank you. I downloaded the IP-Adapter model but I can't find ClipVision. Love.
@ 5:57: Note that the models listing has changed after the latest ComfyUI / Manager update. Download both the ViT-H and ViT-bigG models from "Comfy Manager - Install Models - Search clipvision". Here is the chart of each IP-Adapter model with its compatible ClipVision model.
ip-adapter_sd15 - ViT-H
ip-adapter_sd15_light - ViT-H
ip-adapter-plus_sd15 - ViT-H
ip-adapter-plus-face_sd15 - ViT-H
ip-adapter-full-face_sd15 - ViT-H
ip-adapter_sd15_vit-G - ViT-bigG
ip-adapter_sdxl - ViT-bigG
ip-adapter_sdxl_vit-h - ViT-H
ip-adapter-plus_sdxl_vit-h - ViT-H
ip-adapter-plus-face_sdxl_vit-h - ViT-H
My IPAdapter didn't have "plus" in it. I did exactly as you showed me. Will there be any performance change? It only says "Apply IPAdapter" in the title box; it didn't say ComfyUI_IPAdapter_plus. What am I doing wrong?
You can go here and download them: huggingface.co/h94/IP-Adapter/tree/main
They go in this folder: ComfyUI_windows_portable\ComfyUI\models\ipadapter
Would you recommend using this method to put custom clothes on an AI-generated human? Will it work without changing the cloth details?
That tech - 100% accurate clothes transfer - doesn't exist as of now. You can try this workflow method; it's the closest you can get to changing outfits from existing images: ua-cam.com/video/YG6oif_nEGk/v-deo.html
Your videos are interesting and useful, but could you turn the music down? Thanks for your work. 🙂
Thanks! The music is at 5%. There is a UA-cam setting called Stable Volume; it's enabled by default and makes the music unnecessarily loud when I'm not speaking. Turning it off gives the original recorded level.
Lol I love the music, thanks for the great tutorials!
Every ComfyUI tutorial loses me when the creator starts connecting nodes left and right and adding them all over the screen. It aggravates my linear thinking. Good information though!
Hello. Congratulations on the channel.
I have a problem. I get an error in the BLIP Analyze Image node. It turns pink and gives me a series of errors. At Load Checkpoint I have the SDXL model, and the loader also has SDXL.
I hope you can guide me. Thank you so much.
Hi, Thank You!! Check the following please:
1. The model_base_capfilt_large.pth file should be located in "ComfyUI\models\blip\checkpoints". If not, you can download it from here: storage.googleapis.com/sfr-vision-language-research/BLIP/models/model_base_capfilt_large.pth
2. Go to ComfyUI\custom_nodes\was-node-suite-comfyui and click on install.bat
3. The compatible transformers version is transformers==4.26.1. Anything higher, if installed, will give an error.
If you still get an error let me know.
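The steps above boil down to one file in the right place and one exact version pin. A small sketch of a pre-flight check, assuming a portable ComfyUI layout; the helper names here are hypothetical, not part of WAS Suite:

```python
import os

# Exact transformers version the reply above says WAS Suite's BLIP needs.
REQUIRED = (4, 26, 1)

# Expected location of the BLIP checkpoint, relative to the ComfyUI root.
BLIP_CKPT = os.path.join(
    "ComfyUI", "models", "blip", "checkpoints", "model_base_capfilt_large.pth"
)

def transformers_ok(installed: str) -> bool:
    """True only when the installed version string matches the 4.26.1 pin exactly."""
    return tuple(int(p) for p in installed.split(".")[:3]) == REQUIRED

def blip_checkpoint_present(root: str = ".") -> bool:
    """Check that the BLIP checkpoint file exists under the given ComfyUI root."""
    return os.path.isfile(os.path.join(root, BLIP_CKPT))
```

If `transformers_ok` fails with a higher version (as in the pip log below, 4.35.2), that mismatch is exactly what produces the pink node.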
I already had that file in the folder. Then I ran install.bat as you told me and it gave me this error:
WARNING: The script f2py.exe is installed in 'C:\ESPACIO LIBRE\Herramientas IA\ComfyUI_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Attempting uninstall: transformers
Found existing installation: transformers 4.35.2
Uninstalling transformers-4.35.2:
Successfully uninstalled transformers-4.35.2
WARNING: The script transformers-cli.exe is installed in 'C:\ESPACIO LIBRE\Herramientas IA\ComfyUI_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\Scripts' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
Attempting uninstall: scikit-image
Found existing installation: scikit-image 0.22.0
Uninstalling scikit-image-0.22.0:
Successfully uninstalled scikit-image-0.22.0
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
clip-interrogator 0.6.0 requires transformers>=4.27.1, but you have transformers 4.26.1 which is incompatible.
Successfully installed PyWavelets-1.5.0 numpy-1.24.4 scikit-image-0.20.0 tokenizers-0.13.3 transformers-4.26.1
[notice] A new release of pip is available: 23.3.1 -> 23.3.2
[notice] To update, run: C:\ESPACIO LIBRE\Herramientas IA\ComfyUI_cu121_or_cpu\ComfyUI_windows_portable\python_embeded\python.exe -m pip install --upgrade pip
Press a key to continue . . . @@controlaltai
The clip-interrogator version you have is not compatible with that transformers version. You have a higher version of transformers than what BLIP requires. I did a fresh install and tried WAS Suite; by default transformers is just not installed, and running install.bat fixed it for me. Second, your Scripts folder in the Comfy main Python folder is not added to PATH (About PC - Advanced system settings - Environment Variables). You have to add it there manually and try running the file again.
Some custom node you installed earlier must have installed clip-interrogator and its own compatible transformers version, or there was a manual install of the same.
Do you have conda installed with some other tools using transformers or clip interrogator?
Try downloading a separate copy of Comfy and setting up that new copy with WAS Suite installed.
Also upgrade your pip version. Let me know if you are able to fix this.
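Before editing the environment variables, you can verify whether the Scripts folder from the pip warning is actually missing from PATH. A hedged sketch; the helper name is hypothetical:

```python
import os

def on_path(directory: str) -> bool:
    """Check whether a directory is already listed in the PATH variable,
    comparing case-insensitively on Windows via os.path.normcase."""
    entries = os.environ.get("PATH", "").split(os.pathsep)
    return os.path.normcase(directory) in (os.path.normcase(e) for e in entries)
```

Run it against the exact `python_embeded\Scripts` path that pip printed; if it returns False, that directory is the one to add in Environment Variables.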
I tried to follow the steps when using the RGB mask, but it turns out with a really messy result. Don't know what's wrong.
Check the order and the prompt (include the subjects in the prompt in order); it's highly sensitive. Keep the left color first, the right color second and the background last. Secondly, add 0.3 to 0.5 noise and reduce the IP-Adapter weight as well. Also try applying them in steps: the first IP-Adapter for x steps, the second until x, and lastly the third.
Actually, I followed all the key points you mentioned in the video - order, prompt, pure 0,0,255 colors, noise, weights exactly like yours - but the result doesn't get even close. I always get twisted cats and dogs. I tried to play with the parameters, but so far it doesn't work for me. My nodes and ComfyUI are updated to the latest. Thanks! @@controlaltai
Can you mail me the workflow and the images? I'd like to have a look and see why it is giving such outputs. Email is mail @ controlaltai . com (without spaces).
so beautiful pleasant voice
Thank you, but that's not my real voice. It's an AI voice. It sounds better for presentation-style videos.
Awesomely explained. Use your real voice instead of this.
JSON File (UA-cam Membership): www.youtube.com/@controlaltai...
?? And where is the file itself?)))
In the members post on UA-cam. Only channel members can see it - under the Members tab.
And how do I become a member? I'm subscribed to you.
@@controlaltai
Membership is a paid feature of UA-cam. You can join here: ua-cam.com/channels/gDNws07qS4twPydBatuugw.htmljoin
Apply IPAdapter - there is no such node.
Search for IPAdapter Advanced. The developer of the node changed everything; the node names are different now.
@@controlaltai Thanks!
Where do I put ip-adapter-plus_sdxl_vit-h.bin?
Firstly, it should be a safetensors file, not .bin. And it goes here: ComfyUI\models\ipadapter
@@controlaltai Thanks for replying! I figured it out.