Dear all, due to the breaking changes in IPAdapter, some of the nodes used here are no longer available. I have updated the workflow to include the changes accordingly and also made some small changes to the workflow. Please download 'V2_ipadapter_face_clothes_controlnet.json' from the workflow link. Thank you!
Hello, for this workflow I’m getting the background of the clothes upload instead of the background from my positive prompt… Not sure what to do about it. The clothes portion is coming out great, so much appreciation for that! 🎉
Hi, I'm loading your workflow version 2 but the following error is appearing: Error occurred when executing IPAdapterTiled: insightface model is required for FaceID models File "F:\Confy\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\Confy\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\Confy\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\Confy\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 957, in apply_tiled model, _ = ipadapter_execute(model, ipadapter_model, clip_vision, **ipa_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\Confy\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 191, in ipadapter_execute raise Exception("insightface model is required for FaceID models")
Just wanted to comment on how much I appreciate your step by step process and explanations through the video and your help in the comment section! It seems with a number of the other creators in the space they create walk throughs which assume you already know a lot of the necessary setup and steps that novices simply don't know. Keep up the great work!
Dear all, I have made a mistake in understanding the attention mask element in the IPAdapter node. I thought it was used to increase the face weight but it was actually used for the positioning of the final output in the image. You could omit the step of masking out the head of the model in the 'Image Preprocessing' step. As for the clothes, you should still keep it only if you want the final output region to be the same as that of the input image.
just gonna say it here, you know you spend to much time in comfyui when you keep trying to click drag the screen up love your content this stuff is a massive help to get things working
I created a little workspace save state add-on for comfy UI. You can find it in the comfy UI manager. I called it multi-workspaces because that's what the original intent was, but as it sort of carried on, eventually just became save states. Still working on a concept for multiple work spaces, the idea would be maybe to make them work together and save in the same JSON blob. You should go check it out.
Little bit upset i spent all day installing everything needed just to have errors thrown at me "assertion error this and assertion that.." Also have scrolled through comments and realise that it won't replicate a face i have already created which is a bummer.. Great video dude, you've done well explaining everything and explaining it well. Very patient and precise tut. Hopefully i can get this mess sorted out but for now i'll have to stick with fooocus. Incredible job you've do for us. Thank you!
Oh no, hope you can solve the errors when you try again :) If you are using a human face that was created by AI, it usually replicates well in terms of likeliness, it only does not perform well if you are looking for a 100% face copy or if we use real photos of ourselves. (Unless you look like a celebrity...)
@@DataLeveling Yeah, i'm using one i created with insightfaceswap in discord. So in saying that I'll try figure it out and post the fix just in case anyone else is having the same issue.
This tutorial is excellent! The instructions provided are crystal clear, and the thorough explanations render it effortlessly comprehensible. Your efforts are truly commendable! 👍
When i type winter, cafe in the prompt, it only gives me a coffee shop but no human in it. Curious to know why it works in your case ? I'm on arm architecture (M2)
Thanks for this man . how to have exact clothing? also the face from the first checkpoint will completely change when changing to new check point in step 2
I cant do anything with IPAdapters, keep getting error after error and I've done everything, made sure models are in the right directory, made sure to use sdxl ipadapter models for sdxl models, there just seems to be nothing I can do; can anyone help. (Error while deserializing header: MetadataIncompleteBuffer)
Usually to debug errors I search google for the error message and my platform to see if there are any fixes posted on reddit or one of the other AI hangouts
Thanks for the video. However, the results are far from desirable for commercial uses mainly because the generated cloth deviate too much from the original. I think a ControlNet Tile inpainting workflow might yield more stable results. But I could be wrong. Thanks again.
can i know is it only the first step face generation use SD or all the steps use SD model? i wonder can i use a uploaded face to run the workflow where i dont have a SD model running locally as the hw requirement is high.
I am having a problem . when I try to add a new node "prepare image for insightface" i am not getting this node. I watched your video of 2min on insightface wheel.. did the installation .. Restarted comfyUI but still not getting the insight face node
@DataLeveling Hello, I've downloaded the V2 ipadapter workflow you've provided in the description. But I'm getting error in Ksampler. And getting memory error. Can your please help me out on this?
amazing, can you make a comfy UI guide? i love your voice and style of explanation. PLEASE!-like how to install, how to make flows, how to install plugins, best General settings,etc. PLEASE!
I will try to fit one in my timeline when I can :) but for now you can refer to this video: ua-cam.com/video/_C7kR2TFIX0/v-deo.html His explanation is very good.
Hey man, thanks for the great tutorial. I was using your workflow but u have removed the masks and some other things, are they necessary or its fine without them as well
Han, thank you so much for your videos. They are very helpful and your workflow have help me immensely. I'm trying to see how to improve the upscaling for skin details at 4k and 8k. Could you help me?
Hi Han, thanks for the video! I received the following message at IPAdapter FaceID, Error occurred when executing IPAdapterFaceID: expected scalar type Float but found Half File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) May I know how to resolve this?
Error occurred when executing IPAdapterTiled: insightface model is required for FaceID models; -> Maybe because the IPAdapterTiled not does not have insightfacemodel docker like in the IPAdapter FaceID?
thanks for the tutorial. 1 problem I encoutered was controlnet. IpAdaptor works fine to keep the face consistant. But once i connected controlnet, the face changed into a very different generic one. I tried to use the same workflow you uploaded. Does it have anything to do with the aspect ratio of the controlnet source image? or the controlnet model?
great video, great content, subscribed and liked! easy to follow along, thanks for the awesome content! one question: everything was working fine until i added the pose and now the face is hard to keep consistent. any tips?
Hi, yes as we add more elements like controlnet / extra ipadapters, the faceid ipadapter would become harder to maintain consistency. Could try lowering the weights of the controlnet / increasing the weights of the faceid ipadapter. Requires a bit of trial and error to find that sweet spot!
Thanks for that! Nice and working workflow with easy explanation :) Do you know, is there a way how to use IPAdapter and create image with not a direct looking face? Something like "looking over her shoulder" or from sice or something like that. When I tried add it to the prompt it always put the direct face there, so it looks like the character has a broken neck completely :(
Hi, do i just import my own clothes image into your new ipv2 workflow and dont require to mask like you do in the video? and the settings and presets are already selected by you right? thanks !
Hi all, (IMPORT FAILED) UltimateSDUpscale im facing this issue and tried , installing it again , reinstalling , tried fix . Its not installing still. do let me know how to fix this. thanks in advance
Thanks a lot, really helpful! Just a question... why to use a sd15 LoRA on a SDXL workflow? There is ip-adapter-faceid-plusv2_sdxl_lora.safetensors available... (instead using ip-adapter-faceid-plus_sd15_lora.safetensors)?
Hihi, I only used sd15 lora when I changed the ckpt model to sd15, as this workflow is designed to be able to use sd15/sdxl. So when we use sdxl ckpt, we will use ip-adapter-faceid-plusv2_sdxl_lora.safetensors w sdxl controlnet and when using sd15 ckpt we will use ip-adapter-faceid-plusv2_sd15_lora.safetensors w sd15 controlnet.
Fantastic tutorial ,great content and video mate,subbed ! Say,is it possible to use and existing face that I have to generate new faces here ? Any comment would be appreciated !
Do you have a discord server for the channel/ this topic? I think a lot of people would love it. I’m trying to pursue this right now and most stable diffusion forums hate on you for trying it.
hello there, first thank you for your work. In your video description you posted alink to 'vit_H' model at 'h94> IP-Adapter>models>image_encoder>model.safetensors - my question is, where to put and apply this model? the only models with 'vit_H' in your workflow I see 1 - ip-adapter-plus-face_sdxl_vit-h and 2 - CLIP-ViT-H-14-laion2B-s32B-b79K -> which both are different from it, so I am a little bit confused. Can you please help me figure out this mess?
Hi, no problem. The vit_h model in my description is to be placed in the models/clip_vision folder. I did not mention that as in the video, I used the comfyUI manager to install that model and it will be automatically placed there. The vit_h model in my description is the same as CLIP-ViT-H-14-laion2B-s32B-b79K. As for 'ip-adapter-plus-face_sdxl_vit-h', the model is to be placed in the models/ipadapter folder.
@@DataLeveling Thank you. Allthought I have comfy manager, still I rather download and place all models (just models, not nodes) manualy from HF... my bad habit, so.. thank you again!
can anyone help me I got this error : Sampler : have the same dtype, but got query.dtype: float key.dtype: struct c10::Half and value.dtype: struct c10::Half instead.
When loading the graph, the following node types were not found: InsightFaceLoader IPAdapterApply PrepImageForInsightFace IPAdapterApplyFaceID Nodes that have failed to load will show as red on the graph.
Hello! After following the tutorials from your videos, i can't seem to find the ipadapter in the add node part. Should i paste the checkpoints, loras, ipadapter files on the different model folder you indicated on your past video (separate from the original models folder)?
Hi, if you don't see the ipadapter nodes after installing it from the ComfyUI Manager,you have to restart your ComfyUI and it should be there. You should use the default models folder if you are not sharing the models across different UIs. :)
Hi Data Leveling. I work on colab, have fixed the insightface-issue, but i am struggeling on that: Error occurred when executing IPAdapterApplyFaceID: mat1 and mat2 shapes cannot be multiplied (257x1664 and 1280x2048). Ive changed the dimensions several times and also the checkpoint-models, but with no results. Do you have any suggestion on that? Thanks Bruno
@@DataLeveling Hi Data Leveling. I got it to work finaly, it was the wrong controlnet model, now with control-lora-openposeXL2-rank256.safetensors it works! Thank you.
This video is great! I'm having a problem though. I cant get a full body shot and there is always some kind of object blocking her at the base of the generation, sort of like shes hiding behind it. any thoughts? I'm using sdxl.
Hi, the secret lies in the IPAdapter clothes, if you use a full body fashion model picture and mask the entire body, and add 'full body shot' to the prompt with a portrait aspect ratio, it will work! However, do note that for full body shots, changing the pose is a little bit challenging to maintain the clothing style.
Hi, if you are using the IPAdapter clothes one, the produced output usually follows the shape of the model. You could also use other controlnet models like DensePose to control the body shape of the final output.
Hi, yes I have seen another with the same issue when loading my workflow, but it works for majority of others. I am guessing it is perhaps a ComfyUI / ComfyUI Manager ver difference, could you try updating it to the latest version? (I have tried loading my workflow from a laptop and it works)
hello sir, i am not able to find 'prep for insightface' menu in ipadapter menu of add node ? whereas i installed insightface from your previous video which you said to follow before starting this. PLZ HELP . 🙏
Hi sir, yes the dev removed it from the repository in the breaking update. It would work just fine without it if your face is in the center of the image :) All that node does is add a white padding and crop center to the image.
@@DataLeveling Sir im getting this error while in Clip Vision loader node : Error occurred when executing CLIPVisionLoader: 'NoneType' object has no attribute 'lower' File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI odes.py", line 865, in load_clip clip_vision = comfy.clip_vision.load(clip_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI\comfy\clip_vision.py", line 113, in load sd = load_torch_file(ckpt_path) ^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 13, in load_torch_file if ckpt.lower().endswith(".safetensors"):
@@sadikrizvi6468 Hey sorry for the late response, it seems to be an error where your clip vision model is not detected correctly, make sure to choose the right clip vision :)
Hi, yes you can do that with another ipadapter for the background, you can check out an example from this video: ua-cam.com/video/vqG1VXKteQg/v-deo.html
Perhaps training a diffusion model in the same way that Amazon did to change the entire outfit without altering the details could help with this process. Are you able to training a model like that just by reading the paper they made available?
@@DataLeveling Diffuse to Choose: Enriching Image Conditioned Inpainting in Latent Diffusion Models for Virtual Try-All. It is the article name on hugging face. I'm trying to search for a code to see if it is possible to do the same on comfy ui
I love this workflow, but I'm now trying it with a picture of myself, so I have bypassed the get face, and just uploaded a picture of myself into the "image preprocessing" area, also bypassed the clothing and control net, and can't get the result to look like me, it's not matching my face very well, at all. I know sometimes AI faces are much easier to duplicate, is that just the case here? or any hints otherwise?
Hi, yes I believe that might be a limitation, as these models are trained on celebrity/human dataset faces and if your face is too different from those datasets, it will only guide the model to its closest approximation. I have also tested with their latest Portrait model and it is slightly better but still far from satisfactory. If you want your face in the images, maybe can try to create the vision you have of your image, then use reactor face swap to swap it in and upscale the image. This method gave me the best results out of all that involves my own face.
Hey man, i run to this error when i try to use ip adapter with sdxl ( Error occurred when executing IPAdapterApply: Error(s) in loading state_dict for ImageProjModel: copying a param with shape torch.Size([8192, 1024]) from checkpoint, the shape in current model is torch.Size([5120, 1024]) ) i have tried with both sd1.5 and sdxl encoder, same error. Love your videos btw
It’s super cool thank you!! But im still facing an issue tho, i can't see " Load InsightFace " node from the insightface module of the " Apply IPadapter FaceID " and im running on python 3.10.11. If anyone got an update or solution i'd HIGHLY appreciate, thanks a lot!
Hello, could you try to update both ComfyUI and IPAdapter to the latest version from the ComfyUI manager. If you are not using the manager, then run update_comfyui.bat and for IPAdapter, go to custom_nodes/ComfyUI_IPAdapter_plus, right click a blank area, select 'Open In Terminal' -> 'git pull' Hope this helps!
Can this be combined with your newest video load images from batch so , I have a variety of posses or a varity of outfits or a variety of faces or possibly I can do 2 or maybe more of above in one workflow
We could, but to a certain extent as IPAdapter accepts batches of images differently, if you want to iterate through a variety of faces, have to use the ImageList. I have tried with Variety of poses + Either Variety of Faces or Outfits, could only choose 1 else your vram will explode haha.
@@DataLeveling thanks for the response was thinking about over night generation or bulk batch of mupitle influences in mupitle posses backgrounds or Mupitle faces & outfits, each face will have 5 or more mupitle generation per a face No need to respond
Hi, you could try to chain up 3-5 images of the human in different facial expressions in a batch for the IPAdapter (using their new portrait model), then the facial expression in the output can be more flexible when using the prompt.
This is Gold! I have a doubt, I have seen a lot of consistent face tutorials, but what about a consistent background? is it possible? like, the same background with the model in different poses.
Hi, I think if you want a fixed background, you could use a background image as the latent image and only allow inpainting to the masked area of that image.
@@DataLeveling I want to improve consistency of clothes, Can you please suggest what shall I do. I can inpaint face later but importance of copying clothes exactly is priority for me.
@@Prince.Dhankhar Hi, I have not made a video on this yet, but you could try this out on your own first :) There is a project called OOTDiffusion that might be what you are looking for. The ComfyUI implementation is here: github.com/AuroBit/ComfyUI-OOTDiffusion It would be better to install this manually instead of ComfyUI Manager as you will have to perform a branch switch from the repository. The steps can be found here: github.com/AuroBit/ComfyUI-OOTDiffusion/issues/27
how much of a image generation cost in terms of s/it or it/s should one expect to see from this? I am taking minutes to generate images with this flow when I can do "normal" generations in seconds.
Mine takes around 90s for initial load, but subsequent regeneration is around 10s I believe. My specs are the following: 24GB vram, 24core cpu, 32GB ram
Error occurred when executing KSampler: mat1 and mat2 shapes cannot be multiplied (462x2048 and 768x320)~~~ I ran exactly the same as you, and got this message from the sampling - ksampler side.
Hi, make sure the controlnet model version you use is same as your checkpoint model version. That error usually means one is using sd1.5 and one is using sdxl.
@@DataLeveling checkpoint-Juggernaut XL v7 /ip-adapter-faceid-plusv2_sdxl.bin , plusface_sdxl_vit-h.safetensors/ Error occurred when executing IPAdapterModelLoader: PytorchStreamReader failed reading zip archive: failed finding central directory This time, we get this error messageI have the exact same settings as you in the video, but it's not working.I'm running GoogleColab and Google Drive together.
@@lilillllii246 Hmm I have never seen this error message before, but you could try this solution: stackoverflow.com/questions/71617570/pytorchstreamreader-failed-reading-zip-archive-failed-finding-central-directory
Should be enough if you do not have many heavy graphics running on the background. Mine is 24gb but it utilizes 30% from background and goes up to 80% when running the workflow, so the usage is estimated to be around 10-12gb vram on sdxl.
Hi, I'm not sure I understand your question correctly, do you mean to inpaint a segment of a face from A and merge it with a body of B OR use A face in faceid and B body in ipadapter.
@@offmybach If its the latter, then when using the workflow, take the picture of B, mask the head region and send it to the ipadapter faceid attn mask. And for the B's ipadapter, only mask the body region and send it to the ipadapter attn mask. Hope this helps!
Thanks for the video -- good information, but please don't leave the same music loop repeating over and over and over even while you are talking. It became too annoying to continue concentrating on what you were saying with that tick, tick, tick, under your voice.
@@ukaszgwizdaa5712 Previously there was also another comment with the same issue but we couldn't fix it.. not too sure what went wrong either. I just tried on my laptop and it loads up properly..
Hi, yes unfortunately the dev removed the node from the repository. It should work just fine too if your face is cropped nicely at the center of the image.
sadly it didnt... i clicked on "load" and then on the downloaded json file, nothing appears... maybe i downloaded it wrong? I went to the link below this video on github and saved the json file there into my comfyuiportable folder as "somename.json" @@DataLeveling
loading it via menu didnt work either... i think i will just have to manually replicate what you did in the video. it is weird though... first time I have had this issue... @@DataLeveling
You can try that by masking the dress, invert the mask to select everything but the dress, use that inverted mask as the latent image then sample it with a face you want. If the image looks okay, then use controlnet to change the pose on a separate sampling, but this will take more steps and requires a lot of trial and error.
Which GPU are you using ? I have an RTX 2060 ( 6GB VRAM) and I'm getting an Out of Memory issue once I add the IpAdapter Clothes and I run it. Any ideas on how to solve this ? Thanks
Hi, I am using a RTX 4090 (32GB VRAM). 6GB VRAM might not be sufficient for SDXL, maybe could try using the 1.5 versions. For the clothes, if the image is too big, you might want to downscale it to 520 pixels using the UpscaleImage node.
@@DataLevelingThanks for the reply. SDXL can run fine on my GPU it's just adding multiple IPAdapeter nodes that loads a lot of RAM on my GPU. Do you know any way to counter that ? Everything before the Clothes part worked just fine.
@@MrDonald911 I see, alternatively, you could run once to get the face in a model. Then run again but this time we use inpainting, load that image as the latent for the sampler while only masking the body region and bypass the faceid nodes. Also remember to send the masked photo to the Apply IPAdapter node attn mask. Hope that helps!
I am so confused. When you cover something with the mask I was under the impression that means you do NOT want that to be used. Example for your jacket. If you masked the jacket in black that means you do not want the jacket. (I am asking). That lost me. To make it worse I was doing the opposite and it was working.
Hi, for IPAdapters and most functions, if you mask the area it means you want only that portion to be used. If want everything except for the masked area, then you could use an invert mask node to reverse it and feed the inverted mask. I think what happened on your end may be due to the text prompts or other factors. You can learn more about attention masking from this video: ua-cam.com/video/vqG1VXKteQg/v-deo.html
@@DataLeveling actually, it does not masking the face, insightface is doing it, masking is for the image that u will generate, it is defining where the ip adapter would apply at generated img
OHHH damn, I always thought the masked region would give more weight to the model because of the attention keyword, will clarify that in a pinned comment. Thanks a lot for clarifying! :) @@cemal6950
Hard to tell from your comfyi UI video, but IPAdapter based face transfer looks a lot more like a head cut/paste job in photoshop (wrong size head, lighting not transfer correctly) than it does an actual photo, or even a generative image?
Hii haha does the head size looks weird to you? It looks okay to me! Maybe only for those that uses DWpose estimation, the head looks a little off as my controlnet reference image is too zoomed in, should have cropped more parts of the body. For the lightning, I think it could be tuned further with using precise text prompts as I am using a simple one for a base line demo :)
I am getting an error message of load clip vision and i don't know what to do please help me Edit- after doing so much research i saw that I haven't put any clip vision file in it 😂😂
INFO: InsightFace detection resolution lowered to (512, 512). ERROR:root:!!! Exception during processing !!! ERROR:root:Traceback (most recent call last): File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 155, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 85, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 78, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 636, in apply_ipadapter self.ipadapter = IPAdapter( ^^^^^^^^^^ File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 272, in __init__ self.image_proj_model.load_state_dict(ipadapter_model["image_proj"]) File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch n\modules\module.py", line 2152, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}: \t{}'.format( RuntimeError: Error(s) in loading state_dict for ProjModelFaceIdPlus: size mismatch for proj.2.weight: copying a param with shape torch.Size([8192, 1024]) from checkpoint, the shape in current model is torch.Size([5120, 1024]). size mismatch for proj.2.bias: copying a param with shape torch.Size([8192]) from checkpoint, the shape in current model is torch.Size([5120]). size mismatch for norm.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for norm.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.proj_in.weight: copying a param with shape torch.Size([2048, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for perceiver_resampler.proj_in.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.proj_out.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for perceiver_resampler.proj_out.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.norm_out.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.norm_out.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.layers.0.0.norm1.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.layers.0.0.norm1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.layers.0.0.norm2.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.layers.0.0.norm2.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.layers.0.0.to_q.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for perceiver_resampler.layers.0.0.to_kv.weight: copying a param with shape torch.Size([4096, 2048]) from checkpoint, the shape in current model is torch.Size([2560, 1280]). size mismatch for perceiver_resampler.layers.0.0.to_out.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1280, 1280]). size mismatch for perceiver_resampler.layers.0.1.0.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.layers.0.1.0.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.layers.0.1.1.weight: copying a param with shape torch.Size([8192, 2048]) from checkpoint, the shape in current model is torch.Size([5120, 1280]). size mismatch for perceiver_resampler.layers.0.1.3.weight: copying a param with shape torch.Size([2048, 8192]) from checkpoint, the shape in current model is torch.Size([1280, 5120]). Hello, the above error occurred. I changed the Load lPAdapter Model to ip-adapter-faceid-plusv2_sd15.bin. It worked. What is ip-adapter-faceid-plusv2_sdxl.bin Cannot run?
I can't be sure, but I think some model mismatch happened between SDXL and SD1.5. Just make sure to check that if you are using SD1.5, these must also use the 1.5 version: 1. checkpoint model 2. lora model 3. ipadapter faceid 4. ipadapter 5. controlnet model And if you want to switch to SDXL, all of the above have to changed to SDXL version as well.
As someone who works daily with AI and SD image generation processes like you, would you say this method is like the current state of the art in creating consistent AI characters, while controlling their pose and clothing? Is there also a method to get a consistent location/ environment for the character? Thanks btw, this is the best comfyui tutorial video I have watched for the topic addressed, keep up the good work!
As for creating a consistent face, all the methods are claiming to be sota with comparisons using different metrics so its a little bit hard to say which is the best, but if not looking for 1-1 copy, this would be one of the best method. For the location and environment, you could use another ipadapter node for the image while masking out the already generated character (face, clothes, pose) with inpainting.
can anyone please help me with this error in comfyui Error occurred when executing InsightFaceLoader: module 'cv2.gapi.wip.draw' has no attribute 'Text' i tried reinstalling opencv-python and opencv-contribe but still get same error
Hi there! I`m trying to follow your guide here but I can't find anywhere the following nodes: - InsightFaceLoader - IPAdapterApply - PrepImageForInsightFace - IPAdapterApplyFaceID What am I missing?
Hi, yes there is a breaking update in the IPAdapter custom nodes just yesterday... I am still working on making changes to my workflow and will probably make a follow up video on it. You could check out this video to see the update changes: ua-cam.com/video/_JzDcgKgghY/v-deo.html
So I did everything to install, however I get this error message, even though "pip list" shows that InsightFace is installed. Error occurred when executing InsightFaceLoader: No module named 'insightface' File "C:\dev\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\dev\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\dev\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\dev\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 627, in load_insight_face raise Exception(e)
Error occurred when executing IPAdapterApply: InsightFace must be provided for FaceID models. File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 698, in apply_ipadapter raise Exception('InsightFace must be provided for FaceID models.')
an error:Error occurred when executing IPAdapterApplyFaceID: Error(s) in loading state_dict for ProjModelFaceIdPlus: size mismatch for proj.2.weight: copying a param with shape torch.Size([8192, 1024]) from checkpoint, the shape in current model is torch.Size([5120, 1024]). size mismatch for proj.2.bias: copying a param with shape torch.Size([8192]) from checkpoint, the shape in current model is torch.Size([5120]). size mismatch for norm.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for norm.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]). size mismatch for perceiver_resampler.proj_in.weight: copying a param with shape torch.Size([2048, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
Hi, this error usually occurs when the controlnet model is not in the right version. If you are running on SDXL, make sure the controlnet you are using is also the SDXL version.
@@DataLeveling got it, it's worked well,thanks,but there's another problem: Error occurred when executing GroundingDinoModelLoader (segment anything): File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute output_data, output_ui = get_output_data(obj, input_data_all) File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 81, in get_output_data return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True) File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list results.append(getattr(obj, func)(**slice_dict(input_data_all, i))) File "F:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_segment_anything ode.py", line 286, in main dino_model = load_groundingdino_model(model_name) File "F:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_segment_anything ode.py", line 117, in load_groundingdino_model get_local_filepath( File "F:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_segment_anything ode.py", line 111, in get_local_filepath download_url_to_file(url, destination) File "F:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\hub.py", line 620, in download_url_to_file u = urlopen(req)
Dear all, due to the breaking changes in IPAdapter, some of the nodes used here are no longer available.
I have updated the workflow to include the changes accordingly and also made some small changes to the workflow.
Please download 'V2_ipadapter_face_clothes_controlnet.json' from the workflow link. Thank you!
where do i update the directory path
Hello, for this workflow I’m getting the background of the clothes upload instead of the background from my positive prompt… Not sure what to do about it.
The clothes portion is coming out great, so much appreciation for that! 🎉
Hi, I'm loading your workflow version 2 but the following error is appearing:
Error occurred when executing IPAdapterTiled:
insightface model is required for FaceID models
File "F:\Confy\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Confy\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Confy\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Confy\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 957, in apply_tiled
model, _ = ipadapter_execute(model, ipadapter_model, clip_vision, **ipa_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\Confy\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 191, in ipadapter_execute
raise Exception("insightface model is required for FaceID models")
@@Lenovicc reinstall reactor node
@@johnny2bi4 means?
Just wanted to comment on how much I appreciate your step by step process and explanations through the video and your help in the comment section! It seems with a number of the other creators in the space they create walk throughs which assume you already know a lot of the necessary setup and steps that novices simply don't know. Keep up the great work!
Thanks for your kind words! :)
Came from AI Jason, glad I did! Thanks for the great explanations.
Dear all, I have made a mistake in understanding the attention mask element in the IPAdapter node. I thought it was used to increase the face weight but it was actually used for the positioning of the final output in the image.
You could omit the step of masking out the head of the model in the 'Image Preprocessing' step. As for the clothes, you should still keep it only if you want the final output region to be the same as that of the input image.
just gonna say it here, you know you spend to much time in comfyui when you keep trying to click drag the screen up
love your content this stuff is a massive help to get things working
I was stuck on how to use IPAdappter.. your video really helps me alot! keep up the good work. can't wait for your next drop
So many good tips I couldn't keep up writing them all down ill be rewatching this for sure.
I created a little workspace save state add-on for comfy UI. You can find it in the comfy UI manager. I called it multi-workspaces because that's what the original intent was, but as it sort of carried on, eventually just became save states. Still working on a concept for multiple work spaces, the idea would be maybe to make them work together and save in the same JSON blob.
You should go check it out.
Little bit upset i spent all day installing everything needed just to have errors thrown at me "assertion error this and assertion that.."
Also have scrolled through comments and realise that it won't replicate a face i have already created which is a bummer..
Great video dude, you've done well explaining everything and explaining it well. Very patient and precise tut.
Hopefully i can get this mess sorted out but for now i'll have to stick with fooocus.
Incredible job you've do for us. Thank you!
Oh no, hope you can solve the errors when you try again :)
If you are using a human face that was created by AI, it usually replicates well in terms of likeliness, it only does not perform well if you are looking for a 100% face copy or if we use real photos of ourselves. (Unless you look like a celebrity...)
@@DataLeveling Yeah, i'm using one i created with insightfaceswap in discord. So in saying that I'll try figure it out and post the fix just in case anyone else is having the same issue.
I'm glad I found this video, very well explained on the IPadapter workflow. Subscribed to the channel.. btw, love the Singaporean accent :)
This tutorial is excellent! The instructions provided are crystal clear, and the thorough explanations render it effortlessly comprehensible. Your efforts are truly commendable! 👍
Thanks for your kind words :)
Keep up the good work! Really nice videos! Would love to see a video about InstantID as well.
Thanks for the encouragement! :)
Prepare Image For InsightFace is not there, and it looks quite different on my end.
When i type winter, cafe in the prompt, it only gives me a coffee shop but no human in it. Curious to know why it works in your case ? I'm on arm architecture (M2)
Found the problem: I used an attention mask with a full body. Removing it seemed to have corrected the problem
Thanks for this man . how to have exact clothing? also the face from the first checkpoint will completely change when changing to new check point in step 2
This is the best video I learned
I would love to see you add animatediff to your workflow too. Keep up good work !
I cant do anything with IPAdapters, keep getting error after error and I've done everything, made sure models are in the right directory, made sure to use sdxl ipadapter models for sdxl models, there just seems to be nothing I can do; can anyone help. (Error while deserializing header: MetadataIncompleteBuffer)
Usually to debug errors I search google for the error message and my platform to see if there are any fixes posted on reddit or one of the other AI hangouts
Can't seem to get the clothes portion working but seems like everything else is working perfectly
Thank you so much!! I have applied and it works perfectly!
I am not getting the node prepare image for insightface node….When i try to add this node…I don’t see it in my options what to do
hello i need help i cant find Prepare Image For InsightFace on confy interface
this guy is the G.O.A.T!
Thanks for the video. However, the results are far from desirable for commercial uses mainly because the generated cloth deviate too much from the original. I think a ControlNet Tile inpainting workflow might yield more stable results. But I could be wrong. Thanks again.
why i dont have prepare image for insightface
can i know is it only the first step face generation use SD or all the steps use SD model? i wonder can i use a uploaded face to run the workflow where i dont have a SD model running locally as the hw requirement is high.
This is so helpful! thank you so much for preparing!!
I am having a problem . when I try to add a new node "prepare image for insightface" i am not getting this node. I watched your video of 2min on insightface wheel.. did the installation .. Restarted comfyUI but still not getting the insight face node
Thank You. Thank You. Thank You. Latent Vision brought me here.
@DataLeveling Hello, I've downloaded the V2 ipadapter workflow you've provided in the description. But I'm getting error in Ksampler. And getting memory error. Can your please help me out on this?
amazing, can you make a comfy UI guide? i love your voice and style of explanation. PLEASE!-like how to install, how to make flows, how to install plugins, best General settings,etc. PLEASE!
I will try to fit one in my timeline when I can :) but for now you can refer to this video: ua-cam.com/video/_C7kR2TFIX0/v-deo.html
His explanation is very good.
@@DataLeveling THANKS!
Hello I installed ComfyUI InsightFace but, it doesn't show '' Prepare Image for InsightFace'' in the options how to fix?
amazing video, thank you so much for showing the whole process
Hey man, thanks for the great tutorial. I was using your workflow but u have removed the masks and some other things, are they necessary or its fine without them as well
Han, thank you so much for your videos. They are very helpful and your workflow have help me immensely. I'm trying to see how to improve the upscaling for skin details at 4k and 8k. Could you help me?
Hi Han, thanks for the video! I received the following message at IPAdapter FaceID,
Error occurred when executing IPAdapterFaceID:
expected scalar type Float but found Half
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
May I know how to resolve this?
hi is there a guide how to setup sd model ? the video seems to assume sd already exist
Error occurred when executing IPAdapterTiled:
insightface model is required for FaceID models; -> Maybe because the IPAdapterTiled not does not have insightfacemodel docker like in the IPAdapter FaceID?
thanks for the tutorial. 1 problem I encoutered was controlnet. IpAdaptor works fine to keep the face consistant. But once i connected controlnet, the face changed into a very different generic one. I tried to use the same workflow you uploaded.
Does it have anything to do with the aspect ratio of the controlnet source image? or the controlnet model?
great video, great content, subscribed and liked!
easy to follow along, thanks for the awesome content!
one question: everything was working fine until i added the pose and now the face is hard to keep consistent. any tips?
Hi, yes as we add more elements like controlnet / extra ipadapters, the faceid ipadapter would become harder to maintain consistency.
Could try lowering the weights of the controlnet / increasing the weights of the faceid ipadapter.
Requires a bit of trial and error to find that sweet spot!
Thanks for that! Nice and working workflow with easy explanation :) Do you know, is there a way how to use IPAdapter and create image with not a direct looking face? Something like "looking over her shoulder" or from sice or something like that. When I tried add it to the prompt it always put the direct face there, so it looks like the character has a broken neck completely :(
Hi, do i just import my own clothes image into your new ipv2 workflow and dont require to mask like you do in the video? and the settings and presets are already selected by you right? thanks !
Hi all, (IMPORT FAILED) UltimateSDUpscale im facing this issue and tried , installing it again , reinstalling , tried fix . Its not installing still. do let me know how to fix this. thanks in advance
Thanks a lot, really helpful! Just a question... why to use a sd15 LoRA on a SDXL workflow? There is ip-adapter-faceid-plusv2_sdxl_lora.safetensors available... (instead using ip-adapter-faceid-plus_sd15_lora.safetensors)?
Hihi, I only used sd15 lora when I changed the ckpt model to sd15, as this workflow is designed to be able to use sd15/sdxl.
So when we use sdxl ckpt, we will use ip-adapter-faceid-plusv2_sdxl_lora.safetensors w sdxl controlnet and when using sd15 ckpt we will use ip-adapter-faceid-plusv2_sd15_lora.safetensors w sd15 controlnet.
why do i get NO MODULE GOOGLE error on the ipadapter step. Can you help me?
Amazing content! Keep it up 💪🏻
Perfect video :) Great pace!
Fantastic tutorial ,great content and video mate,subbed !
Say,is it possible to use and existing face that I have to generate new faces here ? Any comment would be appreciated !
Do you have a discord server for the channel/ this topic? I think a lot of people would love it. I’m trying to pursue this right now and most stable diffusion forums hate on you for trying it.
great video, thanks man
hello there, first thank you for your work. In your video description you posted alink to 'vit_H' model at 'h94> IP-Adapter>models>image_encoder>model.safetensors - my question is, where to put and apply this model? the only models with 'vit_H' in your workflow I see 1 - ip-adapter-plus-face_sdxl_vit-h and 2 - CLIP-ViT-H-14-laion2B-s32B-b79K -> which both are different from it, so I am a little bit confused. Can you please help me figure out this mess?
Hi, no problem. The vit_h model in my description is to be placed in the models/clip_vision folder. I did not mention that as in the video, I used the comfyUI manager to install that model and it will be automatically placed there. The vit_h model in my description is the same as CLIP-ViT-H-14-laion2B-s32B-b79K.
As for 'ip-adapter-plus-face_sdxl_vit-h', the model is to be placed in the models/ipadapter folder.
@@DataLeveling Thank you. Allthought I have comfy manager, still I rather download and place all models (just models, not nodes) manualy from HF... my bad habit, so.. thank you again!
can anyone help me
I got this error : Sampler : have the same dtype, but got query.dtype: float key.dtype: struct c10::Half and value.dtype: struct c10::Half instead.
When loading the graph, the following node types were not found:
InsightFaceLoader
IPAdapterApply
PrepImageForInsightFace
IPAdapterApplyFaceID
Nodes that have failed to load will show as red on the graph.
Hi, yes please download the workflow with 'V2' in the filename :)
Thank you so much for leveling up my game
Thank you so much, this is a great video tutorial
Hello! After following the tutorials from your videos, i can't seem to find the ipadapter in the add node part. Should i paste the checkpoints, loras, ipadapter files on the different model folder you indicated on your past video (separate from the original models folder)?
Hi, if you don't see the ipadapter nodes after installing it from the ComfyUI Manager,you have to restart your ComfyUI and it should be there.
You should use the default models folder if you are not sharing the models across different UIs. :)
how to fix extra finger also in this workflow??
can you do a video about sparse control using IPadapter face ID pls. im trying that now but not sure why my vids generated are orange in color 😭
Will do a few vids on AnimateDiff soon!
Hi Data Leveling. I work on colab, have fixed the insightface-issue, but i am struggeling on that: Error occurred when executing IPAdapterApplyFaceID: mat1 and mat2 shapes cannot be multiplied (257x1664 and 1280x2048). Ive changed the dimensions several times and also the checkpoint-models, but with no results. Do you have any suggestion on that? Thanks Bruno
Hi Bruno, could you send a link to a picture of your workflow, maybe I can help to take a look and see where went wrong.
@@DataLeveling Hi Data Leveling. I got it to work finaly, it was the wrong controlnet model, now with control-lora-openposeXL2-rank256.safetensors it works! Thank you.
Excellent video! Thank you
This video is great! I'm having a problem though. I cant get a full body shot and there is always some kind of object blocking her at the base of the generation, sort of like shes hiding behind it. any thoughts? I'm using sdxl.
Hi, the secret lies in the IPAdapter clothes, if you use a full body fashion model picture and mask the entire body, and add 'full body shot' to the prompt with a portrait aspect ratio, it will work!
However, do note that for full body shots, changing the pose is a little bit challenging to maintain the clothing style.
@@DataLeveling thanks. i was bypassing the clothing part, but i'll trying using it
I cant find note name "prepare image for insightface"? why? i just see "prep image for clipvison"
Hi, yes that node has been removed by the developer of IPAdapter. I have updated with a V2 workflow if you are using the newer versions of IPAdapter.
Thanks for sharing this!
Really interesting stuff.
How do you keep the body shape consistent?
Hi, if you are using the IPAdapter clothes one, the produced output usually follows the shape of the model.
You could also use other controlnet models like DensePose to control the body shape of the final output.
Hi. When I try to import your workflow it just does nothing. Not to sure if the workflow is working as intended
Hi, yes I have seen another with the same issue when loading my workflow, but it works for majority of others.
I am guessing it is perhaps a ComfyUI / ComfyUI Manager ver difference, could you try updating it to the latest version? (I have tried loading my workflow from a laptop and it works)
hello sir, i am not able to find 'prep for insightface' menu in ipadapter menu of add node ? whereas i installed insightface from your previous video which you said to follow before starting this. PLZ HELP .
🙏
Hi sir, yes the dev removed it from the repository in the breaking update. It would work just fine without it if your face is in the center of the image :)
All that node does is add a white padding and crop center to the image.
@@DataLeveling Sir im getting this error while in Clip Vision loader node :
Error occurred when executing CLIPVisionLoader:
'NoneType' object has no attribute 'lower'
File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI
odes.py", line 865, in load_clip
clip_vision = comfy.clip_vision.load(clip_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI\comfy\clip_vision.py", line 113, in load
sd = load_torch_file(ckpt_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Work Space\AI Art Generator\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 13, in load_torch_file
if ckpt.lower().endswith(".safetensors"):
@@sadikrizvi6468 Hey sorry for the late response, it seems to be an error where your clip vision model is not detected correctly, make sure to choose the right clip vision :)
Thanks. how can I change the background to a photo background file of my choice instead of changing the background with a prompt?
Hi, yes you can do that with another ipadapter for the background, you can check out an example from this video: ua-cam.com/video/vqG1VXKteQg/v-deo.html
Perhaps training a diffusion model in the same way that Amazon did to change the entire outfit without altering the details could help with this process. Are you able to training a model like that just by reading the paper they made available?
I have not seen the paper yet, but it sounds like what they are doing is inpainting, a slightly different process from what I am doing with IPAdapter.
@@DataLeveling Diffuse to Choose: Enriching Image Conditioned Inpainting in Latent Diffusion Models for Virtual Try-All. It is the article name on hugging face. I'm trying to search for a code to see if it is possible to do the same on comfy ui
I love this workflow, but I'm now trying it with a picture of myself, so I have bypassed the get face, and just uploaded a picture of myself into the "image preprocessing" area, also bypassed the clothing and control net, and can't get the result to look like me, it's not matching my face very well, at all. I know sometimes AI faces are much easier to duplicate, is that just the case here? or any hints otherwise?
Hi, yes I believe that might be a limitation, as these models are trained on celebrity/human dataset faces and if your face is too different from those datasets, it will only guide the model to its closest approximation. I have also tested with their latest Portrait model and it is slightly better but still far from satisfactory.
If you want your face in the images, maybe can try to create the vision you have of your image, then use reactor face swap to swap it in and upscale the image.
This method gave me the best results out of all that involves my own face.
Hey man, i run to this error when i try to use ip adapter with sdxl ( Error occurred when executing IPAdapterApply: Error(s) in loading state_dict for ImageProjModel: copying a param with shape torch.Size([8192, 1024]) from checkpoint, the shape in current model is torch.Size([5120, 1024]) )
i have tried with both sd1.5 and sdxl encoder, same error.
Love your videos btw
i have same error, you are fix it ?
@@ziyuebuke not yet
It’s super cool thank you!! But im still facing an issue tho, i can't see " Load InsightFace " node from the insightface module of the " Apply IPadapter FaceID " and im running on python 3.10.11. If anyone got an update or solution i'd HIGHLY appreciate, thanks a lot!
Hello, could you try to update both ComfyUI and IPAdapter to the latest version from the ComfyUI manager.
If you are not using the manager, then run update_comfyui.bat and for IPAdapter, go to custom_nodes/ComfyUI_IPAdapter_plus, right click a blank area, select 'Open In Terminal' -> 'git pull'
Hope this helps!
Your tutorial is awesome - could you tell us what specs you are running this on? like cpu, gpu and ram? Thanks a lot!!
Hi, sure thing, my specs are the following: 24GB vram, 24core cpu, 32GB ram.
@@DataLeveling thanks!
Can this be combined with your newest video load images from batch so , I have a variety of posses or a varity of outfits or a variety of faces or possibly I can do 2 or maybe more of above in one workflow
We could, but to a certain extent as IPAdapter accepts batches of images differently, if you want to iterate through a variety of faces, have to use the ImageList.
I have tried with Variety of poses + Either Variety of Faces or Outfits, could only choose 1 else your vram will explode haha.
@@DataLeveling thanks for the response was thinking about over night generation or bulk batch of mupitle influences in mupitle posses backgrounds or
Mupitle faces & outfits, each face will have 5 or more mupitle generation per a face
No need to respond
Thanks. Is there any way to copy the facial expression to a specific photo?
Hi, you could try to chain up 3-5 images of the human in different facial expressions in a batch for the IPAdapter (using their new portrait model), then the facial expression in the output can be more flexible when using the prompt.
Thankyou so much ❤
I am getting "No module named insightface" error with the Load InsightFace node. Please help.
Do you solve your problem?
This is Gold! I have a doubt, I have seen a lot of consistent face tutorials, but what about a consistent background? is it possible? like, the same background with the model in different poses.
Hi, I think if you want a fixed background, you could use a background image as the latent image and only allow inpainting to the masked area of that image.
That make sense! thank you@@DataLeveling
@@DataLeveling I want to improve consistency of clothes, Can you please suggest what shall I do. I can inpaint face later but importance of copying clothes exactly is priority for me.
@@Prince.Dhankhar Hi, I have not made a video on this yet, but you could try this out on your own first :)
There is a project called OOTDiffusion that might be what you are looking for.
The ComfyUI implementation is here: github.com/AuroBit/ComfyUI-OOTDiffusion
It would be better to install this manually instead of ComfyUI Manager as you will have to perform a branch switch from the repository.
The steps can be found here: github.com/AuroBit/ComfyUI-OOTDiffusion/issues/27
how much of a image generation cost in terms of s/it or it/s should one expect to see from this? I am taking minutes to generate images with this flow when I can do "normal" generations in seconds.
Mine takes around 90s for initial load, but subsequent regeneration is around 10s I believe. My specs are the following: 24GB vram, 24core cpu, 32GB ram
Error occurred when executing KSampler:
mat1 and mat2 shapes cannot be multiplied (462x2048 and 768x320)~~~ I ran exactly the same as you, and got this message from the sampling - ksampler side.
Hi, make sure the controlnet model version you use is same as your checkpoint model version. That error usually means one is using sd1.5 and one is using sdxl.
@@DataLeveling checkpoint-Juggernaut XL v7 /ip-adapter-faceid-plusv2_sdxl.bin , plusface_sdxl_vit-h.safetensors/
Error occurred when executing IPAdapterModelLoader:
PytorchStreamReader failed reading zip archive: failed finding central directory
This time, we get this error messageI have the exact same settings as you in the video, but it's not working.I'm running GoogleColab and Google Drive together.
@@lilillllii246 Hmm I have never seen this error message before, but you could try this solution: stackoverflow.com/questions/71617570/pytorchstreamreader-failed-reading-zip-archive-failed-finding-central-directory
Is gpu 16gb ram like 4080 enough for this kind of work or sdxl?
Should be enough if you do not have many heavy graphics running on the background.
Mine is 24gb but it utilizes 30% from background and goes up to 80% when running the workflow, so the usage is estimated to be around 10-12gb vram on sdxl.
Thank you for reply@@DataLeveling
How can we use the workflow to segment the face and the clothed body of another and combine the 2?
Hi, I'm not sure I understand your question correctly, do you mean to inpaint a segment of a face from A and merge it with a body of B
OR use A face in faceid and B body in ipadapter.
@DataLeveling the latter
@@offmybach If its the latter, then when using the workflow, take the picture of B, mask the head region and send it to the ipadapter faceid attn mask.
And for the B's ipadapter, only mask the body region and send it to the ipadapter attn mask.
Hope this helps!
Thanks for the video -- good information, but please don't leave the same music loop repeating over and over and over even while you are talking. It became too annoying to continue concentrating on what you were saying with that tick, tick, tick, under your voice.
nearly crying because of this, so hard to follow ..
great video! thanks
How well does this do with cartoons and anime?
Works even better!
i downlaod your workflow for example and it not working, You know what i'm doing wrong? I can't open it in window comfyUI SD 1.5 A1111
Hi, I'm not sure the reason but you may have to update your ComfyUI to latest version.
@@DataLevelingI did that before I asked :). All files on your disk do not open :/ i try another PC and come back with feedback
@@ukaszgwizdaa5712 Previously there was also another comment with the same issue but we couldn't fix it.. not too sure what went wrong either.
I just tried on my laptop and it loads up properly..
6:23 prepare image for insight face doesnt appear for me
Hi, yes unfortunately the dev removed the node from the repository. It should work just fine too if your face is cropped nicely at the center of the image.
Hm this looks really useful... i downloaded the json file from github, dragdrop it into comfyui and nothing happens? What am I doing wrong?
For JSON file you have to load it from the menus tab.
@@DataLeveling really? I always just drag dropped them into the comfyui Browser Tab and that worked...
@@matyourin my bad, I thought that shortcut only worked for embed images. Does loading it manually works for you?
sadly it didnt... i clicked on "load" and then on the downloaded json file, nothing appears... maybe i downloaded it wrong? I went to the link below this video on github and saved the json file there into my comfyuiportable folder as "somename.json" @@DataLeveling
loading it via menu didnt work either... i think i will just have to manually replicate what you did in the video. it is weird though... first time I have had this issue... @@DataLeveling
Thanks for the video.
Is it possible to put the exact same dress and not "redream" it?
You can try that by masking the dress, invert the mask to select everything but the dress, use that inverted mask as the latent image then sample it with a face you want.
If the image looks okay, then use controlnet to change the pose on a separate sampling, but this will take more steps and requires a lot of trial and error.
Which GPU are you using ? I have an RTX 2060 ( 6GB VRAM) and I'm getting an Out of Memory issue once I add the IpAdapter Clothes and I run it. Any ideas on how to solve this ? Thanks
Hi, I am using a RTX 4090 (32GB VRAM). 6GB VRAM might not be sufficient for SDXL, maybe could try using the 1.5 versions. For the clothes, if the image is too big, you might want to downscale it to 520 pixels using the UpscaleImage node.
@@DataLevelingThanks for the reply. SDXL can run fine on my GPU it's just adding multiple IPAdapeter nodes that loads a lot of RAM on my GPU. Do you know any way to counter that ? Everything before the Clothes part worked just fine.
@@MrDonald911 I see, alternatively, you could run once to get the face in a model. Then run again but this time we use inpainting, load that image as the latent for the sampler while only masking the body region and bypass the faceid nodes.
Also remember to send the masked photo to the Apply IPAdapter node attn mask.
Hope that helps!
I am so confused. When you cover something with the mask I was under the impression that means you do NOT want that to be used. Example for your jacket. If you masked the jacket in black that means you do not want the jacket. (I am asking). That lost me. To make it worse I was doing the opposite and it was working.
Hi, for IPAdapters and most functions, if you mask the area it means you want only that portion to be used. If want everything except for the masked area, then you could use an invert mask node to reverse it and feed the inverted mask. I think what happened on your end may be due to the text prompts or other factors.
You can learn more about attention masking from this video: ua-cam.com/video/vqG1VXKteQg/v-deo.html
i think u are misunderstood the attention masking, masking the face of the woman on the beggining is meaningless.@@DataLeveling
Agreed, since the face is already cropped nicely. @@cemal6950
@@DataLeveling actually, it does not masking the face, insightface is doing it, masking is for the image that u will generate, it is defining where the ip adapter would apply at generated img
OHHH damn, I always thought the masked region would give more weight to the model because of the attention keyword, will clarify that in a pinned comment. Thanks a lot for clarifying! :) @@cemal6950
Hard to tell from your comfyi UI video, but IPAdapter based face transfer looks a lot more like a head cut/paste job in photoshop (wrong size head, lighting not transfer correctly) than it does an actual photo, or even a generative image?
Hii haha does the head size looks weird to you? It looks okay to me! Maybe only for those that uses DWpose estimation, the head looks a little off as my controlnet reference image is too zoomed in, should have cropped more parts of the body.
For the lightning, I think it could be tuned further with using precise text prompts as I am using a simple one for a base line demo :)
I am getting an error message of load clip vision and i don't know what to do please help me
Edit- after doing so much research i saw that I haven't put any clip vision file in it 😂😂
how to change the clothes
Specified clothing
INFO: InsightFace detection resolution lowered to (512, 512).
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 155, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 85, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\execution.py", line 78, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 636, in apply_ipadapter
self.ipadapter = IPAdapter(
^^^^^^^^^^
File "F:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 272, in __init__
self.image_proj_model.load_state_dict(ipadapter_model["image_proj"])
File "F:\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch
n\modules\module.py", line 2152, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:
\t{}'.format(
RuntimeError: Error(s) in loading state_dict for ProjModelFaceIdPlus:
size mismatch for proj.2.weight: copying a param with shape torch.Size([8192, 1024]) from checkpoint, the shape in current model is torch.Size([5120, 1024]).
size mismatch for proj.2.bias: copying a param with shape torch.Size([8192]) from checkpoint, the shape in current model is torch.Size([5120]).
size mismatch for norm.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for norm.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.proj_in.weight: copying a param with shape torch.Size([2048, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for perceiver_resampler.proj_in.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.proj_out.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for perceiver_resampler.proj_out.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.norm_out.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.norm_out.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.layers.0.0.norm1.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.layers.0.0.norm1.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.layers.0.0.norm2.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.layers.0.0.norm2.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.layers.0.0.to_q.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for perceiver_resampler.layers.0.0.to_kv.weight: copying a param with shape torch.Size([4096, 2048]) from checkpoint, the shape in current model is torch.Size([2560, 1280]).
size mismatch for perceiver_resampler.layers.0.0.to_out.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
size mismatch for perceiver_resampler.layers.0.1.0.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.layers.0.1.0.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.layers.0.1.1.weight: copying a param with shape torch.Size([8192, 2048]) from checkpoint, the shape in current model is torch.Size([5120, 1280]).
size mismatch for perceiver_resampler.layers.0.1.3.weight: copying a param with shape torch.Size([2048, 8192]) from checkpoint, the shape in current model is torch.Size([1280, 5120]).
Hello, the above error occurred. I changed the Load lPAdapter Model to ip-adapter-faceid-plusv2_sd15.bin. It worked. What is ip-adapter-faceid-plusv2_sdxl.bin Cannot run?
I can't be sure, but I think some model mismatch happened between SDXL and SD1.5.
Just make sure to check that if you are using SD1.5, these must also use the 1.5 version:
1. checkpoint model
2. lora model
3. ipadapter faceid
4. ipadapter
5. controlnet model
And if you want to switch to SDXL, all of the above have to changed to SDXL version as well.
can this be done in a1111
Hi I'm not too sure, haven't really tested out IPAdapter in a1111.
As someone who works daily with AI and SD image generation processes like you, would you say this method is like the current state of the art in creating consistent AI characters, while controlling their pose and clothing?
Is there also a method to get a consistent location/ environment for the character?
Thanks btw, this is the best comfyui tutorial video I have watched for the topic addressed, keep up the good work!
As for creating a consistent face, all the methods are claiming to be sota with comparisons using different metrics so its a little bit hard to say which is the best, but if not looking for 1-1 copy, this would be one of the best method.
For the location and environment, you could use another ipadapter node for the image while masking out the already generated character (face, clothes, pose) with inpainting.
r u from sg brother? 😂
Yes brother I'm from sg
can anyone please help me with this error in comfyui
Error occurred when executing InsightFaceLoader:
module 'cv2.gapi.wip.draw' has no attribute 'Text'
i tried reinstalling opencv-python and opencv-contribe but still get same error
Hi there! I`m trying to follow your guide here but I can't find anywhere the following nodes:
- InsightFaceLoader
- IPAdapterApply
- PrepImageForInsightFace
- IPAdapterApplyFaceID
What am I missing?
Hi, yes there is a breaking update in the IPAdapter custom nodes just yesterday... I am still working on making changes to my workflow and will probably make a follow up video on it.
You could check out this video to see the update changes: ua-cam.com/video/_JzDcgKgghY/v-deo.html
I have updated the workflow accordingly to the new IPAdapter version, please download the V2 workflow to see the new nodes changes. :)
So I did everything to install, however I get this error message, even though "pip list" shows that InsightFace is installed.
Error occurred when executing InsightFaceLoader:
No module named 'insightface'
File "C:\dev\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\dev\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\dev\StableDiffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\dev\StableDiffusion\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 627, in load_insight_face
raise Exception(e)
I did fix it by installing it for the embedded python and not on my local python version
Error occurred when executing IPAdapterApply:
InsightFace must be provided for FaceID models.
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 698, in apply_ipadapter
raise Exception('InsightFace must be provided for FaceID models.')
Hi, the model names with faceid have to go to apply ipadapter faceid node and those without goes to apply ipadapter node.
an error:Error occurred when executing IPAdapterApplyFaceID:
Error(s) in loading state_dict for ProjModelFaceIdPlus:
size mismatch for proj.2.weight: copying a param with shape torch.Size([8192, 1024]) from checkpoint, the shape in current model is torch.Size([5120, 1024]).
size mismatch for proj.2.bias: copying a param with shape torch.Size([8192]) from checkpoint, the shape in current model is torch.Size([5120]).
size mismatch for norm.weight: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for norm.bias: copying a param with shape torch.Size([2048]) from checkpoint, the shape in current model is torch.Size([1280]).
size mismatch for perceiver_resampler.proj_in.weight: copying a param with shape torch.Size([2048, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280]).
Hi, this error usually occurs when the controlnet model is not in the right version.
If you are running on SDXL, make sure the controlnet you are using is also the SDXL version.
@@DataLeveling got it, it's worked well,thanks,but there's another problem: Error occurred when executing GroundingDinoModelLoader (segment anything):
File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "F:\Blender_ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "F:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_segment_anything
ode.py", line 286, in main
dino_model = load_groundingdino_model(model_name)
File "F:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_segment_anything
ode.py", line 117, in load_groundingdino_model
get_local_filepath(
File "F:\Blender_ComfyUI\ComfyUI\custom_nodes\comfyui_segment_anything
ode.py", line 111, in get_local_filepath
download_url_to_file(url, destination)
File "F:\Blender_ComfyUI\python_embeded\lib\site-packages\torch\hub.py", line 620, in download_url_to_file
u = urlopen(req)
@@petpo-ev1ydhmm it seems to be having an error in the custom node: comfyui_segment_anything, I don't use that one so I can't really advise on it :/
@@DataLeveling Thanks, I'll Try to solve it