Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found.
File "E:\comfy ui\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy ui\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy ui\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy ui\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 506, in load_models
raise Exception("ClipVision model not found.")
Hi, I like your workflow and your video, but I need help understanding how to generate longer videos of 30 seconds, 1 minute, or 5 minutes. Help me please!
Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
Does anyone know how I can fix this?
Great video, but I tried to download and install the tools, and the Manager button on the left of the screen does not appear. I do not have an Nvidia card, so I click on run_cpu.bat; it opens, but the Manager does not appear. I tried several times. I also tried the new link you gave us, but it does not make the Manager appear. Can you help me? Thank you!
Thank you! I have a pretty old GPU (for this kind of stuff) with only 8 GB of VRAM (NVIDIA GeForce RTX 2070 SUPER). Video generation for the examples took between 2 and 15 minutes.
Error occurred when executing InsightFaceLoader: No module named 'insightface'. Could this be the issue? This is with just the IP adapter. Going to try a RunPod to see if it solves the issue.
Hey Mick, I am able to run the whole thing but I am running into problems:
1) Sometimes it gives an out-of-memory error, so can you tell me what config I need to run this workflow?
2) The results I am getting are not the same as in the video you shared, and I am only getting 1 second of video, I don't know why.
3) I think you didn't update the adapter models and directory after the update (not confirmed about this).
Any help would be appreciated, hoping for a reply 😀 I think you should make a second part of the video since many things have changed.
Hello, thank you so much for the workflow, it's working great! I'm wondering if you can add an upscale step to fix some problems, like eyes not showing correctly?
I suspect it won't be too long before we start to see AI able to completely change a game's graphics on the fly, similar to how we can currently use a shader in ReShade for simple effects. We'll be able to completely change the look of a game in real time, bring an old game up to modern graphical standards (or surpass them), or retheme it, etc. Anyone want to take a guess at when we'll have consumer-grade dedicated AI chips that will perform at the level required for very low latency processing on a local machine? Perhaps it will initially be a cloud-based service.
This workflow is very nice! Do you have any recommendations on extra tactics to help the eyes be more consistent? I feel like I'm getting a lot of eye drift throughout my animations, and turning on the eye and pupil settings in the face mesh setup you have really just messes up the output. Maybe another set of control nets or is there a way to run multiple masks -- one for the mouth, and one for the eyes?
"When loading the graph, the following node types were not found: IPAdapterApply. Nodes that have failed to load will show as red on the graph." I can't get rid of this error; I loaded all the models, checked the list, and updated everything...
Hey, I am getting an error: "When loading the graph, the following node types were not found: IPAdapterApply. Nodes that have failed to load will show as red on the graph." Can anybody help me with that? I don't know anything about ComfyUI and how it works; any help would be appreciated.
When loading the graph, the following node types were not found: Integer, IPAdapterApply. Nodes that have failed to load will show as red on the graph. I have this issue; I didn't find them in the manager... any help please?
Fantastic stuff, you're doing some great work. Thank you for your tut! One question: some of the models you use are pickle tensors? I assume that using the safetensors versions / alternatives should be fine?
Hey, I get this error: "No faces detected in controlnet image for Mediapipe face annotator." I made sure I installed the latest models from your Word document. Please advise!
Great workflow. Is it possible to disconnect the character's look from the video, like using a video of you but prompt "Disney Princess" or is it always connected to the look of the person in the video?
Hey Mick, I am able to convert this perfectly, but I can only get 14 frames, and if I increase the frames it gives me a memory error. I have a Ryzen 5 1500X, 16GB RAM, RTX 2060 6GB. If you don't mind, can you tell me how I can convert at least a 30 sec to 1 min clip in a single shot? It would be really helpful.
Hi! It's working! Thanks. But I want to ask: what if I have a 1 to 2 minute video? I understand I need to set skip_first_frames, but you said frame_load_cap needs to be set to 15? How can I do it batch by batch? Can you enlighten me please?
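For anyone doing this batch by batch, here is a rough sketch of the bookkeeping. It assumes the Load Video node's frame_load_cap caps how many frames one run loads and skip_first_frames skips what you have already rendered; double-check your node's exact parameter names.

```python
def batch_windows(total_frames: int, frame_load_cap: int):
    """Yield (skip_first_frames, frame_load_cap) pairs, one per ComfyUI run."""
    for start in range(0, total_frames, frame_load_cap):
        yield start, min(frame_load_cap, total_frames - start)

# e.g. a 1-minute clip at 15 fps, processed 15 frames per queued prompt:
windows = list(batch_windows(total_frames=900, frame_load_cap=15))
print(len(windows), windows[0], windows[-1])   # 60 runs, from (0, 15) to (885, 15)
```

You would queue the workflow once per window, plugging the first number into skip_first_frames and the second into frame_load_cap, then join the rendered segments afterwards.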
Hello, I had a problem with a part and it gives me this error: "When loading the graph, the following node types were not found: Integer, IPAdapterApply. Nodes that have failed to load will show as red on the graph."
control_v2p_sd15_mediapipe_face: there are 4 files with the same name but different extensions. Which extension should I download and put in ComfyUI_windows_portable\ComfyUI\models\controlnet???
Hi, new sub here, your tutorial is very good. One question please: what if I want the person in the generated video to have different clothing, image style, and background taken from another reference image, but still have the same movement as the person in the uploaded video? Like if I want to animate a person from an image to move like the person in the uploaded video, and also in a different background. Please, do you understand? 😢
Great tutorial, thanks for sharing. Did you try using multiple IP adapters with attention masks? I figured the IPAdapter plus model is great for anything other than the face, and the plus face or full face models just for the face itself.
Hi, I've got "Error occurred when executing KSamplerAdvanced: mixed dtype (CPU): expect parameter to have scalar type of Float" How can I fix this? Thank you!
Quick question: in the Load IP Adapter node there is a "model name" field. Where do I have to put the models for them to show up, and where do I download them? I'm using version 3 of your workflow.
This is super powerful. I was able to get it fixed, yeah, wrong ControlNet. It looks really good, all 3 of them. On to doing stuff for my son's birthday, but I notice the lip sync is off; I will review the video to see if you mention it. Does anyone know a good way to get lips and put them on it in real time? I was also thinking it would be fun to walk around and attempt to talk through a script, then run an AI voice change on it for a series, and start getting experimental with it.
Hi there :) I haven't dived into your stuff yet, but of course I already know it's great. As a fellow Blender artist and cinematic creator, I just wanted you to know that I really appreciate your effort, your knowledge, and your willingness to share. I will totally use it and test it, and maybe ask later if we can create something together. One more time, THX; so much of the Comfy content from channels like this is heading in my direction.
Thank you so much for this great workflow; unfortunately I get the same syntax error each time I try to run it :(( SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). Do you have a solution for this? Thanks.
Hi. I don't know a lot about computers, but after I clicked to run the ComfyUI manager, it says it can't find an Nvidia graphics card... I think I have one in my other computer and will try that one. Question: is there anything else I would need to run this, either computer hardware related or software related? I heard you mention Stable Diffusion. Do I need to have a subscription to that? Thanks.
Hey man, thanks for this incredible content, I love it so much. My computer can't handle this much: my GPU is a GTX 1660 Ti, with 16GB RAM and an i5 7th gen, but I bought a new PC with an RTX 4070 Super. What is the configuration of your PC? And will this workflow still work?
Important BUG FIXES: I have updated the model list for all those who have received error messages (the new models are green). Download these models and add them to the folders. Let me know in the comments to this post if you still have any issues!
For those who don't see the manager I have added another download link. Try downloading / installing the manager from this link.
Thanks for all your feedback and help with troubleshooting!
Error occurred when executing ACN_AdvancedControlNetApply:
'NoneType' object has no attribute 'copy'
File "C:\Users\misha\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\misha\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\misha\Downloads\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\misha\Downloads\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\nodes.py", line 173, in apply_controlnet
c_net = convert_to_advanced(control_net.copy()).set_cond_hint(control_hint, strength, (start_percent, end_percent))
^^^^^^^^^^^^^^^^
I still get the error.
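For what it's worth, this 'NoneType' error is usually downstream of a failed model load: apply_controlnet calls control_net.copy(), so if the ControlNet loader handed back None (missing or unreadable model file), this exact message appears. A minimal illustration, not the actual node code:

```python
# Stand-in for what a failed "Load Advanced ControlNet Model" effectively
# returns when the selected model file cannot be loaded:
control_net = None

try:
    control_net.copy()   # what apply_controlnet does with the loaded model
    errored = False
except AttributeError:
    errored = True       # 'NoneType' object has no attribute 'copy'
```

So the fix is upstream: make sure the ControlNet model file actually exists, loads, and is selected in the loader node.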
Hey there, I followed your advice, but I still get this error:
Error occurred when executing ControlNetLoaderAdvanced:
Weights only load failed. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 118
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\nodes.py", line 90, in load_controlnet
controlnet = load_controlnet(controlnet_path, timestep_keyframe)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 512, in load_controlnet
controlnet_data = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 20, in load_torch_file
pl_sd = torch.load(ckpt, map_location=device, weights_only=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1039, in load
raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
@@tompilot4574 Can you go into the ComfyUI manager, install custom nodes and check if "ComfyUI's ControlNet Auxiliary Preprocessors" is installed? Then go to manager "Update All", restart and try again.
@@mickmumpitz Hey there, thanks for the support. I updated the node you mentioned (it was already installed via the manager), but now I still get this error:
Requested to load SD1ClipModel
Loading 1 new model
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\nodes.py", line 90, in load_controlnet
controlnet = load_controlnet(controlnet_path, timestep_keyframe)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Advanced-ControlNet\adv_control\control.py", line 512, in load_controlnet
controlnet_data = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 20, in load_torch_file
pl_sd = torch.load(ckpt, map_location=device, weights_only=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\tompi\Documents\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\serialization.py", line 1039, in load
raise pickle.UnpicklingError(UNSAFE_MESSAGE + str(e)) from None
_pickle.UnpicklingError: Weights only load failed. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you get the file from a trusted source. WeightsUnpickler error: Unsupported operand 118
Prompt executed in 43.01 seconds
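A note on this "Weights only load failed" error: ComfyUI loads checkpoints with torch.load(..., weights_only=True), which refuses any pickled object that is not a plain tensor. The sketch below reproduces the failure with a dummy file and shows one possible workaround for a checkpoint you trust, converting it to tensors-only form. The file names are placeholders, not the actual model files from the tutorial.

```python
import torch

class LegacyMetadata:
    """Stand-in for the non-tensor pickled object some old .pth files carry."""
    pass

torch.save({"weight": torch.zeros(2), "meta": LegacyMetadata()}, "old_demo.pth")

# ComfyUI's safe_load path uses weights_only=True, which rejects the pickle:
try:
    torch.load("old_demo.pth", weights_only=True)
    raised = False
except Exception:
    raised = True   # this is the "Weights only load failed" error

# Workaround for a file you TRUST: load it unsafely once, keep only the
# tensors, and re-save. weights_only=False can execute arbitrary pickled
# code, so never do this with a checkpoint from an unknown source.
state = torch.load("old_demo.pth", weights_only=False)
tensors = {k: v for k, v in state.items() if isinstance(v, torch.Tensor)}
torch.save(tensors, "clean_demo.pth")
reloaded = torch.load("clean_demo.pth", weights_only=True)   # loads cleanly now
```

Downloading the safetensors variant of the model, where one exists, avoids the issue entirely.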
@@mickmumpitz Hey there! I found a fix, maybe? Instead of having "control_v11p_sd15_canny" in the box that says "Load Advanced ControlNet Model", I put "control_v11f1p_sd15_depth" in this box instead, and now it's working... But you had the canny one in the box. I hope choosing the depth one will not affect my output differently than yours? Is this the right thing to have here? Because it's the only thing that works in the box; the canny one you had there doesn't work for me.
I love how you've implemented these things, explained these things, and that you're willing to share them with us in the first place! ^‿^
Would love a ComfyUI course for beginners
Are there any?
What are you looking to do with ComfyUI as a beginner?
That's what this is tbh ^^ But maybe start with image generation rather than videos
Man, you really have the best channel for this type of content. Everyone else here is just marketing AI tools that cost a lot of money, and unusable things too. You are the best ♥️
Bro, you give every possible detail, you're amazing!!
Unfortunately, it doesn't work. I always get Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
Doesn't really matter what kind of resolution my video has, it always halts on this step.
Can you download the new ClipVision (I updated the link in the drive document) Model and try it out again? It should work now! Make sure the new one is selected in the "Load CLIP Vision" node in the IPAdapter setup.
@@mickmumpitz Thanks, man! Now it works like a charm!
@@AlexanderKitchenko Hi, can you please tell me which PC setup you are using? I am going to buy a new computer to run this workflow.. thanks
Ty, you're one of the few who isn't selling web AI services. Subscribed to see what more you can do.
wow, this is an AMAZING tutorial. Very well explained, with workflow and list of models to download (plus their path position in the folder which more often than not isn't included, yet so important!). THANKS SO MUCH
It is a great tut to be sure, one of the most concise out there, but some of those linked models appear to be pickle tensors, which are not as safe as safetensors?
You are my favorite AI YouTuber
Thanks for the shout out! This is a brilliant approach to lip sync!!
Thank you! This would not have been possible without your fantastic work!
You are so good at making videos. And teaching us how to do it. Thank you for always sharing nice stuff with your audience.
Hey thanks for the video! I got a problem: "Error occurred when executing MediaPipe-FaceMeshPreprocessor:...etc...etc" Do you know how to fix this? Thx!
MediaPipe-FaceMeshPreprocessor
No module named 'mediapipe': I got this error, please help!
Without any doubt, you are my favorite AI YouTuber.
1:52 The Manager button doesn't show up here, do you know how to solve it?
I'm having the same issue, the manager button is missing. But, I'll try it again and see what happens...
Drag the icon (6 dots) to the left of "Queue size: 0"; now you can find the "Manager" button... have fun...
Thank u so much for sharing the workflow for free
😍
Unfortunately I can't get the workflow to work; I get an error that I have missing node types (Integer and IPAdapterApply). Can you help me out?
This is the smoothest AI animation I've ever seen. You're amazing man.
I have a suggestion for the next video:
1. How to make an animation based on your face, but where the output is a different face (an anime girl face with horns, for example).
2. How to capture the AI animation of just the actor (without the background) by using a green screen.
Thanks again btw.
Thanks so much for sharing this!
Tried it but keep on getting this error:
Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
Any help please.
Can you try downloading the new Clip Vision model (I updated it in the drive document), switch it out in the folder and try running it again?
@@mickmumpitz I tried the new Clip Vision model and now I get this error:
Error occurred when executing ACN_AdvancedControlNetApply:
'NoneType' object has no attribute 'copy'
@@magneticgrid7998 I updated the list again. Can you try it with the new ControlNet models that you'll find there? Thank you so much for helping me fix this!
Not working :( What resolution should the uploaded video be?
Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
Sorry if you are seeing multiple replies from me here. It's a checkpoint / CLIP Vision model training size mismatch, not to do with the video size you are using. I think my last posts were rejected or something. Use this CLIP Vision model instead; you can find it on Hugging Face: IPAdapter_image_encoder_sd15
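This matches what the traceback says: the IPAdapter checkpoint stores proj_in.weight as [768, 1280], i.e. it was trained against a 1280-dim CLIP Vision image embedding, while the model built around the wrong CLIP Vision encoder expects a 1024-dim input. A minimal reproduction of the same PyTorch error with two bare Linear layers (an illustration, not the actual Resampler code):

```python
import torch

ckpt_layer = torch.nn.Linear(1280, 768)    # geometry the checkpoint was trained with
model_layer = torch.nn.Linear(1024, 768)   # geometry the wrong encoder produces

try:
    # Fails just like the reported error:
    # "size mismatch for weight: copying a param with shape
    #  torch.Size([768, 1280]) ... current model is torch.Size([768, 1024])"
    model_layer.load_state_dict(ckpt_layer.state_dict())
    mismatched = False
except RuntimeError:
    mismatched = True
```

So the fix is to load the CLIP Vision encoder the IPAdapter was actually trained with, which restores the 1280-dim embedding.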
@@SimonePixel Thank you for your reply! I changed it and got this message:
Error occurred when executing ADE_LoadAnimateDiffModel:
'control_sd15_canny.pth' is not a valid SD1.5 nor SDXL motion module - contained 0 downblocks.
I've got an IPAdapter error; the workflow is not usable. I checked the comments, but they are mostly very vague. I think the reason is the wrong VAE model or one of the ControlNets. I noticed that the list of nodes is different, not the same as in your workflow. Even comparing it to the video, some names mismatch.
Could you please try to download the CLIP VISION model again (there is now a new link in the document!) and replace it?
@@mickmumpitz Hi, thanks and great tutorial :))) but it gives me an error ("ClipVision model not found."). Maybe the file should be renamed to something else?
Hi, can you please make a video-to-video workflow to make anime animation like real anime? I am sure many people want this as well, thanks 💕👀
does this work on mac?
Wow, what a wonderful tutorial! Do you know of some way to do prompt scheduling with this workflow? I've never been able to find a way to do so with SD1.5 AnimateDiff.
Yet again amazing work from you!❤
very crisp and on point. thanks for the video, really great explanation
Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found. 😭
The AnimateDiff models got renamed, so if you load them you have to rename them either in the workflow or in the folders.
Thank you very much for the video!
Hi, I have a question please. Is this video too old to use, or do you have a new/better version of this workflow idea? I really love what you do.
absolutely phenomenal guide - the best!
It shows an error when I press Queue Prompt. How can I fix it?
My VRAM is 8GB, and everything has been put into the folders, including the IPAdapter.
Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
Got the same error
Stunning content !
You're amazing! Been looking for days now for something that's coherent without being completely overwhelming, and this seems like a great guide! Thanks so much for the workflow... can't wait to try it out. Would you happen to have a workflow version, or advice for how one might use a LoRA stack as well as FaceDetailer with your workflow? Thanks again!
awesome, second time I've stumbled upon your videos. great content.
A brilliant guide as always! Clear, concise... perfect. Thanks for pushing things forward as you always do, keep up the great work.
This is great! 🙌🏻
Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found.
File "E:\comfy ui\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy ui\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy ui\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\comfy ui\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 506, in load_models
raise Exception("ClipVision model not found.")
How can I solve this problem🥲
@@Royu-i4g have same problem
Pls help me urgent
Error occurred when executing IPAdapterAdvanced:
Missing CLIPVision model.
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 317, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 192, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 767, in apply_ipadapter
raise Exception("Missing CLIPVision model.")
Hi, I like your workflow and your video, but I need help understanding how to generate longer videos of 30 seconds, 1 minute, or 5 minutes. Help me please!
Were you able to get this workflow working?
You are a king 🙌
Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
does anyone know how I can fix this
It's the wrong CLIP Vision model. Look through the comments for a pointer towards the right one.
Great video, but I tried to download the tools and install them, and the Manager button on the left of the screen does not appear. I do not have an Nvidia card, so I click on run_cpu.bat; it opens, but the Manager does not appear. I tried several times. I also tried the new link you gave us, but it does not make the Manager appear. Can you help me? Thank you.
i have the same issue, how do i fix it?
This is so good.. subscribed! By the way, what GPU are you using, and how long did it take?
Thank you! I have a pretty old GPU (for this kind of stuff) with only 8 GB of VRAM (NVIDIA GeForce RTX 2070 SUPER). Video generation for the examples took between 2 and 15 minutes.
Error occurred when executing InsightFaceLoader:
No module named 'insightface'
could this be an issue with just the IPAdapter? going to try a RunPod to see if it solves the issue
First (from playlist) 🙋♂ Great channel, keep it up!
An additional step to correct the eyes would be neat. I wonder if there's any tip out there.
Try different words for the prompting, try using some more detailed descriptions
Hey Mick, I am able to run the whole thing, but I'm running into problems:
1) Sometimes it gives an out-of-memory error, so can you tell me what config I need to run this workflow?
2) The results I am getting are not the same as in the video you shared, and I'm only getting 1 second of video, I don't know why.
3) I think you didn't update the IPAdapter models and directory after the update (not confirmed about this).
Any help would be appreciated, hoping for a reply 😀
I think you should make a second part of video since many things are changed
bro, your latest version is also not working.. please support (there are errors related to CLIP Vision and IPAdapter)
do you have the ps1 template on your patreon?
amazing video, can you tell me the PS1 style model name please?
I combined the two LoRAs, ICBIN64 and ps1graphicsRW, to get this look.
Can you please make a video on how to upscale and enhance a video (i.e. add new details), like Krea AI, in Stable Diffusion?
Thank you so much, this is insane 🔥🔥🔥
Hello, thank you so much for the workflow, it's working great. I'm wondering if you can add an upscale step to fix some problems, like eyes not showing correctly?
Is there a way to turn yourself into a specific character with a character sheet instead of a prompt?
I suspect it won't be too long before we start to see AI able to completely change a game's graphics on the fly, similar to how we can currently use a shader in ReShade for simple effects. We'll be able to completely change the look of a game in real time, bring an old game up to modern graphical standards (or surpass them), or retheme it, etc. Anyone want to take a guess at when we'll have consumer-grade dedicated AI chips that will perform at the level required for very low-latency processing on a local machine? Perhaps it will initially be a cloud-based service.
hey bro good work. thank you!! 👍👌
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5) .... sir, how can I fix it 😢😢😢😢
would it make sense to add LoRAs or hypernetworks to this workflow as well? if so, at what point would I add them?
Can you tell me which model you used for the PS1 graphics, please?
oh i want that model too
This workflow is very nice! Do you have any recommendations on extra tactics to help the eyes be more consistent? I feel like I'm getting a lot of eye drift throughout my animations, and turning on the eye and pupil settings in the face mesh setup you have really just messes up the output. Maybe another set of control nets or is there a way to run multiple masks -- one for the mouth, and one for the eyes?
why, when I run "run_cpu", don't I have the ComfyUI Manager?
"When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph."
i can't get rid of this error; i loaded all the models, checked the list, updated everything...
Same thing here... Did you find a solution?
@@Andyax no
Hey I am getting an error
When loading the graph, the following node types were not found:
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
Can anybody help me with that? I don't know anything about ComfyUI and how it works. Any help would be appreciated.
Hey! IPAdapter pushed a new update that broke the old workflows. But I updated them now! :)
@@mickmumpitz thanks alot bro
When loading the graph, the following node types were not found:
Integer
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
I have this issue, i didn't find them in the manager...any help please?
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5) HOW TO FIX THIS SIR THANK YOU
fantastic stuff, you're doing some great work. Thank you for your tutorial! One question: some of the models used are pickle tensors? I assume that using the safetensors versions/alternatives should be fine?
Hey, I get this error: "No faces detected in controlnet image for Mediapipe face annotator."
I made sure I installed the latest models from your Word document.
Please advise!
Great workflow. Is it possible to disconnect the character's look from the video, like using a video of you but prompt "Disney Princess" or is it always connected to the look of the person in the video?
Would this work on something not filmed in real life? I mean using comfyUi on top of an animated film or game cutscene etc?
hey mick, i am able to convert this perfectly, but i can only get 14 frames, and if i increase the frames it gives me a memory error.
i have a Ryzen 5 1500X, 16 GB RAM, and an RTX 2060 6 GB.
if you don't mind, can you tell me how i can convert at least a 30 sec to 1 min clip in a single shot?
it would be really helpful
Hi! It's working! Thanks.. But I want to ask: what if I have a 1 to 2 minute video? I understand I need to set skip_first_frames, but you said frame_load_cap needs to be set to 15? How can I do it batch by batch? Can you enlighten me please?
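For batching a long clip through the video loader, the idea is to keep frame_load_cap fixed and raise skip_first_frames by that amount on each run. A small sketch of the arithmetic (the parameter names follow the Load Video node; treat the exact spelling as an assumption and check your node):

```python
def batch_windows(total_frames, frame_load_cap=15):
    """Yield (skip_first_frames, frame_load_cap) settings for each batch run."""
    for start in range(0, total_frames, frame_load_cap):
        # The last window may be shorter than the cap.
        yield start, min(frame_load_cap, total_frames - start)

# e.g. a 40-frame clip with a cap of 15 needs three runs:
for skip, cap in batch_windows(40):
    print(f"run with skip_first_frames={skip}, frame_load_cap={cap}")
```

So for a 1-2 minute video you would queue the workflow once per window, typing in the next skip value each time, then join the output clips in an editor.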
Hello, I had a problem with a part and it gives me this error
When loading the graph, the following node types were not found:
Integer
IPAdapterApply
Nodes that have failed to load will show as red on the graph.
control_v2p_sd15_mediapipe_face: there are 4 files with the same name but different extensions. Which extension should I download and put in ComfyUI_windows_portable\ComfyUI\models\controlnet???
Bro, my IPAdapter is updated and the node is now showing in red. Which node should I use to replace the old IPAdapter Apply node?
Make sure to download the newest version of the workflow! (v03)
Hi, new sub here, your tutorial is very good. One question please: what if I want the person in the generated video to have different clothing, image style, and background taken from another reference image, but still have the same movement as the person in the uploaded video? Like animating a person from an image to move like the person in the uploaded video, and in a different background. Do you understand? 😢
Yo this cool!
Great tutorial, thanks for sharing. Did you try using multiple IPAdapters with attention masks? I figure the IPAdapter Plus model is great for everything other than the face, and the Plus Face or Full Face models just for the face itself.
Thank you, love your work :). By the way, could you please tell me which checkpoint model or LoRA you used for the Zelda BotW look? :)
Nice tutorial! How can I get the output as individual frames?
Hi, I've got
"Error occurred when executing KSamplerAdvanced:
mixed dtype (CPU): expect parameter to have scalar type of Float"
How can I fix this? Thank you!
Quick question: in the Load IPAdapter node there is a "model name" field. Where do I have to put the models so they show up, and where do I download them? I'm using version 3 of your workflow.
Hi. Thanks for the video. When I connect any other models, I always get an error. Why is this happening? Do I have to choose some special models?
Strange, any 1.5 checkpoint model should work.
@@mickmumpitz Thank you for your reply.
And in doing so, I don't need to change the LORA, VAE or anything else?
This is super powerful. I was able to get it fixed, yeah, it was the wrong ControlNet. It looks really good, all 3 of them. On to doing stuff for my son's birthday, but I notice the lip sync is off; I will review the video to see if you mention it. Does anyone know a good way to get lips and put them on it in real time? I was also thinking it would be fun to walk around and attempt to talk a script, then run an AI voice changer on it for a series and start getting experimental with it.
Hi there :)
I haven't dived into your stuff yet, but of course I already know it's great. As a fellow Blender artist and cinematic creator, I just wanted you to know that I really appreciate your effort, knowledge, and willingness to share. I will totally use it and test it, and maybe later we can create something together. One more time, THX; there's so much around Comfy, and channels like this point me in the right direction.
Great tutorial man, subbed as well. Just a question: can this convert longer videos too, like 3-4 min?
Thank you! It is possible, but of course it takes quite a long time if you don't have a good graphics card.
Awesome! Thanks a lot to share your knowledge with the community
Haha yeah me too! Just go to settings (gear symbol next to the queue symbol) and set "Link Render Mode" to "straight".
Thank you so much for this great workflow; unfortunately I get the same syntax error each time I try to run this workflow :((
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5)
Do you have a solution for this?
Thanks
How long a video can we make in this AI software?
Is it a real-time changer??
hello, i need some help please. i don't know why i can't use certain models for style. does anyone have an answer to my problem?
Error occurred when executing CheckpointLoaderSimple:
'model.diffusion_model.input_blocks.0.0.weight'
Can we use something like this in Google colab?
How much time will it take to render a 1-minute-long clip?
For those who don't have a GPU PC, can we use this in Google Colab?
Hi. I don't know a lot about computers, but after I clicked to run ComfyUI, it says it can't find an Nvidia graphics card... I think I have one in my other computer and will try that one. Question: is there anything else I would need to run this, either computer hardware related or software related? I heard you mention Stable Diffusion. Do I need a subscription to that? Thanks.
manager not showing.. what to do ??
can you try getting it from the other link I added to the video description?
thanks for share!
Amazing thanks a lot
Hey man, thanks for this incredible content, I loved it so much. My current computer can't handle this much: my GPU is a GTX 1660 Ti, with 16GB RAM and an i5 7th gen, but I bought a new PC with an RTX 4070 Super. What is your PC's configuration? And does this workflow still work?