Hey, I am using think-diffusion for this. When I upload the two files named model.safetensors into the ComfyUI/clip_vision folder, I am not able to rename them to "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors"
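For anyone on a local install, the renaming step can be sketched like this (the helper function and paths are hypothetical examples; the target filename comes from the tutorial's model list, and since both downloads arrive as "model.safetensors", download and rename them one at a time):

```python
import os
import tempfile

def rename_clip_vision(models_dir, new_name):
    """Rename a freshly downloaded 'model.safetensors' to its proper CLIP-Vision name."""
    src = os.path.join(models_dir, "model.safetensors")
    dst = os.path.join(models_dir, new_name)
    os.rename(src, dst)
    return dst

# Demo against a throwaway folder standing in for ComfyUI/models/clip_vision
d = tempfile.mkdtemp()
open(os.path.join(d, "model.safetensors"), "wb").close()  # fake downloaded file
renamed = rename_clip_vision(d, "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors")
print(os.path.basename(renamed))  # CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors
```

On a hosted service like think-diffusion you may not get shell access, so renaming the files locally before uploading is the simpler route.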
I couldn't solve this error, if anybody can help. Thanks in advance.
Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found.
File "E:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "E:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "E:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "E:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 529, in load_models
raise Exception("ClipVision model not found.")
How can I make it from video? From 1 video, or multiple videos, for example 6 video faces? Please make a tutorial, if possible with the mouth synced in the video.
No matter what I do, I always get the "cannot find IPAdapter model" error when I try to use Plus (High Strength). I've downloaded the model several times and renamed it properly, but it's NEVER found. Thoughts?
@@MDMZ I've got it in the /ComfyUI/models/clip_vision folder, same spot as where I have the medium-strength model that IS functioning. Looks like I may need a hardware upgrade or something though; using a medium-strength model, my project fails at the second KSampler: "torch.cuda.OutOfMemoryError: Allocation on device". Running an RTX 4070 Ti Super, 16 GB VRAM, I feel that SHOULD be enough.
@@MDMZ I'm putting the model in the /ComfyUI/models/clip_vision folder, same folder as the medium-strength model which is working. I get a couple of "Allocation on device" errors; running an i9, RTX 4070 Ti Super 16 GB VRAM and 32 GB RAM, I'm wondering if I need more RAM for this workflow?
@@NWO_ILLUMINATUS that's not the correct folder for IPAdapter models, it should be placed in the IPAdapter models folder, and you might need more VRAM depending on how high you're pushing your settings
@@MDMZ Sadly, didn't work. Still model not found. Also, the notes in the workflow say to add the models to the clip_vision folder, and the medium model works in the clip_vision folder. Odd.
To use juggernaut_reborn, where in the ComfyUI folder structure did you put it? I downloaded it and tried a bunch of different places but it wouldn't show up in the "Load checkpoint" box
Need help? Check out our Discord channel: bit.ly/44Qtkin
Use these workflows to add more than 4 images: bit.ly/45lDiZD
I've added some solutions and tips, the community is also very helpful, so don't be shy to ask for help
Hey man, the link says it's invalid. Could you update it please? :)
@@grovemonk fixed
@AillusoryOfficial thanks for letting me know, just updated the link
i cant access your discord server for list of models, any ideas?
hi how can i download all the models if i cant join your discord says unable to accept invite
I gave up on comfyui forever until I saw your tutorial. Yours is truly the best one on youtube! Thank you, and keep up your amazing work!
Wow, thank you!
I just wanted to express my immense appreciation for your ComfyUI Animatediff tutorial! It was incredibly clear and well-paced, making a complex topic feel so approachable. Your detailed explanations and step-by-step guidance were exactly what I needed to grasp the concepts fully. Thanks to you, I feel much more confident in implementing these animations in my projects. Looking forward to learning more from your expertise!
You are such a master at Comfy UI.. but also just as an educator! Having spent so many hours on youtube, your approach to teaching is just so concise, easy to follow, and generally brilliant... Thank you so much for taking the time to share your knowledge with the world.. You legend!
Wow, thank you!
I like how the link "List of necessary links" leads to your Discord server with no clear way to get the file
The list is there, with full instructions, check the pinned msg in the discord channel
@@MDMZ thanks, now i see it!
Very helpful and thank you so much. I would recommend this to my friends who asked me before about these ai morph transitions. again thank you.
Thanks for sharing!
As always the best tutorial ever, helped reaching dope crazy results thanks bro 🙏
Happy to help!
Thanks! Been looking for a tut on AnimateDiff!!!
Awesome!
Just discovered this workflow today, thanks for the tips!
Happy to help!
Hey,
Thank you a lot for this tutorial.
The workflow works for me, except that the generated video is too fast, not smooth, as if there is no interpolation but just a rapid succession of images.
Thanks in advance for your help.
strange, can u share more context on discord ?
Great tutorial as always! Thank you!
Glad you liked it!
fantastic tutorial. Instant results
Great to hear!
Thank you for this! Any other morph workflows keep the image "exact" rather than reimagining it? I love these morphing loops but would love it to follow my initial images completely. I thought I saw a 2-image morph that seemed to not "reimagine" the inputs. Thank you for the detailed settings walkthrough. Improved my results considerably.
I would love to have that too, I don't know of a way to do it yet
hey! i'm also interested in what you describe! did you find something?
I think it's because of the IPA: although you set it to Strong, it will still reshape the image. Not sure if anyone has a solution yet?
these tutorials are great they just completely skip crucial steps for the truly uninitiated .... i keep having problems installing all the models and no one is providing a clear instruction ive never used github before and im not a developer ...... maybe its gatekeeping, maybe its just me ... but this is truly the most frustrating learning experience ive ever had
can you head to our discord and share what specific issues u ran into? we'll be happy to help
@@MDMZ you are a gentleman, soooo patient😂
Great vid! Did you ever find a way to keep the likeness of the celebrities you were morphing between? I know you said you were looking into it. Thanks!!!
Not yet! It didnt work
The king has answered our prayers. Just upgraded to a 4060 ti cant wait to get better quality outputs!
Congrats!! 8 or 16 GB VRAM?
@@MDMZ 16!
@@ComfyCott Power 💪
Hello my friend!! I am following the Morph tutorial from the video:
"Create Morphing AI Animations | AnimateDiff: IPIV’s Morph img2vid Tutorial"
I did all the steps as shown in the video, but when I click "Queue Prompt," it starts running in the terminal (I am using a Mac M1), and at the end, the message I attached here appears, and it just stays at 0%, even though I left the upscale nodes deactivated as instructed in the video. Can someone help me solve this issue? In the terminal, it only shows 0% as in the image. Thank you in advance!
Subscribed, very complete tutorial, what video card are you using?
4090
Great guide thanks. I managed to produce something and this basically is what Krea ai is offering but their output is bit dark and unpolished. Really appreciate the points on using vram.
interesting, I'm gonna try Krea ai
Thank you man! This is dope
Glad you like it!
I can't get rid of the red notifications, and when I try to update or install anything (at video 1:13) I get errors. Reinstalled and uninstalled the program several times already and still errors. Can you please advise me what I am doing wrong?
that's weird, can you share more context on discord please? easier to share screenshots and resources over there
I have a question that other people might have too. I am new to the AI world and don’t know how things work. In your video, you show us how to do everything step by step. But if I want to try new things or use other models, how do I do that? I think we can do more fun stuff in ComfyUI besides just changing the video. Can you make a video about that or write back to explain? This will help many people like me. Thank you for all your hard work!
I get you, I think that comes with experience, try different workflows, you can also look up tutorials on specific nodes and what they're used for
Thanks for your amazing workflow! Please;I have two questions:
1) In the Samplers group there is "Upscale Image By" node: scale_by 3.75 = 1080p and you say also is possible to set scale_by 2.5 = 720p .
- How do you calculate the factors (3.75 and 2.5) for 1080p and 720p?
2) If we choose scale_by 3.75 in the Upscale /w Model group in the node "Upscale Image" we need to set width 1080 height 1920.
If we were to choose scale_by 2.5 in the node "Upscale Image" we should change it to width 720 height 1280 ?
I later found out that both the scale ratio and final resolution are independent from each other, you can use the ratio to do a first upscale, then the final resolution to upscale again. and as for the calculation, simply multiply the ratio to the batch size and you'll get the upscaling resolution
@@MDMZ Thanks for the answer. Simply multiply the ratio to the batch size and you'll get the upscaling resolution:
In your original workflow: batch size = 96, 512 x 288 = 16:9 ratio, scale_by 1.75
1) 16:9 = 1.777777777777778 * 96 = 170.6666666666667
2) 96 * 16 = 1.536 96 * 9 = 864
how do you get scale_by 1.75?
@eltalismandelafe7531 wanted to go from 288 to 504
504/288 is 1.75, and that's how i found the ratio
@@MDMZ yes, you have rounded it off by 288 * 1.75 = 504 although in Empty latent Image you have written Width 288 Height 512 , not 504
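To put the calculation from this thread in one place: the scale factor is simply the target resolution divided by the source resolution. A quick illustrative sketch (not part of the workflow file; 288 is the render height mentioned above):

```python
# scale_by = target height / source height
# (the workflow renders small first, then upscales)
def scale_by(source_h, target_h):
    return target_h / source_h

print(scale_by(288, 504))   # 1.75 -> the ratio used in the workflow
print(scale_by(288, 720))   # 2.5  -> 720p
print(scale_by(288, 1080))  # 3.75 -> 1080p
```

The same division answers the 4K question too: just plug in the target height you want.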
The upscaling keeps getting stuck and won't generate anything
no errors at all ? did u try upscaling to 720 or 1080 ? trying a lower res might help
I get too fast transitions between images. I did not find where you can adjust the transition time. I will be grateful for the advice.
There's some math and numbers involved, but i can tell u that making the transition longer can produce bad results
@@MDMZ I understand. But I want to try it myself. Is it possible to find out in which node you can play with numbers?
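For anyone wanting to experiment with transition length, the rough arithmetic can be sketched like this (batch size and image count are the workflow defaults mentioned elsewhere in this thread; the fps value is an assumption for illustration — use your Video Combine frame rate):

```python
batch_size = 96   # total frames generated (workflow default)
num_images = 4    # input images in the default workflow
fps = 24          # example playback rate (assumption)

frames_per_segment = batch_size // num_images
print(frames_per_segment)        # frames spent morphing between consecutive images
print(frames_per_segment / fps)  # seconds per transition at this fps
```

Raising the batch size (or lowering the image count) gives each transition more frames, though as noted above, stretching transitions too far can degrade results.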
Thanks for the tutorial, do you perhaps know why the face of me in the picture is getting deformed?
it doesn't work well with real faces, I talked about it in the video
Great tutorial! Thanx MDMZ!
happy to help!
thank you brother, it was working perfectly, but just today a problem showed up in the simple math node in the qr code group, would you please help with it?
I will check
Hi @mdmz, I found the final animation output is wildly different in style & aesthetic from the initial input images. Any tips for retaining overall style? Also have you got this workflow to work with SDXL?
That's normal, it doesn't stay 100% true to the input. I tried with SDXL, couldn't get good results
This looks similar to sparse ctrl workflows, i'll see how they compare
Hello, I’m following ipiv's Morph tutorial, and everything is going well, but I’m using reference images without humans, just hallways or structures, and yet a human always appears at the end. Is there something I’m doing wrong? I’m using the same models and LoRAs that come by default. The only thing I’ve adjusted is the motion scale to add more movement to the animation.
perhaps you can try to use another model thats trained on images similar to what you're trying to achieve ? example: if you wanna generate buildings, get a model that's trained on building images
@@MDMZ Thank you for the response. I tried some more architectural models, but I don't think they were that good. In the end, I believe what helped was increasing the weight in the IPAdapter Advanced (haha, but I'm not sure that's the reason). Thank you very much for the effort put into this tutorial; it's very good.
Can you create a video on how we can increase the video length, i.e. adding more than 4 images?
I will be experimenting with that
Thank you brother for being so kind doing this amazing vid ❤, sadly i still can't get any results.. i followed all the steps and every file is in its right place, but i always get an error once i reach the ksampler, would you please help?
you can share the error on discord, u might be able to get help if u provide more context
How long does your full render take and what is your gpu? It takes my 3080 about 1hr to render 720p but fails on the upscale. Any suggestions?
I use 4090, it takes around 20-30 mins to do the whole thing, try reducing the upscaling ratio, don't use your computer when it's upscaling
The only preset I can get to work is ViT-G (medium strength)?!
You need to download ALL of the ipadapter models
Followed the original video and cant work out why my outputs look extremely low quality
perhaps you need to adjust the resolution, upscaling ratio, and steps
I’ve done this 2 times and keep coming out with errors. Cannot execute because VHS node doesn’t exist. node id #53. Any ideas how to fix?
try re importing the workflow
Hi! I regularly use this workflow, but lately, I've encountered a few issues with it. All the problems started after the Comfy update. Initially, the issue was that instead of smooth morphing, a bunch of images similar to what I inserted into the IP adapter were generated, and they would rapidly switch between each other (restarting helped, but only for one generation). However, the biggest problem appeared today (also after the update): there’s an issue with the "Simple Math" node, and honestly, I don’t know what to do. There are just two red circles around "A" and "B" that are highlighted. I’d really appreciate your help, I have no one else to turn to
that sucks, some things tend to break after updating, I will test it out again and see if it works for me
@@MDMZ After the recent update, the issue with the IP adapter has been completely resolved, but the workflow still isn't working due to the Simple Math node.
@@MDMZ I've fixed everything. In case anyone else encounters this issue, you just need to replace the "Simple Math" node with "Math Expression" and make sure to write "a/b".
There is no manager option in my comfy ui what should i do now?
did you install the manager ? check this video: ua-cam.com/video/E_D7y0YjE88/v-deo.html
@@MDMZ much respect for you bro, to be very honest you and your community are great, i love being a part of your community.
how can i control clip vision on this workflow my friend ?
why does the final output video turn super dark when i use super bright images??
make sure you use the right settings and models, if it persists, try reducing the steps down to 15-20
I use same settings as you and use a Geforce RTX 3070. Is it normal that a full render will take 7 hours???
I've replied to u on discord
@@MDMZ Thank you sir!
Hi There! This is so sick, do you do anything paid or know anyone who does this for commision? Just recently have been exploring AI art and am totally new to the field. Thank you so much!
u might be able to find some talent on our discord server
Can I use real photos? e.g. from my dad, change to me, and change to my son?
Hi there, this question was covered in the video
hi, what is the resolution of the uploaded photos?
for this particular video 1024x1024, but so far I haven't had restrictions with resolution or aspect ratio, better quality helps tho
@@MDMZ Are these images publicly available? I can't achieve your result.
@@WreckageWonder yes they are, together with the workflow
Can u do tutorial on krea similar options may be easier for many
Krea is awesome, but I don't think u can use it to do smth like this
I can't find the list of models, when I click the link for discord.
check the pinned message in the discord channel
hmmm. I must be missing something because I can't seem to get the video to look anything like the original images...any tips?
Fixed it, Lora's did not auto pull in settings! noobing my way through, thanks for this tut!
Wow Thanks
anyway to set video combine to download in h264 format because the uncompress is too big also for the images anyway to always save as jpeg or webp not png? ps im not talking about previewing.
u can increase the crf to reduce file size, I think the combine node has options to change codec as well
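One option outside ComfyUI is to recompress the raw export with ffmpeg. This sketch only builds the command (the filenames are examples, and it assumes ffmpeg is installed on your system):

```python
# Builds an ffmpeg command that recompresses a raw Video Combine export to H.264.
# -crf ~18-28 trades quality for file size (higher = smaller);
# -pix_fmt yuv420p keeps the file playable in most players.
def h264_cmd(src, dst, crf=23):
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264", "-crf", str(crf),
            "-pix_fmt", "yuv420p", dst]

print(" ".join(h264_cmd("morph_raw.mp4", "morph_h264.mp4")))
```

Run the printed command in a terminal (or pass the list to subprocess.run) to get a much smaller file than the uncompressed output.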
I have a problem where images from the previous generation are saved, and even though I remove them, they still appear in the generation
that's strange, try restarting comfyui, and set the seed to randomize
When I use a real person's image, it completely changes that person's face to a different man. Is there any way to fix that to maintain the same face?
By the way, great video. Keep it up.🔥🔥
check 5:00
@@MDMZ Yes, I caught this the second time I watched the video. Thank you for clarifying this, though.
love your content 🔥🔥
@@MDMZ hi sir, i know this tutorial just came out but i want to know if this is possible
I don't have a color correct node in my workflow. how do I get it?
make sure you've installed all the missing custom nodes
@@MDMZ can't find a color correct node
I keep getting the IPAdapter model not found error. Any solution?
Make sure you place the files in the correct path
Error occurred when executing ImageSharpen, why is it always like this?
help me please
what does the error say
There is no manager option in my comfy ui, what should i do now?
You need to install it, check my comfyui installation video for instructions
is there a way to add more pictures to the process? and how can I make a longer video out of this?
yes, check the pinned comment
I'm getting a error that says control net object has no attribute latent_format
hi, please check the pinned comment
Nice
how to increase the number of frames?
check the pinned comment
I went according to the same links you posted and downloaded the required files, but it gave me an error again. What's the problem?
Error occurred when executing ADE_LoadAnimateDiffModel:
'Hyper-SD15-8steps-lora.safetensors' is not a valid SD1.5 nor SDXL motion module - contained 0 downblocks.
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "D:\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen2.py", line 178, in load_motion_model
motion_model = load_motion_module_gen2(model_name=model_name, motion_model_settings=ad_settings)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1084, in load_motion_module_gen2
mm_state_dict, mm_info = normalize_ad_state_dict(mm_state_dict=mm_state_dict, mm_name=model_name)
File "D:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\motion_module_ad.py", line 136, in normalize_ad_state_dict
raise ValueError(f"'{mm_name}' is not a valid SD1.5 nor SDXL motion module - contained {down_block_max} downblocks.")
hey, it's possible that the file 'Hyper-SD15-8steps-lora.safetensors' is corrupted, try re-downloading it, you can also share this on discord for more help
can someone send me a link to the IPAdapter model please. I think the link mentioned here is not good. thanks.
What happens when you click on the link ? Seems to be working fine for me
How do you print it out in a 16:9 ratio resolution!!!???????????????????? plz
just swap the dimensions, it's actually explained in the video
Any suggestions on how to do a longer video? I want to use more than 4 images, how do i add nodes?
you can duplicate the image group nodes to add extra images
hello, why do I get an IPAdapter load error sir, can u help me
hi, check the pinned comment
Can I easily create AI animation with Animate Diff/Comfy UI's help using an Nvidia Geforce 1050TI 4GB graphics card?
4GB is a bit too low
Amazing results indeed, but wow, at 1:00 min it lost me as wayyy too complex to use unfortunately
haha that was my exact reaction when I first saw it, don't get discouraged, it gets easier 😉
Really enjoyed the video, thanks!
Cheering you on!
Can u please post your workflows somewhere else? Cuz Patreon not available in many countries...
I believe the unavailability issue affects the payment stage only, I put the workflow there for FREE, can you check if you're able to see the post ?
@@MDMZ Nope. Patreon is blocked from their side, they decide which nation have privilege to join... If workflow is free, maybe u can link it via google drive?
IPAdapter folder not found in the models folder, what do I do?
you can create it
@@MDMZ thanks sir 🥰🥰
raise Exception("ClipVision model not found.")
make sure you download the correct clipvision files AND... rename them as described in the list
@@MDMZ that's solved, but I still get an error on cv2
How do I open ComfyUI after installing it all??
open run_nvidia_gpu
Where is the list of required models? I don't see it on Discord. Help.
It's pinned at the top right.
@@johnlonggone thanks.
AMAZING❤️
Thank you!
Can u help me?
Error occurred when executing IPAdapterUnifiedLoader:
IPAdapter model not found.
File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\user\Downloads\comfyui\ComfyUI_windows_portable\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 535, in load_models
raise Exception("IPAdapter model not found.")
me too
make sure you download all the models from the list, and place them in the right folders
How do I do looping?
it loops by default
The motion graphic site is down, how can I get the Video? Thx
seems to be working fine now
@@MDMZ I've tried many times and still can't reach the site, please help; maybe upload the video somewhere else (the site may be blocking some IP addresses)
Invalid invitation to discord, would it be possible to update the link? Congratulations on the work!!
Just fixed it, sorry about that
@@MDMZ thanks
2.5 for 720p and 3.75 for 1080p, what about 4k?
how long is your render time with 3.75? thx
7.5 try at your own risk 😅
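The factors in this thread line up with dividing the target width by a 512 px base render (the 512 base width is my assumption; it reproduces the 2.5, 3.75, and 7.5 values quoted above):

```python
def upscale_factor(target_width: int, base_width: int = 512) -> float:
    """Upscale multiplier from the base render width to a target width.
    base_width=512 is an assumed default that reproduces the quoted values."""
    return target_width / base_width

print(upscale_factor(1280))  # 720p   -> 2.5
print(upscale_factor(1920))  # 1080p  -> 3.75
print(upscale_factor(3840))  # 4K UHD -> 7.5
```

Note that render time and VRAM use grow much faster than the factor itself, which is why 7.5 is "at your own risk".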
Better than Deforum?
depends on who you ask, both can be used for different things
Hey, I'm using ThinkDiffusion for this.
When I upload the two files named model.safetensors into the ComfyUI/clip_vision folder, I'm not able to rename them to "CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors".
Can you please help me?
you don't have to rename it, just make sure you load the correct file in the node
I couldn't solve this error. If anybody can help, thanks in advance.
Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found.
File "E:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\AI\ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 529, in load_models
raise Exception("ClipVision model not found.")
hi, you need to place the clipvision model at the right folder, check the pinned comment for more help
👏👏👏👏
Can we have a Google colab for this please?
🔥🔥
Im learning too much from you sir 🙏🏻💯🫡
Glad to hear it
3.75 is insane
You don't have to go that high, 720p still looks good
You can go lower and use Topaz Labs for pixel upscaling, which does an excellent job
Too bad this tutorial is missing the folder steps; this tutorial is better: ua-cam.com/video/mecA9feCihs/v-deo.html
Great video!
Can you specify which folder is missing?
can you do this for free?
it is free!
People with links to Patreon should be banned for advertising.
You literally copied his channel content and voiced over it.
this IS comfyUI
Error occurred when executing ADE_LoadAnimateDiffModel:
'NoneType' object has no attribute 'lower'
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\nodes_gen2.py", line 178, in load_motion_model
motion_model = load_motion_module_gen2(model_name=model_name, motion_model_settings=ad_settings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-AnimateDiff-Evolved\animatediff\model_injection.py", line 1066, in load_motion_module_gen2
mm_state_dict = comfy.utils.load_torch_file(model_path, safe_load=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\piyus\OneDrive\Desktop\new_ComfyUI_windows_portable_nvidia_cu121_or_cpu\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 13, in load_torch_file
if ckpt.lower().endswith(".safetensors"):
^^^^^^^^^^
facing this error
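The last frame of that traceback shows `ckpt.lower()` failing because the model path was `None`: the motion-module file was never found on disk, so selecting a model that actually exists in the AnimateDiff models folder fixes it. A hypothetical guard (my sketch, not the actual ComfyUI code) showing the kind of check that turns this into a readable error:

```python
import os
from typing import Optional

def resolve_motion_model(model_dir: str, model_name: Optional[str]) -> str:
    """Fail with a clear message instead of passing None to code that
    later calls `.lower()` on the path."""
    if not model_name:
        raise FileNotFoundError("No motion model selected in the loader node.")
    path = os.path.join(model_dir, model_name)
    if not os.path.isfile(path):
        raise FileNotFoundError(
            f"Motion model '{model_name}' not found in '{model_dir}'; "
            "re-download it and check the folder given in the model list."
        )
    return path
```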
Why am I getting this - loading in lowvram mode 571.8972654342651
0%| | 0/11 [00:00
Can anyone help with this?
you probably have a low VRAM GPU
Is 8GB Gpu not enough for this?
Great tutorial, but why is the Simple Math node not working for me? I haven't touched it, but it's highlighting the b input after trying to generate. 😮‍💨
I saw your comment on discord, responded
How do I make it from video? From one video, or multiple, for example 6 video faces? Please make a tutorial, and if possible sync it with the mouth in the video.
How much VRAM and RAM should I have for that? I have 32GB RAM and 8GB VRAM
I recommend at least 12GB of VRAM, but you can still give it a try
you are pure excellence.
No matter what I do, I always get the, "cannot find IPAdapter model" when I try to use Plus(High Strength), I've got the model, several times, and renamed it properly; but it's NEVER found. Thoughts?
In which folder are u placing the model ?
@@MDMZ I've got it In the /ComfyUI/models/clip_vision folder. Same spot as where I have the medium Strength model that IS functioning.
Looks like I may need a hardware upgrade or something though; using a medium strength model, my project fails at the second Ksampler "torch.cuda.OutOfMemoryError: Allocation on device"
Running an RTX 4070TI Super, 16Gig VRAM, I feel that SHOULD be enough.
@@MDMZ
I'm putting the model in the /ComfyUI/models/clip_vision folder, same folder as the medium strength model which is working.
I Get a couple "Allocation on device" errors; Running an I-9, RTX 4070TI Super 16 G VRAM and 32 GIG ram, I'm wondering if I need more RAM for this workflow?
@@NWO_ILLUMINATUS that's not the correct folder for IPAdapter models, it should be placed in the IPAdapter models folder, and you might need more VRAM depending on how high you're pushing your settings
@@MDMZ Sadly, that didn't work. Still model not found. Also, the notes in the workflow say to add the models to the clip_vision folder, and the medium model works in the clip_vision folder. Odd
Great video and I look forward to trying it. But, do you have a link to the model list that does not require discord?
To use juggernaut_reborn, where in the ComfyUI folder structure did you put it? I downloaded it and tried a bunch of different places but it wouldn't show up in the "Load checkpoint" box
hi, all the correct model placements are included in the full list (link in the description); make sure you use the correct path
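For reference, checkpoints such as juggernaut_reborn belong in ComfyUI/models/checkpoints; the Load Checkpoint node only lists files it finds there. A small sketch that reports missing model folders (the ipadapter and clip_vision subfolder names reflect the custom nodes used in this workflow; check the full model list for the exact paths):

```python
import os

# ComfyUI model subfolders referenced in this thread.
EXPECTED_DIRS = [
    "models/checkpoints",  # SD checkpoints, e.g. juggernaut_reborn
    "models/clip_vision",  # CLIP-ViT image encoders
    "models/ipadapter",    # IPAdapter models
    "models/loras",        # LoRAs, e.g. Hyper-SD
]

def missing_model_dirs(comfy_root: str) -> list:
    """Return the expected model folders that don't exist under comfy_root."""
    return [d for d in EXPECTED_DIRS
            if not os.path.isdir(os.path.join(comfy_root, d))]
```

Run it against your ComfyUI install folder; anything it returns is a folder you still need to create or populate.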