From Stills to Motion - AI Image Interpolation in ComfyUI!
- Published 10 Feb 2025
- Steerable Motion is an amazing new custom node that allows you to easily interpolate a batch of images in order to create cool videos. Turn cats into rodents, people into cars or whatever you fancy!
Image interpolation has never been so easy and fun :)
== Links ==
ComfyUI Workflows: github.com/nerdyrodent/AVeryComfyNerd
== More Stable Diffusion Stuff! ==
Faster Stable Diffusions with the LCM LoRA - • LCM LoRA = Speedy Stab...
How do I create an animated SD avatar? - • Create your own animat...
Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
Add anything to your AI art in seconds - • 3 Amazing and Fun Upda...
Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
Consistent Character in ANY pose - • Reposer = Consistent S...
== Support ==
Want to support the channel?
www.patreon.com/NerdyRodent
Thank you for simplifying this so beautifully. The other workflows I found for Steerable Motion were so complex and gave so many errors, it was hard to know where to start. This just worked perfectly for me.
Exactly the workflow I was looking for! And very well presented, Mister Rodent. Thanks!
I love this channel so much, it's my go to for latest info on AI!
😀
Hello my hero. I've been looking for an AI morph solution for months and never found it, until now. You have already given me a lot of knowledge about Automatic1111 and ComfyUI, and I am doing a lot of research myself. Yes, I can't see the Python console anymore, but since ComfyUI everything has become so much easier. A merry and relaxing Christmas from the bottom of my heart. AlbertoSono
This is amazing! I'm using images with very little variation for creating consistent animations. I'm loving this workflow. Thanks for the tutorial!
Great to hear - thanks!
@@NerdyRodent Hello ! Does it ask to install custom nodes ?
@@jeanrenaudviers yup! You can click “install missing custom nodes” if you’re missing any custom nodes
Great video mate! Clear, precise and a free workflow too? What's not to like!
Much appreciated!
Thank you for the tutorials!
I am stuck at the Batch Creative node...
It's saying "Error occurred when executing BatchCreativeInterpolation:
'ControlNet' object has no attribute 'load_device'"
What did I miss? I have the ControlNet we need installed already and have used ControlNet before.
Never seen that one! I'd work through the troubleshooting section as 90% of the time, errors mean that Comfy needs updating.
Same thing is happening to me too!!!
I can confirm that the error does not appear after updating ComfyUI.
Awesome video, thanks! I'm stuck with an error on the STMFNet VFI node.
"Error occurred when executing STMFNet VFI:
Error(s) in loading state_dict for STMFNet_Model: Missing key(s) in state_dict: "gauss_kernel", "feature_extractor.conv1.resnext_small.conv1.weight"... and a list too long to copy into the comment
Looks great! Where is the specific workflow? There are so many in the link (GitHub)!
BatchImageAnimate.png
Second one from the bottom, with the video link that matches this video 😉
Hey again Nerdy, this workflow stopped working for me, so I consulted a friend who suggested that I update to the newest IP adapter, which had an update recently (I didn't update Comfy itself because I'm using RunDiffusion, which it seems just uses the most current Comfy version anyway). But now, with the new IP adapter in there, and with all my models installed (through the RunDiffusion manager panel), it can't get past the batch interpolation node. I get this error:
Error occurred when executing BatchCreativeInterpolation:
'ModelPatcher' object has no attribute 'get_model_object'
I've checked that all the models are there, and I also had a friend test out the workflow (with the new ip adapter) while running comfy locally on his machine, and it worked for him! So I'm confused at why the same things wouldn't work while running comfy in the cloud. So strange! any advice is welcome 🙏
Somehow the KSampler keeps giving me errors. I'm quite new to this so I'm not sure what to do from here. "Fix node (recreate.)" did not help.
On further inspection, it says "Out of Memory: Allocation On Device". I'm running a fairly decent i5 and RTX 3060 Ti, and tried both the Nvidia and CPU modes; same error. Are there some parameters I can tweak to rectify this?
Edit: Might've been my input pictures being thousands of pixels in resolution.
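If oversized inputs turn out to be the cause, shrinking them before they enter the workflow is a quick test. A minimal sketch using Pillow (the folder names are placeholders, not part of the workflow):
```python
# Downscale every image in a folder to at most 512x512 before loading them
# into the workflow. Assumes Pillow is installed; folder names are examples.
from pathlib import Path
from PIL import Image

src = Path("input_images")       # hypothetical source folder
dst = Path("input_images_512")   # downscaled copies go here
dst.mkdir(exist_ok=True)

for f in src.glob("*.png"):
    img = Image.open(f).convert("RGB")
    img.thumbnail((512, 512), Image.LANCZOS)  # in-place, keeps aspect ratio
    img.save(dst / f.name)
```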
Mine is stalling at the box right before output with STMFNet VFI - what is this? I can't find a reference to it in the manager. Thank you!
Awesome workflow!!!
Does this workflow still work?? As another user mentioned, my "Batch Creative Interpolation" node looks different than the one in the tutorial; there is no "control_net_name" entry to select a safetensors file. I tried with loaders, but it seems Batch Creative Interpolation does not have an input for that.
Yup, still works fine!
Hi, thank you for the video. I'm a bit late to the morphing party, so one year later, is there a better way to do this effect? Because even though it's cool and all, it doesn't really use the input images in the morphing; it recreates some images that kind of look alike, but it's not reliable if we want real morphing on our images.
You can always try this one - ua-cam.com/video/W-QuNjP_08U/v-deo.htmlsi=EzC7Sh06_fiBg1m8 ;)
I gave up on getting this to work many months ago. But I stumbled upon this and it worked so much better than the other workflows I tried. But how do I change the resolution and aspect ratio of the output? I don't necessarily always want 512*512.
Hey, I wanted to try out your workflow, and after installing every model and every node it stops at the Batch Creative Interpolation node. It says something along the lines of: "ipa_weight"] for x in bin.weight_schedule] (...) in apply_ipadapter. Do you know what to do to fix this problem and begin animating images?
The most common issue is trying to use any other ip adapter model, as that will produce an error
@@NerdyRodent thanks for the fast reply! It seems to work out now :)
Hey, I'm trying to run this on Google Colab, however I'm getting multiple issues: I get 8 identical images generated (with a batch size of 8), and the STMFNet VFI is not working; it's trying to download the model from multiple directories but all of them are 404. I found one on Hugging Face, but when I try to run it I get this: Error(s) in loading state_dict for STMFNet_Model:
Missing key(s) in state_dict: and a bunch of parameters. What could be wrong?
I have the same issue
Hey, great video :)
I'm trying to find the workflow. I've followed the link and I see loads of workflows, but not this one. Do you have a direct link to it please? :) Thank you!
Hello, I am looking for a solution for morphing transitions between 2 videos. I need the video to start with the last frame of video 1 and then morph into the first frame of video 2. I am trying to tweak the settings and the graph, but I don't seem to find a solution. Also, I am having trouble understanding how to tweak the length of the whole animation.
Great tutorial! But I keep getting a KSampler error at the end. It seems my 3090 is running out of memory! Is my card already outdated? Or am I somehow loading in a wrong model?
I’ve got an old 3090 as well and not had any issues yet! Perhaps if you’re doing more than 500 frames?
Just note: if you use >12 images then it will need more than 24 GB, so that's another option
So I should not try on a 3060? I get the same error with the original settings @@NerdyRodent
AnimateDiff can use a lot of VRAM so I personally suggest 12GB+, though I think people run it with less
You are the best, thank you! I needed a picture-to-picture morph and had been looking for months; the one you made was the best I've seen @@NerdyRodent
I was going to ask if there was a way to download the work flow but decided to wait until the end of the video.
Glad I did!
I've got to try this.
Aaaand? Where is it? I can't find it via the link; tons of workflows but not this one.
Great workflow, thanks! Any tips on how to make the video a little smoother, maybe slowing down the interpolations?
One easy way is to ensure less similarity between the images. Other than that, it's just a matter of playing with the curves to get what you want.
Thanks! How do I increase the length of the video? It always exports an 8-second video, even though I've increased the max frames, added 8 images, changed the batch interpolation to 32 and keyframes to 4, etc., but it doesn't work.
Hi, I keep receiving the error: "The size of tensor a (4096) must match the size of tensor b (8192) at non-singleton dimension 0" on the ksampler. It seems to be related to the model, though I tried to download the exact one you are using in your video. I have also tried many other models. All search results say that the issue may be linked to the image size that the sampler was trained on, but changing the images doesn't have any effect on the outcome, which is always the same error. Please reach out! - A
Errors like that usually mean you're trying to mix various models in ways that don't work, such as an SDXL ControlNet with SD1.5
@@NerdyRodent Thank you for the speedy response, I really didn't think that you'd respond this quickly!
I also figured that was the case. I attempted to match all the models with the ones that were in the workflow, but it's possible that I may have accidentally downloaded the wrong one somewhere. I'll let you know if I can get it working after replacing some of the models
I have a question. Can you manipulate the aspect ratio of the images and the overall output? For example, I have a few AI Gen Images that were created on Midjourney in 9:16 aspect ratio. Can I input those images to receive the output in the same 9:16 aspect ratio? If not, how do we manipulate that?
Hi Rodent! Even after I install Steerable Motion, I don't get the workflow at all. How do I get the same one you have?
Hey, thank you for this, but I am getting an error: "Error occurred when executing BatchCreativeInterpolation: insightface model is required for FaceID models", which is weird because I had IPAdapter working fine before (in different workflows) and I have all the IPAdapter models loaded and ready. Also, I'm not sure why in the CLIP Vision loader you have the 1.5 model checkpoint? I tried that too but am still getting the same error :(
Personally, I avoid using any insightface things due to the non-commercial license
@NerdyRodent OK thanks. It was there because I downloaded this graph, with it included, from your GitHub 🤷🏻♂️
It's unclear which versions to get and where to put them, because ComfyUI has a mixture of Flux and SD. Can you update your table to be more specific? Thanks. Looks like a great workflow!
Thankfully Flux was released well after this, so all is good 😃
Hello, I have modified the workflow a little and added an image upscale. And I had a question: how do I upscale using SUPIR? Will this work with video? I don't have much experience.
Yup, you can upscale the output too! Also remember though that SUPIR isn't for commercial use.
POM updated this workflow and the node, which kinda breaks this way of doing it. The new version uses SparseCtrl RGB. Def worth a look :) And as always, thank you for all that you do! This workflow helped me out a ton.
Thanks for the info!
Would it be possible to create a new video that uses the new workflow/node? I'm having a hard time figuring out how to get it to work @@NerdyRodent
I’ll pop a new workflow up on patreon which also uses sparse rgb in a day or so!
Hi Nerdy Rodent, just discovered your stuff, amazing! I'm getting an error when I try to run the model. The error is: Error occurred when executing IPAdapterModelLoader: xxx missing 1 required positional argument: 'ipadapter_file'
HEEEELP PLEEEASE! I have this error: RuntimeError: The size of tensor a (257) must match the size of tensor b (577) at non-singleton dimension 1. It happens on all workflows with the Batch Creative Interpolation node
Thank you very much for this.
Do you know the reason why, if I run the workflow with 2 images, it takes only 5 minutes, but if I run it with 5 images it takes about 3 hours? The time needed seems to be exponential... :/
Thank you for this amazing tutorial. Unfortunately, my personal laptop was not bought keeping in mind that at some point I will be interested in AI image creation. Now, after installing Stable Diffusion and after that ComfyUI, seeing that generating one single image at 512x512 takes about one hour while others do it in seconds... I kinda want to bang my head against a wall. There is a web version of ComfyUI where I could test some workflows. I tried the one from your description but I don't think I managed to find the right models for all the nodes. I'll try more and hopefully, I'll manage to test it. I know you have a simpler version in your Patreon account. I'll try that too the second my salary hits my bank account. :))) "See you" in a week or so! Once again, thank you for everything you post!
What is the name of the workflow you use in this video?
What settings would you use if you wanted to keep the output as close as possible to the input images? I've tried to play with the batch creative interpolation across many settings, but no matter what, the image meaningfully changes. Just curious if you've been able to accomplish this?
Very interesting. Thank you for sharing the workflow and tutorial. Unfortunately something is strange: my "Batch Creative Interpolation" node looks different than the one in the tutorial; there is no "control_net_name" entry to select a safetensors file. I tried with loaders, but it seems Batch Creative Interpolation does not have an input for that. On the Steerable Motion GitHub I noticed there is no input for control net in the INPUT_TYPES. Did you do your own modification of Steerable Motion? Or is there something I need to do to make it visible?
I’ll pop a new workflow up on patreon in a day or so which also uses sparse rgb!
I have the same issue
If you wanted to use this to do a video with a script that requires images to change at a specific time of the video, could you use this tool and have each image last your desired length, or can you just not?
ComfyUI is powerful, but I spend many hours trying to get nodes to work that Manager does not correct. I can't seem to find much info on where or how to install STMFNET. I manually copied the pth file, but it must not be right because I keep getting RuntimeError: Error(s) in loading state_dict for STMFNet_Model. As others have indicated, when executing the workflow, it throws an error that the path to STMFNet cannot be found. Manual installation has not worked. Any suggestions?
Everything worked automatically for me, no manual copies or what not. Could be an out of date install at a guess!
@@NerdyRodent For anyone struggling with ST-MFNet node, replace it with FILM VFI node and everything works great!
@@DerekShenk Thank you! I was having this error too and it made me give up on this workflow. Gonna go try out FILM VFI right now.
have I told you I loved you lately?
Have I told you there's no one else above you?
You fill my heart with gladness?
Thanks for the video! There is one problem: it gives an error message. How do I fix it properly? Also, instead of previewing the image, a black screen is constantly displayed. What can I do about it?
""Error occurred when executing BatchCreativeInterpolation:
Error(s) in loading state_dict for ImageProjModelImport:
size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 1280]).""
I updated everything, so the problem is not in the old version.
Hi! Drop me a dm on www.patreon.com/NerdyRodent and I’ll see what I can do 😃
@@NerdyRodent +
@@NerdyRodent I wrote to you
Updating the IP-Adapter models fixed this for me (make sure you have the SD 1.5 Plus one as he has in the video)
@@carlherner4561 Thanks, I've tried a lot. Currently working!
Hi there, excellent tutorial. I keep getting an error about "Ipa Weight" when running the creative batch node; any ideas? I already updated Comfy and the nodes, but the error keeps appearing.
ipa_weight for ipadapter should be a float value, so I'd start by checking that nothing says "NaN" or has text where a number should be
Great tutorial, thank you so much! Just one question: is it possible to do the interpolation just by prompting, without images?
Yup, just use the batch prompt schedule
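For anyone unfamiliar with it, a batch prompt schedule node takes keyframed prompts as plain text rather than images. A rough sketch of what you'd type into the node's text field (assuming the common "frame": "prompt" syntax also quoted later in this thread; the frame numbers and prompts are just examples):
```
"0": "a photo of a cat",
"16": "a photo of a rodent",
"32": "a photo of a sports car"
```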
Thanks so much for this! I had a lot of errors in the beginning but it's working well now. I had a couple of basic questions: what are the best ways to work with this workflow and change the video resolution for the final output? Same with aspect ratio - is it possible to do different ones (ie 16:9) using this workflow? tysm!!
I also would love to know how to make the video resolution super duper low, because I think it would be so cool to try this with pixel art as the input images!
Yup - you can pick any size you like with the SD1.5 range! It's best if it matches your image size though :)
@@sugartivi2126 to change the output resolution, move the mouse pointer over the resolution and then left click on the width and height to change the values, which are at 512 by default. You can also use the little arrows to increase or decrease the value.
@@NerdyRodent thank you!!
Hello! Where can I find this workflow?? I don't see it in the link provided.
Anyone else getting this error: 'ControlNet' object has no attribute 'load_device'?
Yes, me too!
Oh thank you, clear and steady. The last tutorial I watched ran the entire video at 2X speed (at least) AND used keyboard shortcuts AND had an East Asian accent explaining the actions but was wildly out of sync with them. Bloody frustrating; I nearly gave up.
How did you get your ComfyUI to look all colourful?
You can right-click on any node to change its colour 😃
It looks almost like Deforum extension for A1111 was converted for Comfy 👍
Question!! My son is doing a project on "The Mesolithic period"; he wants to use some examples of AI art for his talk on the subject. The problem is, all my attempts using ComfyUI are a mutated group of cavemen. He's nervous as it is, and he said my AI art will make him the laughing stock. So, are there any simple PROMPTS that I could use to produce good results? Cheers
I downloaded the ZIP folder, but it's not really clear to me which file is the workflow itself.
As a first time user of ComfyUI, I would suggest you start off with the more basic workflows first before getting into the more advanced things
Can't find this workflow on your site!
So, how long does it take you to complete? I have a 4070 Super; mine is stuck at the last KSampler for a while and nothing changes. In cmd it shows loading 4 new models, but nothing loads.
Couple of minutes, depending on length
Great video :) When I loaded the workflow there was no graph image in the preview, and when running the prompt it keeps stopping at the Batch Creative Interpolation node. Any thoughts? I tried loading all missing nodes and restarting. Thanks again and great videos!
The graph should appear fairly quickly, like when I bypass the KSampler nodes, etc. If it's not outputting a graph, I can only guess that it is outputting an error of some sort? If so, the node developer may be able to provide some sort of clue, as I've not had that happen as yet!
Thanks for the reply, I shall keep investigating :) May try a fresh install and run it all again. Cheers. @@NerdyRodent
So strange. Totally did a clean reinstall of ComfyUI, all the dependencies, models, missing nodes, and still no graph and getting stuck at
Ahhh, the "update everything" worked!! I got past the spot and got the graph :)!! Then ran out of memory, hah. Progress! :)
Error occurred when executing STMFNet VFI:
================================================================
Failed to import CuPy.
great.. thank you
I keep getting this error when it runs through the batch interpolation node: "BatchCreativeInterpolationNode.combined_function() got an unexpected keyword argument 'cn_start_at'". Anyone have any thoughts??
I’ll drop an updated version which also uses sparse control on patreon in a day or two 😀
getting this error: Loop (437,506) with broadcast (465) - not submitting workflow
Try updating ComfyUI + all custom nodes to their current release
So, I watched the video and see this is for SD 1.5? Can you clarify...
Hi, I am fascinated by this workflow! Can I use it on an M1 MacBook Pro? Every time I try, ComfyUI disconnects in the AnimateDiff process.
I don't have a MacBook, I'm afraid :/
Hi there, thanks a lot for your great tutorials! I think there must have been an update that causes the following error: "Error occurred when executing BatchCreativeInterpolation:
BatchCreativeInterpolationNode.combined_function() got an unexpected keyword argument 'positive'". The "Batch Creative Interpolation" node seems a bit messed up after the update.
Make sure to update ComfyUI as well as your custom nodes!
Me too! There are no updates visible; did you work it out?
@@santobosco5008 It was working fine until a recent update which seems to have messed up the "Batch Creative Interpolation" node.
I’ll upload a new workflow to patreon in a day or so which also uses sparse control!
😍 Thank you so much! It is an amazing tutorial, and thank you for the workflow!
You're so welcome!
Very interesting, but I cannot find the workflow using the link above. Any clue?
For support, see www.patreon.com/NerdyRodent 😀
Hi, I have a problem: Error occurred when executing KSampler Adv. (Efficient):
'NoneType' object has no attribute 'to'
NoneType means nothing is being used (hence it not having any attributes), so make sure you’re loading the correct models and that none of the model files are corrupted
@@NerdyRodent Hello, thanks, I got it working, but the picture is totally blurry
I’ll drop a new version which also uses sparse control in patreon in a day or so!
Nerdy, how do you make the time between transitions longer?
Looks amazing! Unfortunately I'm getting an error regarding the IPAdapterModelLoader: 'NoneType' object has no attribute 'lower'. And a bunch of lines in a file called 'execution.py' that are faulty. Any idea? I updated to the latest version.
For support, see www.patreon.com/NerdyRodent 😃
Done 😁@@NerdyRodent
Hi, I am having the same problem and can't get to your solution; please let me know as well 🙏!
Keep em coming!!! 😁 Great one again!! 🤩
Hi! This is a two-part question. Can I use a LoRA in the prompt? Like "0" :"". Probably not... So how can I use a LoRA in this workflow? And then, second question: can I use multiple LoRAs with a connection to the timeline ("0", "4", "20", "36" etc.) in your prompt? Probably not either; then maybe just a separate LoRA for each image? (And if that's possible, then probably a different model for each image is possible too?) Thank you
You can load as many LoRAs as you like - just use the LoRA loaders
@@NerdyRodent So applying a different LoRA to each specific image is not possible? If I have 2 input images, Julia Roberts and Tom Cruise, the faces I'm getting in the output video are neither of them; the model changes them both to something else. And so I have to apply LoRAs.
@@NerdyRodent btw, I didn't know you could manually add a LoRA loader to any random workflow... for some reason I always thought that this would require workflow changes at the code level... thanks for the tip! :)
Out of all your workflows, which would be the best first one for a beginner in ComfyUI to get started with? I'm also using an AMD 7900 XT.
I would just use the default workflow to start with!
Does this do a good job interpolating images of the same scene/shot? I just want something to animate my images for a movie
I set closed_loop to true, but my video still does not loop. How do I do that?
Which one of your workflows is the one you presented here? You have a million different .png files...
Error(s) in loading state_dict for ResamplerImport:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
No idea what this means or how to fix it?
As a guess, it could be that you're trying to use an SDXL model?
It's Realistic Vision (1.5). I think it's to do with the ControlNet in the batch creative interpolation. Trying to download the CN that shows up upon loading your workflow.
I had this exact error. My fix was to use the ViT-H CLIP Vision model, as that was the one used in the original Banodoco steerable motion workflow.
Amazing workflow! Which CLIP Vision model gets used here? I also get an error that it failed to import CuPy; any hints? Or is it because of my 8 GB GPU?
It's the usual SD1.5 CLIP Vision model. For CuPy it may be that you don't have an Nvidia card, in which case you can just bypass that node. For more help drop me a dm on www.patreon.com/NerdyRodent 😀
Thank you! It works without the frame interpolation; strange, since I have an NVIDIA card (3070 mobile) @@NerdyRodent
For the record: it all works like a charm. Tested it on a 4090 now; my 3070 laptop GPU seems to be too weak for this workflow.
@@NerdyRodent Can you please send a direct link to the CLIP model to make this work? I'm literally on the verge of losing it; everything else works, it's just that one I'm missing. Thank you!
@@TheCcamera Can you please send a direct link to the CLIP model to make this work? I'm literally on the verge of losing it; everything else works, it's just that one I'm missing. Thank you!
cool stuff 😊
Do you need to install CuPy separately?
I'm stuck on CuPy not installing too. I've added CUDA and cuTENSOR & NCCL. cuDNN wants credentials. Feels like I'm going down the wrong path. ComfyUI has its own Python / (conda?) environment, so I don't know.
@@clenzen9930 I've not tried installing it yet. Was looking at it earlier; I think I need to install the CUDA toolkit and then CuPy, but was gonna see if anyone had some wisdom to share first.
@@clenzen9930 I also got these problems. It was because of the framerate increase section (STMF Net VFI). I just bypassed it by connecting the KSampler directly to the saving section ("Video Combine") instead of going from the KSampler to "Split Image Batch". Worked for me. Got good results though.
I got stuck at the same spot here: Error occurred when executing STMFNet VFI: Failed to import CuPy. I'll try your hack of bypassing the Split Image Batch, fingers crossed. But I'd love to know how to get it to work like Nerdy has it in his video :)
bro… those girls in TANK TOPS with RAT HEADS are freaking HORRIFYING… 😂
Seems normal to me! 😂
Thanks for the lesson, but I can't get the workflow to work.
Error in the sampler; replacing it with a standard one does not help.
Error occurred when executing KSampler Adv. (Efficient):
Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)
Remember to check the troubleshooting at the top. 90% of the time you need to update your ComfyUI install or custom nodes
SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). It's a PNG, so I'm having a hard time opening a direct JSON file to fix it; even if I save the workflow as JSON, it just opens the image.
did you solve this?
I have the same problem, did you solve it?
This is a great tutorial, thank you :) But my question is, is it possible to have this video loop? I mean getting a smooth transition from the last image to the first.
You can. You can select "closed_loop" to be true in the AnimateDiff group "Uniform context Options" in the upper left.
Where do I place the ControlNet model in the Batch Creative Interpolation node?
You can drop me a dm on patreon if you need more help!
I'm getting an error regarding the IPAdapterModelLoader: 'NoneType' object has no attribute 'lower' :(
NoneType = the node hasn't been able to load the model, so make sure your download isn't corrupted and you're using the correct model!
Thank you very much! Now it says Error occurred when executing STMFNet VFI, and Failed to import CuPy ☹ It's not my day 😀
@iozsoo cupy is for Nvidia cards so you can just bypass it. There is experimental Linux ROCm support, but I don’t have an AMD card 🫤
@@NerdyRodent Unfortunately, I have an RTX 3060, and I've just installed CuPy, but it's still failing to import. Same error while executing STMFNet VFI ☹
The 3060 should work fine ^^ You can check if others have any similar issues with their Comfy install at github.com/Fannovel16/ComfyUI-Frame-Interpolation/issues
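If bypassing the node isn't an option, a quick way to check whether CuPy itself is the problem is to import it with the same Python that ComfyUI runs on. A minimal sketch (the exact wheel name depends on your CUDA version; treat the package names in the comments as assumptions, not the node's official fix):
```python
# Run this with the same Python interpreter that ComfyUI uses
# (e.g. the embedded python.exe on a Windows portable install).
try:
    import cupy
    print("CuPy version:", cupy.__version__)
    # Raises a CUDA runtime error if no usable CUDA device/toolkit is found
    print("CUDA devices:", cupy.cuda.runtime.getDeviceCount())
except ImportError as err:
    # Install the wheel matching your CUDA toolkit, for example:
    #   python -m pip install cupy-cuda12x   (or cupy-cuda11x)
    print("CuPy is not importable:", err)
```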
Seems great, but I really struggle to make anything out of it. Tried the settings from the video as well as many others, and I'm getting some abominations: flashing images and nothing like the source pictures. Any universal settings that you could recommend?
OK, I managed to solve it, so I'll post for anyone with similar problems. There was something wrong with my IP adapter and clip_vision models. I just redownloaded them and it works fine now.
@@Herman_HMS Can you please point me to the correct CLIP Vision model to use? I can't find where to download the SD1.5/model.safetensors that Nerdy Rodent uses in the video.
@@giancarloorsi4124 If you find it, just let me know :D I'm waiting for it too
@@giancarloorsi4124 Same, can't find it, but it might be a case of me just looking past it in the references.
I managed to work through errors on about 5 different nodes (updating Comfy in the Manager fixed most of them) but am hung up at the end where the AnimateDiff loader feeds into the KSampler. I have mm_sd_v15_v2.ckpt and sqrt_linear (AnimateDiff) set, which matches the video. The error is:
"Error occurred when executing ADE_AnimateDiffLoaderWithContext: module 'comfy.ops' has no attribute 'Linear'..."
I wonder if the KSampler it's plugged into has already been updated, because I have an additional field called "sampler state" above "add noise" that I don't see in this video.
Remember to check the troubleshooting at the top. 90% of the time you need to update your ComfyUI install or custom nodes
I got the same error. I needed to select Update All, not only ComfyUI, in order for it to work ;)
The workflow is different from the one in the video. Having trouble with Batch Creative Interpolation.
Yes. The video was done in the past. We are now in the future! Woohoo!
@@NerdyRodent Care to offer any advice regarding the new batch creative interpolation? I've come back after several months and now it doesn't work.
@@KhaosDubz This is for the now ancient SD1.5, but works great in ComfyUI!
Thank you! Was waiting for this one. Finally got it working, but still trying to tame the beast... not there yet.
🔥🔥
I wanted this for a long time, and I really hoped it would work this time. I even installed everything fresh two times, but still an error :-( ... Error: Can't find a usable init.tcl in the following directories... Can anyone help? That would be great... After hours of trying I found the problem: I am using matrix as the SD/Comfy installer; another version of Comfy works fine. Thanks for the workflow!
Always love your vids, awesome! Seeing this error, which is similar to other comments but a little diff; everything's been updated:
Error occurred when executing BatchCreativeInterpolation:
'NoneType' object has no attribute 'lower'
You can drop me a dm on patreon for help, plus I’ll also be uploading a new workflow version using sparse rgb too!
@@NerdyRodent Thank you; my own stupid fault. I was not using the correct model for CLIP Vision. I also had to change the VFI; the one in your workflow gave me an error that I was missing a DLL.
thx
epic
Where can I find the workflow?
I always put the links into the video description 😉
@@NerdyRodent The GitHub link? But I can't download that file 🥹🥹
@@NerdyRodent Really sorry, I'm new to this... got it, thanks ☺️
Seems like most of the plugins in the shared workflow are missing or broken, and they are not coming up when searching for custom nodes. When loading the graph, the following node types were not found:
Note Plus (mtb)
IPAdapterModelLoader
ACN_SparseCtrlRGBPreprocessor
ACN_SparseCtrlLoaderAdvanced
VHS_LoadImagesPath
ADE_AnimateDiffUniformContextOptions
ADE_EmptyLatentImageLarge
ACN_AdvancedControlNetApply
VHS_SplitImages
STMFNet VFI
KSampler Adv. (Efficient)
ADE_AnimateDiffLoaderWithContext
VHS_VideoCombine
BatchCreativeInterpolation
You can drop me a dm on patreon if you need help 😀
Every time I try ComfyUI I get a dozen errors that need to be solved.
You can drop me a dm on patreon for help!
@@NerdyRodent I've finally got it working, but the output doesn't look anything like the input images. Do you have a guide that uses all similar images at the input? For example, I want to make a video of a person sitting in a chair and then standing up, rather than having a bunch of random images at the input.
Hi, I truly like your channel, and this time I am trying to execute this workflow but have some issues; perhaps you may know what the problem is?
I kept having some errors and discovered that changing the IP adapter that you use from ip-adapter-plus-sd15 to ip-adapter-sd15 fixes the issue, but I'm still having problems.
After this process the final result is something totally different from the input images, and all the parameters are just like yours in this video.
Do you have any idea what could be the problem I am facing? Because I have no clue at all... I am lost :(
I keep getting errors whatever I do. What the hell does this mean: Error occurred when executing ADE_AnimateDiffLoaderWithContext:
module 'comfy.ops' has no attribute 'Linear'... So not worth it.
Remember to check the troubleshooting guide at the top. 90% of the time you need to update your ComfyUI or custom nodes
I want the workflow link please, because I am still new to ComfyUI
== Links ==
ComfyUI Workflows: github.com/nerdyrodent/AVeryComfyNerd
You can't load as many images as you like; that's not true. A new model is loaded for each image added, and here's the error you get:
WARNING:root:Some parameters are on the meta device device because they were offloaded to the cpu.
loading in lowvram mode 256.0
And nothing happens.
It's a shame; this kind of workflow could have been interesting for long renderings.
You can still do long renders like normal, but over 12 input images uses a lot of VRAM. Hopefully the developer will find a way to allow more than that 😃
Awesome how nicely you talk in your videos. When I use your workflow I always get this message. I tried to figure it out, but I am too new to have any idea of what I am doing.
Error occurred when executing IPAdapterModelLoader:
'NoneType' object has no attribute 'lower'
File "D:\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 593, in load_ipadapter_model
model = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\ComfyUI\comfy\utils.py", line 12, in load_torch_file
if ckpt.lower().endswith(".safetensors"):
^^^^^^^^^^
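That last line of the traceback is the whole story: the loader was handed None instead of a model path, and None has no string methods. A tiny illustration of the same failure (generic Python, not ComfyUI's actual code; the example path at the end is hypothetical):
```python
# Reproduces the error in miniature: the loader receives None instead of
# a model path, so calling a string method on it raises AttributeError.
ckpt_path = None  # what you get when the selected model file can't be found

try:
    ckpt_path.lower().endswith(".safetensors")
except AttributeError as err:
    print(err)  # 'NoneType' object has no attribute 'lower'

# The cure is making the path resolve: re-select the model in the loader
# node, or re-download the file. Illustrative example only:
# ckpt_path = "models/ipadapter/ip-adapter-plus_sd15.safetensors"
```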
Is there something similar for Automatic1111? Cool video btw...
Not that I’ve found as yet, but the hunt continues! Let me know if you find anything