From Stills to Motion - AI Image Interpolation in ComfyUI!

  • Published 10 Feb 2025
  • Steerable Motion is an amazing new custom node that allows you to easily interpolate a batch of images in order to create cool videos. Turn cats into rodents, people into cars or whatever you fancy!
    Image interpolation has never been so easy and fun :)
    == Links ==
    ComfyUI Workflows: github.com/ner...
    == More Stable Diffusion Stuff! ==
    Faster Stable Diffusions with the LCM LoRA - • LCM LoRA = Speedy Stab...
    How do I create an animated SD avatar? - • Create your own animat...
    Installing Anaconda for MS Windows Beginners - • Anaconda - Python Inst...
    Add anything to your AI art in seconds - • 3 Amazing and Fun Upda...
    Video-to-Video AI using AnimateDiff - • How To Use AnimateDiff...
    Consistent Character in ANY pose - • Reposer = Consistent S...
    == Support ==
    Want to support the channel?
    / nerdyrodent

COMMENTS • 237

  • @ashertique4651
    @ashertique4651 11 months ago +3

    Thank you for simplifying this so beautifully. The other workflows I found for Steerable Motion were so complex and gave so many errors, it was hard to know where to start. This just worked perfectly for me.

  • @THISISSMACK
    @THISISSMACK 10 months ago +3

    Exactly the workflow I was looking for! And very well presented, Mister Rodent. Thanks!

  • @07xGH0ST
    @07xGH0ST 1 year ago +6

    I love this channel so much, it's my go to for latest info on AI!

  • @electronicmusicartcollective
    @electronicmusicartcollective 1 year ago +4

    Hello my hero. I've been looking for an AI Morph solution for months and never found it, until now. You have already given me a lot of knowledge about automatic1111 and ComfyUI and I am doing a lot of research myself. Yes, I can't see python console anymore but since ComfyUI everything has become so much easier. Merry and relaxing Christmas from the bottom of my heart. AlbertoSono

  • @latent-broadcasting
    @latent-broadcasting 1 year ago +2

    This is amazing! I'm using images with very little variation for creating consistent animations. I'm loving this workflow. Thanks for the tutorial!

    • @NerdyRodent
      @NerdyRodent  1 year ago

      Great to hear - thanks!

    • @jeanrenaudviers
      @jeanrenaudviers 1 year ago

      @@NerdyRodent Hello! Does it ask to install custom nodes?

    • @NerdyRodent
      @NerdyRodent  1 year ago

      @@jeanrenaudviers yup! You can click “install missing custom nodes” if you’re missing any custom nodes

  • @MrSporf
    @MrSporf 1 year ago +4

    Great video mate! Clear, precise and a free workflow too? What's not to like!

  • @devoiddesign
    @devoiddesign 1 year ago +8

    Thank you for the tutorials!
    I am stuck at the Batch Creative Node...
    It's saying "Error occurred when executing BatchCreativeInterpolation:
    'ControlNet' object has no attribute 'load_device' "
    What did I miss? I have the ControlNet we need installed already and have used ControlNet before.

    • @NerdyRodent
      @NerdyRodent  1 year ago +3

      Never seen that one! I'd work through the troubleshooting section as 90% of the time, errors mean that Comfy needs updating.

    • @Akkhar
      @Akkhar 1 year ago +2

      Same thing is happening to me too!!!

    • @CoolAiAvatars
      @CoolAiAvatars 1 year ago +2

      I can confirm that the error does not appear after updating ComfyUI.

  • @THbeto8a
    @THbeto8a 10 months ago +1

    Awesome video, thanks! I'm stuck with an error on the STMFNet VFI node.
    "Error occurred when executing STMFNet VFI:
    Error(s) in loading state_dict for STMFNet_Model: Missing key(s) in state_dict: "gauss_kernel", "feature_extractor.conv1.resnext_small.conv1.weight"... and a list too long to copy in the comment

  • @SKYGGEMUSIC
    @SKYGGEMUSIC 10 months ago

    Looks great, where is the specific workflow ? There are so many in the link (github) !

    • @SKYGGEMUSIC
      @SKYGGEMUSIC 10 months ago

      BatchImageAnimate.png

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      Second one from the bottom, with the video link that matches this video 😉

  • @sugartivi2126
    @sugartivi2126 9 months ago +1

    Hey again nerdy, this workflow stopped working for me, so I consulted a friend who suggested that I update to the newest ip adapter which has had an update recently (I didn't update comfy itself because I'm using run diffusion which it seems just goes based on the most current comfy version there is anyway). But now, with the new ip adapter in there, and with all my models installed (through the run diffusion manager panel), it can't get past the batch interpolation node. I get this error:
    Error occurred when executing BatchCreativeInterpolation:
    'ModelPatcher' object has no attribute 'get_model_object'
    I've checked that all the models are there, and I also had a friend test out the workflow (with the new ip adapter) while running comfy locally on his machine, and it worked for him! So I'm confused at why the same things wouldn't work while running comfy in the cloud. So strange! any advice is welcome 🙏

  • @Badguy292
    @Badguy292 7 months ago

    Somehow the KSampler keeps giving me errors. I'm quite new to this so I'm not sure what to do from here. "Fix node (recreate.)" did not help.
    On further inspection, it says "Out of Memory: Allocation On Device". I'm running a fairly decent i5 and RTX 3060 Ti, and tried both the Nvidia and CPU modes, same error. Are there some parameters I can tweak to rectify this?
    Edit: Might've been my input pictures being thousands of pixels in resolution.
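A rough sketch of that last point (a hypothetical helper, not from the video): SD1.5-era workflows like this one are happiest around 512-768px, with width and height divisible by 8, so downscaling oversized inputs before loading them often avoids the allocation error.

```python
def safe_size(width, height, max_side=768, multiple=8):
    """Scale (width, height) down so the longer side fits in max_side,
    then snap both sides down to a multiple of 8 (SD1.5 latent grid)."""
    scale = min(1.0, max_side / max(width, height))
    w = int(width * scale) // multiple * multiple
    h = int(height * scale) // multiple * multiple
    return max(w, multiple), max(h, multiple)

# e.g. a 4000x3000 photo comes out as 768x576
```

Resize the source images to the returned size in any editor (or with Pillow) before feeding them to the Load Images node.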

  • @aipamagica1
    @aipamagica1 1 year ago +1

    Mine is stalling at the box right before output with STMFNet VFI - what is this? I can't find a reference to it in the manager. Thank you!

  • @KennethEstanislao
    @KennethEstanislao 1 year ago +5

    Awesome workflow!!!

  • @David_Fernandez
    @David_Fernandez 1 month ago

    Does this workflow still work? As another user mentioned, my "Batch Creative Interpolation" node looks different than the one in the tutorial - there is no "control_net_name" entry to select a safetensors file. I tried with loaders but it seems Batch Creative Interpolation does not have an input for that.

  • @imalexx
    @imalexx 4 days ago

    Hi, thank you for the video. I'm a bit late to the morphing party, so one year later, is there a better way to do this effect? Because even though it's cool and all, it doesn't really use the input images in the morphing; it recreates some images that kind of look alike, but it's not reliable if we want real morphing on our images.

    • @NerdyRodent
      @NerdyRodent  4 days ago

      You can always try this one - ua-cam.com/video/W-QuNjP_08U/v-deo.htmlsi=EzC7Sh06_fiBg1m8 ;)

  • @erikaronson
    @erikaronson 9 days ago

    I gave up on getting this to work many months ago. But I stumbled upon this and it worked so much better than the other workflows I tried. But how do I change the resolution and aspect ratio of the output? I don't necessarily always want 512*512

  • @TheXInvador
    @TheXInvador 7 months ago

    Hey, I wanted to try out your workflow and after installing every model and every node it stops at the "Batch Creative Interpolation" point. It says something along the lines of: ""ipa_weight"] for x in bin.weight_schedule] (...) in apply_ipadapter". Do you know what to do to fix this problem and begin animating images?

    • @NerdyRodent
      @NerdyRodent  7 months ago +1

      The most common issue is trying to use any other ip adapter model, as that will produce an error

    • @TheXInvador
      @TheXInvador 7 months ago

      @@NerdyRodent thanks for the fast reply! It seems to work out now :)

  • @furi216
    @furi216 11 months ago +1

    Hey, I'm trying to run this on Google Colab, however I'm getting multiple issues: I get 8 identical images generated (with a batch size of 8), and the STMFNet VFI is not working - it's trying to download the model from multiple directories but all of them are 404. I found one on Hugging Face, but when I try to run it I get this: Error(s) in loading state_dict for STMFNet_Model:
    Missing key(s) in state_dict: and a bunch of parameters. What could be wrong?

    • @THbeto8a
      @THbeto8a 10 months ago

      I have the same issue

  • @Mckdirt
    @Mckdirt 1 year ago +1

    Hey, great video :)
    I'm trying to find the workflow. I've followed the link and I see loads of workflows but not this one - do you have a direct link to it please? :) Thank you!

  • @nttnrecords5474
    @nttnrecords5474 11 months ago

    Hello, I am looking for a solution for morphing transitions between 2 videos. I need the video to start with the last frame of video 1 and then morph into the first frame of video 2. I am trying to tweak the settings and the graph but I don't seem to find a solution. Also, I am having trouble understanding how to tweak the length of the whole animation

  • @nickmarlow848
    @nickmarlow848 1 year ago +3

    Great tutorial! But I keep getting a KSampler error at the end. It seems my 3090 is running out of memory! Is my card already outdated? Or am I somehow loading in a wrong model?

    • @NerdyRodent
      @NerdyRodent  1 year ago

      I've got an old 3090 as well and not had any issues yet! Perhaps if you're doing more than 500 frames?

    • @NerdyRodent
      @NerdyRodent  1 year ago

      Just seen: if you use >12 images then it will need more than 24 GB, so that's another option

    • @the_one_and_carpool
      @the_one_and_carpool 1 year ago

      @@NerdyRodent So I should not try on a 3060? I get the same error with the original settings

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      AnimateDiff can use a lot of VRAM so I personally suggest 12GB+, though I think people run it with less

    • @the_one_and_carpool
      @the_one_and_carpool 1 year ago

      @@NerdyRodent You are the best, thank you! I needed a picture-to-picture morph, been looking for months, and the one you made was the best I've seen

  • @mac24seven
    @mac24seven 1 year ago +1

    I was going to ask if there was a way to download the workflow but decided to wait until the end of the video.
    Glad I did!
    I've got to try this.

    • @Airbender131090
      @Airbender131090 1 year ago +3

      Aaaand? Where is it? I can't find it via the link - tons of workflows but not this one

  • @cyril1111
    @cyril1111 1 year ago +1

    Great workflow, thanks! Any tips on how to make the video a little smoother, maybe slowing down the interpolations?

    • @NerdyRodent
      @NerdyRodent  1 year ago

      One easy way is to ensure there's less difference between the images. Other than that, it's just a matter of playing with the curves to get what you want.

  • @AndriVision
    @AndriVision 6 months ago

    Thanks! How do I increase the length of the video? It always exports an 8-second video, even though I've increased the max frames, added 8 images, changed the batch interpolation to 32 and keyframes to 4, etc., but it doesn't work.

  • @APerson-j2u
    @APerson-j2u 1 month ago

    Hi, I keep receiving the error: "The size of tensor a (4096) must match the size of tensor b (8192) at non-singleton dimension 0" on the ksampler. It seems to be related to the model, though I tried to download the exact one you are using in your video. I have also tried many other models. All search results say that the issue may be linked to the image size that the sampler was trained on, but changing the images doesn't have any effect on the outcome, which is always the same error. Please reach out! - A

    • @NerdyRodent
      @NerdyRodent  1 month ago

      Errors like that usually mean you're trying to mix various models in ways that don't work - such as an SDXL ControlNet with SD1.5

    • @APerson-j2u
      @APerson-j2u 1 month ago

      @@NerdyRodent Thank you for the speedy response, I really didn't think that you'd respond this quickly!
      I also figured that was the case. I attempted to match all the models with the ones that were in the workflow, but it's possible that I may have accidentally downloaded the wrong one somewhere. I'll let you know if I can get it working after replacing some of the models

  • @Fweshiee
    @Fweshiee 10 months ago

    I have a question. Can you manipulate the aspect ratio of the images and the overall output? For example, I have a few AI Gen Images that were created on Midjourney in 9:16 aspect ratio. Can I input those images to receive the output in the same 9:16 aspect ratio? If not, how do we manipulate that?

  • @ja_nu
    @ja_nu 8 months ago

    Hi Rodent! As soon as I install Steerable, I don't get the workflow at all. How do I get the same one you have?

  • @-Belshazzar-
    @-Belshazzar- 7 months ago

    Hey thank you for this, but i am getting an error "Error occurred when executing BatchCreativeInterpolation: insightface model is required for FaceID models" which is weird because I had ipadapter working fine before (in different workflows) and I have all ipadapter models loaded and ready. Also not sure why in the clipVision loader you have the 1.5 model checkpoint? I tried that too but still getting the same error :(

    • @NerdyRodent
      @NerdyRodent  7 months ago +1

      Personally, I avoid using any insightface things due to the non-commercial license

    • @-Belshazzar-
      @-Belshazzar- 7 months ago

      @NerdyRodent OK thanks, It was there because I downloaded this graph with it there from your github 🤷🏻‍♂️

  • @ian2593
    @ian2593 3 months ago

    It's unclear which versions to get and where to put them because comfyui has a mixture of flux and sd. Can you update your table to be more specific? Thanks. Looks like a great workflow!

    • @NerdyRodent
      @NerdyRodent  3 months ago +1

      Thankfully flux was released well after this, so all is good 😃

  • @CarCrashesBeamngDrive
    @CarCrashesBeamngDrive 9 months ago

    Hello, I have modified the workflow a little and added an upscale image. And I had a question: how to make an upscale using supir? Will this work with video? I don't have much experience.

    • @NerdyRodent
      @NerdyRodent  9 months ago +1

      Yup, you can upscale the output too! Also remember though that supir isn’t for commercial use.

  • @Andro-Meta
    @Andro-Meta 1 year ago

    POM updated this workflow and the node, which kinda breaks this way of doing it. The new version uses sparsectrl rgb. Def worth a look :) And as always, thank you for all that you do! This workflow helped me out a ton.

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      Thanks for the info!

    • @RickGA77213
      @RickGA77213 1 year ago

      @@NerdyRodent Would it be possible to create a new video that uses the new workflow/node? I'm having a hard time figuring out how to get it to work

    • @NerdyRodent
      @NerdyRodent  1 year ago

      I’ll pop a new workflow up on patreon which also uses sparse rgb in a day or so!

  • @OliverMichaelBacon
    @OliverMichaelBacon 7 months ago

    Hi Nerdy Rodent, just discovered your stuff, amazing! I'm getting an error when i try run the model. The error is: Error occurred when executing IPAdapterModelLoader: xxx missing 1 required positional argument: 'ipadapter_file'

  • @ДимаХлыщенко
    @ДимаХлыщенко 6 months ago

    HEEEELP PLEEEASE, I have this error: RuntimeError: The size of tensor a (257) must match the size of tensor b (577) at non-singleton dimension 1. It happens on all workflows with the BATCH CREATIVE INTERPOLATION NODE

  • @34_motiongraphics
    @34_motiongraphics 10 months ago

    Thank you very much for this.
    Do you know the reason why if I run the workflow with 2 images it takes only 5 minutes, but if I run it with 5 images it takes about 3 hours. The time needed seems to be exponential... :/

  • @NoOnexRO
    @NoOnexRO 1 year ago

    Thank you for this amazing tutorial. Unfortunately, my personal laptop was not bought keeping in mind that at some point I will be interested in AI image creation. Now, after installing Stable Diffusion and after that ComfyUI, seeing that generating one single image at 512x512 takes about one hour while others do it in seconds... I kinda want to bang my head against a wall. There is a web version of ComfyUI where I could test some workflows. I tried the one from your description but I don't think I managed to find the right models for all the nodes. I'll try more and hopefully, I'll manage to test it. I know you have a simpler version in your Patreon account. I'll try that too the second my salary hits my bank account. :))) "See you" in a week or so! Once again, thank you for everything you post!

  • @svt8253ai
    @svt8253ai 3 months ago

    What is the name of the workflow you use in this video?

  • @RickGA77213
    @RickGA77213 1 year ago

    What settings would you use if you wanted to keep the output as close as possible to the input image? I've tried to play with the batch creative interpolation across many settings, but no matter what, the image meaningfully changes - just curious if you've been able to accomplish this?

  • @vard_msx
    @vard_msx 1 year ago +1

    Very interesting. Thank you for sharing the workflow and tutorial. Unfortunately something is strange: my "Batch Creative Interpolation" node looks different than the one in the tutorial - there is no "control_net_name" entry to select a safetensors file. I tried with loaders but it seems Batch Creative Interpolation does not have an input for that. On the SteerableMotion github I noticed there is no input for control net in the INPUT_TYPES - did you do your own modification of SteerableMotion? Or is there something I need to do to make it visible?

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      I’ll pop a new workflow up on patreon in a day or so which also uses sparse rgb!

    • @Ramiroy
      @Ramiroy 1 year ago

      I have the same issue

  • @gatoque12
    @gatoque12 1 year ago

    If you wanted to use this to do a video with a script that requires images to change at a specific time of the video, could you use this tool and have each image last your desired length, or can you just not?

  • @DerekShenk
    @DerekShenk 10 months ago

    Comfyui is powerful, but I spend many hours trying to get nodes to work that Manager does not correct. I can't seem to find much info on where or how to install STMFNET. I manually copied the pth file but it must not be right because I keep getting RuntimeError: Error(s) in loading state_dict for STMFNet_Model. As others have indicated, when executing the workflow, it throws an error that the path to STMFNet cannot be found. Manual installation has not worked. Any suggestions?

    • @NerdyRodent
      @NerdyRodent  10 months ago

      Everything worked automatically for me, no manual copies or what not. Could be an out of date install at a guess!

    • @DerekShenk
      @DerekShenk 10 months ago +1

      @@NerdyRodent For anyone struggling with ST-MFNet node, replace it with FILM VFI node and everything works great!

    • @Russtachio
      @Russtachio 10 months ago

      @@DerekShenk Thank you! I was having this error too and it made me give up on this workflow. Gonna go try out FILM VFI right now.

  • @ronnykhalil
    @ronnykhalil 1 year ago +6

    have I told you I loved you lately?

    • @Elwaves2925
      @Elwaves2925 1 year ago +3

      Have I told you there's no one else above you?

    • @Byrdfl3wsNest
      @Byrdfl3wsNest 1 year ago +2

      You fill my heart with gladness?

  • @va4eslavankb
    @va4eslavankb 1 year ago +1

    Thanks for the video! There is one problem: it gives an error message. How do I fix it properly? Also, instead of previewing the image, a black screen is constantly displayed. What to do with it?
    ""Error occurred when executing BatchCreativeInterpolation:
    Error(s) in loading state_dict for ImageProjModelImport:
    size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1024]) from checkpoint, the shape in current model is torch.Size([3072, 1280]).""
    I updated everything, so the problem is not an old version.

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      Hi! Drop me a dm on www.patreon.com/NerdyRodent and I’ll see what I can do 😃

    • @va4eslavankb
      @va4eslavankb 1 year ago

      @@NerdyRodent +

    • @va4eslavankb
      @va4eslavankb 1 year ago

      @@NerdyRodent I wrote to you

    • @carlherner4561
      @carlherner4561 1 year ago +1

      updating the ip-adapter models fixed this for me (make sure you had the sd 1.5 plus one as he has in the video)

    • @va4eslavankb
      @va4eslavankb 1 year ago

      @@carlherner4561 Thanks, I've tried a lot. It's currently working

  • @delfinandres
    @delfinandres 9 months ago

    Hi there, excellent tutorial. I keep finding an error about "Ipa Weight" when running the creative batch node - any ideas? I already updated Comfy and the nodes but the error keeps appearing.

    • @NerdyRodent
      @NerdyRodent  9 months ago

      ipa_weight for ipadapter should be a float value, so I'd start by checking that nothing says "NaN" or has text where a number should be
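That check can be sketched in a few lines (a hypothetical helper, not from the video - the real schedule lives inside the Batch Creative Interpolation node's widgets):

```python
import math

def check_weight_schedule(schedule):
    """Flag entries in an ipa_weight-style schedule that are not
    real, finite numbers (stray text, NaN, inf)."""
    problems = []
    for i, value in enumerate(schedule):
        if isinstance(value, bool) or not isinstance(value, (int, float)):
            problems.append(f"entry {i}: not a number ({value!r})")
        elif math.isnan(value) or math.isinf(value):
            problems.append(f"entry {i}: non-finite value ({value})")
    return problems
```

An empty result means every entry is a usable float; anything else points at the widget value to fix.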

  • @miguelarce6489
    @miguelarce6489 11 months ago

    Great tutorial, thank you so much! Just 1 question: is it possible to do interpolation just by prompting, without images?

    • @NerdyRodent
      @NerdyRodent  11 months ago

      Yup, just use the batch prompt schedule

  • @sugartivi2126
    @sugartivi2126 10 months ago

    Thanks so much for this! I had a lot of errors in the beginning but it's working well now. I had a couple of basic questions: what are the best ways to work with this workflow and change the video resolution for the final output? Same with aspect ratio - is it possible to do different ones (ie 16:9) using this workflow? tysm!!

    • @sugartivi2126
      @sugartivi2126 10 months ago

      i also would love to know how to make the video resolution super duper low because i think it would be so cool to try this with pixel art as the inputs images!

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      Yup - you can pick any size you like with the SD1.5 range! It's best if it matches your image size though :)

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      @@sugartivi2126 to change the output resolution, move the mouse pointer over the resolution and then left click on the width and height to change the values, which are at 512 by default. You can also use the little arrows to increase or decrease the value.

    • @sugartivi2126
      @sugartivi2126 10 months ago

      @@NerdyRodent thank you!!

  • @samuelgomez4101
    @samuelgomez4101 10 months ago

    hello! Where can I find this workflow?? I don't see it in the link provided.

  • @Semi-Cyclops
    @Semi-Cyclops 1 year ago +2

    anyone else getting this error 'ControlNet' object has no attribute 'load_device'?

  • @Nine-Signs
    @Nine-Signs 5 months ago

    Oh thank you, clear and steady. The last tutorial i just watched ran the entire video at 2X speed (at least) AND used keyboard shortcuts AND has an east Asian accent explaining the actions but was wildly out of sync with them. Bloody frustrating, I nearly gave up.

  • @hashir
    @hashir 1 year ago +2

    How did you get your comfy ui to look all colourful?

    • @NerdyRodent
      @NerdyRodent  1 year ago +2

      You can right-click on any node to change its colour 😃

  • @PleaseOpenSourceAI
    @PleaseOpenSourceAI 1 year ago +1

    It looks almost like Deforum extension for A1111 was converted for Comfy 👍

  • @davewills6121
    @davewills6121 1 year ago

    Question!! My son is doing a project on "The Mesolithic period", and he wants to use some examples of AI art for his talk on the subject. The problem is, all my attempts using ComfyUI are a mutated group of cavemen. He's nervous as it is, and he said my AI art will make him the laughing stock. So, are there any simple PROMPTS that I could use to produce good results? Cheers

  • @yoavPK1
    @yoavPK1 3 months ago

    I downloaded the ZIP folder, but it's not really clear to me which file is the workflow itself.

    • @NerdyRodent
      @NerdyRodent  3 months ago

      As a first time user of ComfyUI, I would suggest you start off with the more basic workflows first before getting into the more advanced things

  • @iresolvers
    @iresolvers 2 months ago

    can't find this workflow on your site!

  • @suganesan1
    @suganesan1 11 months ago

    So, how long does it take you to complete? I have a 4070 Super; mine gets stuck at the last KSampler for a while and nothing changes. In cmd it shows "loading 4 new models" but nothing loads

    • @NerdyRodent
      @NerdyRodent  11 months ago

      Couple of minutes, depending on length

  • @jbiziou
    @jbiziou 1 year ago +1

    Great video :) When I loaded the workflow there was no graph image in the preview, and running the prompt it keeps stopping at the Batch Creative Interpolation node. Any thoughts? I tried loading all missing nodes and restarting. Thanks again, and great videos!

    • @NerdyRodent
      @NerdyRodent  1 year ago

      The graph should appear fairly quickly, like when I bypass the KSampler nodes, etc. If it's not outputting a graph, I can only guess that it is outputting an error of some sort? If so, the node developer may be able to provide some sort of clue, as I've not had that happen as yet!

    • @jbiziou
      @jbiziou 1 year ago

      @@NerdyRodent Thanks for the reply, I shall keep investigating :) May try a fresh install and run it all again. Cheers.

    • @jbiziou
      @jbiziou 1 year ago

      So strange. I totally did a clean reinstall of ComfyUI, all the dependencies, models, missing nodes, and still no graph and getting stuck at

    • @jbiziou
      @jbiziou 1 year ago

      Ahhh, the "update everything" worked!! I got past the spot and got the graph :) !! Then ran out of memory, hah. Progress! :)

    • @jbiziou
      @jbiziou 1 year ago

      Error occurred when executing STMFNet VFI:
      ================================================================
      Failed to import CuPy.

  • @francaleu7777
    @francaleu7777 1 year ago +2

    great.. thank you

  • @Epicfuzz
    @Epicfuzz 1 year ago

    I keep getting this error when it runs through the batch interpolation node "BatchCreativeInterpolationNode.combined_function() got an unexpected keyword argument 'cn_start_at'" Anyone have any thoughts??

    • @NerdyRodent
      @NerdyRodent  1 year ago

      I’ll drop an updated version which also uses sparse control on patreon in a day or two 😀

  • @tonon_AI
    @tonon_AI 10 months ago

    getting this error: Loop (437,506) with broadcast (465) - not submitting workflow

    • @NerdyRodent
      @NerdyRodent  10 months ago +1

      Try updating ComfyUI + all custom nodes to their current release

  • @alonsogarrote8898
    @alonsogarrote8898 11 months ago

    So, I watched the video and see this is for SD 1.5? Can you clarify....

  • @yen.p2044
    @yen.p2044 11 months ago

    Hi, I am fascinated by this workflow! Can I use it on m1 MacBook Pro? Every time I try, ComfyUI disconnects in the AnimateDiff process.

    • @NerdyRodent
      @NerdyRodent  11 months ago

      I don't have a MacBook, I'm afraid :/

  • @IdgrafixCh
    @IdgrafixCh 1 year ago

    Hi there, thanks a lot for your great tutorials! I think there must have been an update that causes the following error ("Error occurred when executing BatchCreativeInterpolation:
    BatchCreativeInterpolationNode.combined_function() got an unexpected keyword argument 'positive'"). The "Batch Creative Interpolation" node seems a bit messed up after the update.

    • @NerdyRodent
      @NerdyRodent  1 year ago +2

      Make sure to update ComfyUI as well as your custom nodes!

    • @santobosco5008
      @santobosco5008 1 year ago +1

      Me too! There are no updates visible - did you work it out?

    • @IdgrafixCh
      @IdgrafixCh 1 year ago

      @@santobosco5008 It was working fine until a recent update which seems to have messed up the "Batch Creative Interpolation" node.

    • @NerdyRodent
      @NerdyRodent  1 year ago

      I’ll upload a new workflow to patreon in a day or so which also uses sparse control!

  • @elenabrandy
    @elenabrandy 1 year ago

    😍 Thank you so much! It is an amazing tutorial, and thank you for the workflow!

  • @bottonegiulio
    @bottonegiulio 1 year ago

    Very interesting, but I can not find the workflow using the link above, any clue?

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      For support, see www.patreon.com/NerdyRodent 😀

  • @mao_miror
    @mao_miror 1 year ago

    hi i have a problem: Error occurred when executing KSampler Adv. (Efficient):
    'NoneType' object has no attribute 'to'

    • @NerdyRodent
      @NerdyRodent  1 year ago

      NoneType means nothing is being used (hence it not having any attributes), so make sure you’re loading the correct models and that none of the model files are corrupted

    • @mao_miror
      @mao_miror 1 year ago

      @@NerdyRodent Hello thanks, I got it working but the picture is totally blurry

    • @NerdyRodent
      @NerdyRodent  1 year ago

      I’ll drop a new version which also uses sparse control in patreon in a day or so!

  • @davewaldmancreative
    @davewaldmancreative 1 year ago

    nerdy. how do you make the time between transitions longer?

  • @chronoxofficial
    @chronoxofficial 1 year ago

    Looks amazing! Unfortunately I'm getting an error regarding the IP adapterModelLoader: 'NoneType' object has no attribute 'lower' And a bunch of lines in a file called 'execution.py' that are faulty. Any idea? I updated to the latest version

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      For support, see www.patreon.com/NerdyRodent 😃

    • @chronoxofficial
      @chronoxofficial 1 year ago

      Done 😁@@NerdyRodent

    • @asishkumarpadhy3156
      @asishkumarpadhy3156 1 year ago +1

      Hi, I am having the same problem and I can't get your solution - please let me know as well 🙏!

  • @swannschilling474
    @swannschilling474 1 year ago +3

    Keep em coming!!! 😁 Great one again!! 🤩

  • @mishash
    @mishash 1 year ago

    Hi! This is a two-part question. Can I use a LoRA in the prompt? Like "0" :"". Probably not... So how can I use a LoRA in this workflow? And then, second question - can I use multiple LoRAs with a connection to the timeline "0", "4", "20", "36" etc. in your prompt? Probably not either; then maybe just a separate LoRA for each image? (And if that's possible, then probably a different model for each image is possible too?) Thank you

    • @NerdyRodent
      @NerdyRodent  1 year ago

      You can load as many LoRAs as you like - just use the LoRA loaders

    • @mishash
      @mishash 1 year ago

      @@NerdyRodent So applying a different LoRA to each specific image is not possible? If I have 2 input images, Julia Roberts and Tom Cruise, the faces I'm getting in the output video are neither of them - the model changes them both to something else. And so I have to apply LoRAs.

    • @mishash
      @mishash 1 year ago

      @@NerdyRodent Btw, I didn't know you could manually add a LoRA loader to any random workflow... for some reason I always thought that this would require workflow changes at the code level... thanks for the tip! :)

  • @aguyandhisguitars435
    @aguyandhisguitars435 1 year ago

    Out of all your workflows which would be the best first one for a beginner in comfyUI to get started with using? I’m also using an amd 7900xt.

    • @NerdyRodent
      @NerdyRodent  1 year ago +2

      I would just use the default workflow to start with!

  • @tstone9151
    @tstone9151 1 year ago

    Does this do a good job interpolating images of the same scene/shot? I just want something to animate my images for a movie

  • @KittisupTungyasub
    @KittisupTungyasub 10 months ago

    I set closed loop to true, but my video still doesn't loop.. How do I do that?

  • @mehradbayat9665
    @mehradbayat9665 1 year ago

    Which one of your workflows is the one you presented here? You have a million different .png files...

  • @haydenmartin5866
    @haydenmartin5866 1 year ago

    Error(s) in loading state_dict for ResamplerImport:
    size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]).
    No idea what this means or how to fix it?

    • @NerdyRodent
      @NerdyRodent  1 year ago

      As a guess, it could be that you're trying to use an SDXL model?

    • @haydenmartin5866
      @haydenmartin5866 1 year ago

      It's realisticvision (1.5). I think it's to do with the ControlNet in the batch creative interpolation. Trying to download the CN that shows up when loading your workflow

    • @spinstate1005
      @spinstate1005 8 months ago

      I had this exact error. My fix was to use the ViT-H CLIP Vision model, as that was the one used in the original Banodoco steerable motion workflow.

  • @TheCcamera
    @TheCcamera 1 year ago

    Amazing workflow! Which CLIP Vision model gets used here? I also get an error that it failed to import CuPy - any hints? Or is it because of my 8 GB GPU?

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      It's the usual SD1.5 CLIP Vision model. For CuPy, it may be that you don't have an Nvidia card, in which case you can just bypass that node. For more help drop me a dm on www.patreon.com/NerdyRodent 😀

    • @TheCcamera
      @TheCcamera 1 year ago

      @@NerdyRodent Thank you! Works without the frame interpolation; strange, as I do have an NVIDIA card (3070 mobile)

    • @TheCcamera
      @TheCcamera 1 year ago +1

      For the record: it all works like a charm. Tested it on a 4090 now; my 3070 laptop GPU seems to be too weak for this workflow

    • @eyevenear
      @eyevenear 1 year ago

      @@NerdyRodent Can you please send a direct link to the CLIP model to make this work? I'm literally on the verge of losing it. Everything else works, it's just that one I'm missing. Thank you!

    • @eyevenear
      @eyevenear 1 year ago

      @@TheCcamera Can you please send a direct link to the CLIP model to make this work? I'm literally on the verge of losing it. Everything else works, it's just that one I'm missing. Thank you!

  • @b0b6O6
    @b0b6O6 1 year ago +3

    cool stuff 😊

  • @mikelaing8001
    @mikelaing8001 1 year ago +1

    Do you need to install CuPy separately?

    • @clenzen9930
      @clenzen9930 1 year ago +1

      I'm stuck on CuPy not installing too. I've added CUDA, cuTENSOR & NCCL. cuDNN wants credentials. Feels like I'm going down the wrong path. ComfyUI has its own Python / (conda?) environment, so I don't know.

    • @mikelaing4859
      @mikelaing4859 1 year ago

      @@clenzen9930 I've not tried installing it yet. Was looking at it earlier; I think I need to install the CUDA toolkit, then CuPy, but was gonna see if anyone had some wisdom to share first.

    • @TheUrbanPassenger
      @TheUrbanPassenger 1 year ago

      @@clenzen9930 I also got these problems. It was because of the framerate increase section (STMFNet VFI). I just bypassed it by connecting the KSampler directly to the saving section ("Video Combine") instead of going from KSampler to "Split Image Batch". Worked for me. Got good results though.

    • @jbiziou
      @jbiziou 1 year ago

      I got stuck at the same spot here: "Error occurred when executing STMFNet VFI: Failed to import CuPy". I'll try your hack of bypassing the Split Image Batch, fingers crossed, but I'd love to know how to get it to work like Nerdy has it in his video :)
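
    For reference, the "Failed to import CuPy" error from STMFNet VFI just means Python cannot import the cupy package from ComfyUI's own environment. This small sketch (the function names here are illustrative, not part of the node) mirrors the check and suggests the usual fix: installing a CuPy wheel matched to your CUDA version into the same interpreter that ComfyUI runs, or bypassing the node as described above.

    ```python
    import importlib.util
    import sys

    def cupy_available() -> bool:
        """True if the 'cupy' package can be found, which STMFNet VFI needs."""
        return importlib.util.find_spec("cupy") is not None

    def suggest_fix() -> str:
        """Return a hint; the wheel name must match your local CUDA version."""
        if cupy_available():
            return "CuPy found - STMFNet VFI should be able to import it."
        # Install into the *same* interpreter ComfyUI uses, e.g. (assuming CUDA 12):
        return f"{sys.executable} -m pip install cupy-cuda12x"

    print(suggest_fix())
    ```

    Note that `sys.executable` only points at ComfyUI's Python when this is run from inside that environment (e.g. the portable build's embedded interpreter), which is exactly why a system-wide `pip install cupy` often doesn't help.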

  • @KingZero69
    @KingZero69 1 year ago +4

    bro… those girls in TANK TOPS with RAT HEADS are freaking HORRIFYING… 😂

  • @immeb71
    @immeb71 1 year ago

    Thanks for the lesson, but I can't get the workflow to work.
    There's an error in the sampler, and replacing it with a standard one does not help:
    Error occurred when executing KSampler Adv. (Efficient):
    Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      Remember to check the troubleshooting at the top. 90% of the time you need to update your ComfyUI install or custom nodes

  • @ProjectAtlantis-b6d
    @ProjectAtlantis-b6d 10 months ago

    SyntaxError: Unexpected non-whitespace character after JSON at position 4 (line 1 column 5). It's a PNG, so I'm having a hard time opening a direct JSON file to fix it; even if I save the workflow as JSON, it just opens the image.

    • @polsemad1
      @polsemad1 10 months ago

      Did you solve this?

    • @PandAttack80
      @PandAttack80 10 months ago

      I have the same problem, did you solve it?
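
    For anyone hitting this: ComfyUI workflow PNGs carry the graph as JSON inside the image's text metadata (normally under the keys "workflow" and "prompt"), so the JSON can be recovered without ComfyUI at all. The sketch below uses only the standard library and assumes the metadata sits in an uncompressed tEXt chunk, which is how ComfyUI usually writes it; it is a recovery aid, not part of any official tool.

    ```python
    import json
    import struct

    PNG_SIG = b"\x89PNG\r\n\x1a\n"

    def read_text_chunks(png_path: str) -> dict:
        """Collect tEXt chunks (keyword -> value) from a PNG file."""
        out = {}
        with open(png_path, "rb") as f:
            if f.read(8) != PNG_SIG:
                raise ValueError("Not a PNG file")
            while True:
                head = f.read(8)
                if len(head) < 8:
                    break
                # Each chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC
                length, ctype = struct.unpack(">I4s", head)
                data = f.read(length)
                f.read(4)  # skip CRC
                if ctype == b"tEXt":
                    key, _, value = data.partition(b"\x00")
                    out[key.decode("latin-1")] = value.decode("latin-1")
                if ctype == b"IEND":
                    break
        return out

    def extract_workflow(png_path: str) -> dict:
        """Parse the ComfyUI graph JSON stored under 'workflow' (or 'prompt')."""
        chunks = read_text_chunks(png_path)
        raw = chunks.get("workflow") or chunks.get("prompt")
        if raw is None:
            raise ValueError("No 'workflow' or 'prompt' text chunk in this PNG")
        return json.loads(raw)
    ```

    Saving the returned dict with json.dump gives a .json file you can edit by hand and drag back into ComfyUI.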

  • @GlitchGorillaFilms
    @GlitchGorillaFilms 1 year ago

    This is a great tutorial, thank you :) But my question is: is it possible to have this video as a loop? I mean getting a smooth transition from the last image back to the first.

    • @BobDoyleMedia
      @BobDoyleMedia 1 year ago +1

      You can. Select "closed_loop" to be true in the AnimateDiff group's "Uniform Context Options" in the upper left.

  • @sirolim_
    @sirolim_ 1 year ago

    Where do I place the ControlNet model in the Batch Creative Interpolation node?

    • @NerdyRodent
      @NerdyRodent  1 year ago

      You can drop me a dm on patreon if you need more help!

  • @iozsoo
    @iozsoo 1 year ago

    I'm getting an error regarding the IPAdapterModelLoader: 'NoneType' object has no attribute 'lower' :(

    • @NerdyRodent
      @NerdyRodent  1 year ago

      NoneType = the node hasn't been able to load the model, so make sure your download isn't corrupted and you're using the correct model!

    • @iozsoo
      @iozsoo 1 year ago

      Thank you very much! Now it says "Error occurred when executing STMFNet VFI" and "Failed to import CuPy" ☹ It's not my day 😀

    • @NerdyRodent
      @NerdyRodent  1 year ago

      @iozsoo CuPy is for Nvidia cards, so you can just bypass it. There is experimental Linux ROCm support, but I don't have an AMD card 🫤

    • @iozsoo
      @iozsoo 1 year ago

      @@NerdyRodent Unfortunately, I have an RTX 3060, and I've just installed CuPy, but it's still failing to import it. Same error while executing STMFNet VFI ☹

    • @NerdyRodent
      @NerdyRodent  1 year ago

      The 3060 should work fine ^^ You can check if others have any similar issues with their Comfy install at github.com/Fannovel16/ComfyUI-Frame-Interpolation/issues

  • @Herman_HMS
    @Herman_HMS 1 year ago

    Seems great, but I really struggle to make anything out of it. I tried the settings from the video as well as many others, and I'm getting some abominations, flashing images and nothing like the source pictures. Any universal settings that you could recommend?

    • @Herman_HMS
      @Herman_HMS 1 year ago +2

      OK, I managed to solve it, so I'll post for anyone with similar problems. There was something wrong with my IP adapter and clip_vision models. I just redownloaded them and it works fine now.

    • @giancarloorsi4124
      @giancarloorsi4124 1 year ago

      @@Herman_HMS Can you please point me to the correct CLIP Vision model to use? I can't find how to download the SD1.5/model.safetensors that nerdy rodent uses in the video

    • @alishkaBey
      @alishkaBey 1 year ago

      @@giancarloorsi4124 If you find it, just let me know :d I'm waiting for it too

    • @DraceAI
      @DraceAI 1 year ago

      @@giancarloorsi4124 Same, can't find it, but it might be a case of me just looking past it in the references.

  • @carsoncarr-busyframes619
    @carsoncarr-busyframes619 1 year ago +3

    I managed to work through errors on about 5 different nodes (updating Comfy in the manager fixed most of them) but am hung up at the end, where the AnimateDiff loader feeds into the KSampler. I have mm_sd_v15_v2.ckpt and sqrt_linear (AnimateDiff) set, which matches the video. The error is:
    "Error occurred when executing ADE_AnimateDiffLoaderWithContext: module 'comfy.ops' has no attribute 'Linear'..."
    I wonder if the KSampler it's plugged into has already been updated, because I have an additional field called "sampler state" above "add noise" that I don't see in this video.

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      Remember to check the troubleshooting at the top. 90% of the time you need to update your ComfyUI install or custom nodes

    • @CoolAiAvatars
      @CoolAiAvatars 1 year ago +2

      I got the same error. I needed to select "Update All", not only ComfyUI, in order for it to work ;)

  • @yukka69
    @yukka69 8 months ago

    The workflow is different from the one in the video; having trouble with Batch Creative Interpolation

    • @NerdyRodent
      @NerdyRodent  8 months ago

      Yes. The video was done in the past. We are now in the future! Woohoo!

    • @KhaosDubz
      @KhaosDubz 3 months ago

      @@NerdyRodent Care to offer any advice regarding the new Batch Creative Interpolation? I've come back after several months and now it doesn't work.

    • @NerdyRodent
      @NerdyRodent  3 months ago

      @@KhaosDubz This is for the now ancient SD1.5, but works great in ComfyUI!

  • @karen-7057
    @karen-7057 1 year ago

    thank you! was waiting for this one. finally got it working but still trying to tame the beast .... not there yet

  • @LIMBICNATIONARTIST
    @LIMBICNATIONARTIST 1 year ago +2

    🔥🔥

  • @Dr.R.
    @Dr.R. 1 year ago

    I wanted this for a long time, and I really hoped it would work this time. I even installed everything fresh twice, but still got an error: "Can't find a usable init.tcl in the following directories..." Can anyone help? That would be great. ...After hours of trying I found the problem: I was using Matrix as the SD/Comfy installer; another version of Comfy works fine. Thanks for the workflow!

  • @EmmaFitzgerald-dp4re
    @EmmaFitzgerald-dp4re 1 year ago

    Always love your vids, awesome! Seeing this error, which is similar to other comments but a little different; everything's been updated:
    Error occurred when executing BatchCreativeInterpolation:
    'NoneType' object has no attribute 'lower'

    • @NerdyRodent
      @NerdyRodent  1 year ago

      You can drop me a dm on patreon for help, plus I'll also be uploading a new workflow version using sparse rgb too!

    • @EmmaFitzgerald-dp4re
      @EmmaFitzgerald-dp4re 1 year ago

      @@NerdyRodent Thank you, my own stupid fault. I was not using the correct model for CLIP Vision. I also had to change the VFI; the one in your workflow gave me an error that I was missing a dll

  • @ogrekogrek
    @ogrekogrek 1 year ago +1

    thx

  • @Artishtic
    @Artishtic 1 year ago +2

    epic

  • @deepuvinil4565
    @deepuvinil4565 1 year ago

    Where can I find the workflow?

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      I always put the links into the video description 😉

    • @deepuvinil4565
      @deepuvinil4565 1 year ago

      @@NerdyRodent The github link? But I can't download that file 🥹🥹

    • @deepuvinil4565
      @deepuvinil4565 1 year ago

      @@NerdyRodent Really sorry, I'm new to this... got it, thanks ☺️

  • @keystothebox
    @keystothebox 1 year ago

    Seems like most of the plugins in the shared workflow are missing or broken, and they are not coming up when searching for custom nodes. When loading the graph, the following node types were not found:
    Note Plus (mtb)
    IPAdapterModelLoader
    ACN_SparseCtrlRGBPreprocessor
    ACN_SparseCtrlLoaderAdvanced
    VHS_LoadImagesPath
    ADE_AnimateDiffUniformContextOptions
    ADE_EmptyLatentImageLarge
    ACN_AdvancedControlNetApply
    VHS_SplitImages
    STMFNet VFI
    KSampler Adv. (Efficient)
    ADE_AnimateDiffLoaderWithContext
    VHS_VideoCombine
    BatchCreativeInterpolation

    • @NerdyRodent
      @NerdyRodent  1 year ago

      You can drop me a dm on patreon if you need help 😀

  • @Mediiiicc
    @Mediiiicc 1 year ago

    Every time I try ComfyUI I get a dozen errors that need to be solved.

    • @NerdyRodent
      @NerdyRodent  1 year ago

      You can drop me a dm on patreon for help!

    • @Mediiiicc
      @Mediiiicc 1 year ago

      @@NerdyRodent I've got it working finally, but the output doesn't look anything like the input images. Do you have a guide that uses all similar images at the input? For example, I want to make a video of a person sitting in a chair and then standing up, rather than having a bunch of random images at the input.

  • @엠케이-p3p
    @엠케이-p3p 1 year ago

    Hi, I truly like your channel, and this time I am trying to execute this workflow but have some issues; perhaps you may know what the problem is?
    I kept having some errors and discovered that changing the IP adapter you use from ip-adapter-plus-sd15 to ip-adapter-sd15 fixes them, but I'm still having problems.
    After this process the final result is something totally different from the input images, and all the parameters are just like yours in this video.
    Do you have any idea what the problem could be? Because I have no clue at all... I am lost :(

  • @LLCinema22
    @LLCinema22 1 year ago

    Keep getting errors whatever I do. What does this mean? "Error occurred when executing ADE_AnimateDiffLoaderWithContext:
    module 'comfy.ops' has no attribute 'Linear'"... so not worth it

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      Remember to check the troubleshooting guide at the top. 90% of the time you need to update your ComfyUI or custom nodes

  • @amrsabry2402
    @amrsabry2402 1 year ago

    I want the workflow link please, because I am still new to ComfyUI

    • @NerdyRodent
      @NerdyRodent  1 year ago +1

      == Links ==
      ComfyUI Workflows: github.com/nerdyrodent/AVeryComfyNerd

  • @damienprod8934
    @damienprod8934 1 year ago +3

    "You can load as many images as you like" - that's not true. A new model is loaded for each image added, and here's the error you get:
    WARNING:root:Some parameters are on the meta device device because they were offloaded to the cpu.
    loading in lowvram mode 256.0
    And nothing happens.
    It's a shame; this kind of workflow could have been interesting for long renderings.

    • @NerdyRodent
      @NerdyRodent  1 year ago +3

      You can still do long renders like normal, but over 12 input images uses a lot of VRAM. Hopefully the developer will find a way to allow more than that 😃

  • @NekoEwerth
    @NekoEwerth 1 year ago

    Awesome how nicely you talk in your videos. When I use your workflow I always get this message. I tried to figure it out, but I am too new to have any idea of what I am doing.
    Error occurred when executing IPAdapterModelLoader:
    'NoneType' object has no attribute 'lower'
    File "D:\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "D:\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "D:\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "D:\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 593, in load_ipadapter_model
    model = comfy.utils.load_torch_file(ckpt_path, safe_load=True)
    File "D:\ComfyUI\comfy\utils.py", line 12, in load_torch_file
    if ckpt.lower().endswith(".safetensors"):
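
    This 'NoneType' traceback pattern comes up repeatedly in the thread: the checkpoint path resolves to None (no valid model file selected or found) before .lower() is called on it. The toy sketch below reproduces the failure and shows the guard that would turn it into a clear message; the function and directory names are made up for illustration and are not ComfyUI's actual code.

    ```python
    import os
    from typing import Optional

    def resolve_model_path(models_dir: str, name: Optional[str]) -> Optional[str]:
        """Mimic a loader looking up a selected file; None if nothing matches."""
        if name is None:
            return None
        path = os.path.join(models_dir, name)
        return path if os.path.exists(path) else None

    def load_torch_file_unsafe(ckpt):
        # This is where the pasted error comes from: ckpt is None here.
        return ckpt.lower().endswith(".safetensors")

    def load_torch_file_safe(ckpt):
        if ckpt is None:
            raise FileNotFoundError(
                "Model file not selected or missing - re-download it or pick the correct model"
            )
        return ckpt.lower().endswith(".safetensors")

    try:
        load_torch_file_unsafe(resolve_model_path("models/ipadapter", "missing.safetensors"))
    except AttributeError as e:
        print(e)  # 'NoneType' object has no attribute 'lower'
    ```

    In other words, the fix is never in the code: it's making sure the IPAdapter (or CLIP Vision) model file actually exists, isn't a corrupted download, and is the one selected in the loader node.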

  • @fpvx3922
    @fpvx3922 1 year ago

    Is there something similar for Automatic1111? Cool video btw...

    • @NerdyRodent
      @NerdyRodent  1 year ago +3

      Not that I've found as yet, but the hunt continues! Let me know if you find anything