ComfyUI: Master Morphing Videos with Plug-and-Play AnimateDiff Workflow (Tutorial)

  • Published 8 Jan 2025

COMMENTS • 152

  • @David_Fernandez
    @David_Fernandez 14 days ago +1

    Must say: it's such a pleasure to listen to your calm voice. Plus, all the info is very much appreciated. Thanks!

  • @MindsMystery24
    @MindsMystery24 1 month ago

    At first I didn't understand why you made this part: 06:35 Supercharge the Workflow. But after getting a MemoryError, now I know what to do. We need more thinkers like you.

  • @Hebrideanphotography
    @Hebrideanphotography 6 months ago +8

    People like you are so important. Too many gatekeepers out there. ❤

  • @AI.Studios.4U
    @AI.Studios.4U 2 months ago +1

    Thanks to you I have created my first video using ComfyUI! Your video is priceless!

  • @ZergRadio
    @ZergRadio 1 month ago

    I really thought this was just gonna be junk like so many other "Video/animation" ones I already tried.
    And I am very impressed by it, simply because it worked.
    And my video came out really nice.
    Subscribed!

  • @jdsguam
    @jdsguam 5 months ago +1

    I've been having fun with this workflow for a few days already. It is amazing what can be done on a laptop in 2024.

  • @ted328
    @ted328 8 months ago +2

    Literally the answer to my prayers, have been looking for exactly this for MONTHS

  • @1010mrsB
    @1010mrsB 2 months ago

    You're amazing!! I was lost for so long, and when I found this video, I was found.

  • @alessandrogiusti1949
    @alessandrogiusti1949 8 months ago +1

    After following many tutorials, you are the only one getting me to the results in a very clear way. Thank you so much!

  • @RokSlana
    @RokSlana 3 months ago

    This looks awesome. I gotta give it a try asap. Thanks for sharing.

  • @EternalAI-v9b
    @EternalAI-v9b 2 months ago

    Hello, how did you make that effect with your eyes at 0:20 please?

  • @stinoway
    @stinoway 4 months ago

    Great video!! Hope you'll drop more knowledge in the future!

  • @dmitrykonovalov9366
    @dmitrykonovalov9366 12 days ago

    Nice! Why did you stop making more tutorials?

  • @SylvainSangla
    @SylvainSangla 8 months ago

    Thanks a lot for sharing this, a very precise and complete guide! 🥰
    Cheers from France!

  • @andrruta868
    @andrruta868 6 months ago

    The transitions between images are too fast for me, and I couldn't find where to adjust the transition time. I would be grateful for advice.

  • @Ai_mayyit
    @Ai_mayyit 8 months ago

    Error occurred when executing VHS_LoadVideoPath:
    module 'cv2' has no attribute 'VideoCapture'
    your video timestamp: 04:20
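
    A minimal diagnostic sketch for this error, assuming it comes from a broken or shadowed OpenCV install (a common cause of cv2 missing VideoCapture); the snippet is illustrative, not from the video:

    import cv2
    # Print which cv2 module Python actually imported; a stray file or
    # folder named cv2 can shadow the real OpenCV package.
    print(cv2.__file__)
    # False here means the imported module is not a full OpenCV build;
    # reinstalling opencv-python in ComfyUI's environment usually fixes it.
    print(hasattr(cv2, "VideoCapture"))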

  • @retrotiker
    @retrotiker 4 months ago

    Great tutorial! Your content is super helpful. Just wondering, where are you these days? We'd love to see more Comfy UI tutorials from you!

  • @paluruba
    @paluruba 8 months ago +2

    Thank you for this video! Any idea what to do when the videos are blurry?

  • @SAMEGAMAN
    @SAMEGAMAN 2 months ago

    Thank you for this video❤❤

  • @gorkemtekdal
    @gorkemtekdal 8 months ago +1

    Great video!
    I want to ask: can we use an init image for this workflow like we do in Deforum?
    I need the video to start with a specific image on the first frame, and then it should change through the prompts.
    Do you know how this is possible in ComfyUI / AnimateDiff?
    Thank you!

    • @abeatech
      @abeatech  8 months ago +1

      I haven't personally used Deforum, but it sounds like it's the same concept. This workflow uses 4 init images at different points during the 96 frames to guide the animation. The IPAdapter and ControlNet nodes do most of the heavy lifting, so prompts aren't really needed, but I've used them to fine-tune outputs. I'd encourage you to try it out and see if it gives you the results you're looking for.
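
      A hypothetical sketch of the spacing described above; the workflow itself schedules this through its fade-mask keyframes, so the numbers below only illustrate the 4-images-over-96-frames idea:

      # Four init images anchor a 96-frame animation at evenly spaced
      # points; the frames in between are generated by AnimateDiff.
      total_frames = 96
      num_images = 4
      step = total_frames // num_images  # 24 frames per image
      anchors = [i * step for i in range(num_images)]
      print(anchors)  # [0, 24, 48, 72]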

  • @Danaeprojectful
    @Danaeprojectful 2 months ago

    Hi, I would like the first and last frames to exactly match the images I uploaded, without being reinterpreted. Is this possible? If so, how should I do it? Thanks

  • @TechWithHabbz
    @TechWithHabbz 8 months ago +1

    You're about to blow up, bro. Keep it going. Btw, I was subscriber #48 😁

    • @abeatech
      @abeatech  8 months ago

      Thanks for the sub!

  • @Treybradley
    @Treybradley 2 days ago

    Amazing work. This was going great for me, but now I'm randomly seeing an error message: "KSampler: index is out of bounds for dimension with size 0". It was working initially for some time; the error came after updating ComfyUI Manager. Now I'm re-downloading all the files/models to try again, but it's tricky :/ Any advice?

  • @hoptoad
    @hoptoad 6 months ago

    This is great!
    Do you know if there is a way to "batch" many variations, where you give each of the four guidance images a folder, and it runs through and does a new animation with different source images multiple times?

  • @AlderoshActual-z3k
    @AlderoshActual-z3k 5 months ago

    Awesome tutorial! I've been getting used to the ComfyUI workflow...love the batch image generation!! However, do you have any tips on how to make LONGER text to video animations? I've seen several YT channels that have very long format morphing videos...well over an hour. I'd like to create videos that average around 1 minute, but can't sort out how to do it!

  • @yannickweineck4302
    @yannickweineck4302 1 month ago

    In my case it doesn't really use the images I feed it. I've tried to find the settings that would result in almost no morph, with all 4 original images basically standing still, but I can't seem to find them.

  • @juliensylvestreeee
    @juliensylvestreeee 4 months ago

    Nice tutorial, even if it was very hard for me to set this up. Which SD 1.5 model do you recommend installing? I just want to morph input images, with a very realistic render. If someone could help :3

  • @SF8008
    @SF8008 8 months ago +1

    Amazing! Thanks a lot for this!!!
    btw - which nodes do I need to disable in order to get back to the original flow? (the one that is based only on input images and not on prompts)

  • @petertucker455
    @petertucker455 7 months ago

    Hi Abe, I found the final animation output is wildly different in style & aesthetic from the initial input images. Any tips for retaining overall style? Also have you got this workflow to work with SDXL?

  • @mcqx4
    @mcqx4 8 months ago +1

    Nice tutorial, thanks!

    • @abeatech
      @abeatech  8 months ago +1

      Glad it was helpful!

  • @user-yo8pw8wd3z
    @user-yo8pw8wd3z 7 months ago

    Good video. Where can I find the link to the additional video masks? I don't see it in the description.

  • @ComfyCott
    @ComfyCott 8 months ago

    Dude I loved this video! You explain things very well and I love how you explain in detail as you build out strings of nodes! subbed!

  • @chinyewcomics
    @chinyewcomics 7 months ago +1

    Hi, does anybody know how to add more images to create a longer video?

  • @Injaznito1
    @Injaznito1 7 months ago

    NICE! I tried it and it works great. Thanks for the tut! Question though: I tried changing the 96 to a larger number so the changes between pictures take a bit longer, but I don't see any difference. Is there something I'm missing? Thanks!

  • @GNOM_
    @GNOM_ 4 months ago +1

    Hello! Big thanks to you, bro. I learned how to make different animations from your video. I watched many other tutorials, but they didn't work for me. You explained everything very clearly. Tell me, can I insert motion masks myself, or do I have to insert link addresses only? Are there any other websites with different masks? Greetings from UKRAINE!!!

    • @tadaizm
      @tadaizm 4 months ago

      Did you figure it out?

    • @GNOM_
      @GNOM_ 3 months ago

      @@tadaizm Yes, I figured it out. Just copy your own mask as a path and paste it. Unfortunately there are few masks, and downloading other masks is also a problem; they're hard to find.

  • @evgenika2013
    @evgenika2013 7 months ago

    Everything is great, but I get a blurry result on my horizontal artwork. Any suggestions on what to check?

  • @juginnnn
    @juginnnn 4 months ago

    How can I fix "Motion module 'AnimateLCM_sd15_t2v.ckpt' is intended for SD1.5 models, but the provided model is type SD3."?

  • @EmoteNation
    @EmoteNation 5 months ago

    Bro, you're doing a really good job. I have only one question:
    in this video you did image-to-video morphing, so can you do video-to-video morphing?
    Or can you make a morphing video using only text / a prompt?

  • @MariusBLid
    @MariusBLid 8 months ago +1

    Great stuff man! Thank you 😀 What are your specs, btw? I only have 8GB of VRAM.

  • @pedrobrandao7664
    @pedrobrandao7664 6 months ago

    Great tutorial

  • @Halfgawd_Halfdevil
    @Halfgawd_Halfdevil 8 months ago

    Managed to get this running. It does okay, but I am not seeing much influence from the ControlNet motion video input. Any way to make that more apparent? I have also noticed a Shutterstock overlay near the bottom of the clip. It is translucent but noticeable, and kind of ruins everything. Any way to eliminate that artifact?

  • @goran-mp-kamenovic6293
    @goran-mp-kamenovic6293 6 months ago

    5:30 what do you do to see the duration? :)

  • @CoqueTornado
    @CoqueTornado 8 months ago +1

    Great tutorial. I am wondering... how much VRAM does this setup need?

    • @abeatech
      @abeatech  8 months ago +1

      I've heard of people running this successfully on as little as 8GB of VRAM, but you'll probably need to turn off the frame interpolation. You can also try running this in the cloud at OpenArt (but your checkpoint options might be limited): openart.ai/workflows/abeatech/tutorial-morpheus---morphing-videos-using-text-or-images-txt2img2vid/fOrrmsUtKEcBfopPrMXi

    • @CoqueTornado
      @CoqueTornado 8 months ago

      @@abeatech Thank you!! Will try the two suggestions! Congrats on the channel!

  • @SapiensVirtus
    @SapiensVirtus 7 months ago

    Hi! Beginner's question: if I run software like ComfyUI locally, does that mean that all the AI art, music, and works that I generate will be free to use for commercial purposes? Or am I violating copyright terms? I'm searching for more info about this but I get confused. Thanks in advance.

  • @GiancarloBombardieri
    @GiancarloBombardieri 7 months ago

    It worked fine before, but now it throws an error at Load Video Path. Is there any update?

  • @damird9635
    @damird9635 6 months ago

    Working, but when I select "plus high strength" I get a CLIP Vision error. What am I missing? I downloaded everything... ViT-G is the problem for some reason?

  • @produccionesvoid
    @produccionesvoid 7 months ago

    When I click Install Missing Nodes in the Manager, it fails and says: "To apply the installed/updated/disabled/enabled custom node, please RESTART ComfyUI. And refresh browser." What can I do about that?

  • @0x0abb
    @0x0abb 12 days ago

    I have the workflow working but my videos look very uninteresting - too abstract and the matte animation is very obvious

  • @kwondiddy
    @kwondiddy 8 months ago

    I'm getting errors when trying to run... a few items that say "value not in list: ckpt_name:", "value not in list: lora_name", and "value not in list: vae_name:".
    I'm certain I put all the downloads in the correct folders and named everything appropriately... Any thoughts?
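
    A small sanity-check sketch for "value not in list" errors, on the assumption that they mean ComfyUI cannot see the expected files; the base path and filenames below are placeholders, not the actual names from the video:

    import os

    # Placeholder layout: adjust the base folder and filenames to match
    # the models downloaded for the workflow.
    base = "ComfyUI/models"
    expected = {
        "checkpoints": ["your_sd15_checkpoint.safetensors"],
        "loras": ["your_lcm_lora.safetensors"],
        "vae": ["your_vae.safetensors"],
    }
    for folder, names in expected.items():
        for name in names:
            path = os.path.join(base, folder, name)
            print(path, "OK" if os.path.isfile(path) else "MISSING")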

  • @lucagenovese7207
    @lucagenovese7207 6 months ago

    Insane!!!!! Ty so much!

  • @frankiematassa1689
    @frankiematassa1689 7 months ago

    Error occurred when executing IPAdapterBatch:
    Error(s) in loading state_dict for ImageProjModel:
    size mismatch for proj.weight: copying a param with shape torch.Size([3072, 1280]) from checkpoint, the shape in current model is torch.Size([3072, 1024]).
    I followed this video exactly and am only using SD 1.5 checkpoints. I cannot find anywhere how to fix this.

  • @aslgg8114
    @aslgg8114 8 months ago +1

    What should I do to make the reference image persistent?

  • @tetianaf5172
    @tetianaf5172 8 months ago

    Hi! I get this error all the time: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_CUDA_addmm). Though I use an SD 1.5 checkpoint. Please help.

  • @Murdalizer_studios
    @Murdalizer_studios 6 months ago

    nice bro. Thank you🖖

  • @ellopropello
    @ellopropello 4 months ago

    How awesome is that!
    But what needs to be done to get rid of these errors?
    When loading the graph, the following node types were not found:
    ADE_ApplyAnimateDiffModelSimple
    VHS_SplitImages
    SimpleMath+
    ControlNetLoaderAdvanced
    ADE_MultivalDynamic
    VHS_VideoCombine
    BatchCount+
    ADE_UseEvolvedSampling
    FILM VFI
    RIFE VFI
    Color Correct (mtb)
    VHS_LoadVideoPath
    IPAdapterUnifiedLoader
    ACN_AdvancedControlNetApply
    ADE_LoadAnimateDiffModel
    ADE_LoopedUniformContextOptions
    IPAdapterAdvanced
    CreateFadeMaskAdvanced

  • @yomi0ne
    @yomi0ne 2 months ago

    Copying the video address of the animation doesn't work; it copies a .webm link. Please help :(

  • @Caret-ws1wo
    @Caret-ws1wo 7 months ago +2

    Hey, my animations come out super blurry and are nowhere near as clear as yours. I can barely make out the monkey; it's just a bunch of moving brown lol. Is there a reason for this?

    • @DanielMatotek
      @DanielMatotek 23 days ago

      Same. Did you ever figure it out?

    • @Caret-ws1wo
      @Caret-ws1wo 22 days ago

      @@DanielMatotek This was a while ago, but I believe I changed models.

  • @cabb_
    @cabb_ 8 months ago

    ipiv did an incredible job with this workflow! Thanks for the tutorial.

  • @ollyevans636
    @ollyevans636 6 months ago

    I don't have an ipadapter folder in my models folder. Should I just make one?

  • @人海-h5b
    @人海-h5b 8 months ago +2

    Help! I encountered this error while running it:
    Error occurred when executing IPAdapterUnifiedLoader:
    Module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    • @abeatech
      @abeatech  8 months ago

      Sounds like it could be a couple of things:
      a) you might be trying to use an SDXL checkpoint - in which case try using an SD1.5 one. The AnimateDiff model in the workflow only works with SD1.5
      or
      b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub)

  • @Cats_Lo_Ve
    @Cats_Lo_Ve 7 months ago

    How can I get a progress bar at the top of the screen like yours? I had to reinstall all of ComfyUI for this workflow. I installed crystools but the progress bar doesn't appear on top :/ Thank you for your video, you are a god!

  • @velvetjones8634
    @velvetjones8634 8 months ago

    Very helpful, thanks!

    • @abeatech
      @abeatech  8 months ago

      Glad it was helpful!

  • @AlexDisciple
    @AlexDisciple 7 months ago

    Thanks for this. Do you know what could be causing this error: Error occurred when executing KSampler:
    Given groups=1, weight of size [320, 5, 3, 3], expected input[16, 4, 64, 36] to have 5 channels, but got 4 channels instead

    • @AlexDisciple
      @AlexDisciple 7 months ago

      I figured out the problem: I was using the wrong ControlNet. I am having a different issue though, where my initial output is very "noisy", as if there was latent noise all over it. Is it important for the source images to be in the same aspect ratio as the output?

    • @AlexDisciple
      @AlexDisciple 7 months ago

      OK, found the solution here too: I was using a photorealistic model, which somehow the workflow doesn't seem to like. Switching to Juggernaut fixed it.

  • @ywueeee
    @ywueeee 8 months ago

    Could one add some kind of IPAdapter to add your own face to the transformation?

  • @rayzerfantasy
    @rayzerfantasy 4 months ago

    How much GPU VRAM is needed?

  • @MichaelL-mq4uw
    @MichaelL-mq4uw 8 months ago

    Why do you need ControlNet at all? Can it be skipped, to morph without any mask?

  • @saundersnp
    @saundersnp 8 months ago

    I've encountered this error: Error occurred when executing RIFE VFI:
    Tensor type unknown to einops

  • @axxslr8862
    @axxslr8862 8 months ago +1

    In my ComfyUI there is no Manager option... help please.

  • @efastcruex
    @efastcruex 8 months ago

    Why is my generated animation very different from the reference images?

  • @randomprocess7876
    @randomprocess7876 3 months ago

    Anybody know how to scale this to more than 4 images? I've tried, but the masks from the cloned nodes are messing up the animation.

  • @ImTheMan725
    @ImTheMan725 8 months ago +1

    Why can't you morph 20/50 pictures?

  • @MSigh
    @MSigh 8 months ago

    Excellent! 👍👍👍

  • @CS.-ph2fr
    @CS.-ph2fr 6 months ago

    How do you add more than 4 images?

  • @TinyLLMDemos
    @TinyLLMDemos 8 months ago

    Where do I get your input images?

  • @CarCrashesBeamngDrive
    @CarCrashesBeamngDrive 8 months ago

    Cool, how long did it take you?

  • @rowanwhile
    @rowanwhile 8 months ago

    Brilliant video. Thanks so much for sharing your knowledge.

  • @devoiddesign
    @devoiddesign 8 months ago

    Hi! Any suggestions for a missing IPAdapter? I am confused because I didn't get an error telling me to install or update, and I have all of the IPAdapter nodes installed... The process stopped on the "IPAdapter Unified Loader" node.
    !!! Exception during processing!!! IPAdapter model not found.
    Traceback (most recent call last):
    File "/workspace/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "/workspace/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/workspace/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "/workspace/ComfyUI/custom_nodes/ComfyUI_IPAdapter_plus/IPAdapterPlus.py", line 453, in load_models
    raise Exception("IPAdapter model not found.")
    Exception: IPAdapter model not found.

    • @tilkitilkitam
      @tilkitilkitam 8 months ago

      Same problem.

    • @tilkitilkitam
      @tilkitilkitam 8 months ago +1

      ip-adapter_sd15_vit-G.safetensors - install this from the Manager.

    • @devoiddesign
      @devoiddesign 8 months ago

      @@tilkitilkitam Thank you for responding.
      I already had the model installed, but it was not being seen. I ended up restarting ComfyUI completely after I updated everything from the Manager, instead of only doing a hard refresh, and that fixed it.

  • @cohlsendk
    @cohlsendk 8 months ago

    Is there a way to increase the frames/batch size for the FadeMask? Everything over 96 is messing up the FadeMask -.-''

  • @yakiryyy
    @yakiryyy 8 months ago

    Hey! I've managed to get this working but I was under the impression this workflow will animate between the given reference images.
    The results I get are pretty different from the reference images.
    Am I wrong in my assumption?

    • @abeatech
      @abeatech  8 months ago

      You're right - it uses the reference images (4 frames vs 96 total frames) as a starting point and generates additional frames, but the results should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation.

    • @efastcruex
      @efastcruex 8 months ago

      @@abeatech Is there any way to make the result more like the reference images?

  • @balibike9024
    @balibike9024 4 months ago

    I've got an error message:
    Error occurred when executing IPAdapterUnifiedLoader:
    IPAdapter model not found.
    File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\waldo\Documents\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 573, in load_models
    raise Exception("IPAdapter model not found.")
    What should I do?

    • @balibike9024
      @balibike9024 4 months ago

      Success now!
      I re-installed ip-adapter_sd15_vit-G.safetensors from the Manager.

  • @TinyLLMDemos
    @TinyLLMDemos 8 months ago

    How do I kick it off?

  • @Blaqk_Frozste
    @Blaqk_Frozste 4 months ago

    I copied pretty much everything you did, but my animation outputs look super low quality?

  • @DanielMatotek
    @DanielMatotek 23 days ago

    Tried for ages but couldn't make it work; every image is very pixelated and crazy. Cannot work it out.

  • @WalkerW2O
    @WalkerW2O 8 months ago

    Hi Abe aTech, very informative, and I like your work very much.

  • @zarone9270
    @zarone9270 8 months ago

    thx Abe!

  • @Adrianvideoedits
    @Adrianvideoedits 8 months ago +1

    You didn't explain the most important part, which is how to run the same batch with and without upscale. It generates new batches every time you queue the prompt, so the preview batch is a waste of time. I like the idea though.

    • @7xIkm
      @7xIkm 7 months ago

      idk maybe a seed? efficiency nodes?

    • @rudyNok
      @rudyNok 4 months ago

      Hey man, not sure, but it looks like there's this node in the workflow called Seed (rgthree), and it seems clicking the bottom button on this node, called "Use last queued seed", does the trick. Try it.

  • @MACH_SDQ
    @MACH_SDQ 7 months ago

    Goooooood

  • @0x0abb
    @0x0abb 22 days ago

    I may be missing something, but the workflow is different, so it's not working.

    • @0x0abb
      @0x0abb 19 days ago

      I finally realized I had the wrong workflow file... it's working now.

  • @rooqueen6259
    @rooqueen6259 8 months ago

    Has anyone else run into the loading of 2 new models stopping at 0%? In another case for me, the loading of 3 new models reached 9% and went no further. What is the problem? :c

  • @creed4788
    @creed4788 8 months ago

    VRAM required?

    • @Adrianvideoedits
      @Adrianvideoedits 8 months ago

      16GB for upscaled.

    • @creed4788
      @creed4788 8 months ago

      @@Adrianvideoedits Could you make the videos first, and then close and load the upscaler to improve the quality? Or does it have to be all together, so it can't be done in 2 different workflows?

    • @Adrianvideoedits
      @Adrianvideoedits 7 months ago

      @@creed4788 I don't see why not. But upscaling itself takes the most VRAM, so you would have to find an upscaler for lower-VRAM cards.

  • @vivektyagi6848
    @vivektyagi6848 3 months ago

    Awesome, but could you slow it down, please?

  • @artificiallyinspired
    @artificiallyinspired 6 months ago

    "It's nothing too intimidating" - then continues to show a workflow that takes up the entire screen. Lol! Thanks for this tutorial; I've been looking for something like this for days now. I'm switching from A1111 to ComfyUI, and getting a handle on the changes is a little more intimidating than I originally expected. Thanks for this.

    • @artificiallyinspired
      @artificiallyinspired 6 months ago

      I get this weird error when it gets to the ControlNet; not sure if you know what's wrong? 'ControlNet' object has no attribute 'latent_format'. I have the QR code ControlNet loaded.

    • @eyoo369
      @eyoo369 6 months ago +1

      @@artificiallyinspired Make sure it's the same name. A good habit when loading new workflows is to go through all the nodes where you select a model or LoRA and make sure the one you have locally is selected. Not everyone follows the same naming conventions. Sometimes you might download a workflow where someone has their IPAdapter named "ip-adapter_plus.safetensors" while yours is "ip-adapter-plus.safetensors". Always good to re-select.
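
      A tiny sketch of that re-selection habit, assuming a default install layout (the folder path below is an assumption; adjust it to your setup):

      import os

      # List the IPAdapter model files ComfyUI can actually see, so each
      # loader node in the workflow can be re-pointed at a name that exists.
      folder = "ComfyUI/models/ipadapter"  # assumed default location
      for name in sorted(os.listdir(folder)):
          print(name)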

  • @人海-h5b
    @人海-h5b 8 months ago +1

    Help! I encountered this error while running it.

    • @人海-h5b
      @人海-h5b 8 months ago +1

      Error occurred when executing IPAdapterUnifiedLoader:
      module 'comfy.model_base' has no attribute 'SDXL_instructpix2pix'

    • @abeatech
      @abeatech  8 months ago

      Sounds like it could be a couple of things:
      a) you might be trying to use an SDXL checkpoint - in which case try using an SD1.5 one. The AnimateDiff model in the workflow only works with SD1.5
      or
      b) an issue with your IPAdapter node. You can try making sure the IPAdapter model is downloaded and in the right folder, or reinstalling the ComfyUI_IPAdapter_plus node (delete the custom node folder and reinstall from the Manager or GitHub)

    • @Halfgawd_Halfdevil
      @Halfgawd_Halfdevil 8 months ago

      @@abeatech It says in the note to install it in the clip_vision folder, but that is not it, as none of the preloaded models are there and the new one installed there does not appear in the dropdown selector. So if it is not that folder, where are you supposed to install it? And if the node is bad, why is it used in the workflow in the first place? Shouldn't it just have the IPAdapter Plus node?

  • @pro_rock1910
    @pro_rock1910 8 months ago

    ❤‍🔥❤‍🔥❤‍🔥

  • @3djramiclone
    @3djramiclone 8 months ago

    This is not for beginners, put that in the description, mate.

    • @kaikaikikit
      @kaikaikikit 8 months ago

      What are you crying about... go find a beginner class if it's too hard to understand...

  • @ErysonRodriguez
    @ErysonRodriguez 8 months ago

    Noob question: why does my output look so different from my input images?

    • @ErysonRodriguez
      @ErysonRodriguez 8 months ago

      I mean, the images I loaded produce a different output instead of transitioning.

    • @abeatech
      @abeatech  8 months ago

      The results will not be exactly the same, but they should still be in the same ballpark. If you're getting drastically different results, it might be a mix of your subject + SD1.5 model. I've had the best results by using a similar type of model (photograph, realism, anime, etc.) for both the image generation and the animation. Also worth double-checking that you have the VAE and LCM LoRA selected in the settings module.

  • @nonprofit7163
    @nonprofit7163 6 months ago

    Did anyone else run into some errors while following this video?

  • @suetologPlay
    @suetologPlay 6 months ago

    It's completely unclear what you were doing there! You just clicked through everything quickly and said "look what I got". You didn't show where, what, or how.

  • @anthonydelange4128
    @anthonydelange4128 6 months ago

    its morbing time...

  • @goran-mp-kamenovic6293
    @goran-mp-kamenovic6293 6 months ago

    Error occurred when executing CheckpointLoaderSimple:
    'model.diffusion_model.input_blocks.0.0.weight'
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI
    odes.py", line 516, in load_checkpoint
    out = comfy.sd.load_checkpoint_guess_config(ckpt_path, output_vae=True, output_clip=True, embedding_directory=folder_paths.get_folder_paths("embeddings"))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\sd.py", line 511, in load_checkpoint_guess_config
    model_config = model_detection.model_config_from_unet(sd, diffusion_model_prefix)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 239, in model_config_from_unet
    unet_config = detect_unet_config(state_dict, unet_key_prefix)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\x1\Desktop\New folder (4)\ComfyUI_windows_portable\ComfyUI\comfy\model_detection.py", line 120, in detect_unet_config
    model_channels = state_dict['{}input_blocks.0.0.weight'.format(key_prefix)].shape[0]
    ~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ :P