ComfyUI And CogVideoX AI Video Extend Using Local AI Tools

  • Published 25 Nov 2024

COMMENTS • 58

  • @esJoyboy
    @esJoyboy 5 days ago

    I can't keep up with so many amazing tools for creating great content.. this is AWESOME

  • @duncanh3721
    @duncanh3721 6 days ago +3

    Another great video! Thank you!

  • @kalakala4803
    @kalakala4803 6 days ago

    Thanks! I will try it tomorrow at the office :)

    • @TheFutureThinker
      @TheFutureThinker  5 days ago

      Have fun tomorrow, and we'll try the 5B 1.5 ComfyUI edition 😉 like we tried the server side last time.

  • @idoshor4470
    @idoshor4470 6 days ago +1

    You are a king. Thanks for sharing, man!
    One main issue I noticed is that the stitch between the steps is noticeable, mainly in the action happening in the frame. I guess the solution may be in the prompt, something like adding "Slowly starting to..." or the like.

  • @TomHimanen
    @TomHimanen 6 days ago

    Wow, just wow. You are doing God's work bro!

  • @TheMunteanuAlex
    @TheMunteanuAlex 6 days ago +3

    Yesterday Kijai updated the CogVideoX wrapper with CogVideoX-5b 1.5 and new nodes.

    • @TheFutureThinker
      @TheFutureThinker  6 days ago

      I played with the server edition of the 1.5 model before. Looks good.

    • @rageshantony2182
      @rageshantony2182 6 days ago

      @@TheFutureThinker The server edition of the 1.5 model?? What does that mean?

    • @TheFutureThinker
      @TheFutureThinker  5 days ago

      @rageshantony2182 The original Hugging Face model, without GGUF or everything compressed into a single sft file.
      Try this: huggingface.co/THUDM/CogVideoX1.5-5B-SAT

  • @RDUBTutorial
    @RDUBTutorial 1 day ago

    Will it run on a Mac Studio M2 with 128 GB RAM?

  • @doin-doitnow
    @doin-doitnow 6 days ago +1

    Amazing, thank you 🎉

  • @Aaron_Jason
    @Aaron_Jason 5 days ago

    Just a heads up: interpolating frames does not make the animation faster, it just adds more frames. It's probably best to speed up the animation to make it look real-time, about 4x iirc. Interpolation just makes the movement smoother, not faster.
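    The distinction above can be sketched with hypothetical numbers (49 frames at 8 fps is a common CogVideoX output; adjust to your own clip):

    ```python
    # Hypothetical clip: 49 frames rendered at 8 fps (a common CogVideoX output).
    frames = 49
    fps = 8
    duration = frames / fps  # 6.125 s of slow-looking motion

    # 2x frame interpolation (e.g. RIFE) doubles the frame count; played back
    # at double the fps, the duration is unchanged: smoother, not faster.
    interp_duration = (frames * 2) / (fps * 2)

    # Speeding playback up 4x shortens the clip instead, which is what makes
    # the motion look real-time.
    fast_duration = duration / 4

    print(duration, interp_duration, fast_duration)  # 6.125 6.125 1.53125
    ```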

  • @giuseppedaizzole7025
    @giuseppedaizzole7025 6 days ago

    This looks great. One question: would I be able to use it on an RTX with 12 GB VRAM and 64 GB of CPU RAM? And if yes, how long would it take? I've tried the CogVideo text-to-video flow and it never finished the process. Thanks for sharing your knowledge and investigation, really appreciated.

  • @jonathanerich
    @jonathanerich 4 days ago

    This is great

  • @crazyleafdesignweb
    @crazyleafdesignweb 6 days ago

    Nice! Something I can use at work.

  • @insurancecasino5790
    @insurancecasino5790 6 days ago +3

    Just 120 frames. If we can get those frames generated first, then we can go back and make every image perfect before we make the video. Is there a way to see every frame?

    • @TheFutureThinker
      @TheFutureThinker  6 days ago +1

      Save as images instead of saving in the Video Combine node

    • @TheFutureThinker
      @TheFutureThinker  6 days ago +1

      Then you can modify each image

    • @insurancecasino5790
      @insurancecasino5790 6 days ago

      @@TheFutureThinker Alright thanks.

    • @insurancecasino5790
      @insurancecasino5790 6 days ago

      @@TheFutureThinker 🔥

    • @TheFutureThinker
      @TheFutureThinker  6 days ago +2

      Yes, that will be like going back to basics, like the A1111 img2img batch gen for animation.
      But that's how we can fix some frames, honestly

  • @velRic
    @velRic 6 days ago

    Great result. Does CogVideoX handle the interpolation between the start frame and the end frame? That tactic gives more control over how to build the scene.

    • @TheFutureThinker
      @TheFutureThinker  5 days ago +1

      The newly updated nodes, yes, they can do start and end frame

  • @TheRoninteam
    @TheRoninteam 6 days ago

    Really amazing tutorial

  • @synesthesiaharmonics
    @synesthesiaharmonics 6 days ago +1

    Always comprehensive, delivering pure functionality, and with speed as well!

  • @TharindaMarasingha
    @TharindaMarasingha 4 days ago

    Where can I download the model from?

  • @lindesfahlgaming5608
    @lindesfahlgaming5608 3 days ago

    Hi there, my CogVideo nodes look different, so my flow looks different too. I don't have any Pipe. Is that a newer version than yours?

    • @TheFutureThinker
      @TheFutureThinker  3 days ago

      Yes, Cog has a new version update. Will do another video on the updated nodes soon. Thanks

  • @hugoalvarez923
    @hugoalvarez923 6 days ago

    Maybe if you show all the frames of the video, you can choose the frame to extend from, not only the last one. It could help to use the video from before it starts morphing or something. This idea could also be used with Pyramid Flow, which I prefer because it's faster and lets me use my computer while it's working.

    • @TheFutureThinker
      @TheFutureThinker  5 days ago

      Choose the frame with the math expression node that I connected to each video extend group. Do the math on which frame you want to start with, and it will be okay.

    • @TheFutureThinker
      @TheFutureThinker  5 days ago +1

      By the way, I will add an input in the next version update, so you can pick a number after the frames preview.
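    The "math" for picking the extend point boils down to mapping a timestamp to a frame index. A hypothetical sketch (the 8 fps rate and the function name are assumptions; match them to your workflow's actual frame rate):

    ```python
    # Assumed frame rate of the generated clip; CogVideoX commonly outputs 8 fps.
    FPS = 8

    def start_frame(seconds: float, fps: int = FPS) -> int:
        """Map a timestamp within the clip to the frame index to extend from."""
        return int(seconds * fps)

    # Extending from the moment 4.5 s into the clip:
    print(start_frame(4.5))  # 36
    ```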

  • @showyougaming5299
    @showyougaming5299 4 days ago

    Could this work on 1080 or 1070 8 GB devices?

  • @DigiBhem
    @DigiBhem 5 days ago

    ❤️❤️❤️

  • @froilen13
    @froilen13 5 days ago

    Looks cool, but I don't think I could use it to tell a compelling story. Just cool images without context. When do you think this will be good enough to make an animated cartoon?

  • @amarnamarpan
    @amarnamarpan 6 days ago

    What GPU do you use?

  • @SeanieinLombok
    @SeanieinLombok 6 days ago +1

    First

  • @ammarzammam2255
    @ammarzammam2255 6 days ago

    I couldn't make it work on my Nvidia with 15 GB VRAM.
    It crashes every time I queue it because of the high VRAM usage. Do you have a solution for those who don't have an expensive GPU?

  • @golddiggerprankz
    @golddiggerprankz 5 days ago

    Could you please add a description of the GPU specs you are using? I always follow your videos, but all I have is a laptop with low VRAM.

  • @damarcta
    @damarcta 6 days ago

    Thanks a lot. Unfortunately all my videos have lots of noise.

  • @Jcs187-rr7yt
    @Jcs187-rr7yt 5 days ago

    Why is it that Kling and other private models are significantly better?

    • @TheFutureThinker
      @TheFutureThinker  5 days ago

      If you have a server GPU or rig and run the full version of Mochi, not the trimmed-down ComfyUI version, you can get a better result. I've seen it.

    • @leepuznowski5293
      @leepuznowski5293 5 days ago

      @@TheFutureThinker Have you tried this? I would be interested, as we have a GPU server but I don't know how we could set this up.

    • @TheFutureThinker
      @TheFutureThinker  5 days ago

      @@leepuznowski5293 Yes, I tried Mochi and Cog 1.5 last week on my company's server GPU. It's not just that it's able to generate at all, meaning it's usable; a higher-VRAM GPU also produces better quality, even with the same AI model.

    • @leepuznowski5293
      @leepuznowski5293 5 days ago

      @@TheFutureThinker Very interesting. Will you possibly be doing a video on this? How to set it up locally? What GPUs do you have on your company server? We have two A6000s with 48 GB. The genmo website says it needs about 60 GB to process, but it is possible to split between GPUs.