I can't keep up with so many amazing tools to create great content.. this is AWESOME
Another great video! Thank you!
Glad you enjoyed it!
Thanks! I will try it tomorrow at the office :)
Have fun tomorrow, and we'll try the 5B 1.5 ComfyUI edition 😉 like we tried the server side last time.
You are a king. Thanks for sharing, man!
One main issue I noticed is that the stitch between the steps is noticeable, mainly in the action happening in the frame. I guess the solution might be in the prompt, something like adding "Slowly starting to..." or something like that.
Wow, just wow. You are doing God's work bro!
Glad it helps
Yesterday Kijai updated the CogVideoX wrapper with CogVideoX-5b 1.5 and new nodes.
I played with the server edition of the 1.5 model before. Looks good
@@TheFutureThinker Server edition of the 1.5 model?? What does that mean?
@rageshantony2182 The original Hugging Face model, without GGUF or being compressed into an all-in-one .sft file.
try this: huggingface.co/THUDM/CogVideoX1.5-5B-SAT
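If it helps, here is a rough sketch for grabbing those weights with huggingface_hub (the target folder is just an example):

from huggingface_hub import snapshot_download

# Download the CogVideoX1.5-5B-SAT weights linked above.
# local_dir is a hypothetical target folder; change it to your models path.
path = snapshot_download(
    repo_id="THUDM/CogVideoX1.5-5B-SAT",
    local_dir="models/CogVideoX1.5-5B-SAT",
)
print(f"Weights downloaded to {path}")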
Will it run on a Mac Studio M2 with 128 GB RAM?
Amazing, thank you 🎉
Just a heads up: interpolating frames doesn't make the animation faster, it just adds more frames. Probably best to speed up the animation to make it look real-time, so about a 4x speed-up IIRC. Interpolation just makes the movement smoother, not faster.
Yes, correct, thank you. It's smoother.
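To put numbers on it, a quick sketch of the timing math (the 49 frames at 8 fps here are assumptions, typical CogVideoX output):

src_frames = 49  # frames in one generated clip (assumed)
src_fps = 8      # assumed native playback rate

# 2x interpolation doubles the frames AND the playback fps:
# same duration, smoother motion.
interp_duration = (src_frames * 2) / (src_fps * 2)  # ~6.1 s, unchanged

# 4x speed-up keeps the frames and raises the playback fps:
# quarter the duration, looks closer to real time.
fast_duration = src_frames / (src_fps * 4)          # ~1.5 s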
This looks great. One question: would I be able to use it on an RTX with 12 GB VRAM and 64 GB of system RAM? And if yes, how long would it take? I've tried the CogVideo text-to-video workflow and it never finishes the process. Thanks for sharing your knowledge and investigation, really appreciated.
this is great
Nice! Things I can use at work.
Absolutely!
Just 120 frames. If we can get those frames generated first, then we can go back and make every image perfect before we make the video. Is there a way to see every frame?
Save as images, instead of saving in the Video Combine node
Then you can modify each image
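If you already combined the clip into a video, a rough Python sketch like this (assuming OpenCV is installed; output.mp4 is a placeholder name) dumps every frame to PNGs so you can edit them:

import os
import cv2

os.makedirs("frames", exist_ok=True)
cap = cv2.VideoCapture("output.mp4")  # placeholder file name
idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break  # no more frames
    cv2.imwrite(f"frames/frame_{idx:04d}.png", frame)
    idx += 1
cap.release()
print(f"saved {idx} frames")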
@@TheFutureThinker Alright thanks.
@@TheFutureThinker 🔥
Yes, that will be like going back to basics, like before with A1111 img2img batch generation for animation.
But that's honestly how we can fix some frames
Great result. Does CogVideo handle the interpolation between the start frame and the end frame? That tactic gives more control over how to build the scene
Yes, the newly updated nodes can do start and end frames
really amazing tutorial
Have fun👍
Always comprehensive, delivering pure functionality with speed as well!
Glad it helps
Where can I download the model from?
Hi there, my CogVideo nodes look different, therefore my flow looks different too. I don't have any pipe. Is that a newer version than yours?
Yes, Cog has a new version update. Will do another video on the updated nodes soon. Thanks
Maybe if you show all the frames of the video, you can choose the frame to extend from, not only the last one. It could help to use a frame from before it starts morphing or something. And this idea could also be used on Pyramid Flow, which I prefer because it's faster and lets me use my computer while it's working
Choose the frame on the math expression node that I connected in each video extend group. Do the math on which frame you want to start with, then it will be okay.
By the way, I will add an input in the next version update, so you can pick a number after the frames preview.
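For example, the math expression is just index arithmetic, something like this (the numbers are hypothetical):

total_frames = 49             # frames in the previous clip (assumed)
pick = total_frames - 1       # default: extend from the last frame (0-indexed)
# To extend from 10 frames before the end, before any morphing starts:
pick = total_frames - 1 - 10  # -> frame 38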
Could this work on 1080 or 1070 8 GB devices?
❤️❤️❤️
Looks cool, but I don't think I could use it to tell a compelling story. Just cool images without context. When do you think this would be good enough to make an animated cartoon?
Use Kling, Runway, or MiniMax then
what gpu do you use?
4090
First
Sean👋👋👋
I couldn't make it work on my Nvidia with 15 GB VRAM.
It crashes every time I queue it because of the high VRAM usage. Do you have a solution for those who don't have an expensive GPU?
Could you please add a description of the GPU specs you are using? I always follow your videos, but all I have is a laptop with low VRAM.
Um... then you need to rent a cloud GPU.
Thanks a lot. Unfortunately, all my videos have lots of noise.
Why is it that Kling and other private models are significantly better?
If you have a server GPU or rig and run the full version of Mochi, not the trimmed-down ComfyUI version, you can get a better result. I saw it.
@@TheFutureThinker Have you tried this? I would be interested as we have a GPU server but I do not know how we could set this up.
@@leepuznowski5293 Yes, I tried Mochi and Cog 1.5 last week on my company's server GPU. It's not only able to generate, meaning it's okay to use; a higher-VRAM GPU also creates better quality, even using the same AI model.
@@TheFutureThinker Very interesting. Will you possibly be doing a video on this, on how to set it up locally? What GPUs do you have on your company server? We have two A6000s with 48 GB. The genmo website says it needs about 60 GB to process, but it is possible to split between GPUs.