If you’re new to ComfyUI, watch my beginner’s tutorial here: ua-cam.com/video/WHxIrY2wLQE/v-deo.htmlsi=HV61VB9nt4wxn18L
It’s insane how much progress AI art has made in the last 6 months alone…
facts
Some explanations for the parameters:
video_frames: The number of video frames to generate.
motion_bucket_id: The higher the number, the more motion there will be in the video.
fps: The higher the fps, the less choppy the video will be.
augmentation_level: The amount of noise added to the init image; the higher it is, the less the video will look like the init image. Increase it for more motion.
VideoLinearCFGGuidance: This node improves sampling for these video models a bit by linearly scaling the cfg across the frames. In the example above, the first frame uses cfg 1.0 (the min_cfg set in the node), the middle frame 1.75, and the last frame 2.5 (the cfg set in the sampler). This way frames further from the init frame get a gradually higher cfg (see the sketch below).
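For anyone curious how that schedule works, here's a minimal sketch of the linear interpolation in plain Python (illustrative only, not the node's actual source):

```python
def linear_cfg_schedule(min_cfg: float, sampler_cfg: float, num_frames: int) -> list[float]:
    """CFG per frame, ramping linearly from min_cfg (first frame) to the sampler's cfg (last frame)."""
    if num_frames < 2:
        return [sampler_cfg]
    step = (sampler_cfg - min_cfg) / (num_frames - 1)
    return [min_cfg + i * step for i in range(num_frames)]

# Matches the example above: min_cfg=1.0, sampler cfg=2.5, 3 frames
print(linear_cfg_schedule(1.0, 2.5, 3))  # [1.0, 1.75, 2.5]
```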
Strangely it never works for me, because of conflicts: not able to install the missing nodes, not able to install missing libraries, etc.
Do you know which nodes are missing?
I have a problem finding where and how to install the RIFE VFI node. Could you tell us where we can find it? Otherwise thanks for the video!
Did you get over this?
@@StyledByVirtualVogue Tbh since it was almost a year ago I don't remember, but I think I didn't, lol. If you're stuck I suggest you watch Jerry Davos AI's videos. He has some great tutorials and workflows that worked for me.
Super cool. Can't wait to see what this does in a month. Longer videos would be amazing.
Exciting times!
Omg the advancements in video/animation are crazy lately 🤯. I think I still like having control over the animation with motion LoRAs more, but hey, this setup is so easy. I can also imagine it being used in conjunction with other methods.
It's insane how fast we're moving with this! I can't wait till we get even more control.
Always getting:
Error occurred when executing KSampler:
Conv3D is not supported on MPS
on M2 :/
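Conv3D just isn't implemented in PyTorch's MPS backend yet, which is why the video model's 3D conv layers fail on Apple Silicon. One thing that may help (untested here, so treat it as an assumption) is enabling PyTorch's CPU fallback for unsupported ops before ComfyUI starts; slow, but it can get past the error. A sketch of a tiny launcher that does that:

```python
# Hypothetical launcher, run from the ComfyUI folder. The env var has to be
# set before torch initializes, hence starting main.py as a subprocess.
import os
import subprocess

env = dict(os.environ, PYTORCH_ENABLE_MPS_FALLBACK="1")
subprocess.run(["python", "main.py"], env=env)
```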
I followed your steps to install Comfy, but whenever I try to run it, it says it needs to download models and I have to wait for a while. How can I solve that?
Thanks. Just starting to fiddle around with this.
I didn't know this was a thing. Thank you!
I get this error in the RIFE node: "Prompt outputs failed validation
RIFE VFI:
- Value not in list: ckpt_name: 'sudo_rife4_269.662_testV1_scale1.pth' not in ['rife40.pth', 'rife41.pth', 'rife42.pth', 'rife43.pth', 'rife44.pth', 'rife45.pth', 'rife46.pth', 'rife47.pth', 'rife48.pth', 'rife49.pth']"
Click on refresh and pick the model again.
@@vlada9740 Please elaborate. Same error. No missing custom nodes. Refreshed. Model picked again and server restarted. Thanks.
Did you choose 'rife47.pth' or 'rife49.pth' as the ckpt_name?
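For anyone else hitting this: the RIFE VFI node validates ckpt_name against the exact list in its dropdown, so a workflow that asks for a file you don't have fails before anything runs. The practical fix is picking one of the listed names. An illustrative sketch of that validation logic (not the node's actual code):

```python
# The names the installed RIFE VFI node ships with, per the error message above.
VALID_RIFE_CKPTS = [f"rife4{i}.pth" for i in range(10)]  # rife40.pth ... rife49.pth

def resolve_ckpt(requested: str, fallback: str = "rife47.pth") -> str:
    """Fall back to a bundled model when the workflow asks for one we don't have."""
    if requested in VALID_RIFE_CKPTS:
        return requested
    print(f"{requested!r} not available, using {fallback!r} instead")
    return fallback

print(resolve_ckpt("sudo_rife4_269.662_testV1_scale1.pth"))  # -> rife47.pth
```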
Wow, looks great man. Thx!
No problem!
I'm getting the following errors and don't know where to start
Prompt outputs failed validation
ImageOnlyCheckpointLoader:
- Value not in list: ckpt_name: 'svd_xt_image_decoder.safetensors' not in ['shendan v2.safetensors', 'svd.safetensors', 'svd_xt.safetensors', 'v1-5-pruned-emaonly.safetensors']
RIFE VFI:
- Value not in list: ckpt_name: 'sudo_rife4_269.662_testV1_scale1.pth' not in ['rife40.pth', 'rife41.pth', 'rife42.pth', 'rife43.pth', 'rife44.pth', 'rife45.pth', 'rife46.pth', 'rife47.pth', 'rife48.pth', 'rife49.pth']
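Same class of error for the checkpoint loader: the dropdown only lists files that actually exist in your models folder, so either pick one of the names shown (e.g. svd_xt.safetensors) or download the missing svd_xt_image_decoder.safetensors into your checkpoints folder. A quick sketch to see which checkpoints ComfyUI can see (path assumes the Windows portable build; adjust to your install):

```python
from pathlib import Path

# Adjust to wherever you unpacked ComfyUI.
CHECKPOINT_DIR = Path("ComfyUI_windows_portable/ComfyUI/models/checkpoints")

for ckpt in sorted(CHECKPOINT_DIR.glob("*.safetensors")):
    print(ckpt.name)  # these names are what the ckpt_name dropdown offers
```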
Your video is helping me a lot. I have one question: among the RIFE VFI ckpts, which repo contains files such as rife40.pth, ....., sudo_rife4_269.662_testV1_scale1.pth? No matter how much I searched, I couldn't find it. I'm looking forward to your smart answer to my stupid question.
👏MASSIVE thanks. Soooooooo glad I found your channel! You've got a new sub.👊
Yooo! Thank you!! 🙏🏽 🙏🏽
this thanksgiving I'm grateful for enigmatic_e's tutorials! 🙌🙌
shouts outs to Kijai as well!
🎉
this is beautiful
You can connect the output from the VAE decoder to SVD and do prompt-to-video.
Have you gotten good results?
Thank you, this is amazing!!
What is the specific file to download from the huggingface site?
How long does it take your computer to generate the video? I'm wondering if mine is not set up properly. When I generate images it takes seconds, but using ComfyUI to do video from image, it will work all night and never get a result.
How do you get smoother, more natural skin?
Interesting. Curious to see if you can combine this with text prompt conditioning to guide the video output. I'll certainly be doing some mad science experiments. Thanks!
Remove the if. Of course we can do that already, we always could. Prompts influencing the generation of pixels is the whole purpose of Stable Diffusion...
Thanks, haha! Yeah, I've gotten there. Love the freedom & flexibility ComfyUI offers.
Need some help. I'm getting "Error occurred when executing KSampler: input must be 4-dimensional" when trying to do the Stable Diffusion animation in ComfyUI. I have an AMD 7900 XT.
It doesn't seem to work on Mac. I have ComfyUI and all the custom nodes installed, but I still get tons of errors when the calculation reaches KSampler.
I'm getting boring animations! It always happens to me when I use AnimateDiff with Comfy or Automatic... but... I LOVE CLI with AnimateDiff! I'll keep at this and keep tweaking my settings! THANK YOU FOR YOUR HELP!!!
Thanks for the workflow! Can you tell me where the VHS node sends your exports? Can't figure it out.
It sends them here: ComfyUI_windows_portable\ComfyUI\output
So great that you set this up. Thank you so much! It's working great, but I do see this error after the job is run: "exception in callback _ProactorBasePipeTransport._call_connection_lost" at asyncio\events.py line 80 in _run... and then "an existing connection was forcibly closed by the remote host"... I think there are processes hanging around after the run that need to be cleaned up.
sweet, thanx for sharing :-) Cool stuff!!!
Great video as always! Keep it up!
Thanks 🙏🏽
When I import your workflow I get: "When loading the graph, the following node types were not found: RIFE VFI, Seed (rgthree). Nodes that have failed to load will show as red on the graph." The Seed and RIFE VFI panels are errored out. Any advice?
Never mind, the fix for this is installing the missing nodes ^
Glad you found a solution!
Is it possible to add controlnets like openpose then have this animate an image using the controlnet information?
Not at the moment, at least not anything that looks good.
Hi, how can I install Automatic1111 on ComfyUI?
Is there a way to extend a video after it has been created?
Do you mean like adding frames to allow slow motion?
@@enigmatic_e Good idea
Where do I put sudo_rife4_269.662_testV1_scale1.pth in ComfyUI?
Bumping this - running into the same issue @enigmatic_e
Is it able to work with any image or does it need to be a stable diffusion generated image?
It can be any image, but I've noticed that some images work better than others.
Is it possible to batch process?
Wow, this is a game changer my brother! Is there a frame cap similar to AnimateDiff? Thank you so much for putting this awesome tutorial out there!
You can add frames, but I think the quality gets destroyed after some time.
thanks for sharing, man! 😘
🙏🏽🙏🏽🙏🏽
How do I increase the video length? Thx
Thank you. Your videos are really great and helpful. But is it possible to do something similar if we have an AMD card?
Hello! Unfortunately I don't know; I don't own an AMD card to test this. Technically you could run ComfyUI on the CPU, but it's very slow.
@@enigmatic_e thank you. It always scared me. That's why I haven't tried it yet; I just watch videos about it.
cheers mate, hitting that ai spot once again! ;)
😎👍🏽
Great tutorial as always! Can we increase the output video duration?
You can but it starts to degrade over time.
@@enigmatic_e Yes I have realized that. Hopefully there will be a workaround like there was for Animatediff with prompt travel + IP Adapter. Do keep us updated
@@tdfilmstudio Yea, I'm sure we will get new tools soon! Can't wait!!
How/where can I set the duration of the video?
How many frames can I generate? I mean, can I create a video 2 to 4 minutes long?
Not really. I think some people find workarounds, like taking the last frame of a 25-frame video, rerunning it, and then editing the clips together.
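If you want to try that workaround, grabbing the last frame is straightforward; a minimal sketch, assuming imageio (with its ffmpeg plugin) is installed and the clip was saved as an .mp4:

```python
import imageio.v3 as iio

frames = iio.imread("svd_clip_part1.mp4")  # frames stacked as (num_frames, H, W, 3)
iio.imwrite("last_frame.png", frames[-1])  # save the final frame as a still
```

Then feed last_frame.png back in as the init image for the next run and stitch the clips together in an editor.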
Where do we have to put the SVD files?
In your checkpoints folder (ComfyUI\models\checkpoints).
On a 4090, how long for each video?
It takes about 1 minute or so for a 24-frame video.
Error occurred when executing SVD_img2vid_Conditioning:
'NoneType' object has no attribute 'encode_image'
File "C:\comf 2\ComfyUI_windows_portable\ComfyUI\execution.py", line 153, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "C:\comf 2\ComfyUI_windows_portable\ComfyUI\execution.py", line 83, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "C:\comf 2\ComfyUI_windows_portable\ComfyUI\execution.py", line 76, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "C:\comf 2\ComfyUI_windows_portable\ComfyUI\comfy_extras
odes_video_model.py", line 45, in encode
output = clip_vision.encode_image(init_image)
It won't let me generate. I updated and restarted, downloaded the models, and put them into the checkpoints folder in the models folder. Do you think you can help me out further?
Running run_gpu_updates.bat fixed everything for me. I was having the same issue; idk if it's the same for you.
@@Paracast where is this found?
@@Paracast What folder is that in?
Really cool, Thanks
Can't get it to work. Keep getting errors :(
At what point does the process stop? Which nodes have a red outline?
What is the minimum VRAM requirement?
I've heard some say 10-12 GB of VRAM works, but I haven't tested that.
At least with my 3060 12GB, it runs around 8GB of VRAM.
Thanks
Can't figure out why, but my video is oversaturated every time...
I loaded everything and clicked Update All, but I don't get the new video nodes :(
Did you restart everything?
@@enigmatic_e yes, restarted everything
@@Zippo08 Then I would just check whether downloading the missing nodes through the Manager works.
@@Zippo08 Running run_gpu_updates.bat fixed everything for me. I was having the same issue; idk if it's the same for you.
@@enigmatic_e Hmm, it says all updated ;( but thank you anyway
the best maan
🔥 🔥 🔥
How do I get VHS?
I mean VHS_VideoCombine? I could not find it in the Manager.
Sometimes I lose track of where I get the nodes from; either the Manager, or I just google it and install it into the custom nodes folder.
so sick
nice 😍
Excellent, now it's time to throw L2D into the trashcan
ComfyUI? Nope. I'll wait for Auto1111.
Comfy is not so bad 😂
@@enigmatic_e I just don't enjoy having to create a whole workflow for something Auto1111 does with a single switch; it's slower than Auto1111 in most cases too.
Bro, you need to go into better detail about how to install the required nodes. I literally just installed the basic ComfyUI; not everyone has the same nodes as you do.
Sorry about that. Are you new to ComfyUI? If so, I just pinned a comment with a link to my beginner's tutorial. You need to install the Manager; there's an option to install missing nodes automatically.
Thx for the awesome video! I made a shoutout to you in my video. Hope you get a bunch of additional subs from this :)
Hey Olivio! Big fan of your channel! Thank you for the shout out! 🙏🏽🙏🏽🙏🏽
@@enigmatic_e 🥰