Written Tutorial: www.nextdiffusion.ai/tutorials/create-morph-animations-using-frame-interpolation-in-stable-diffusion-a1111
Nice tutorial! I have one question: how can you make the pictures stay on screen longer between transitions?
Your tutorial is so good, but how do you keep the original face throughout the animation? Is there a hidden step? :)
It doesn't work on the latest version, and the modules dropdown shows up empty.
It doesn't work anymore
Any clues on how to fix this error?
EinopsError: Error while processing rearrange-reduction pattern "(b f) c h w -> b c f h w". Input tensor shape: torch.Size([2, 320, 13, 135, 240]). Additional info: {'b': 2}. Expected 4 dimensions, got 5
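That EinopsError usually means the tensor reaching the rearrange call is already five-dimensional, i.e. it has already been split into (batch, channels, frames, height, width), while the pattern "(b f) c h w -> b c f h w" expects a four-dimensional tensor with batch and frames packed into the first axis. A minimal sketch reproducing the message, using only the shapes from the error (illustrative only; it assumes nothing about the extension's internals):

```python
import torch
from einops import rearrange

b, f, c, h, w = 2, 13, 320, 135, 240

# The pattern expects a 4-D tensor with batch and frames packed together.
x_4d = torch.randn(b * f, c, h, w)
out = rearrange(x_4d, "(b f) c h w -> b c f h w", b=b)
print(out.shape)  # torch.Size([2, 320, 13, 135, 240])

# Passing a tensor that is already 5-D raises the same EinopsError:
# "Expected 4 dimensions, got 5".
x_5d = torch.randn(b, c, f, h, w)
rearrange(x_5d, "(b f) c h w -> b c f h w", b=b)
```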
Thank you very much for the tutorial!
I am unfortunately constantly getting the following error:
"RuntimeError: einsum(): subscript b has size 16 for operand 1 which does not broadcast with previously seen size 32"
Does anyone have any idea where I could begin fixing this?
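For what it's worth, that einsum message appears when two operands disagree on a shared subscript: one tensor was built with a batch (or batch*frame) size of 32 and the other with 16, and 16 cannot broadcast against 32. A minimal reproduction with made-up attention-style shapes (illustrative only, not the webui's actual code):

```python
import torch

# Subscript "b" is first seen as 32 (operand 0), then as 16 (operand 1).
# Since 16 is neither 1 nor equal to 32, einsum cannot broadcast them.
q = torch.randn(32, 8, 64)
k = torch.randn(16, 8, 64)

torch.einsum("b i d, b j d -> b i j", q, k)
# RuntimeError: einsum(): subscript b has size 16 for operand 1 which does
# not broadcast with previously seen size 32
```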
Legend.
Thank you for the tutorial, but I have a problem: it does not interpolate between the various input images. Instead, it takes them individually and makes a single video for each image with no interpolation. What am I doing wrong?
To create a morph animation, we'll use the FILM option within the AnimateDiff extension. It's imperative to have the Deforum extension installed to activate the "FILM" interpolation feature.
@NextDiffusion Thanks for the answer. I already have Deforum installed; is there an option in the settings that I have to change? The video result is an interpolation between one image and noise (not between images): it starts with the image and finishes with noise, doesn't merge the images, and creates a separate video for every image 😔
Which version of SD are you using? I have 1.8 and I am getting errors: RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 20 but got size 2 for tensor number 1 in the list.
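That RuntimeError comes from concatenating tensors along dimension 1 when some other dimension disagrees (20 on one tensor versus 2 on the other). A minimal reproduction with placeholder shapes (illustrative only; the real shapes depend on your batch and frame settings):

```python
import torch

# torch.cat along dim 1 requires every other dimension to match.
a = torch.randn(20, 4, 64, 64)   # dim 0 is 20
b = torch.randn(2, 4, 64, 64)    # dim 0 is 2 -> mismatch

torch.cat([a, b], dim=1)
# RuntimeError: Sizes of tensors must match except in dimension 1.
# Expected size 20 but got size 2 for tensor number 1 in the list.
```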
I literally missed everything you said in the intro and had to rewind. Then I missed it again.
I only get single images out of it. What could be the reason?
Is an NVIDIA graphics card what you used for Stable Diffusion?
Certainly, that would be your best option. Check out our blog for a selection of top-notch graphics card options: www.nextdiffusion.ai/blogs/budget-graphic-cards-for-running-stable-diffusion-locally
But aren't you just using Deforum for frame interpolation? I don't see AnimateDiff doing anything here.