Brother, everyone is looking for the solution you just showed us... the video title does not say this is the solution... I was just lucky I found your video!!!
THANK YOU!! IT WORKED!!! Now, just for education: what is git fetch?
Thanks a lot for your feedback, I've added a hint regarding the bug fix to the title :)
Regarding Git Fetch: "The git fetch command downloads commits, files, and refs from a remote repository into your local repo".
In short: it makes the remote's commits and files available locally, so a specific commit can then be checked out.
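To make this concrete, here is a minimal, self-contained sketch (using throwaway repos, not the webui's) showing that `git fetch` downloads the remote's new commits without touching your checked-out files:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Create a "remote" repository with one commit.
git init -q remote-repo
git -C remote-repo -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "first"

# Clone it, then add a second commit on the remote only.
git clone -q remote-repo local-repo
git -C remote-repo -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "second"

# fetch downloads the new commit, but HEAD (your files) is unchanged.
git -C local-repo fetch -q origin
git -C local-repo log --format=%s -n 1 FETCH_HEAD   # the fetched commit
git -C local-repo log --format=%s -n 1 HEAD         # still the old one
```

The same two-step idea (fetch first, then check out what you fetched) is what the bug fix in the video relies on.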
Yeah, wow, thanks! Finally someone who explained how to use that video source panel in AnimateDiff with ControlNet!
I'm glad that the video is useful. Thanks for sharing.
Not working for me; the video source stays black and empty (0:00) no matter which extension I use for the video source.
For me the ControlNet extension is not enabled in the extensions tab, and if I enable it and press Apply and restart UI, it gets disabled again automatically. I get the errors "Error running postprocess_batch_list" and "Error running postprocess", and a warning "No motion module detected, falling back to the original forward". I installed both extensions from the URL and put both models in the appropriate directories. I guess that's why I only have ControlNet Integrated on my generation tab.
I did the fix from the end section of the video and now it says: TypeError: HEAD is a detached symbolic reference as it points to '10bd9b25f62deab9acb256301bbf3363c42645e7' on startup
Getting: "RuntimeError: Could not allocate tensor with 1073741824 bytes. There is not enough GPU video memory available!" with an RX6600. Any help?
Amazing video, thank you so much. By any chance, do you know how people are getting these really high-quality visuals on Civitai? I see them using this technique, but their output looks borderline image-level quality. I assume they're just letting the thing render for hours?
Also, using the same steps with ADetailer and ReActor enabled, I'm getting around 3-4 minutes processing time, on an Nvidia RTX 4060 Ti 16GB and an i5-12600K.
Thanks a lot for your feedback! A longer prompt and negative prompt, trying many runs, and perhaps upscaling the result would certainly enhance it. Choosing a checkpoint well suited for animation has a lot of influence, too.
There is another space under ControlNet where we can drop an image. What is that for?
Has anyone gotten this error? AttributeError: 'NoneType' object has no attribute 'batch_size'
Thanks for the video. I'm searching YouTube for a reason why my ControlNet isn't working with AnimateDiff.
@matthallett4126 Thanks a lot for your feedback. What exactly does not work between ControlNet and AnimateDiff?
@@NextTechandAI My guess is that it's the latest version of ControlNet. Issues on GitHub say the AnimateDiff maintainer isn't keeping it updated.
@matthallett4126 Right, that's currently a problem. Have you tried the related bug fix I explained in the video? You can use an "older" commit of ControlNet to get it working with AnimateDiff.
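The "older commit" fix boils down to checking out a specific commit, which intentionally leaves the repository in a "detached HEAD" state (the message some commenters see on startup; it is expected, not an error in itself). A minimal, self-contained sketch, using a throwaway repo as a stand-in for extensions/sd-webui-controlnet:

```shell
set -e
cd "$(mktemp -d)"

# Stand-in for the extension repo (in practice this would be
# stable-diffusion-webui/extensions/sd-webui-controlnet).
git init -q ext
cd ext
git -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "older working version"
old_commit=$(git rev-parse HEAD)
git -c user.email=a@b -c user.name=demo \
    commit -q --allow-empty -m "latest version"

# Pin the repo to the older commit. Git reports "detached HEAD",
# which is expected here and harmless.
git checkout -q "$old_commit"
git log --format=%s -n 1 HEAD   # prints "older working version"
```

To go back to the newest version later, checking out the branch again (typically `main`) followed by `git pull` restores normal tracking.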
@@NextTechandAI I was going to revisit the video for that fix but then just decided to move over to Comfy, and then just gave up because really what I was doing was procrastinating the rendering work I needed to do. But I appreciate the help! BTW, I have the same number of subs on my AI for Architecture Channel. I don't know how you find the time to do such quality post editing! What are you using?
@matthallett4126 Indeed, time is a limiting factor :) I did the post-production with DaVinci Resolve; I switched from Sony Vegas last year.
For some reason animatediff is not showing up in the web ui :(
What version of the web ui are you using?
Strange. You can see all versions, e.g. at 01:56 at the bottom of the video. BTW, you can try temporarily deactivating all extensions except AnimateDiff; maybe one extension is blocking it.
@@NextTechandAI OK, I have figured it out :D Maybe this will help some people. There are two solutions. My webui version is 1.10.1. The first: I deleted the "venv" folder in the Stable Diffusion directory and ran webui-user.bat again; after SD re-downloaded everything, it all works as it should. The second (which also fixed my other issues, like a few extensions not showing up) is dropping the webui version from 1.10.1 to 1.9.
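The venv reset described here is a generic "delete and let the launcher rebuild it" pattern. A portable sketch with a throwaway virtual environment (for the webui itself you would delete stable-diffusion-webui/venv and run webui-user.bat, which recreates it on the next launch):

```shell
set -e
tmp=$(mktemp -d)

python3 -m venv "$tmp/venv"   # stand-in for the webui's venv folder
rm -rf "$tmp/venv"            # step 1: delete the venv
python3 -m venv "$tmp/venv"   # step 2: the launcher recreates it like this
test -x "$tmp/venv/bin/python" && echo "venv rebuilt"
```

Rebuilding re-downloads all Python dependencies, so the first launch afterwards takes a while.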
@@Lukasz490 Thanks a lot for sharing this & have fun with AnimateDiff.
@@NextTechandAI And thank you for the tutorial!
Is ZLUDA compatible with AnimateDiff? I installed Automatic1111 with ZLUDA since I have an AMD RX 6800. Installing AnimateDiff is not a problem, but generation uses my CPU instead of the GPU. Do you know a solution? Thanks!
As far as I know it should work. There's currently an issue with AnimateDiff v3, you could try AnimateDiff v2.
@@NextTechandAI Indeed, it seems to work with the v2 model. Thanks for the advice.
@@Painbeas I'm happy that it's working, now. Thanks for sharing.
Unfortunately it does not animate; I get this error: EinopsError: Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c". Input tensor shape: torch.Size([2, 4096, 320]). Additional info: {'f': 16}. Shape mismatch, can't divide axis of length 2 in chunks of 16.
I have a 2080 Ti 12GB GPU.
Have you updated AnimateDiff or other components recently? There seems to be a bug that appeared a few days ago. Somebody fixed it on his machine by updating the Automatic1111 web UI from version 1.6.0 to 1.8.0.
Where should I put the .pth file? OK, understood! Starting at 3:45 (watch until 4:23), the video explains where the four files should be put.
I'm happy that you found it in my vid.
Where can I get the prompt at 5:08?
You pause the video and type the prompt into an editor - or what do you mean?
@@NextTechandAI I am not willing to do anything more difficult than copy and paste. Could you help me?
Seriously? Then I think such complex topics are not for you.
@@NextTechandAI Come on! An easy copy-and-paste can save time!
@@weishanlei8682 In the time you took to complain, you could have just written it out yourself.
5/5
Thank you :)