Automatic1111 Animatediff Controlnet - How to make AI animations Stable Diffusion

  • Published 9 Sep 2024
  • To make incredible AI animations, combine AnimateDiff and ControlNet. This video covers the installation process as well as some easy tricks that can produce really cool results.
    Project Resources + Prompts: goshnii.gumroa...
    Animatediff for Beginners: • AnimateDiff and (Autom...
    Animatediff Models: huggingface.co...
    ControlNet Models: huggingface.co...
    Helloypund2d on Civitai: civitai.com/mo...
    Animatediff Tutorials: • Automatic 1111 Tutorials
    Installing Deforum and ControlNet STEP-BY-STEP (Automatic 1111): • Installing Deforum and...
    #animatediff #aianimation #gifs #stablediffusion #text2video

COMMENTS • 80

  • @pedro_a_martins
    @pedro_a_martins 6 months ago +1

    This is awesome! Thank you so much!

    • @goshniiAI
      @goshniiAI  6 months ago

      You are most welcome, and I appreciate your kind words!

    • @serinadelmar6012
      @serinadelmar6012 5 months ago

      Same, Thank you! ❤

    • @goshniiAI
      @goshniiAI  5 months ago +1

      @@serinadelmar6012 I'm glad to hear from you.♥

  • @apidyahex9213
    @apidyahex9213 6 months ago

    Just waiting for checkpoints to download to give it a go, but so far this is the best video I've seen that teaches animation in Stable Diffusion. Keep them coming, you are awesome.

    • @goshniiAI
      @goshniiAI  6 months ago +1

      Happy animating! Your words of encouragement mean a lot, and I am glad I could assist you. Thank you for your feedback.

    • @apidyahex9213
      @apidyahex9213 6 months ago

      @@goshniiAI I'm yet to try it fully, but honestly your description and steps, with links and highlighting, are so easy to follow. I'm coming to your channel first for all my info and tutorials now! Thanks for spending time making the video.

    • @goshniiAI
      @goshniiAI  6 months ago +1

      @@apidyahex9213 It's nice to know everything connects with you and is simple to follow. Thank you for tuning in again.

    • @apidyahex9213
      @apidyahex9213 6 months ago

      Finally got round to trying and got this error: EinopsError: Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c". Input tensor shape: torch.Size([2, 6144, 320]). Additional info: {'f': 16}. Shape mismatch, can't divide axis of length 2 in chunks of 16. Any ideas? Thanks @@goshniiAI

    • @apidyahex9213
      @apidyahex9213 6 months ago

      Tried everything, and after that error even text-to-image generation doesn't work unless I restart Stable Diffusion. Thanks for the vid, but no luck with any animation for me still.

  • @plan2501
    @plan2501 4 days ago

    nice tutorial

  • @DaltOniXProductions
    @DaltOniXProductions 5 months ago

    This was a very helpful reminder thanks! I need some coffee lol

    • @goshniiAI
      @goshniiAI  5 months ago +1

      Lol... I'm grateful that you stopped by and glad that was useful.

  • @romeofthesouth
    @romeofthesouth 3 months ago

    This is excellent, I've been trying to get consistency for a while now. I will try this later. I have Forge so hope it renders faster than 8 hours 😭

    • @goshniiAI
      @goshniiAI  3 months ago

      I'm glad you found the video helpful! Using Forge can speed up the process compared to an 8-hour render, which can be exhausting! lol

  • @aiximagination
    @aiximagination 8 months ago +1

    Great video. Thank you!

    • @goshniiAI
      @goshniiAI  8 months ago

      I appreciate your kind words very much!

  • @T.L-TV
    @T.L-TV 6 months ago

    thank you for this tutorial

    • @goshniiAI
      @goshniiAI  6 months ago

      Thank you for the feedback, and you're welcome.

  • @MisterCozyMelodies
    @MisterCozyMelodies 4 months ago

    thanks, again!!

    • @goshniiAI
      @goshniiAI  4 months ago

      I'm glad to help, and I appreciate your comments.

  • @ChaseEverything
    @ChaseEverything 3 months ago

    I was working on it but then I got this
    EinopsError: Error while processing rearrange-reduction pattern "(b f) d c -> (b d) f c". Input tensor shape: torch.Size([2, 6144, 320]). Additional info: {'f': 16}. Shape mismatch, can't divide axis of length 2 in chunks of 16

    • @goshniiAI
      @goshniiAI  3 months ago

      The error could be a problem with the dimensions not being divisible by 16: the pattern needs the first tensor axis to be batch × 16 frames. You may need to adjust your input settings so the shape matches the intended format.
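
That divisibility constraint can be reproduced outside Stable Diffusion. A minimal numpy sketch of the failing rearrange (the real d and c from the error, 6144 and 320, are shrunk here to keep the arrays small):

```python
import numpy as np

# AnimateDiff's rearrange pattern "(b f) d c -> (b d) f c" assumes the
# first tensor axis is batch * frames. The error reported a first axis
# of 2 with f=16: 2 is not divisible by 16, so the frame axis cannot
# be split out. (The real shape was [2, 6144, 320].)
frames = 16
x = np.zeros((2, 8, 4))

print(x.shape[0] % frames)  # 2, i.e. not divisible -> the rearrange fails

try:
    # The same factorization einops attempts: (b f) -> b, f
    x.reshape(-1, frames, 8, 4)
except ValueError as e:
    print("reshape failed:", e)

# With a first axis that really is batch * frames (2 * 16 = 32), the
# pattern "(b f) d c -> (b d) f c" succeeds:
y = np.zeros((32, 8, 4))
b = y.shape[0] // frames
out = y.reshape(b, frames, 8, 4).transpose(0, 2, 1, 3).reshape(b * 8, frames, 4)
print(out.shape)  # (16, 16, 4)
```

In practice the fix is on the settings side: make sure the tensor entering AnimateDiff really carries batch × frames along its first axis.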

  • @ArtWreckAI
    @ArtWreckAI 7 months ago

    Thank you for this tutorial. BTW, the PNG info gave me "parameters: none".

    • @goshniiAI
      @goshniiAI  7 months ago

      You're welcome, and I appreciate your feedback.
      If the PNG info shows "parameters: none," you might want to verify your settings, or you can find the image that was used in the project resources.

  • @user-tv4ls1lx9n
    @user-tv4ls1lx9n 1 month ago

    I can't even get AD to run in A1111 with all the settings the same as you had there. At least that I can see. When I hit generate, it just generates another image. AD is enabled and models installed etc

    • @goshniiAI
      @goshniiAI  1 month ago

      Hello there, I'm sorry to hear that. I recently went through the same thing, and I talked about it in this video:
      ua-cam.com/video/nlk64bEID54/v-deo.html - AnimateDiff might be a little tricky at times. First, ensure that you've picked the correct models and that they're properly loaded. Also, ensure that the ControlNet settings are properly linked to AnimateDiff, as it may occasionally revert to regular image generation.

  • @zhentan6734
    @zhentan6734 1 month ago

    Why is your stable diffusion so obedient!

    • @goshniiAI
      @goshniiAI  1 month ago

      😄 I'm so grateful to get it to behave just right.

  • @estebanmoraga3126
    @estebanmoraga3126 2 months ago

    Great tutorial, thanks for this! Question though: is there a way to feed it an image to be animated, like the sourced video? Say I want to animate a specific, original character singing. Can I provide an image of said character and a video of someone singing, and have Comfy replace that person with the character? Or does AnimateDiff work through prompts only at the moment?

    • @goshniiAI
      @goshniiAI  2 months ago +1

      Using IPAdapter can help you achieve this, or you can use Face Reactor. You may use any video and replace the face with either IPAdapter or Face Reactor. If you're familiar with ComfyUI, I recommend that for better outcomes.

  • @SecretAsianManOO7
    @SecretAsianManOO7 10 days ago

    1:10 lol I'm using my F drive too. The only thing that's not working for me is that the controlnet openpose preview is not showing up. I'm still generating images that match the pose however. When I try to make animations, I get a CUDA error. I'm keeping the prompt within 75 so not sure why it's being difficult. Appreciate the guide either way, cheers.

    • @goshniiAI
      @goshniiAI  19 hours ago

      Hi, it could be a ControlNet conflict following an update. I have covered that in more detail in this video if you have time: ua-cam.com/video/nlk64bEID54/v-deo.html

  • @ChaseEverything
    @ChaseEverything 3 months ago

    Hi, for some reason I got to the stage where you hit the explosion icon and then Generate, and it didn't generate with the pose, even though I can see the mask for it. :(

    • @goshniiAI
      @goshniiAI  3 months ago

      You are correct, there is a problem with ControlNet and AnimateDiff merging in A1111, which I also encountered. I discuss the solution to get this working in this upload: ua-cam.com/video/nlk64bEID54/v-deo.html

  • @Postfxx
    @Postfxx 7 months ago +3

    Hi, it's been 3 days I am trying this. The AnimateDiff + ControlNet unit is not working in batch, but if I disable AnimateDiff, ControlNet works fine. Please help me, I would be very thankful.

    • @goshniiAI
      @goshniiAI  7 months ago +2

      I'm sorry to hear that, it must be frustrating.
      1. I recommend carefully reviewing your parameters before generating your prompt.
      2. Run a Stable Diffusion update for A1111.
      3. Check A1111's extension updates.
      4. Check the ControlNet settings under the Automatic1111 settings.
      5. Check for FFmpeg if you do not already have it installed.
      Sometimes, adjusting a few parameters can make all the difference.
      I hope some of these are useful.
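
Steps 3 and 5 of that checklist can be sanity-checked from a terminal. A sketch, assuming a standard git-based A1111 install; the `EXT_DIR` path is hypothetical and should be adjusted to your machine:

```shell
# Step 5: is ffmpeg on PATH?
if command -v ffmpeg >/dev/null 2>&1; then
  FFMPEG_STATUS="found"
else
  FFMPEG_STATUS="missing"
fi
echo "ffmpeg: $FFMPEG_STATUS"

# Step 3: pull the latest commit for each installed A1111 extension.
EXT_DIR="stable-diffusion-webui/extensions"
if [ -d "$EXT_DIR" ]; then
  for ext in "$EXT_DIR"/*/; do
    git -C "$ext" pull --ff-only || echo "could not update $ext"
  done
else
  echo "no extensions directory at $EXT_DIR"
fi
```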

    • @Postfxx
      @Postfxx 7 months ago

      Thank you for your reply, I really appreciate your feedback and your work. Checking with that now, hoping to get it correct.

    • @goshniiAI
      @goshniiAI  7 months ago +1

      You are welcome. For reference, these are my ControlNet settings: tinyurl.com/2rjynv3b @@Postfxx

    • @Postfxx
      @Postfxx 7 months ago

      @@goshniiAI How can I send a screenshot in a comment here? It's getting deleted, the link is getting deleted...

    • @Postfxx
      @Postfxx 7 months ago

      Please help, I really need this to work...

  • @olafguerrero740
    @olafguerrero740 7 months ago

    Thank you for creating this tutorial. Quick question: did you say it took 8 hours to see the final results? Can you recommend some options to create lower-res examples, to make sure ControlNet is using the reference folder before you do your final output?

    • @goshniiAI
      @goshniiAI  7 months ago

      You are welcome! Yes, the final high-resolution output took approximately 8 hours. To create lower-resolution examples and ensure ControlNet is properly using the reference folder before the final render, try reducing the image size or rendering a shorter sequence first.
      This allows you to quickly verify the process and make changes as needed before committing to the final render. I hope this helps!
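
As a rough sketch of why a low-res preview is cheap: diffusion render time scales roughly with pixel count times frame count (real timings vary with model, sampler, and hardware). All numbers below are hypothetical, not taken from the video:

```python
# Back-of-envelope estimate for a preview render, assuming time scales
# linearly with (width * height * frames). Hypothetical figures.
final_w, final_h, final_frames = 512, 768, 120
final_hours = 8.0

preview_w, preview_h, preview_frames = 256, 384, 16

scale = (preview_w * preview_h * preview_frames) / (final_w * final_h * final_frames)
preview_hours = final_hours * scale
print(f"estimated preview render: {preview_hours * 60:.0f} minutes")  # 16 minutes
```

A sixteen-minute preview is enough to confirm ControlNet is reading the reference folder before committing to the overnight render.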

  • @Joe-ce6cc
    @Joe-ce6cc 4 months ago

    I have a question: why do you need to import the PNG sequence batch, since you already have the video as reference for OpenPose?

    • @goshniiAI
      @goshniiAI  4 months ago +1

      Hello, the video reference was required for AnimateDiff, while the batch frames guided ControlNet. I didn't use the video for ControlNet because the image sequence helped me save GPU processing time.
      However, there can always be multiple ways to achieve the same result.
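
For anyone reproducing the batch-frames step, a numbered PNG sequence can be extracted from the reference clip with ffmpeg. A minimal sketch; the file name `reference.mp4` and the 15 fps rate are hypothetical placeholders:

```shell
# Extract a numbered PNG sequence that ControlNet's batch input can read.
# fps=15 resamples the clip; %04d.png produces 0001.png, 0002.png, ...
mkdir -p frames
if command -v ffmpeg >/dev/null 2>&1 && [ -f reference.mp4 ]; then
  ffmpeg -y -loglevel error -i reference.mp4 -vf fps=15 frames/%04d.png
fi
echo "frames extracted: $(ls frames | wc -l)"
```

Point the ControlNet batch directory at `frames/` so the frame count matches what AnimateDiff expects.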

  • @barisvkabalak
    @barisvkabalak 6 months ago

    When I use AnimateDiff, it batches 32 different images. How can I fix this?

    • @goshniiAI
      @goshniiAI  6 months ago

      At times AnimateDiff performs random animations. Kindly check the settings to ensure you're specifying the correct processing inputs, as well as the model you're using for AnimateDiff's generations.

  • @ucphamvan1711
    @ucphamvan1711 4 months ago

    Hello, may I ask how you wrote such a long prompt and still made it work? When I write a prompt longer than 75 words, it gives me the error RuntimeError: CUDA error: device-side assert triggered.

    • @goshniiAI
      @goshniiAI  4 months ago

      Hello there, it's probably caused by memory limits on your GPU, especially if processing is done with CUDA. To use less memory, you could divide your prompt into smaller sections or make your output size a little smaller.

    • @ucphamvan1711
      @ucphamvan1711 4 months ago

      @@goshniiAI I'm using a 3090 Ti with 24 GB VRAM. Can you help me optimize?

    • @goshniiAI
      @goshniiAI  4 months ago

      @@ucphamvan1711 I believe you are well-equipped for high performance. You have a powerful GPU.
      I would not recommend overclocking, as it usually has negative long-term effects on any GPU.
      Consider using a lower resolution for early drafts and then upscaling the final animation.
      Also, keep track of your VRAM usage while rendering. If it reaches your 24 GB limit, you may need to adjust the settings or consider using cloud resources with more VRAM for complex projects.

    • @ucphamvan1711
      @ucphamvan1711 4 months ago

      @@goshniiAI Thank you, I'll try again.

  • @dragongaiden1992
    @dragongaiden1992 3 months ago

    Friend, what graphics card should I use? I can't use hires.

    • @goshniiAI
      @goshniiAI  3 months ago

      Hey friend, a GPU with at least 8GB of VRAM, though 12GB or more is better. Some common options include the NVIDIA RTX 3060, 3070, or 3080 series.

    • @ChaseEverything
      @ChaseEverything 3 months ago

      @@goshniiAI Hi there brother, what GPU did you use for this video, and what speeds were you getting for generation?

  • @siddhartharoy5263
    @siddhartharoy5263 1 month ago

    I am on Google Colab. Please make a video on how to do this masterpiece on Google Colab.

    • @goshniiAI
      @goshniiAI  1 month ago

      Hello there, thank you for the lovely words!
      Sadly, I have no experience with Google Colab. However, you may find some useful resources from the community or other creators who specialize in Colab.

  • @villevase
    @villevase 7 months ago

    Hi, can you tell me why AnimateDiff is changing the photo halfway? Thanks!

    • @goshniiAI
      @goshniiAI  7 months ago

      Thank you for reaching out.
      To clarify, the technique was not meant to achieve similarity with the reference image, but rather to use the OpenPose motion from the reference to direct the animation.
      And I agree that AnimateDiff sometimes creates surprises and uncertainties in the process.
      Hopefully, based on my recent research, I'll be able to upload a way around this soon.

    • @villevase
      @villevase 7 months ago

      Thanks for clarifying. Great tutorial! @@goshniiAI

    • @goshniiAI
      @goshniiAI  7 months ago +1

      @@villevase You are very welcome! Happy creating.

    • @RankingIsHell
      @RankingIsHell 7 months ago +1

      I believe that because he is using ControlNet, the image doesn't change halfway, since ControlNet takes over. If you aren't using ControlNet, try not going over 75 characters. It has worked for me as long as I don't go over 75 characters; I had the same problem too. Others on Reddit have said the same thing.
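
For context, the 75 figure likely comes from CLIP's token limit: the text encoder sees at most 77 tokens (75 plus start/end markers), and A1111 works around this by encoding long prompts in 75-token chunks. A minimal illustration of that chunking, using naive whitespace splitting in place of CLIP's real BPE tokenizer:

```python
# Illustration only: A1111 splits prompts into 75-token chunks for CLIP.
# Real tokenization is BPE, so word count != token count; whitespace
# splitting here just shows the chunking scheme.
CHUNK = 75

def chunk_prompt(prompt: str, size: int = CHUNK) -> list:
    tokens = prompt.split()
    return [tokens[i:i + size] for i in range(0, len(tokens), size)]

long_prompt = " ".join(f"tag{i}" for i in range(170))
chunks = chunk_prompt(long_prompt)
print([len(c) for c in chunks])  # [75, 75, 20]
```

Each chunk is encoded separately and the embeddings are combined, which is why long prompts work at all; an AnimateDiff or CUDA failure at that boundary points to an extension conflict rather than the prompt length itself.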

    • @goshniiAI
      @goshniiAI  7 months ago +1

      I appreciate all of the extra information and practical advice. It'll be useful to a lot of us. Let's keep exploring and discovering together! @@RankingIsHell

  • @alienandroid943
    @alienandroid943 25 days ago

    I'm using a 1070, ugh, so slow...

    • @goshniiAI
      @goshniiAI  19 hours ago

      Consider using a smaller resolution or frame size to improve your rendering speed. Later, you can upscale using an upscaler like Topaz.

  • @HopsinThaGoat
    @HopsinThaGoat 8 months ago

    Your honest opinion: does this replace Deforum?

    • @goshniiAI
      @goshniiAI  8 months ago

      That's a great question. In my opinion, the decision comes down to our individual targets, since both have their benefits. The most important thing is to choose the tool that best fits our project goals.