Transforming Your Videos Has Never Been Easier!

  • Published 12 Sep 2024
  • In this video, we delve into the SD-CN-Animation extension in Stable Diffusion. Creating new videos or modifying existing ones has never been easier. With detailed prompt descriptions, ControlNet, and LoRA, you can produce beautiful animations. The RAFT method significantly reduces the flickering problem.
    Although the additional settings in vid2vid remain a mystery, I will share any new information on my Discord page once I discover more. Keep an eye out!
    📣📣📣 I have just opened a Discord server to discuss SD and AI art (common issues and news); join using the link: / discord
    🤙🏻 Follow me on Medium to get my Newsletter:
    - Get UNLIMITED access to all articles: / membership
    - Laura: / lauracarnevali
    - Intelligent Art: / intelligent
    📰 Medium Article:
    / sd-cn-animation-extension
    📌 Links:
    - GitHub SD-CN-Animation: github.com/vol...
    - RAFT paper: arxiv.org/pdf/...
    00:51 Discord Page - JOIN US ;)
    02:15 Install the SD-CN-Animation extension
    04:25 Text-to-video animation (txt2vid tab)
    07:31 Processing strength (Step 1) and Fix frame strength (Step 2)
    08:52 Where to find the outputs
    09:16 Video-to-video (vid2vid tab)
    15:07 Conclusions

COMMENTS • 70

  • @AIPlayground_
    @AIPlayground_ 1 year ago +12

    Good video :) I started using SD-CN-Animation a few days ago and it's great; I use similar settings. I have three tips that increase the stability of the output video:
    1. Take the image that you generate in img2img and, in SD-CN, add another ControlNet with "reference_only" and put that image in there (so you have two ControlNets: one with tile and another with reference_only using the image from img2img). The downside of this is that processing time increases a lot :(, but you will get more coherence and a better stylized video.
    2. Sadly, if you try to output a video at a low resolution (below 512x512), the flickering is greater, so if you want amazing results it's better to increase the output resolution.
    3. If your input video has a lot of rapid movement (like a dance video), you will see a lot of ghosting in the video. You can decrease that effect with these settings, found after a lot of trial and error (collected into a single JSON fragment below):
    "occlusion_mask_flow_multiplier": 3,
    "occlusion_mask_difo_multiplier": 4,
    "occlusion_mask_difs_multiplier": 3,
    "step_1_processing_mode": 0,
    "step_1_blend_alpha": 0,
    If your video doesn't get the ghosting effect, don't apply these settings, because they will increase the flickering.
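
    Collected in one place, the anti-ghosting overrides above read as a small JSON fragment (a sketch only: exactly where these keys live in the SD-CN-Animation vid2vid settings is an assumption, and the reading that the first three scale the occlusion mask while "step_1_blend_alpha": 0 disables frame blending in step 1 comes from the key names, not from documentation):

    {
        "occlusion_mask_flow_multiplier": 3,
        "occlusion_mask_difo_multiplier": 4,
        "occlusion_mask_difs_multiplier": 3,
        "step_1_processing_mode": 0,
        "step_1_blend_alpha": 0
    }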

    • @LaCarnevali
      @LaCarnevali  1 year ago +2

      Thank you! Pinned! :)

    • @bradballew3037
      @bradballew3037 1 year ago

      Thanks for this. I'm trying to use this to make some 3D-rendered characters look a bit more realistic. It works pretty well, but I'm still trying to get the most consistent temporal coherence. Any new tips since you posted this? I'm really digging in and running tests to try to see what all the settings do exactly.

  • @tioilmoai
    @tioilmoai 1 year ago +3

    Congrats Laura! I'm Brazilian and you are Italian speaking English in your tutorials, which helps me understand your content better since we have similar native languages! Good job! I hope your channel grows a lot! Keep giving us SD content! Thanks a lot! My name is Tio Ilmo!

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Hi Tio Ilmo! Happy to hear that :)

  • @gameboardgames
    @gameboardgames 10 months ago

    Thanks so much for this video. It's hard to find info on using sd-cn-animation. Your video is super helpful!

  • @bonym371
    @bonym371 1 year ago +2

    Laura, your videos are perfect. You're so good at explaining; please keep producing content. I subscribed in 30 seconds flat. Plus, your Italian accent, I could listen to it all day long!!

  • @Sully365
    @Sully365 1 year ago

    You are the first person to explain ControlNet in a way that makes sense to me. Can't thank you enough, great job!

  • @CarlosGarcia-tk5du
    @CarlosGarcia-tk5du 1 year ago

    You're so awesome!! Great teacher. I'm going to join your Discord later when I get on my PC. I've learned more in 20 mins of watching your videos than from most. You explain everything so well.

  • @SantoValentino
    @SantoValentino 1 year ago

    Thanks Laura 🫡

  • @CognitiveEvolution
    @CognitiveEvolution 1 year ago

    This is one of the clearest explanations I've experienced on Stable Diffusion.

  • @colinfoo2856
    @colinfoo2856 1 year ago +1

    Thank you so much, Laura, for these tutorials. Much appreciated ❤🎉

  • @SouthbayCreations
    @SouthbayCreations 1 year ago

    Great video, Laura, thank you! I joined your Discord too!! 🥳🥳

  • @memoryhero
    @memoryhero 1 year ago

    What a great tutorial. Excellent presentation visually, aurally, and organizationally!

  • @jrbirdmanpodcast
    @jrbirdmanpodcast 1 year ago

    This was very helpful, Laura. Thank you very much.

  • @gordmills1983
    @gordmills1983 9 months ago

    Have to say… what a nice young lady! Subscribed.

  • @electrolab2624
    @electrolab2624 1 year ago

    Thank you! I tried the mov2mov extension for automatic1111 and like it a lot! Wondering why not many people use it.

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Because not many people are aware of its existence, which is understandable given the quantity of extensions for A1111.

  • @creativeleodaily
    @creativeleodaily 1 year ago

    Amazing video, I will experiment with this soon. Although I used img2img and converted a batch from a 30 fps, 15-sec video, it turned out quite good on the first attempt.
    I'm curious, what GPU are you using?

  • @Ilovebrushingmyhorse
    @Ilovebrushingmyhorse 1 year ago

    Haven't watched the video, but I saw the thumbnail, and "video stable diffusion" sounds like something that would absolutely destroy my PC.

  • @TCISBEATHUB
    @TCISBEATHUB 1 year ago

    🏆🏆🏆 Love watching your videos. Thank you for the time you take to make them.

  • @alphonsedes8021
    @alphonsedes8021 1 year ago

    Impressive: a good tool if you're after really consistent animations, but a very long process indeed. Nice video, thanks!

  • @EllenVaman
    @EllenVaman 1 year ago

    Thanks lovely ;)

  • @HiggsBosonandtheStrangeCharm
    @HiggsBosonandtheStrangeCharm 1 month ago

    Hi Laura, love your videos. I was just trying to follow your tutorial, but I don't seem to be able to find the SD-CN-Animation tab. I'm loading from the same "Extension index URL", but it mustn't exist any more? If you know a workaround, please let me know. Thanks heaps.......

  • @zglows
    @zglows 1 year ago

    Hi Laura! Your videos are awesome. What do you recommend for getting the best animation results: the method you explain right here, or the one from your previous video?

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Hi, this is a very good one, but it takes a little time.

  • @aljopot4236
    @aljopot4236 1 year ago

    Thanks for this tutorial, I think this is the easiest method.

  • @ronnykhalil
    @ronnykhalil 1 year ago

    Love your videos. Thank you

  • @KDashHoward
    @KDashHoward 1 year ago

    Thank you, it's really great and well explained! I was wondering... what's the main difference between this plugin and the Temporal Kit one? :O

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Hello!! :) What are you referring to when you say "Temporal Kit plugin"?

  • @NikhilJaiswal4129
    @NikhilJaiswal4129 1 year ago

    A WarpFusion tutorial on Mac or RunPod, please.
    Is there an option to use WarpFusion for free?

  • @RiotRemixProductions
    @RiotRemixProductions 1 year ago

    ✍👍

  • @lucianodaluz5414
    @lucianodaluz5414 1 year ago

    If there were a way to make it stop "imagining" the image for each frame, that would solve this. Is there one? Like, "use the prompt just for the first frame and do your job" :)

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Maybe in an upgrade, but I'm not sure.

  • @CrazyBullProduction
    @CrazyBullProduction 1 year ago

    Thank you so much for the tutorial!
    Unfortunately, I get an error message after trying to generate the first frame that says "Torch not compiled with CUDA enabled".
    Do you have some magic information to help? 😀

    • @LaCarnevali
      @LaCarnevali  1 year ago +1

      Hi, that is not an issue if you are using a Mac. Do you see any other errors?

    • @CrazyBullProduction
      @CrazyBullProduction 1 year ago +1

      @@LaCarnevali I am using a Mac, but after generating the first frame I get this message in SD: "An exception occurred while trying to process the frame: Torch not compiled with CUDA enabled", and no other error messages in warp.

    • @LaCarnevali
      @LaCarnevali  1 year ago

      @@CrazyBullProduction Try launching webui.sh with the --no-half flag:
      ./webui.sh --no-half
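
      A minimal sketch of the full macOS launch, assuming a stock Automatic1111 install in its default clone folder; --skip-torch-cuda-test is a stock A1111 flag that skips the startup CUDA check Torch fails on Apple hardware, though whether it is needed alongside --no-half here is an assumption:

      cd stable-diffusion-webui                      # assumed default clone folder
      ./webui.sh --no-half --skip-torch-cuda-test    # --no-half disables half-precision weights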

  • @corujafilmmaker3724
    @corujafilmmaker3724 1 year ago

    🎉🎉🎉🎉🎉

  • @benmcc7729
    @benmcc7729 1 year ago

    Hi Laura, I'm new to this, but I don't have ControlNet in my version (was it removed?)

    • @LaCarnevali
      @LaCarnevali  1 year ago

      I don't think it has been removed. Do you have ControlNet installed and activated in the Extensions tab?

  • @electricdreamer
    @electricdreamer 1 year ago +1

    Can you do this with Invoke AI's webui, or does it have to be Automatic1111?

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Only Automatic1111 is fully supported

  • @alphacentauri424
    @alphacentauri424 1 year ago

    omg this girl is cute :) and not just cute, but that good kind of cute :)

  • @m_sha3er
    @m_sha3er 1 year ago

    It takes too much time with multiple CNs; I'm testing a 2-sec video and it's giving me about 6 h 45 min 😅😅

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Yeah, it took me 4 h for an 11-second video! Probably something that needs improvement.

  • @Comic_Book_Creator
    @Comic_Book_Creator 1 year ago

    I just tried it, and I don't see the tab.

  • @JadhuGhr-lz8en
    @JadhuGhr-lz8en 1 year ago

    How do I update SD to the latest version? 😊

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Run git pull when in the main folder :)
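
      A quick sketch of that update, assuming the default clone folder name for the A1111 "main folder":

      cd stable-diffusion-webui   # the webui install folder
      git pull                    # fetch and apply the latest commits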

  • @sidbhattnoida
    @sidbhattnoida 1 year ago

    Hi... does this work on a Mac?

  • @NikhilJaiswal4129
    @NikhilJaiswal4129 1 year ago

    Please help me with the Thin-Plate-Spline-Motion-Model for SD.ipynb

    • @LaCarnevali
      @LaCarnevali  1 year ago

      What about that?

    • @NikhilJaiswal4129
      @NikhilJaiswal4129 1 year ago

      In step 3:

      AttributeError                            Traceback (most recent call last)
      <ipython-input> in <cell line: 8>()
            8 if predict_mode=='relative' and find_best_frame:
            9     from demo import find_best_frame as _find
      ---> 10     i = _find(source_image, driving_video, device.type=='cpu')
           11     print("Best frame: " + str(i))
           12     driving_forward = driving_video[i:]

      1 frames
      /usr/lib/python3.10/enum.py in __getattr__(cls, name)
          435         return cls._member_map_[name]
          436     except KeyError:
      --> 437         raise AttributeError(name) from None
          438
          439     def __getitem__(cls, name):

      AttributeError: _2D
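
      For reference, this AttributeError: _2D usually means a newer face-alignment release is installed: LandmarksType._2D was renamed to LandmarksType.TWO_D in face-alignment 1.4.0, which breaks older notebooks like this one. Assuming this notebook depends on that package, pinning the last pre-rename release is a common workaround:

      pip install face-alignment==1.3.5   # last release that still exposes LandmarksType._2D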

    • @NikhilJaiswal4129
      @NikhilJaiswal4129 1 year ago

      In fact, I uploaded a png and mp4 of the same size.

  • @Comic_Book_Creator
    @Comic_Book_Creator 1 year ago

    It has a problem with the wildcard manager...

  • @twilightfilms9436
    @twilightfilms9436 1 year ago

    I hope you can see that the original video has her eyes closed and the output does not. Also, there's an annoying flickering in the eyes. The reason for this is that the models are not properly trained for img2img: the models are trained with the faces always looking at the viewer. When you train a model for other platforms, like MetaHumans or similar, you have to do it with the eyeballs looking in all directions and with pupil dilation. I've been trying to explain this to several YouTubers so they can put the word out, but nobody seems to understand the issue or, even worse, they don't care. So the problem will persist, with flickering in the eyes and hair, until the models are properly trained. This is of course from the eye of a professional. For TikTok videos I guess it's alright?

    • @LaCarnevali
      @LaCarnevali  1 year ago

      Hi, happy to discuss this further. For the video, I didn't train a model, just picked a random one. I think with EbSynth there is less of this issue. Anyway, I will try to train a model looking in all directions and will test it. Happy to hear different points of view, especially when constructive (like in this case).

  • @timbacodes8021
    @timbacodes8021 1 year ago

    Can this method be used to make videos like this? They ran a music video through it: ua-cam.com/video/O7-SCsgMgnk/v-deo.html

    • @LaCarnevali
      @LaCarnevali  1 year ago

      It will probably take too much time, but you could use EbSynth, I suppose.

    • @ATLJB86
      @ATLJB86 1 year ago

      What you want for this is WarpFusion; nothing else is remotely close.

  • @0oORealOo0
    @0oORealOo0 1 year ago

    The result, imo, is... just awful?
