AI Video 2 Video Animation Beginner's Guide, in Stable Diffusion and A1111

  • Published 24 Dec 2024

COMMENTS • 61

  • @Nine-Signs
    @Nine-Signs 3 months ago

    It's kind of like we have progressed forward onto a past form of flip-book animation, just by digital means. I find all this stuff fascinating; I had no idea I could do so much AI madness locally. I never paid it much mind before, since any websites offering it all have limits, and you never have enough "free" time on those sites to finish your task or complete your project before having to pay for tokens.
    Dear programmers of this world, I love you dearly, and you have finally given me a reason to be reasonably happy with the massively inflated price I paid for an RTX 3070 in 2022. :) It's no longer a mostly expensive paperweight when not gaming! :D

    • @AI-HowTo
      @AI-HowTo  3 months ago

      :) Yeah, it is not so bad, and it can be useful when applied properly too.

  • @pointandshootvideo
    @pointandshootvideo 1 year ago +2

    Really great tutorials! Simple, easy to follow, and exactly what someone needs to replicate your process. Thank you!!

    • @AI-HowTo
      @AI-HowTo  1 year ago

      it is great to know, thank you.

  • @ЭаоуиПРайм
    @ЭаоуиПРайм 1 year ago

    The original is simply beautiful.

  • @AI_News_2024
    @AI_News_2024 1 year ago

    Thank you for this very practical tutorial. All the best insh'Allah

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      Thank you.

    • @AI_News_2024
      @AI_News_2024 1 year ago

      @@AI-HowTo Thank you for your interaction. I would love to have your comment on the technique in my new concept video, thanks a lot :)

  • @LastHiroshime
    @LastHiroshime 9 months ago

    Great tutorial, I got to create my first video. But one thing I didn't get was consistency: when I lower the denoising strength it keeps the original video, and when I raise it I get an inconsistent model, like changing clothes and hair style every image. Maybe it's a problem with my prompt?

    • @AI-HowTo
      @AI-HowTo  9 months ago

      The more you increase the denoising strength, the more random changes will occur. Using a LoRA model with the same clothes/hair style, along with ControlNets, can help reduce inconsistency in videos.
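
      A rough sketch of that advice, assuming a local A1111 instance launched with the --api flag; it only shows how a LoRA tag and a low denoising strength are passed in one img2img call. The LoRA tag, prompt, file names and values are placeholders, not settings from the tutorial.

      import base64
      import requests

      A1111_URL = "http://127.0.0.1:7860"  # assumed local A1111 started with --api

      with open("frame_0001.png", "rb") as f:
          frame_b64 = base64.b64encode(f.read()).decode("utf-8")

      payload = {
          "init_images": [frame_b64],
          # Low denoising strength keeps the source frame's clothes/hair;
          # raising it lets the prompt (and randomness) take over.
          "denoising_strength": 0.3,
          # <lora:...> is A1111 prompt syntax; "myCharacter" stands in for a
          # LoRA trained on one outfit/hair style.
          "prompt": "1girl dancing, <lora:myCharacter:0.8>, same outfit",
          "negative_prompt": "blurry, deformed",
          "steps": 20,
          "cfg_scale": 7,
          "seed": 12345,  # a fixed seed across frames also reduces flicker
      }

      r = requests.post(f"{A1111_URL}/sdapi/v1/img2img", json=payload, timeout=600)
      r.raise_for_status()
      with open("out_0001.png", "wb") as f:
          f.write(base64.b64decode(r.json()["images"][0]))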

  • @itestthings5337
    @itestthings5337 1 year ago +1

    Love your videos man. This is unrelated but have you tried producing consistent environments? I only see people working on consistent faces and characters, but nothing about consistent places/environments from different angles. Would be amazing if you gave it a shot and shared your insights!

    • @itestthings5337
      @itestthings5337 1 year ago

      My method so far has been making a scene in 3D, producing canny, depth, and normal maps, pushing them through ControlNet, and finally using a fourth ControlNet unit for reference. But unfortunately, there's very little consistency when changing the camera angle.

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      I don't think it's normally possible with SD (given it's a diffusion model). As mentioned in the video, the best and most practical way is to actually use a third-party tool, such as DaVinci Resolve, to replace the background... as you mentioned, once the angle changes, the background changes; this results in slight flickering/changes even with ControlNets.

    • @itestthings5337
      @itestthings5337 1 year ago +1

      @@AI-HowTo I just started testing a promising idea. I think it can be done if I make a 360 panoramic depth map, normal map, and canny map of my environment in 3D software, then feed the maps into ControlNet, and then project the SD result back onto my 3D scene; then I'll get full camera rotation freedom. Still needs some testing. Anyway, keep up the good work habibi

    • @AI-HowTo
      @AI-HowTo  1 year ago

      thank you, best of luck to you as well.

  • @squamataman
    @squamataman 1 year ago

    Great tutorial! In the final realistic deepfake example, is there a way to prevent ADetailer from changing the hair? It seems like this is the greatest source of flickering/unrealistic results.

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      Thank you.
      Yes, certainly: just set the img2img denoising strength to 0, and ADetailer will only deepfake the face; the hair will remain the original person's.
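
      A rough sketch of that setting via the same web API, assuming A1111 was launched with --api and the ADetailer extension is installed; the ADetailer argument keys are an assumption based on the extension's API docs and may differ between versions, and the face LoRA tag is a placeholder.

      import base64
      import requests

      with open("frame_0001.png", "rb") as f:
          frame_b64 = base64.b64encode(f.read()).decode("utf-8")

      payload = {
          "init_images": [frame_b64],
          "denoising_strength": 0.0,     # leave body, clothes and hair untouched
          "prompt": "photo of a woman",  # placeholder; the base pass barely changes anything
          "steps": 10,
          "alwayson_scripts": {
              "ADetailer": {
                  "args": [
                      {
                          # Assumed ADetailer unit keys; check the extension's
                          # README for the exact names in your version.
                          "ad_model": "face_yolov8n.pt",       # face detector
                          "ad_prompt": "<lora:someFace:0.8>",  # hypothetical face LoRA
                          "ad_denoising_strength": 0.4,        # only the detected face is redrawn
                      }
                  ]
              }
          },
      }

      r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
      r.raise_for_status()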

  • @dkamhaji
    @dkamhaji 1 year ago

    Fantastic video. Really helped a lot. In your method above, while in img2img you encourage a low denoise, medium CFG, and a very low to no noise multiplier. Can you explain how to then bring detail from the prompt back in? I keep fighting the generation looking either too blurry or basically too close to the input image. Then if I bring the multiplier to 1 and the denoise up to something like 0.7, the prompt starts to come out. I realize it's a push and pull, but other than Ghibli hand-drawn anime, I have not been able to win that battle, especially for stylized semi-realistic looks and checkpoints like Juggernaut. Would love to see you do a video on this, because I understand that to maintain stability in the batch, you need low denoise.

    • @AI-HowTo
      @AI-HowTo  1 year ago

      You are welcome.
      * Regarding your first question, I am not sure what you meant, but clicking Interrogate can give you the image description, which you can then improve on to get a clearer image, adding "blurry" to the negative prompt... etc.
      * For a single image, some checkpoints fail to produce a clear image; this is why it is experimental. One may need to try multiple checkpoints to find a suitable one for a certain image. For some checkpoints, even 0.3 can introduce a huge change in looks while maintaining a decent level of consistency. Unfortunately, this process gets harder to control the more we increase the denoising strength, even when using ControlNet... some style LoRAs are also better than others; Ghibli is really good in this process.
      * It is best if the source video is of good quality, otherwise you may not get good results... and sometimes one may need to use external tools such as the Topaz AI image tool to batch-sharpen all output images, unfortunately.

  • @VooDooEf
    @VooDooEf 1 year ago

    such an informative video, many thanks!

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      Glad it was helpful!

  • @Fried-Tofu
    @Fried-Tofu 1 year ago

    Awesome results! Quick question purely out of curiosity: why AniVerse 1.3 & MajicMix v4 as opposed to the latest models (v1.5 full or pruned / MajicMix v7)? Do you get more consistent results with the earlier versions, or are they just better suited to img2img batch generation?

    • @AI-HowTo
      @AI-HowTo  1 year ago

      For MajicMix v4, I saw it producing better results than the more recent versions in my LoRA training, so I used it; the latest was 1.6 when I made this video and was producing darker results, so I didn't use it...
      For AniVerse 1.3, v1.5 was not published yet when I made this video, and 1.3 seemed to produce better results than 1.4 back then for me.

  • @aminurrahman4150
    @aminurrahman4150 1 year ago

    How do I install Stable Diffusion and A1111? Did you make any video on it?

    • @AI-HowTo
      @AI-HowTo  1 year ago

      Yup, an older one, ua-cam.com/video/RtjDswbSEEY/v-deo.html, but it is still good to use.

    • @aminurrahman4150
      @aminurrahman4150 1 year ago

      @@AI-HowTo Thank you so much

  • @karim_yourself
    @karim_yourself 1 year ago

    Thanks for the video.
    Unfortunately, it seems my export gets stuck at the ControlNet model; any idea what this could be? It's been at 0% for 30 minutes:
    Using pytorch attention in VAE
    Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
    Using pytorch attention in VAE
    Leftover VAE keys ['model_ema.decay', 'model_ema.num_updates']
    Requested to load BaseModel
    Requested to load ControlNet
    Loading 2 new models
    0%|

    • @AI-HowTo
      @AI-HowTo  1 year ago

      Sorry, I have not hit this before, but you should test ControlNet independently on one image. If it is not working, then you probably have a corrupted model file, so try redownloading it from huggingface.co/lllyasviel/ControlNet-v1-1/tree/main along with its YAML file... also use git pull to update your ControlNet extension and libraries; you might have something corrupted for some reason.

    • @karim_yourself
      @karim_yourself 1 year ago

      Awesome, thank you! @@AI-HowTo

  • @leonardhinkelmann5629
    @leonardhinkelmann5629 11 months ago

    Tuning down the noise multiplier also blurs my image. Any ideas why this happens and how to fix it? Great tutorial btw

    • @AI-HowTo
      @AI-HowTo  11 months ago +1

      Not sure; turning it down is optional, it just increases consistency a bit. You can reduce it a bit instead of setting it to 0 and test; hopefully that gives you better output.

    • @leonardhinkelmann5629
      @leonardhinkelmann5629 11 months ago

      @@AI-HowTo Tried it; that only reduces the problem. However, everything else was very useful and I'm now getting into vid2vid. So thanks

  • @chinese_thru_culture
    @chinese_thru_culture 1 year ago

    Great stuff. Do you have a plan to make a tutorial on doing this with AnimateDiff, btw?

    • @AI-HowTo
      @AI-HowTo  1 year ago

      I was thinking about creating another one for AnimateDiff, especially after version v15_v2, which can be used for other purposes too and has a different workflow, but unfortunately I am not sure if I will be able to make it in the upcoming weeks or not; will see how things go.

    • @chinese_thru_culture
      @chinese_thru_culture 1 year ago

      @@AI-HowTo cool

  • @morozig
    @morozig 1 year ago

    Thanks! Very useful.

    • @AI-HowTo
      @AI-HowTo  1 year ago

      Glad it was helpful!

    • @morozig
      @morozig 1 year ago

      Hey @@AI-HowTo, I'm trying to create a character sprite sheet for a video game using this method. I have a 3D character walking animation and I'd like to make a 2D anime character facing front, side and back, 10-20 frames for each side. If this is interesting for you as well, I think a tutorial on this subject would be very appreciated by a lot of game developers!

    • @AI-HowTo
      @AI-HowTo  1 year ago

      Thanks for the suggestion, I will consider this for future videos, but I don't think I will be making intensive videos like this for a while; possibly short, simple videos later on. It's difficult to find time.

    • @morozig
      @morozig 1 year ago

      Yeah, sure, thanks! I also want to point out that I think you could just use a non-adaptive sampler like "Euler" to reduce randomness without changing settings.

  • @shitokenjpn
    @shitokenjpn 7 months ago

    A little confusing here. You have skipped the image-selection part: as per your video, one image from the first frame of the dance video was used in img2img. How did that one generated image turn into all the matching images for every frame of the video in the animation?

    • @AI-HowTo
      @AI-HowTo  7 months ago

      I used one image only to test the output. Once satisfied with the results, we go to the Batch tab and just fill in the input directory (with the source frames) and the output directory, and Batch will generate all the images based on the prompt/settings from my test on the single image.
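
      A rough, scripted sketch of the same "tune on one frame, then batch" idea, assuming ffmpeg is on the PATH and a local A1111 instance was launched with --api (the A1111 Batch tab does the equivalent from the UI). Paths, frame rate, prompt and the initial_noise_multiplier override are placeholders/assumptions, not values from the video.

      import base64
      import glob
      import os
      import subprocess

      import requests

      SRC = "dance.mp4"
      FRAMES_DIR = "frames"
      OUT_DIR = "out_frames"
      os.makedirs(FRAMES_DIR, exist_ok=True)
      os.makedirs(OUT_DIR, exist_ok=True)

      # 1) Split the source video into frames (12 fps here; use whatever you tuned on).
      subprocess.run(["ffmpeg", "-y", "-i", SRC, "-vf", "fps=12",
                      f"{FRAMES_DIR}/%05d.png"], check=True)

      # 2) Re-run the settings that worked on the single test frame for every frame.
      for path in sorted(glob.glob(f"{FRAMES_DIR}/*.png")):
          with open(path, "rb") as f:
              frame_b64 = base64.b64encode(f.read()).decode("utf-8")
          payload = {
              "init_images": [frame_b64],
              "prompt": "ghibli style, 1girl dancing",  # whatever worked on the test frame
              "denoising_strength": 0.35,
              "steps": 10,
              "cfg_scale": 7,
              "seed": 12345,  # a fixed seed helps frame-to-frame consistency
              # Assumption: initial_noise_multiplier is the option behind A1111's
              # "Noise multiplier for img2img"; lowering it was optional in the video.
              "override_settings": {"initial_noise_multiplier": 0.8},
          }
          r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload, timeout=600)
          r.raise_for_status()
          out_path = os.path.join(OUT_DIR, os.path.basename(path))
          with open(out_path, "wb") as f:
              f.write(base64.b64decode(r.json()["images"][0]))

      # 3) Reassemble the stylized frames into a video at the same frame rate.
      subprocess.run(["ffmpeg", "-y", "-framerate", "12", "-i", f"{OUT_DIR}/%05d.png",
                      "-c:v", "libx264", "-pix_fmt", "yuv420p", "dance_stylized.mp4"],
                     check=True)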

  • @jeanbeaumarchand1526
    @jeanbeaumarchand1526 1 year ago

    Thanks for the tutorial. The problem is my character changes clothes every frame; how can I fix that?

    • @AI-HowTo
      @AI-HowTo  1 year ago

      You are welcome.
      Once you increase the denoising strength, that will happen naturally unless you use a LoRA with consistent clothes or a checkpoint that you have trained yourself on a character with consistent clothes.
      In other cases, lowering the denoising strength is the only option to maintain similar clothes.
      Use of ControlNet (normal/depth maps, reference, tile) may also help reduce changes at a higher denoising strength.

  • @YooArtifical
    @YooArtifical 1 year ago

    I can't seem to get the clothes to stay consistent like you did; it flickers too much.

    • @AI-HowTo
      @AI-HowTo  1 year ago

      Use of a LoRA is the best way to get consistent clothes; the use of IP-Adapter can also help, as in this video ua-cam.com/video/k4ZWJD6W8d0/v-deo.html ... there are other methods with AnimateDiff, but they require a stronger PC than mine and are best used in ComfyUI. You can google that; hopefully you will find something that helps produce better consistency and less flickering.

  • @maxrinehart4177
    @maxrinehart4177 1 year ago

    This looks awesome. How much VRAM do I need to use this feature of SD?

    • @AI-HowTo
      @AI-HowTo  1 year ago

      In general, if you are able to generate images in SD, then you can do this; I am running on 8 GB. Using img2img is faster than txt2img because it only uses, for instance, 10 steps to convert the image, and 7 or fewer steps for ADetailer... the slowdown comes when we use ControlNet; for example, with 8 GB you may not be able to apply 2 or more ControlNets at the same time if the target image size is large... anyway, ControlNet is not necessary if we use a low denoising strength.

  • @Relentless_Games
    @Relentless_Games 1 year ago

    Bro, how do I install this?

    • @AI-HowTo
      @AI-HowTo  1 year ago

      If you mean A1111, you might be interested in watching this: ua-cam.com/video/RtjDswbSEEY/v-deo.html... img2img is part of A1111... ControlNet has another video on the channel. If you are starting with this, you need to take it step by step, but it is fun even if it takes a while.

  • @zahrajp2223
    @zahrajp2223 1 year ago

    Please, this same process on Fooocus, please!

    • @AI-HowTo
      @AI-HowTo  1 year ago

      I am sorry, I have not used Fooocus before, but thanks for bringing it to my attention. I have seen good reviews about it; I will check it out in the future to gain more insight, thanks.

  • @tonywhite4476
    @tonywhite4476 5 months ago

    So much work for so little return.

  • @the_one_and_carpool
    @the_one_and_carpool 1 year ago

    WarpFusion is free

    • @AI-HowTo
      @AI-HowTo  1 year ago

      A good tool as well; there are several tools that can help create videos with some kind of style. In this video I am only explaining img2img, ControlNet usage, and other useful aspects of DaVinci Resolve and such; I may do another video in the future for WarpFusion too.

  • @pmlstk
    @pmlstk 1 year ago

    Add the prompts in the comments if your goal is for people to replicate your guide.

    • @AI-HowTo
      @AI-HowTo  1 year ago +1

      Do you suggest adding the prompts inside the video or in the description for upcoming videos? What do you think is better?

    • @xanzxx
      @xanzxx 1 year ago +1

      @@AI-HowTo In the description, please. It makes it easy to copy and replicate. Love your videos btw

    • @AI-HowTo
      @AI-HowTo  1 year ago

      thanks, will do so.

  • @tjaylucas5020
    @tjaylucas5020 1 year ago

    Not worth it in my opinion