How To Use Custom Trained Motion Lora In Stable Diffusion AnimateDiff

  • Published 2 Oct 2024

COMMENTS • 27

  • @weishanlei8682
    @weishanlei8682 4 months ago +1

    How come your user interface is different from mine?

    • @TheFutureThinker
      @TheFutureThinker  4 months ago

      Are you using ComfyUI?

    • @weishanlei8682
      @weishanlei8682 4 months ago

      @@TheFutureThinker I only know a little. Do you have a step-by-step video?

  • @michealhsujejdud
    @michealhsujejdud 4 months ago

    Is an RTX 4050 with an i5 13th gen laptop OK for running Stable Diffusion?

  • @MisterCozyMelodies
    @MisterCozyMelodies 4 months ago

    Where can I find the workflow?

    • @TheFutureThinker
      @TheFutureThinker  4 months ago

      From the previous video mentioned here: ua-cam.com/video/oNTdPb4YOoo/v-deo.html

  • @drucshlook
    @drucshlook 4 months ago

    awesome as usual !
    damn I feel outdated every morning lol

    • @TheFutureThinker
      @TheFutureThinker  4 months ago

      Me too, hehe. And the recent AI model releases are crazy. I think I'll do more videos about AI models rather than ComfyUI workflows soon.

  • @impactframes
    @impactframes 4 months ago

    Wow great tutorial 🎉 thanks 🙂

    • @TheFutureThinker
      @TheFutureThinker  4 months ago +1

      Thank you my friend, you do great work too. Keep going, looking forward to seeing more custom nodes from you 💪

  • @MilesBellas
    @MilesBellas 4 months ago

    Looks great. Small issue = details....

    • @TheFutureThinker
      @TheFutureThinker  4 months ago +1

      Motion LoRA doesn't relate to detail. It's about how the object moves.

    • @MilesBellas
      @MilesBellas 4 months ago

      @@TheFutureThinker
      Yes, I know.....
      Particle trajectories = motion too.
      Accurate physical simulation hasn't occurred with diffusion models yet?

    • @TheFutureThinker
      @TheFutureThinker  4 months ago +1

      @@MilesBellas Particle trajectories, oh you mean those detail issues?
      Accurate physical simulation requires an input dataset to train on. When you just say "diffusion models", I think we need to go deeper into which diffusion model we are using; you can check out the research papers on Arxiv. Not all diffusion models do so-called physical simulation.
      Even with an animation AI model like AnimateDiff, you have to input a dataset to train on each motion, which is what you called physical simulation. For example, the AI doesn't know the physical reaction if a bullet goes through your head: will the bullet drop, and how far will it fall?
      Without data, the AI model is just a container.

    • @MilesBellas
      @MilesBellas 4 months ago

      @@TheFutureThinker
      Yes....
      Particle simulations = water droplets = complex in terms of motion, refractive index, specularities, etc.

    • @MilesBellas
      @MilesBellas 4 months ago

      @@TheFutureThinker
      SD3 & SORA = DiT ?
      via Pi
      "Incorporating Diffusion Transformers (DiTs) into the Stable Diffusion framework could lead to several potential improvements and advancements in generative AI. Stable Diffusion is a powerful generative AI model that uses a diffusion process to generate high-quality images from text prompts. By introducing DiTs into this framework, the following benefits could be realized:
      * Improved scalability: Diffusion Transformers have demonstrated better scalability compared to traditional diffusion models based on U-Net architectures. Integrating DiTs into Stable Diffusion could result in enhanced performance when working with large datasets or generating high-resolution images.
      * Enhanced performance: DiTs have shown superior performance in terms of sample quality and diversity. Combining this with the already impressive capabilities of Stable Diffusion could lead to even higher-quality and more diverse image generations.
      * Greater flexibility: The adaptability of Diffusion Transformers to various data types, such as images, videos, and 3D data, might enable the extension of Stable Diffusion's capabilities beyond image generation to other domains like video or 3D model generation.
      Overall, incorporating Diffusion Transformers into Stable Diffusion has the potential to significantly advance generative AI's capabilities, leading to more scalable, flexible, and high-performing models for various creative and practical applications. However, it is essential to consider the computational resources required for training and running these models, as the Transformer-based approach might demand more powerful hardware."

  • @kalakala4803
    @kalakala4803 4 months ago

    So we can train a LoRA with the V3 MM.

    • @TheFutureThinker
      @TheFutureThinker  4 months ago

      Yes, with Motion Director you can do it with the v3 MM.

    • @eyoo369
      @eyoo369 2 months ago

      Yes, V3 MM, but it works on the LCM motion module too; I just trained 2 motion LoRAs, one for each module, in less than 30 minutes.
      Both came out great. I prefer LCM since it generates a nice animation with only 8 steps.
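      [Editor's note] The workflow this thread describes — an AnimateDiff motion module plus a custom motion LoRA (e.g. trained with MotionDirector), sampled via LCM in roughly 8 steps — can be sketched with the Hugging Face diffusers API. The repo IDs and the LoRA path below are illustrative assumptions, not assets from the video:

      ```python
      # Sketch: AnimateDiff + a custom motion LoRA, sampled with LCM (~8 steps),
      # using the Hugging Face diffusers API. Repo IDs and the LoRA path are
      # illustrative assumptions, not assets from the video.

      def generate_with_motion_lora(prompt: str, lora_path: str, out_path: str = "out.gif"):
          # Imports live inside the function so the sketch can be read and
          # checked without the heavy dependencies installed.
          import torch
          from diffusers import AnimateDiffPipeline, LCMScheduler, MotionAdapter
          from diffusers.utils import export_to_gif

          # Motion module (the "v3 MM" discussed above); an LCM-trained module works too.
          adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-3")

          pipe = AnimateDiffPipeline.from_pretrained(
              "runwayml/stable-diffusion-v1-5",  # any SD 1.5 checkpoint
              motion_adapter=adapter,
              torch_dtype=torch.float16,
          ).to("cuda")

          # LCM sampling: few steps, low guidance.
          pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

          # Attach the custom-trained motion LoRA and set its strength.
          pipe.load_lora_weights(lora_path, adapter_name="custom-motion")
          pipe.set_adapters(["custom-motion"], [0.8])

          frames = pipe(
              prompt=prompt,
              num_frames=16,
              num_inference_steps=8,  # LCM typically needs only ~4-8 steps
              guidance_scale=1.5,
          ).frames[0]
          export_to_gif(frames, out_path)
          return frames
      ```

      The step count is the trade-off the commenter mentions: a standard DDIM/Euler setup usually wants 20+ steps, while an LCM module stays usable at 8, roughly halving generation time per clip.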

  • @SFzip
    @SFzip 4 months ago

    amazing