How come your user interface is different from mine?
Are you using ComfyUI?
@@TheFutureThinker I only know a little. Do you have a step-by-step video?
Is an RTX 4050 laptop with a 13th gen i5 okay for running Stable Diffusion?
Yup, it runs okay on average ☺️
Where can I find the workflow?
It's from the previous video mentioned here: ua-cam.com/video/oNTdPb4YOoo/v-deo.html
Awesome as usual!
damn I feel outdated every morning lol
Me too, hehe. And the recent AI model releases are crazy. I think I will do more videos talking about AI models rather than ComfyUI workflows soon.
Wow great tutorial 🎉 thanks 🙂
Thank you my friend, you have great work too, keep going. Looking forward to seeing more custom nodes from you 💪
Looks great small
....issue = details....
Motion LoRA doesn't relate to detail. It's about how the object's motion moves.
@@TheFutureThinker
Yes I know.....
Particle trajectories = motion too.
Accurate physical simulation hasn't occurred with diffusion models yet?
@@MilesBellas Particle trajectories , oh you mean those details issue?
Accurate physical simulation requires an input dataset to train on. When you just mention diffusion models, I think we need to go deeper into which diffusion models we are using. You can check out the research papers on arXiv. Not all diffusion models are doing so-called physical simulation.
Even with an animation AI model like AnimateDiff, you have to input a dataset to train on each motion, for what you call physical simulation. For example, the AI doesn't know the physical reaction when a bullet goes through your head: will the bullet drop, and how far will it fall?
Without data, the AI model is just a container.
@@TheFutureThinker
Yes ....
Particle simulations = water droplets = complex in terms of motion, refractive index, specularities, etc.
@@TheFutureThinker
SD3 & SORA = DiT ?
via Pi
"Incorporating Diffusion Transformers (DiTs) into the Stable Diffusion framework could lead to several potential improvements and advancements in generative AI. Stable Diffusion is a powerful generative AI model that uses a diffusion process to generate high-quality images from text prompts. By introducing DiTs into this framework, the following benefits could be realized:
* Improved scalability: Diffusion Transformers have demonstrated better scalability compared to traditional diffusion models based on U-Net architectures. Integrating DiTs into Stable Diffusion could result in enhanced performance when working with large datasets or generating high-resolution images.
* Enhanced performance: DiTs have shown superior performance in terms of sample quality and diversity. Combining this with the already impressive capabilities of Stable Diffusion could lead to even higher-quality and more diverse image generations.
* Greater flexibility: The adaptability of Diffusion Transformers to various data types, such as images, videos, and 3D data, might enable the extension of Stable Diffusion's capabilities beyond image generation to other domains like video or 3D model generation.
Overall, incorporating Diffusion Transformers into Stable Diffusion has the potential to significantly advance generative AI's capabilities, leading to more scalable, flexible, and high-performing models for various creative and practical applications. However, it is essential to consider the computational resources required for training and running these models, as the Transformer-based approach might demand more powerful hardware."
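As a side note on what "Transformer-based diffusion" means mechanically, here is a minimal sketch of the patchify step a DiT uses: the latent image is cut into small patches, and each patch is flattened into one token so a Transformer can process the latent as a sequence. The channel count, latent size, and patch size below are illustrative assumptions, not SD3's or Sora's actual configuration.

```python
import numpy as np

def patchify(latent, patch=2):
    # latent: (C, H, W) -> tokens: (num_patches, patch * patch * C)
    c, h, w = latent.shape
    assert h % patch == 0 and w % patch == 0
    tokens = (
        latent.reshape(c, h // patch, patch, w // patch, patch)
        .transpose(1, 3, 2, 4, 0)           # (H/p, W/p, p, p, C): group by patch
        .reshape(-1, patch * patch * c)     # one flattened token per patch
    )
    return tokens

latent = np.random.randn(4, 32, 32)         # SD-style latent: 4 channels, 32x32
tokens = patchify(latent)
print(tokens.shape)                         # (256, 16): 16x16 patches, 16 dims each
```

The Transformer then attends over these 256 tokens instead of running a U-Net's convolutions, which is where the scalability claims above come from.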
So we can train a LoRA with the V3 MM.
Yes, with MotionDirector you can do it with the V3 MM.
Yes, V3 MM, but it works on the LCM motion module too. I just trained 2 motion LoRAs, one for each module, in under 30 minutes.
Both came out great. I prefer LCM since it generates a nice animation with only 8 steps.
amazing
Glad you like it.