Generate AI Images with Stable Diffusion + Audio Reactive Particle Effects - TouchDesigner Tutorial
- Published 27 Jun 2024
- Hey! In this tutorial, we'll go over how to use Stable Diffusion in TouchDesigner to turn AI-generated images into a video and add audio-reactive particles for a blending effect.
The project file is available on my Patreon: / tblankensmith
Part 1 of this tutorial is available here: • TouchDesigner Tutorial...
Huge thank you to Peter Whidden for his support on this and for his work on computerender, which makes this project possible.
0:00 Overview and Examples
1:23 Overview of Recording Component
2:38 Recording an Animation
4:20 Setup Particle System
5:57 Audio Analysis
7:31 Particle System Settings
Fantastic work. So exciting to see such an AI-integrated TD project
This is revolutionary, can’t wait for more 🤩
incredible, thank you ❤ excited to experiment with td + sd!
Super fascinating for this AI integration in TD, thanks so much!
Brilliant work Torin, I will share the news on your amazing integration and tools and grab them from your Patreon. It's especially impressive that you made it so straightforward with the use of the API service and created an excellent way to produce a constant animation between the generated frames. Using these as textures composited into the base color of a PBR texture on 3D objects also generated by AI would be an interesting way for this to evolve. What an incredible holiday gift for the community! ❤
Thanks Rob! I'm glad you've been enjoying the tutorials! Yeah, I think it'd be really interesting to use these to generate HDRI maps, or as a texture map for a 3D model. I should make an example of applying the image output to a 3D model. Looks like Spline added that to their web editor: ua-cam.com/video/ma91lA51UJ8/v-deo.html
I love you so badly! Thanks for that. Huge fan ❤
very creative
thanks !!
Hey Torin, thanks so much for the tutorial and project file! I am having a few issues though and wondering if you can answer a question. So when I open the project the noise objects that are used for altering the img2img function do not have anything in them, and it seems like img2img is required for everything to work. How exactly do you get the noise populated with an image or noise data and running correctly? Thank you!
Torin, I was wondering if you could use this and computerender to generate images in HD or 4K resolution with the Midjourney, DALL·E 2, or Stable Diffusion models, and whether the environment could show a count of how many images have been generated, to keep track of the run cost as it accrues?
Great tutorials, keep up the good work! Is there a way to generate images through this method but with live audio coming from an external device, like a turntable?
Thanks Anderr! Yeah totally, you can use an Audio Device In CHOP instead of an Audio File In CHOP. With that operator you can select your computer's built-in microphone, or, if you can connect your turntables to your computer through an audio interface, you can select the audio interface.
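The core of wiring any live input into the particle system is just remapping the analyzed audio level into a useful parameter range. Here's a minimal, self-contained sketch of that remapping logic in plain Python; the function name and the 0–1 level / birth-rate range are illustrative assumptions, not TouchDesigner API.

```python
def remap(value, in_lo, in_hi, out_lo, out_hi, clamp=True):
    """Linearly remap value from [in_lo, in_hi] into [out_lo, out_hi]."""
    t = (value - in_lo) / (in_hi - in_lo)
    if clamp:
        # Keep noisy live input (mic or turntable) inside the output range
        t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# Example: map a 0..1 audio level to a hypothetical particle birth
# rate of 10..500 particles per second.
birth_rate = remap(0.6, 0.0, 1.0, 10.0, 500.0)
```

In TouchDesigner the same idea is typically done with a Math CHOP's range parameters, but the arithmetic is identical.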
Amazing! Thank you for sharing this. I wonder if it would be possible to make the particles interactive through webcam or Kinect movement.. going to try :)
hey, glad you’re enjoying it! yes absolutely you could do it with both 😁
Hey, Torin! This is absolutely stunning... could this potentially be used in a live setting? For example could I get audio in from Ableton Live and then project the reactive visuals in real-time?
Hey Gian, yeah, you could use an Audio Device In CHOP to get the microphone input and map the audio analysis to the particle system.
This is fantastic. Could the image generation be done live as well, so that prompts could be entered during a performance rather than pre-recorded? @blankensmithing
Is it possible to use multiple images as inputs? Maybe like around 30?
Great work bro! Do you know if it can work on Mac machines? I know there are some spec limitations on those
nvm i just saw that u use a mac😅😅
Would be cool to have depth map so than the particles becomes 3d… ❤
Thank you so much for this amazing video and for the project file link. My API component is not working; could you tell me whether this is due to a TouchDesigner version difference, and if so, which version you used?
It works fine for me on the latest TD version. Just make sure you create an API key on computerender.com/ and swap out your key in the project
how can i stable diffuse a live video feed
You can just pass a Movie File In TOP into the component. It's not going to convert frames in real time, since each generation takes some time to process, but every time you generate a new image it'll snag the current frame from the TOP.
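The pattern described here is "latest frame wins": frames stream in continuously, older ones are dropped, and each generation request consumes whatever frame is current when it starts. A minimal sketch of that pattern in plain Python (the class and method names are hypothetical, purely for illustration):

```python
class FrameGrabber:
    """Keeps only the most recent frame. Each slow generation request
    snapshots whatever frame is current when it starts, so the feed is
    sampled rather than processed in real time."""

    def __init__(self):
        self.latest = None

    def push(self, frame):
        # Overwrite unconditionally: frames that arrive while a
        # generation is in flight are simply dropped.
        self.latest = frame

    def snapshot(self):
        # Called once per generation request.
        return self.latest

grabber = FrameGrabber()
for f in ["frame_1", "frame_2", "frame_3"]:
    grabber.push(f)

# A generation kicked off now would use the newest frame only:
current = grabber.snapshot()
```

This is why a live feed works even though generation is slow: you never queue frames, you just sample the stream whenever a new image is requested.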
please give me the project :D
it's in the description
this isn't a tutorial?