Generate AI Images with Stable Diffusion + Audio Reactive Particle Effects - TouchDesigner Tutorial

  • Published 27 Jun 2024
  • Hey! In this tutorial, we'll go over how to use Stable Diffusion in TouchDesigner to turn AI-generated images into a video and add audio-reactive particles for a blending effect.
    The project file is available on my Patreon: / tblankensmith
    Part 1 of this tutorial is available here: • TouchDesigner Tutorial...
    Huge thank you to Peter Whidden for his support on this and for his work on computerender, which makes this project possible.
    0:00 Overview and Examples
    1:23 Overview of Recording Component
    2:38 Recording an Animation
    4:20 Setup Particle System
    5:57 Audio Analysis
    7:31 Particle System Settings

COMMENTS • 29

  • @benchaykin4286 · A year ago · +1

    Fantastic work. So exciting to see such an AI-integrated TD project

  • @rahsheedamcrae2381 · A year ago · +2

    This is revolutionary, can’t wait for more 🤩

  • @elekktronaut · A year ago · +3

    incredible, thank you ❤ excited to experiment with td + sd!

  • @plyzitron · A year ago · +1

    Super fascinating AI integration in TD, thanks so much!

  • @therob3672 · A year ago

    Brilliant work Torin, I will share the news on your amazing integration and tools and grab them from your Patreon. It’s especially impressive that you made it so straightforward with the use of the API service and created an excellent way to create a constant animation between the generated frames. Using these as textures composited into the base color of a PBR material on 3D objects also generated by AI would be an interesting way for this to evolve. What an incredible holiday gift for the community! ❤

    • @blankensmithing · A year ago · +1

      Thanks Rob! I'm glad you've been enjoying the tutorials! Yeah, I think it'd be really interesting to use these to generate HDRI maps, or as a texture map for a 3D model. I should make an example of applying the image output to a 3D model. Looks like Spline added that to their web editor: ua-cam.com/video/ma91lA51UJ8/v-deo.html
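      For anyone trying that idea, here's a minimal sketch in TouchDesigner Python. The operator names ('sd_out' for the generated-image TOP, 'geo1' for an existing Geometry COMP) are assumptions, not names from the project file:

      ```python
      # Minimal sketch: use the generated image as the Base Color Map of a PBR MAT
      # and assign that material to an existing Geometry COMP.
      mat = parent().create(pbrMAT, 'sd_pbr')      # new PBR material next to this script
      mat.par.basecolormap = 'sd_out'              # hypothetical name of the Stable Diffusion output TOP
      op('geo1').par.material = mat.path           # hypothetical Geometry COMP to texture
      ```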

  • @smon1127 · A year ago

    I love you so badly! Thanks for that. Huge fan ❤

  • @DSJOfficial94 · A year ago

    very creative

  • @Fonira · A year ago

    thanks !!

  • @hudsontreu · A year ago

    Hey Torin, thanks so much for the tutorial and project file! I am having a few issues though and wondering if you can answer a question. When I open the project, the noise objects that are used for altering the img2img function do not have anything in them, and it seems like img2img is required for everything to work. How exactly do you get the noise populated with an image or noise data and running correctly? Thank you!

  • @therob3672 · A year ago · +1

    Torin, I was wondering if you could use this and computerender to generate images in HD or 4K resolution with Midjourney, DALL-E 2, or Stable Diffusion models, and whether the environment could show a count of how many images have been generated, to keep track of the run cost as it accrues?

  • @AnderrGraphics · A year ago

    Great tutorials, keep up the good work! Is there a way to generate images through this method but with live audio coming from an external device, like a turntable?

    • @blankensmithing · A year ago

      Thanks Anderr! Yeah totally, you can use an Audio Device In CHOP instead of an Audio File In CHOP. With that operator you can select your computer's built-in microphone, or, if you're able to connect your turntables to your computer through an audio interface, you can select your audio interface.
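      As a minimal sketch of that swap in TouchDesigner Python (the operator name 'audio_analysis' and the device string are assumptions; the device can also just be picked in the CHOP's parameter dialog):

      ```python
      # Replace the Audio File In source with live audio from a mic or audio interface.
      live = parent().create(audiodeviceinCHOP, 'audiodevin1')   # live audio input CHOP
      live.par.device = 'Built-in Microphone'                    # assumption: pick your mic / interface here
      op('audio_analysis').inputConnectors[0].connect(live)      # rewire the analysis chain to the live source
      ```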

  • @Nanotopia · A year ago · +1

    Amazing! Thank you for sharing this. I wonder if it would be possible to make the particles interactive through webcam or Kinect movement... going to try :)

    • @blankensmithing · A year ago

      hey, glad you’re enjoying it! yes absolutely you could do it with both 😁

  • @GianTJ · A year ago

    Hey, Torin! This is absolutely stunning... could this potentially be used in a live setting? For example, could I get audio in from Ableton Live and then project the reactive visuals in real time?

    • @blankensmithing · A year ago · +1

      Hey Gian, yeah, you could use an Audio Device In CHOP to get the microphone input and map the audio analysis to the particle system.
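      A minimal sketch of that mapping in TouchDesigner Python (operator names and the 'rms' function token are assumptions; the project file's own analysis chain will differ):

      ```python
      # Live audio -> a single loudness channel that a particle parameter can reference.
      audio = parent().create(audiodeviceinCHOP, 'audiodevin1')  # microphone / audio interface input
      analyze = parent().create(analyzeCHOP, 'analyze1')         # collapse the audio to one value per cook
      analyze.par.function = 'rms'                               # assumption: RMS power as a loudness estimate
      analyze.inputConnectors[0].connect(audio)

      # Then, on e.g. the particle system's birth-rate parameter, use an expression like:
      #   op('analyze1')['chan1'] * 500
      ```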

    • @bennettgrizzard5527 · 8 months ago

      @blankensmithing This is fantastic. Could the image generation be done live as well, so that prompts could be entered during a performance rather than pre-recorded?

  • @stiffyBlicky · 10 months ago

    Is it possible to use multiple images as inputs? Maybe like around 30?

  • @ricardcantm · A year ago

    Great work bro! Do you know if it can work on Mac machines? I know that there are some spec limitations on those.

    • @ricardcantm · A year ago

      nvm, I just saw that you use a Mac 😅😅

  • @unveil7762 · A year ago

    Would be cool to have a depth map so that the particles become 3D… ❤

  • @JannatShafiq-fr4lr · A month ago

    Thank you so much for this amazing video and also for the project file link. My API component is not working. Can you please tell me if it's due to a TouchDesigner version difference? If so, please tell me which version you used for this.

    • @blankensmithing · A month ago

      It works fine for me on the latest TD version. Just make sure you create an API key on computerender.com/ and swap out your key in the project
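      If it helps with debugging, here's a minimal sketch for testing a computerender key outside of TouchDesigner, based on Peter Whidden's computerender Python client (the exact constructor arguments and return type are assumptions; check the library's README if this doesn't match):

      ```python
      import asyncio
      from computerender import Computerender

      # Assumption: the API key can be passed to the constructor (it may instead be
      # read from an environment variable, depending on the library version).
      cr = Computerender("sk_your_api_key_here")
      img_bytes = asyncio.run(cr.generate("a foggy forest at dawn"))  # generate() is async and returns image bytes
      with open("test.jpg", "wb") as f:
          f.write(img_bytes)
      ```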

  • @xthefacelessbassistx · A year ago

    How can I run Stable Diffusion on a live video feed?

    • @blankensmithing · A year ago · +1

      You can just pass a Movie File In TOP into the component. It's not going to convert frames in real time since it takes some time to process, but every time you generate a new image it'll snag the current frame from the TOP.
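      A minimal sketch of grabbing the current frame from any TOP in TouchDesigner Python ('videodevin1' and the generator component's 'Source' parameter are hypothetical names, not the project file's):

      ```python
      # Snapshot whatever frame is currently in the TOP and use it as the img2img source.
      frame_top = op('videodevin1')            # hypothetical live-feed TOP; a Movie File In TOP works the same way
      frame_top.save('current_frame.jpg')      # TOP.save() writes the current frame to disk
      # or reference the TOP directly from the generator component, e.g.:
      # op('sd_api').par.Source = frame_top.path   # hypothetical component / parameter
      ```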

  • @carlottorose · A year ago

    please give me the project :D

  • @ddewwer23 · A year ago

    this isn't a tutorial?