Real-time diffusion in TouchDesigner - StreamdiffusionTD Setup + Install + Settings
- Published 24 Jun 2024
- Important! Read the description for installation tips! This uses TouchDesigner 2023 and will not work in TouchDesigner 2022.
Currently only works on Windows with an NVIDIA graphics card.
StreamDiffusionTD TOX available here - / dotsimulate
StreamDiffusion github - github.com/cumulo-autumn/Stre...
BEFORE Installing - make sure to have:
Python 3.10 (make sure Python is added to Path in your system's environment variables)
CUDA 11.8 or CUDA 12.1
NDI-SDK - ndi.video/download-ndi-sdk/
GIT (check add to PATH when asked) - git-scm.com/download/win
If GIT is already installed and not in PATH - www.delftstack.com/howto/git/...
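Part of the checklist above can be sanity-checked before installing; here is a minimal plain-Python sketch (it only covers the Python version and git on PATH — CUDA and the NDI SDK still need a manual check):

```python
import shutil
import sys

def check_prereqs():
    """Rough pre-install check for the prerequisites listed above.
    Verifies the Python version and that git is on PATH; CUDA and
    the NDI SDK still have to be verified manually."""
    problems = []
    if sys.version_info[:2] != (3, 10):
        problems.append("Python 3.10 required, found %d.%d" % sys.version_info[:2])
    if shutil.which("git") is None:
        problems.append("git not found on PATH")
    return problems

print(check_prereqs() or "basic prerequisites look OK")
```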
00:00 Introduction to StreamDiffusionTD
00:48 System Requirements for StreamDiffusionTD
01:12 Setting up operator in network
01:37 Downloading StreamDiffusion repo
02:10 Installing venv (first install only)
03:04 Setting Model + parameters before Stream
03:30 Step Schedule part 1
03:46 Similar Image Filter
04:00 Starting the Stream / Cmd window
04:41 Disable / Enable to see Stream bug
05:10 Seed
05:47 Prompts
06:16 Setting up Prompt Box
07:10 Negative Prompts do NOT work
07:22 Step Schedule part 2
08:16 Guidance Scale / Delta
08:47 Experiment !
09:16 Lora loader page
09:33 Callbacks
10:03 Stopping the Stream
10:09 Setting OSC Ports / Stream name
10:18 Visible Window (ON is default)
10:32 Say hey on discord !
11:00 Thanks !! - Film &amp; Animation
absolutely mindblowing!! getting 6fps on my laptop 3070, thank you so much ❤
me too
Amazing work and great tutorial. A small tip for your callbacks DAT, on the Common page of parameters if you switch 'Content Language' to Python then the DAT will use python syntax highlighting.
Dotsimulate InSession when?
@FunctionStore Yessssss 100%
❤Thank you !! I will make sure to set that when callbacks are created.
❤
Yeees finally! 😍 You're the best for putting this out! And the tutorial is well made too 🖤
This is super inspiring, it gives a clear visual image of the fundamentals of diffusion models
Yesssss Lyell! Amazing work and tutorial!! Would love to see you create more tutorials in the future!!
Whoa! Now this is what I needed!
Thanks for the great work and generosity.
Incredible work and very accessible tools. Thanks so much!
Whoa! That was cool. Thanks for the breakdown. Downloading stream diffusion now. :)
nice nice! super nice! thank you for your work, it's amazing the opportunity to create and work with these tools. cheers
This is EXACTLY what I needed this weekend. Thank you so much for this.
Lyell you've got a voice ready for public access. ALSO this looks amazing! For people who know TD and already have a competent workflow, but are lacking in knowledge of SD, tools like this immediately open the door and make it more accessible than it has any right to be. I think seeing SD generating images in real-time, within TD in a format I can immediately wrap my head around to a better degree, is a pull I haven't had with SD until now. Awesome work!
Guys this is frickin insane. Amazing work!
Thank you so much for sharing this!! ❤ My dreams just came true..
That's amazing, gonna try it for sure!!! Thank u for sharing this 🙏🌀
fabulous tool design and great explanation in the video.
Amazing. Thank you for this !
Proud of you Lyell! ❤
he's done it! I know what I'll be doing tomorrow...
Fantastic work!
Nice use of those new sequential parameters! 🔥
Thanks ! They are quite handy. definitely a great addition for 2023.
Nice job bro. Appreciate the information.
Thanks a lot for this! Running at 8 FPS with 4070
Thanks a lot for this amazing work on StreamDiffusion! Regarding the NDI source name issue: I had similar issues with the refresh on another project and realized that using 'Bind' instead of 'Reference' fixes the issue.
Will investigate. Thank you.
AMAZING!!! FROM THE CREATIVES AROUND THE WORLD, THANK YOU
this is actually just the craziest thing i've ever seen
Thank you so much for sharing! I'm a big fan even after the first minute already! Sub
thank you for this amazing tutorial!
amazing stuff!
Wow thanks for sharing
You my new best friend!!! ❤❤❤❤
And now. Imagine an AI chatbot, when communicating with which you are accompanied by a visual series that displays your adventures in real time. Impressive
Awesome stuff! That 4090 really helps.
you are best!!!! thanks!!!!
Thank you.
amazing!
awesome!!!
Great tutorial and great asset man, thanks!
What's the OP that you use for the prompt list? Is that a Table DAT or something else?
amazing
Really great! Got 1.5 fps with my GTX 1070 😅
same setup, confirming a drop to 0.7/0.8 with live image segmentation running in parallel
Amazing! I would really like a tutorial (or to get in touch) about the fireball clip. It looks impressive, and I'm working on a project with a similar aesthetic, but I'm having trouble generating the reference-image part before the SD tox.
Seems from his video he is using hand tracking (possibly to move a circle and then generate from there). I'm currently experimenting with the MediaPipe live segmentation tool as a starting generation point
Beautiful work, great tutorial. I'm running at 6 FPS on average. May I ask how to load new MODEL IDs?
Super excited to try this! The tutorial starts out with the Stream Diffusion operator on the screen but I'm not sure how to get that operator. Do I install the repo somewhere and point to the directory in the parameters box?
Amazing work. A quick question, can we connect this diffusion model in touch designer to depth camera, so the changes are done according to human motion?
i think im in love..
❤
This is pretty cool!
any advice on getting it running on a mac?
looks amazing. is there another way without an nvidia card?
Thanks for the video!
Is it a must to download Python 3.10.9 + CUDA 11.8?
Hey! Thank you very much! Just became a patron. I'm really excited. Does this work with Python 3.12, or only 3.10? I struggled to find the installer for the 3.10 version.. had to change it directly in the URL.
did you ever find out if it works on 3.12?
@zpacetree in case it helps you: I have tried all versions and it only worked for me with 3.10
Hello. I was enchanted by the possibilities of StreamDiffusion. I'm a complete newbie and I don't know how to do the installations prior to the tutorial. How can I access this operator? Do I need to subscribe to your Patreon to have access, or can I use this tutorial without it? Thanks
simply wonderful, any insight on how to extract the live fps from streamdiffusion in order to set a time comp?
I've got CHOPs exposed as out2 for both fps and framecount (frame total since the stream began)
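Inside TouchDesigner that channel would be read with something like `op('out2')['fps']` (channel name taken from the reply above). As a plain-Python sketch, outside TD, of smoothing those fps samples before driving something like a Time COMP rate:

```python
from collections import deque

class FpsSmoother:
    """Rolling average over the last n fps samples, so a rate
    parameter isn't driven by a jittery instantaneous value.
    Illustrative helper, not part of the TOX."""
    def __init__(self, n=30):
        self.samples = deque(maxlen=n)

    def update(self, fps):
        self.samples.append(fps)
        return sum(self.samples) / len(self.samples)

smoother = FpsSmoother(n=3)
smoother.update(6.0)
smoother.update(8.0)
print(smoother.update(10.0))  # → 8.0, the average of the last three samples
```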
Thank you for the video! Is an image-to-image generation also possible? For example, a jpg file as input and stream diffusion creates similar looking images in real time?
Amazing 🔥🔥🔥 does it run offline?
Heyyy Mate ! Great tut and infos !! Thanks for sharing ! Does this work on Mac ?
As of a few days ago, yes! It works on Apple Silicon Macs, though it's definitely slower than NVIDIA graphics cards.
Hello, it's cool work. Does your plugin support tensorrt acceleration?
Yep! I’ve got an improved installation process with a separate step for tensorrt.
Amazing! Thank you for sharing. Would this work on a Mac? Specifically MBP M2 Max 64 GB
Awesome as usual, thank you so much for this!!
Just 2 questions:
Will it work with 2022 Touchdesigner Versions?
And do you, by any chance, also have a Gumroad page for purchasing, instead of Patreon?
No, it uses Custom Sequential Parameters (see the wiki — opens a lot of doors!), which are in 2023.Official only.
thanks, when I asked it was not in the description yet
Hey! I tried and it seems not to work with 2022 TD.
Great! How do I get/create the Container nvidia_smi in TD?
Hi there! I've previously gotten StreamDiffusion to work locally with conda instead of venv. What's the best way to mod this TOP? :) Or should I still use the venv process?
Thank you so much for this! I may be lost right now, but how do I write the path to a local safetensor model? I can't seem to get it working
Incredible! Can this run on A Mac Studio with an M2 Max/ultra?
wow
What is, for you, the best way to record the stream? Thx in advance
How did you make the Photoshop generation in the examples on GitHub?
very cool video.
I have a problem with the installation: I get an error for cuda-python.
How can I resolve it?
Since I work on a Mac Pro (Win 11 is installed) with a Vega 56, my question is: will AMD GPUs get support for this?
Hey :) would it be possible to do a tutorial on this topic on Mac :) kind regards
Thanks for the amazing work! But I have an issue: when TouchDesigner tries to open PowerShell, PowerShell crashes. Anyone know the reason or a solution?
When Mac version? :) Are there compatibility issues?
Is there any way to integrate your own LoRA/safetensor into this??
How do you load the operator? It's not showing in the Tab menu. I'm very new to TD.
Thanks a lot! But LoRA models don't work — how do you set this up?
Thanks for this! I am currently unable to run it; after "Initializing NDI streaming" etc., it runs into:
D:\StreamDiffusion\StreamDiffusion\venv\lib\site-packages\diffusers\configuration_utils.py:135: FutureWarning: Accessing config attribute requires_safety_checker directly via 'StableDiffusionPipeline' object attribute is deprecated. Please access 'requires_safety_checker' over 'StableDiffusionPipeline's config object instead, e.g. 'scheduler.config.requires_safety_checker'.
deprecate("direct config name access", "1.0.0", deprecation_message, standard_warn=False)
Process Process-4:
and it goes on and on.. what should I do? Thankssss 🙏🙏🙏🙏🙏🙏
Hey, is there a way to use this with inpainting, to affect only a determined area of the image?
i love you 200
wait, so the only way to do this is to subscribe to the patreon? i went through the trouble of installing all the stuff only to realize the TD base featured in this clip doesnt exist in TD :(
Hey, when I start the stream with the pulse and my cmd window comes up, it gets to the end of the code but doesn't get past 'preparing stream...'
Any idea why not? I've left it for over an hour and it's still the same. I have a good computer and a high-VRAM NVIDIA card. Really appreciate any help - thanks!!!
great work !! would it be possible to use IPA adapters ?
I absolutely want to add this.
Wow, this is amazing. Do I gain access to this Operator as your Start SD Patreon ?
Yes ! thank you !
@dotsimulate I will join.
Would this work on a Windows VM on a mac?
Can you tell me how to use controlnet? Are there any tutorials? Thank you very much
I am super inexperienced, so how do I add the operator into the TouchDesigner network?
Hi! @dotsimulate, I am having issues with it. I try to install the venv and all requirements, and everything errors out:
ERROR: Exception:
Traceback (most recent call last):
File "C:\TestRepostreamDiffusion\StreamDiffusion\venv\Lib\site-packages\pip\_internal\cli\base_command.py", line 160, in exc_logging_wrapper
status = run_func(*args)
^^^^^^^^^^^^^^^
Not sure what to do... I don't really understand it.
This is incredible! Why wouldn't it work on Mac?
How can I change the resolution internally?
I know this might be a no-brainer, but I've tried to get a couple of SDXL models working, like FenrisXL and DreamshaperXL Turbo, and they don't work. Do I need to be using the StabilityAI Hugging Face models, or can I use my own from ComfyUI?
sd-turbo is a 2.1-based model; it is not sdxl-turbo. Supported models are 1.5-based (because of the 1.5 LCM LoRA), plus sd-turbo. You can use a local model from a safetensors file if you use the full file path instead of a Hugging Face id.
@dotsimulate Awesome, I got 1.5 models going, and starting up the stream is way faster now. I have a 4090 and can get pretty fast generations with SDXL, especially utilising the Turbo LoRAs and such. Is there a way to do it with the StreamDiffusion TOX, or do I need to try connecting it to ComfyTD? :)
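To illustrate the distinction in the reply above between a Hugging Face model id and a full local safetensors path, a small sketch (the helper function and the example paths are illustrative, not part of the TOX):

```python
from pathlib import PureWindowsPath

def is_local_safetensors(model):
    """True for a full Windows path to a .safetensors file, False for
    a Hugging Face model id like 'runwayml/stable-diffusion-v1-5'.
    Illustrative helper, not part of StreamDiffusionTD."""
    p = PureWindowsPath(model)
    return p.suffix == ".safetensors" and p.is_absolute()

print(is_local_safetensors(r"D:\models\dreamshaper_8.safetensors"))  # True
print(is_local_safetensors("runwayml/stable-diffusion-v1-5"))        # False
```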
It says my Python 3.10 executable was not found, but I added it to my PATH....?
Does this work on mac?
When all the setup is done, how do I import StreamDiffusion in TouchDesigner?
will this work with python 3.12 or only 3.10?
how can i speed up the fps? :(
Truly awesome.
Just became a patron on Patreon haha.
I'm having slight trouble once I start the stream. After the Triton error thing, I get a message saying:
OSError: [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted
Fetching 12 files: 0%| | 0/12 [00:00
Close all other instances of the operator in TD and also close all old cmd windows. It seems like you are running into the OSC server not being able to start up.
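For anyone hitting the same WinError 10048: it means the OSC port is already bound by another process (an old cmd window or a second operator instance). A minimal sketch for checking whether a UDP port is free before starting the stream — the port number here is a placeholder, use whatever the OSC Ports parameter is set to:

```python
import socket

def udp_port_is_free(port, host="127.0.0.1"):
    """Try to bind the port; the EADDRINUSE / WinError 10048 that a
    stale process causes shows up here as False."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind((host, port))
        return True
    except OSError:
        return False

print(udp_port_is_free(7000))  # 7000 is a placeholder, not the TOX default
```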
My stream keeps turning on and off (it starts and stops), and I'm getting like 0.96 fps whenever it comes on for a fraction of a second. Can you guide me on how to maintain my FPS and keep the stream active? Thank you! I'm so close to this, can feeeeel it
How can I maintain a steady fps with no intermittent stops on the stream?
@@yashasv.doodles might be a limitation of your graphics card. To stop the start/stop stream, there is a timer inside that loops/checks for active stream. Called osctimer. If you change the length of that timer to like a few seconds, the stream should stay on once on. You can toggle the stream active toggle if it gets stuck in the on position as well.
Okay, the start/stop issue is resolved. But I don't see any latent-space morphing take place on my null into the tox. Could you guide me on how to connect the Hugging Face model path/id to TD, please!
I encountered an issue during installation. When I click the install/download button, a window prompts me that there is no network connection, even though my network actually works fine. What could be the reason? My version is 0.1.9.
And I am exactly trying to do a firework show
Is there any way of implementing it with an AMD graphics card?
Hi there, I'm having issues. I just upgraded my Patreon account and downloaded the file, but when I open it in TD, the parameters box in the setup tab is empty. I already installed all that is needed, or so I believe. Can anyone please help me?
Hi! Can you make a tutorial for Mac?
Any chance to use ControlNet? Or masks on the inputs to control some shapes in the generation?
Can definitely give it a sense of the shape with input image. Not quite fast enough to run with controlnet at this point for realtime. But definitely possible non realtime with ComfyUI.
ComfyUI is in itself a whole process. I saw in their demos that they were using Photoshop as an input for the generation, drawing a base shape and sending it to generation.
Hi, I want to ask you one thing about this awesome tutorial. I bought your Patreon and did what you taught, but the fps came out around 0.272 and the stream kept turning off and on. It doesn't work properly, over and over. What's wrong with this?
The fps is likely due to computer hardware speeds
Easy fix but for the start/stop. Send me a message on discord/Patreon and I will explain.
Looks amazing! Is it possible to make it portable with an anaconda venv instead of installing directly on the system?
It is using venv, so it isn't installing to system Python. A few people are using it with a conda env as well, since the script can be run from the command line. I plan to add an option to support launching with conda in the future.
@dotsimulate sounds great!
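For the conda route mentioned above, a sketch of building a `conda run` launch command; the env name and script name below are placeholders, not taken from the repo:

```python
def conda_run_cmd(env_name, script):
    """Build a `conda run` command line for launching the stream script
    inside an existing conda env instead of the bundled venv.
    "streamdiffusion" and "main.py" below are placeholder names —
    check the repo for the actual entry point."""
    return ["conda", "run", "-n", env_name, "python", script]

cmd = conda_run_cmd("streamdiffusion", "main.py")
print(" ".join(cmd))
# to actually launch it: subprocess.run(cmd, check=True)
```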
How can I double the resolution?
Is anything out of pocket aside from becoming a patron?