Any Node: the node that can do EVERYTHING - SD Experimental
- Published 1 Jul 2024
- Today we take a look at the node for Stable Diffusion that can do anything, Any Node!
Although highly experimental, this LLM-in-a-box node can write Python code to be any node you want.
Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
And a huge thank you to the people who have supported me until now!
Workflow with examples: openart.ai/workflows/Gxnv8CPV...
Any Node GitHub: github.com/lks-ai/anynode
Any Node Discord: / discord
Install Ollama (local LLM server): ollama.com/download
Install llama3 (the model I'm using): ollama.com/library/llama3
or run in the terminal:
ollama serve
ollama run llama3
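Once `ollama serve` is running, clients can talk to it over a local HTTP API. The endpoint and payload shape below are assumptions based on Ollama's documented HTTP API (not Any Node's internals), just to show what "a local LLM server" means in practice:

```python
import json

# Default local Ollama endpoint (assumed from Ollama's HTTP API docs)
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3",                      # the model pulled via `ollama run llama3`
    "prompt": "Write a haiku about nodes.",
    "stream": False,                        # ask for one complete response
}

body = json.dumps(payload)
print(body)

# Actually sending it (requires the server to be up) would look like:
#   import urllib.request
#   req = urllib.request.Request(OLLAMA_URL, body.encode(), {"Content-Type": "application/json"})
#   print(json.loads(urllib.request.urlopen(req).read())["response"])
```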
Timestamps:
00:00 - Intro
01:10 - Any Node overview
03:12 - Image to Red Channel example
07:06 - How it writes Python code
08:01 - How Any Node works
09:47 - Limitations
11:07 - Using Any Node in complex workflows
13:06 - Setting up a local LLM (Ollama / llama3)
15:00 - Setting up Any Node with external LLMs (OpenAI, Gemini)
16:39 - Outro
#stablediffusion #anynode #stablediffusiontutorial #ai #generativeai #generativeart #comfyui #comfyuitutorial #risunobushi_ai #sdxl #sd #risunobushi #andreabaioni - Science & Technology
I want to see more videos with AI agents. This is fascinating.
Thanks for the feedback, I'll put it on my to-do list!
Thank you so much for this incredibly informative and well-explained video! It gave me many new insights and ideas for my own projects. The way you described the use of LLM models and their possibilities was very clear and helpful. Your explanations truly inspired me, and I am excited to apply what I've learned to my own projects. Thanks again for your effort and for sharing your knowledge!
Excellent explanation!! I was kind of lost trying to understand the dev's videos, but your video explains it perfectly. Thanks!! This node is huge!
Thank you! It's hard to maintain a repo and make good content about it as well, but the dev is very supportive of other people creating videos about Any Node too
The little mushroom guy is soooo cute!
Little mushroom guy 🤝 My Click Through Rate for this video
@@risunobushi_ai 🍄
I was on that live stream for a bit - thanks for breaking it all down
Wow, Wow, Wow, this is awesome.
It really is, and even if you don't always get working code out of it, you can still learn how ComfyUI and custom nodes work by inspecting the terminal. This tiny node is what actually inspired me to write my first custom node, by making me go "oh, so that's how it works".
@@risunobushi_ai I haven't been this excited about image processing since the early days of working with Photoshop v2.5, and things like this with ComfyUI are why. With a node like this it's basically like being able to figure out how to build Ps plug-ins without "much" coding knowledge. You have to know basic principles but not hardcore coding, which could allow a huge explosion of innovation in processes. It's all VERY exciting.
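For anyone curious what "how custom nodes work" looks like, here is a minimal sketch following ComfyUI's node convention. The class name, category, and red-channel logic are illustrative; only the INPUT_TYPES / RETURN_TYPES / FUNCTION / NODE_CLASS_MAPPINGS structure is ComfyUI's convention:

```python
class RedChannelNode:
    @classmethod
    def INPUT_TYPES(cls):
        # Declare one required IMAGE input (a [batch, height, width, channel] tensor)
        return {"required": {"image": ("IMAGE",)}}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "run"      # the method ComfyUI calls when the node executes
    CATEGORY = "image"

    def run(self, image):
        # Keep only the red channel; zero out green and blue
        out = image * 0
        out[..., 0] = image[..., 0]
        return (out,)      # outputs are always returned as a tuple


# ComfyUI discovers nodes through this mapping in the package's __init__.py
NODE_CLASS_MAPPINGS = {"RedChannelNode": RedChannelNode}
```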
Truly incredible! Can't wait to see what people create with this - I just hope it's not too challenging for non-coders to describe a functional outcome.
I am a non-coder (well, if you exclude Turbo Pascal 3, thank you random high school class, very cool) and I usually manage to get it working. It's also an occasion to learn terms and concepts you don't know about as a non-coder, like what tensors are, how many dimensions they have, how they're arranged, and how they're processed before being used with OpenCV, etc.
Honestly the understanding part is as exciting as getting it right the first time, if not more, at least to me.
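The tensor details mentioned above are the kind of thing you pick up quickly. A rough sketch of the layout conversion that generated code typically performs (NumPy stands in for torch here; the shapes and value ranges are the same idea):

```python
import numpy as np

# ComfyUI passes images as float tensors shaped [batch, height, width, channel]
# with values in 0..1, while OpenCV wants a single [height, width, channel]
# uint8 array in BGR order, so code usually converts between the two.
comfy_image = np.random.rand(1, 64, 64, 3).astype(np.float32)  # [B, H, W, C]

# Drop the batch dim, scale to 0-255, and flip RGB -> BGR for OpenCV
cv_image = (comfy_image[0] * 255).astype(np.uint8)[:, :, ::-1]

print(comfy_image.shape, cv_image.shape, cv_image.dtype)
```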
This is just amazing. Just today I was trying Tara-LLM-Integration. It lets you create elaborate prompts through external LLMs. Now this. ComfyUI just gets better and better.
I've only tried a few of the prompting LLMs, because I'm realizing that the more complex my workflows get, the less I actually prompt. But it's nice to have tools that let you spend less time finding the right word salad and more time on the actual building process.
@@risunobushi_ai I understand. But my main purpose was to convert specific color hex codes to actual color names. SDXL does not understand hex codes, but llama3 does. So it worked.
Oh wow, I had never thought of that
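The hex-to-name trick above can also be sketched in plain Python, without an LLM: pick the named color with the smallest RGB distance. The palette here is a tiny illustrative example, not what llama3 actually does:

```python
# Small example palette; a real mapping would use many more named colors
NAMED_COLORS = {
    "red": (255, 0, 0),
    "green": (0, 128, 0),
    "blue": (0, 0, 255),
    "orange": (255, 165, 0),
    "purple": (128, 0, 128),
    "white": (255, 255, 255),
    "black": (0, 0, 0),
}

def hex_to_name(hex_code: str) -> str:
    """Return the palette color closest to the given #RRGGBB code."""
    h = hex_code.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    # Nearest neighbor by squared Euclidean distance in RGB space
    return min(
        NAMED_COLORS,
        key=lambda name: sum((a - c) ** 2 for a, c in zip(NAMED_COLORS[name], (r, g, b))),
    )

print(hex_to_name("#ff4500"))  # → "red" with this tiny palette
```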
OMG.. this is amazing. Thanks for sharing.
Thanks for watching!
Comfy will be truly comfy when it's language based and self maintaining in a safe sandbox
I think it's a tradeoff - it's an unstable environment for sure, but to me it's more important to have a fully customizable, modular sandbox rather than anything else.
Love the intro.
Thank you! Trying out some new stuff, I'll see if it works
I get the same error you did in the video, and no matter how I phrase it the LLM just can't do it. I'm not one to sit here for hours with the LLM trying to figure out what it wants.
game changer Andrea...a real game changer :)
ahah! I'll stick to the stance I took during the live: this is "potentially" a game changer, because it can "potentially" do anything!
@@risunobushi_ai Agreed... and if what you say the dev is working on is true... then this potentially is rocket fuel for a car engine!
An amazing tool, but can I use it with ComfyUI on a cloud server, like ThinkDiffusion or RunDiffusion etc.? Thanks!
I don't see why not
I don't know if you can run a local LLM on RunDiffusion or other cloud servers, but either way you could run the remote API ones. I usually run everything locally, so I'm not that well versed in remote instances.
I use RunDiffusion constantly and I'm about to try this out; I'll comment again, but they always implement big-time nodes like this
@@henryturner4281 cool! Let us know
Keep getting this error: "Error occurred when executing AnyNodeLocal:
Expected metadata value to be a str, int, float or bool, got None which is a NoneType". I'm running locally.
That’s the learning part of using Any Node. The LLM is expecting, based on your prompt, to receive or work on something that has that metadata, but either because the input doesn’t have it or because the operation is stripping it from the input, it can’t proceed. You either have to change the input or the prompt, depending on what you want to do.
@@risunobushi_ai I used the exact prompt that you used
Then your LLM is not understanding the input and / or the prompt correctly. Being based on an LLM, there’s no guarantee that two different Any Nodes will ever arrive at the same solution from the same input and prompt, unless they share a registry. Which prompt? The channel separation one?
@@risunobushi_ai Got it working, I just had to start more basic, using Load Image, AnyNode, and Preview Image
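For context on the error discussed above: it is the general failure mode of a consumer that only accepts str/int/float/bool metadata values choking on a None. This is an illustration of that pattern, not Any Node's actual code; one generic fix is stripping unsupported values before handing the dict over:

```python
def clean_metadata(meta: dict) -> dict:
    """Drop entries whose value is None or any other unsupported type."""
    allowed = (str, int, float, bool)
    return {k: v for k, v in meta.items() if isinstance(v, allowed)}

# A None value like this is what triggers "got None which is a NoneType"
meta = {"prompt": "red channel only", "steps": 20, "seed": None}
print(clean_metadata(meta))  # → {'prompt': 'red channel only', 'steps': 20}
```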
Hi, I got "import failed!" from the Manager... any tips?
Some dependencies don't install properly sometimes, like matplotlib. Try running the requirements.txt install manually (pip install -r requirements.txt) and check what's not getting installed.
@@risunobushi_ai Thanks, I will check it right now, thanks for your answer
😂😂😂
How do I get notifications for your live sessions? I'd love to see it and communicate. Also what's your discord?
My live sessions are still more spur of the moment, “if I’ve got time” kind of things, unlike my weekly videos, which I try to keep to a strict schedule.
This not being my main job only leaves me so much time for creating content, and as of right now I cannot structure a fixed livestream schedule.
I usually post a community post here either the day before or the morning of the same day, but I think you can click on the bell icon and get a notification!
Edit: I’d rather use emails than discord, you can write at andrea@andreabaioni.com
@@risunobushi_ai Great, I emailed you and clicked on the "all notifications" button ha. Thanks!