Hey there, I'm having a bit of trouble following along because my project is a bit more basic. I have a workflow with 13 image inputs. I just need to set values for frames, add starting and ending frames, input a positive and negative prompt, and maybe switch out the main checkpoint model. The UI would show these options along with the generated preview and final video. Essentially, it's an animation created from 13 images. I'm looking to create a simple interface to upload images, adjust settings, choose the main model, and then generate the animation. Could you make a tutorial on this? Or if possible, could you reach out for a private session? I'm happy to pay for your time.
@@latentvision I would appreciate it a lot. I need it for an academic video game project, and searching the Internet there are almost no resources that use Stable Diffusion for it. And although I have managed to create something with ControlNet, the differences in outfits and other details are quite noticeable.
(Third question, if you don't mind.) You spoke about security concerns related to incoming connections; my question is how to allow only a certain type of (external) connection into a Comfy instance running somewhere. I mean, I want to be able to communicate with it from an external network, but it must only accept a certain type of external connection, otherwise it is refused. Also, what happens if many external clients keep being refused, would that consume power as well? (Let's say you get 1,000,000 unauthorized calls and they are all refused.)
Hi again, where did you find the 10th line of code of the init file in the custom nodes? I don't understand very well how the routing is happening; I actually have no idea what the code in the init means, haha. I would like to learn more. Btw, your voice is awesome!
Thanks for sharing! I am currently at the end of a programming course and I wanted to integrate generative AI into my final project, so this came out at the perfect time! Do you have any other resources you recommend checking out? Update: I did finish my project :)
@@latentvision I have a few ideas. It's probably one of the most wanted features. I've developed a few workflows with LLMs. You explained things for mere mortals 😁👏
@@banzai316 so maybe like, take an entire workflow and turn it into a single node, with its own inputs / outputs / widgets? Like collapse a workflow into just one node? I’ve thought of something along those lines but I don’t think it exists yet
@@latentvision The problem was that the model name was very slightly different and I didn't notice. I already had the Proteus model installed, but mine was named Proteus_v03 and the code called for ProteusV0.3. I think the difference comes from downloading the model from CivitAI versus Hugging Face: different sites, same model, yet named differently. But it is working wonderfully!
The best AI content on the entire internet. We're lucky to have a top programmer sharing knowledge while creating AI art. This man is a legend.
facts
Bro this is one of my favorite channels! great videos! Love this! im inspired!
What you described at 18:42, "generating portraits of your entire party and then putting them in group settings together", is exactly why I got into ComfyUI! I've been having trouble composing multiple characters into a single image, so something along those lines would be incredibly helpful!
Thanks for all of your guides! I've followed all of your tutorials and have joined the Discord which has awesome examples and very friendly people.
I'd really like to work on that app, it's really fun to do... but yeah... time...
@@latentvision I'm not too concerned about the standalone app; it seems like a neat gimmick, but I think I actually prefer the noodles. I feel I'm more thorough when I actually see the full workflow.
My main problem has been image composition with multiple IPAdapters.
Also I'd like to emphasize that all of the time you put into this is greatly appreciated! There's no way I'd know what I was doing without these walkthroughs. Thank you again!
I love it that you made a long video again.
I prefer longer videos.
Hi!
I have been working so hard trying to build apps with comfy and you just gave me a huge gift! I am really excited to start building with your example now! Thank you for the incredible resources you are creating for all of us at all levels!
glad to help!
Wow, as a long-experienced senior programmer I have the highest appreciation for your work, simply "straight forward". For testing, I extended the script with more checkpoints and prompting and had no problems finding the right spots. Therefore, a big THANK YOU!!!
Another absolute banger! I see great value in these experimental proof-of-concept videos. Even if some people aren't tech-savvy enough to write it from scratch, they can take your code that's already working and play with it, especially with the help of modern code copilots and chat AIs that make it easier to dive in.
I can't believe how easy you make these topics appear. Your explanation is so helpful
🎯 Key Takeaways for quick navigation:
00:00 *🎮 Introduction to Comfy UI and Comfy Dungeon*
- Introduction to a simple demo generating D&D character portraits using Comfy UI.
- Quick overview on changing results, styles, and iterating through character options.
02:04 *🛠️ Enhancing Comfy UI with Advanced Features*
- Ideas for improving the application using IP adapters, upscalers, detailers, etc.
- Beginning of step-by-step guide on building a similar application.
04:19 *📱 Designing the Base Workflow for a Mobile-Compatible Application*
- Process of creating a super-fast base workflow using specific checkpoints and samplers.
- Saving the workflow in API format and editing in a text editor.
06:44 *💻 Setting Up the Web Application Structure*
- Creation of directories for the demo, including web and JavaScript files.
- Introduction to the 1% of Python code needed for the demo.
08:56 *🚀 Launching and Testing the Application*
- Basic setup of index.html for the application, making the background dark.
- Restarting Comfy and navigating to the newly set up URL to test.
11:44 *🖼️ Handling Image Results in the Web App*
- Managing message types from the server and extracting image data for display.
- Utilizing the ComfyUI API to retrieve and display generated images on the web application.
12:52 *🌐 Sending Prompts to the Web Server*
- Implementing POST requests to send the full workflow to the server.
- Using a timer to dynamically send requests to Comfy as user types, for instant visual feedback.
14:35 *⚡ Enhancing the Application's Responsiveness and Quality*
- Adjusting timeout for faster image generation and display based on server response times.
- Introduction of high-quality image options and the use of CFG rescale to alter image details.
17:01 *📱 Making the App Accessible from Mobile Devices*
- Configuring Comfy to allow access from anywhere in the user's network.
- Highlighting the potential for mobile access and addressing security considerations for home users.
17:56 *🚀 Final Thoughts and Future Possibilities*
- Recap of app development process and its efficiency in enabling technology use without extensive coding knowledge.
- Potential future improvements and personal reflections on the Comfy UI and Comfy Dungeon project.
Made with HARPA AI
I appreciate the thought put into these videos, great comfyUI/SD info from someone with a knack for instruction. Please consider releasing an advanced vid2vid long format video utilizing unsampler and ip adapter to maximize consistency. SparseCtrl ControlNet / Advanced control nets would also be appreciated. Thanks for the awesome content!
Thanks! I will talk about video soon. I need to finish the "basics" series first. I think next one will be about controlnets.
Another masterful and re-watchable tutorial by Maestro Latente... BRAVO!!!
Amazing as always! Love the longer more informative videos. So useful to help understand the ever changing Comfyui world. Keep it up!
That's impressive!! Just MASSIVE thanks for your knowledge and help for this amazing community!
This is really neat, Matteo. Thanks for sharing your knowledge!
I’m a traditional/digital artist and I find the stuff you do fascinating. Thanks for sharing, sorcerer.
Your channel has such high-quality ComfyUI content! Thanks for your work!
just doing my part
You, sir, are awesome. I took your example and expanded it! Added many more races and classes; planning to add 3D and more art styles.
Nice. Good to follow someone who knows how the code works. I'm going to try and install this later and make my own UI
Oh my god, this is so cool again 😮 I love every one of your videos, Matteo 😊 I made a JS application using the ComfyUI API before, but I didn't know there was a dev mode, and knew nothing at all about how the API works, so I would just pick the API calls from the browser while using normal ComfyUI and then adapt them for my application. It's not the smartest way to go. It was a long time ago. I'm always waiting for your videos, and besides the excellent content and a lot of value, I'm also falling in love with your beautiful accent every time.
lol, the "beautiful accent" made my day. It's excruciating for me to listen :D
Hey Matteo, great job again :) I'm really looking forward to part 3 of the ComfyUI video
I think it will be next. Controlnets and maybe upscaling if it's not too long
@@latentvisionI'd love to see a comparison (and your opinion) on the various upscaling technologies (hires fix, deepshrink, latent upscale, pixel upscalers, UltimateUpscale, etc).
Dang bro, this is my dream stuff. I always thought about making these things and self hosting to give to people. Thanks a lot
you are welcome! I wish I had more time to dedicate to the Dungeon... it's such a lovely tool :)
Another excellent video, very informative and easy to follow
Thanks for taking the time to make something so understandable. =)
You are nuts!!! every video is so enjoyable.
Nice RTX 4090 btw xD
Great video! Didn't even think this was an option... :o) The group pictures would also interest me very much... :o)
Amazing work! I learned so much from your content man! Thanks a ton :]
You are a Golden God! This is amazing stuff Matteo!
I'm not worthy...
Well, I hold you in the highest esteem for the clarity and simplicity you enable us with. Thank you for all you do :)
Thank you so much for this tutorial; it's exactly what I was wondering about for my project.
More great knowledge from Latent! Thanks for showing us.
great video again, Thanks so much 👑👑
I will study and learn well from this fantastic lecture. Thank you always.
Thanks for this video! Just posting here to say I’d love a party/group picture generator, I think it’s a great idea :)
Great video! Good voice, no need for subtitles. Ahh, RTX 4090! I'm struggling with my GTX 1650 Mobile 4GB.
You should monetize this app, people would pay for this.
Wish there was an automated built-in feature for this: just mark the parameters you want the UI to show, and it's done.
yeah that wouldn't be too difficult to do actually, I thought about it but I have so many projects going on :)
That's a project I'm looking into developing too.
In the meantime you can try StableSwarmUI and ComfyBox, which have similar features.
IIRC, mixlab does something like this.
Thanks for all the kind guides. Your channel provides the deepest, most profound knowledge about Stable Diffusion on all of YouTube, worth watching two or three times to digest all the useful material inside. Could you make a Comfy basics video about the mechanism of the KSampler node in a future tutorial? I bet many people don't have a clear understanding of it, especially people without a computer science background like me, and there are tons of them out there; they're definitely the majority. Thank you again!!!!
thanks! there are really a lot of topics to cover. I try to talk about a little bit of everything but there's really a lot of material... I need to finish the basics series (2-3 more videos) then maybe we'll go deeper
I love this idea for a tutorial video! Thank you so much for making it! :)
I'm working in Python and would like something like this using my own app and GUI, not a web browser. Maybe using Raylib.
Do you know any way of doing that? Should I still use Comfy in between, or try to use just Python?
hard to say, depends on the application. One very easy solution might be to package everything inside an electron app for example. But if you are skilled with python and the workflow is not terribly complicated you can also check diffusers.
@@latentvision Thank you for the advice!
This is incredible! Thank you for the amazing tutorial!! Amazing
I've been working on a complete rewrite of the comfyui / graph (user interface), the goal was to be very hackable with nodes being html instead of drawn canvas.
My tooling, I think, would complement this sort of workflow greatly, as my goal was to make each graph itself a Python project that can present itself as an API. I need a couple more weeks before I can share it, warts and all. But as with all projects, "I think it's going to be great"... lol, we'll see if that's the case! I wish I could just quit my job and work on this stuff full time!
I know the feeling... working on personal projects is so gratifying... but it doesn't pay the bills :)
Such good content, man, congrats!
Wow, that's awesome. Thank you so much!!!
Matteo always has amazing ideas ... thanks for the knowledge.
thanks! interesting video, thinking of all the things to add :D
😍Very much what I need. Thanks for sharing!
Hi Matteo, can you make a new IPAdapter tutorial video? I want to know what the new image_negative input on one of your nodes is and how to use it.
Thank you :D
Question about 10:24: how did you know ComfyUI wants this format as an identifier? I mean, where did you read it? Where am I supposed to learn about it if I didn't have this video (awesome video btw) as guidance?
Thanks
reading the source code and other examples
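For reference, a sketch of the request body the video builds for POST /prompt: an API-format workflow under a "prompt" key plus a "client_id" (commonly a random UUID) so websocket messages can be matched back to your client. The field names are taken from common ComfyUI examples, so verify them against your version:

```python
import json
import uuid

def build_prompt_payload(workflow, client_id=None):
    """Wrap an API-format workflow the way ComfyUI's POST /prompt expects."""
    return {
        "prompt": workflow,
        # lets the server tag websocket messages as belonging to this client
        "client_id": client_id or str(uuid.uuid4()),
    }

workflow = {"3": {"class_type": "KSampler", "inputs": {"seed": 0}}}
payload = build_prompt_payload(workflow)
body = json.dumps(payload)  # this JSON string is the request body for POST /prompt
```

Beyond that, the honest answer really is the ComfyUI source and the example scripts shipped with it.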
thanks so much! this is awesome!
Hey, I seem to have a problem:
an "IMPORT FAILED" error while trying to add or use the ComfyUI UltimateSDUpscale nodes in a software application.
Do you know how I can fix it?
I had the same problem and fixed it. I noticed the cmd window showed checkpoint and LCM LoRA names that were not the same as mine, even though I'd modified them. As soon as you connect to fastgen, it caches the model names. You can confirm this error by testing the URL in Incognito mode, since it doesn't use your cache.
1. Update the ckpt_name and lora_name in base_workflow.json to the correct model names you have.
2. Delete your browser cache.
3. Not 100% sure if it affects this, but I also removed the __pycache__ folder that was created in the fastgen folder.
4. Restart Comfy and go back to the fastgen URL path.
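For anyone who'd rather script step 1, here's a small Python sketch that rewrites those two inputs in an API-format workflow. The node class names CheckpointLoaderSimple and LoraLoader are assumptions based on the video's base workflow, so check them against your own JSON:

```python
def patch_model_names(workflow, ckpt_name, lora_name):
    """Overwrite ckpt_name / lora_name wherever the loader nodes appear."""
    for node in workflow.values():
        inputs = node.get("inputs", {})
        if node.get("class_type") == "CheckpointLoaderSimple":
            inputs["ckpt_name"] = ckpt_name
        elif node.get("class_type") == "LoraLoader":
            inputs["lora_name"] = lora_name
    return workflow

# tiny stand-in for the video's base_workflow.json
wf = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "old.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"lora_name": "old_lora.safetensors"}},
}
patch_model_names(wf, "Proteus_v03.safetensors", "lcm_lora_sd15.safetensors")
```

Load your real base_workflow.json with json.load, run it through this, and dump it back out.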
Great job, Matteo!! Thanks for sharing: this IS huge :D
You are my new God!
goat sacrifices only on Friday, thanks
Tutorial is great, really great
but could you give us your requirements file?
Which library needs to be installed to make "import server" work?
Thank you!
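For what it's worth, "server" isn't a pip package: it's ComfyUI's own server.py, so the import only succeeds when the file is loaded as a custom node by a running ComfyUI. A guarded import makes that explicit (just a sketch of the idea):

```python
# "server" is not installable from pip: it's ComfyUI's own server.py, so the
# import only succeeds when this file is loaded as a custom node by ComfyUI.
try:
    import server  # provided by ComfyUI at runtime
    INSIDE_COMFY = True
except ImportError:
    INSIDE_COMFY = False

def environment():
    """Report whether we're running inside ComfyUI or as a plain script."""
    return "inside ComfyUI" if INSIDE_COMFY else "standalone (ComfyUI not on the path)"
```

So there's nothing to add to requirements.txt for it; the script simply has to live in ComfyUI's custom_nodes folder.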
Hello, my friend! I am interested in amateur photography and photo processing, which is why I paid attention to neural networks. Please tell me: can I change or slightly edit the poses in a photograph, or the overall lighting or color scheme, without changing the context of the content (face, clothing, body)?
For now, I can change individual elements with a mask (color or type of clothing), if that is possible at all. I watched a lot of videos and only saw people run a prompt for generation through ControlNet/IPAdapter, but no one processes an original image. Could you make a short video with examples of real image processing when you have time?
Hey Matteo! Good to see you
Hi Matteo, thanks for the awesome tutorial. You are making the best and most relaxed tutorials I know. I'm also using Comfy to generate parts of the storyline in a pen-and-paper adventure. I would love to see whether it is possible to integrate IPAdapter and FaceID into the workflow. I'm not very good at writing code, but so far I've managed to integrate a LoRA selector into your Comfy Dungeon. Still struggling at the moment with uploading a picture from the interface into the workflow.
amazing!!!!!!!!!!! love u so much bro
thaaaaaanks!!!!!!! I'm doing my part!
(This is my fourth and final question, hopefully.) You mentioned group pictures at the end of the video; would that need some additional node that identifies faces, adds them to an output image, and then replaces every character with the input faces, I imagine?
Would that be much more expensive in terms of GPU usage, I suppose? Would it work with LCM? (No reason why not, right?)
OK, I was very excited by your video and had to leave 4 comments; I hope you take a look at at least one of them.
See ya.
Bravo, Matteo!
great job on this app.... comfyui really needs a mobile app that works with phones, if you can make that happen that would be amazing!!
This is great but I keep getting an error "Prompt outputs failed validation", any ideas??
Cool, thank you :) I'll start using it on my mobile ;)
Fantastic video!
Matt3o we love you !
Last time I took apart one of your 19 minute videos, I got a face swapper, now? I see this potentially saving me from training 5 Lora models.
This is great, but it sort of exposes the workflow, which may be something you want to keep hidden. It also looks like it cannot scale horizontally. How would you address this?
Is there documentation, a description, or more examples of the ComfyUI API? For example, how to receive the list of available checkpoints from the backend?
there's no official documentation that I know of. check comfy dungeon, I get the list of checkpoints there.
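One unofficial route, if your ComfyUI build exposes it, is the /object_info endpoint, which describes every node's inputs, including the filename lists of loader nodes. A hedged sketch of parsing such a response; the exact shape may differ between versions, so compare with what your server actually returns:

```python
def list_checkpoints(object_info):
    """Pull the checkpoint filenames out of an /object_info response."""
    node = object_info["CheckpointLoaderSimple"]
    # the required ckpt_name input is a list whose first element is the filename list
    return node["input"]["required"]["ckpt_name"][0]

# stand-in for what GET /object_info/CheckpointLoaderSimple might return
sample = {
    "CheckpointLoaderSimple": {
        "input": {"required": {"ckpt_name": [["proteus.safetensors",
                                              "dreamshaper.safetensors"]]}}
    }
}
print(list_checkpoints(sample))  # → ['proteus.safetensors', 'dreamshaper.safetensors']
```

In practice you'd fetch the JSON with a GET request to your ComfyUI host and feed it to this function.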
Hey, that's great, but how can we access it through the web rather than the local address?
You need to host it somewhere
You can:
1. Do the exact same thing as in the tutorial.
2. Go to your router and allow port 8188 (or any other port number you choose) into your internal network.
3. Route that external port to port 8188 on your local machine.
4. Then check your external IP address; put that external IP address and external port in the browser, and you should be able to access it.
I'm trying to find a good JavaScript course on the internet with an API focus. Do you have something to suggest? Thanks, buddy.
I was looking exactly for this
Latent, do you know how to add different checkpoint models to pick from? I tried adding a new entry in the index.html but nothing happened!
Hey, for generating multiple images in Python I just sent multiple POST requests and appended all the PIL images to an array. Can we do this in JavaScript? (I am using Gradio.)
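Whatever the client language, the server side is the same: submit the workflow several times with a different seed each run and collect the results. A Python sketch of preparing those payloads (the KSampler class name and its seed input are assumptions based on a typical workflow; the actual POSTing is omitted):

```python
import copy
import random

def seed_variations(workflow, count):
    """Return `count` deep copies of the workflow, each with a fresh KSampler seed."""
    batch = []
    for _ in range(count):
        wf = copy.deepcopy(workflow)  # never mutate the shared base workflow
        for node in wf.values():
            if node.get("class_type") == "KSampler":
                node["inputs"]["seed"] = random.randint(0, 2**32 - 1)
        batch.append(wf)
    return batch

base = {"3": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 5}}}
payloads = seed_variations(base, 4)  # POST each of these to /prompt in turn
```

The same loop translates directly to JavaScript: clone the workflow object, change the seed, and fetch() each copy.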
Have you got a link for the SD1.5 checkpoint you use in the example?
Awesome tutorial! Thanks for making this. (once again)
Quick question - could you please share what kind of hardware you're using to get those super fast generations?
Thanks! A 4090, a fast SSD, and lots of RAM.
You're a legend!
It would be nice to have a ComfyUI SDK able to handle workflows without using hardcoded node IDs.
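Agreed. In the meantime, a small lookup over the API-format JSON gets you most of the way, since every node in that format carries a `class_type` field. A sketch, assuming the standard API export shape (a dict mapping node-id strings to node dicts):

```python
def find_nodes(workflow: dict, class_type: str) -> list[str]:
    """Return the IDs of all nodes of a given class_type in an
    API-format workflow, instead of hardcoding '3', '6', etc."""
    return [node_id for node_id, node in workflow.items()
            if node.get("class_type") == class_type]

def set_input(workflow: dict, class_type: str, field: str, value) -> None:
    """Set an input on every node of the given type, e.g. the prompt
    text on CLIPTextEncode, whatever ID the export assigned it."""
    for node_id in find_nodes(workflow, class_type):
        workflow[node_id]["inputs"][field] = value
```

Caveat: with one positive and one negative CLIPTextEncode node this lookup is ambiguous, so in practice you may still need a node title or a known ID to tell them apart.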
Matteo, what does "burning image" mean? You mentioned it when describing the CFG scale.
When you increase the CFG, the model reiterates on the embeds too much and the image comes out over-saturated. It's like exposing a picture for too long in an old camera; hence the image gets "burned".
This has been amazing! Thank you... definitely new to a lot of this, but it is so fun to play around with this code and learn from it. I do have a question: I'm banging my head on how to save an image and then load it back into a workflow. I know that you somehow need to move it from the output folder to the input folder and then load the image into a node?
I will add those options in a future update of the Comfy Dungeon; it's not something that can be explained in a YouTube comment :)
good to hear!!!@@latentvision
Crazy stuff! Thank you, Matteo!
Hey, so I made a similar version of this, but when the generation finishes, the socket 'message' event data is always of type "status", so I am unable to get any data from the output. However, the images show up in the output folder of ComfyUI. What triggers the "executed" type to occur?
Nvm, figured it out. I used the websocket setup you made in comfy_dungeon. For some reason the websocket setup in this video wasn't triggering the event.
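In case someone else lands here: the server pushes several message types over the socket (status, progress, executing, executed, plus binary preview frames), and only the "executed" messages carry the final output images. A small filter, sketched in Python, with the message shapes inferred from watching the stock server's traffic rather than from any official spec:

```python
import json

def extract_images(raw_message) -> list[dict]:
    """Return image refs from an 'executed' websocket message, else [].

    Binary frames (live previews) arrive as bytes, not JSON text, so
    they are skipped before attempting to parse.
    """
    if isinstance(raw_message, (bytes, bytearray)):
        return []
    msg = json.loads(raw_message)
    if msg.get("type") != "executed":
        return []  # "status", "progress", "executing", etc. carry no images
    return msg.get("data", {}).get("output", {}).get("images", [])
```

Each returned dict (filename, subfolder, type) can then be turned into a `/view` query string to fetch the actual image.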
Hey, I want to learn everything about Comfy, but when I try I get confused... can you suggest anything?
I have a problem: when I try to write a prompt in fastgen, the console shows a 400 (Bad Request) error and nothing happens.
I had the same problem and fixed it. I noticed that the cmd window showed a checkpoint and LCM LoRA that were not the same names as mine, even though I had modified them. As soon as you connect to fastgen, it caches the model names. You can confirm this error by testing the URL in incognito mode, since that doesn't use your cache.
1. Update ckpt_name and lora_name in base_workflow.json to the model names you actually have.
2. Delete your browser cache.
3. Not 100% sure if it affects this, but I also removed the __pycache__ folder that was created in the fastgen folder.
4. Restart Comfy and go back to the fastgen URL path.
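Step 1 above can be automated with a quick sanity check: compare every model name the workflow references against the lists the server actually reports (which you can pull from its `/object_info` endpoint). A Python sketch; the field names `ckpt_name` and `lora_name` follow the workflow JSON from the video, and everything else here is an assumption:

```python
def missing_models(workflow: dict, available: dict) -> list[str]:
    """Find model names an API-format workflow references that the
    server doesn't know about (the usual cause of 400 / validation
    errors after copying someone else's base_workflow.json).

    `available` maps an input field name (e.g. 'ckpt_name',
    'lora_name') to the filenames the server reports for it.
    """
    problems = []
    for node in workflow.values():
        for field, value in node.get("inputs", {}).items():
            if field in available and value not in available[field]:
                problems.append(value)
    return problems
```

Run it once before sending the prompt: an empty result means the names line up; anything returned is a filename you need to fix in base_workflow.json.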
Hey there, I'm having a bit of trouble following along because my project is a bit more basic. I have a workflow with 13 image inputs. I just need to set values for frames, add starting and ending frames, input a positive and negative prompt, and maybe switch out the main checkpoint model. The UI would show these options along with the generated preview and final video. Essentially, it's an animation created from 13 images. I'm looking to create a simple interface to upload images, adjust settings, choose the main model, and then generate the animation. Could you make a tutorial on this? Or if possible, could you reach out for a private session? I'm happy to pay for your time.
That's not something that can be explained in a YT comment; try to reach out on Discord if you want.
@@latentvision What is your Discord name? I think I'm already a member of your server, lol, didn't even know it.
@@DorothyJeanThompson matt3o
Do you have a tutorial for SD Forge?
Nice guide, bro! By the way, do you have a tutorial on front/back/profile views for characters with ControlNet and some consistency?
yeah that's something I wanted to do sooner or later
@@latentvision I would appreciate it a lot, I need it for an academic video game project and searching on the Internet there are almost no resources that use Stable Diffusion for it. And although I have managed to create something with controlnet, the differences in outfits and other details are quite noticeable.
Fantastic video, subscribed.
How challenging do you believe this is for a non-character system? I'm looking to change styles for interior rooms.
people are generally more difficult to handle
Another question: If I wanted to have this inside a website, how much "gpu cost or cpu" would I need? (Just the LCM)
Depends on the traffic you have and how long a queue you want. Technically, for one connection it doesn't require much compute.
Third question, if you don't mind: you spoke about security concerns related to incoming connections. My question is, how do you allow only a certain type of (external) connection into a Comfy instance running somewhere?
I mean, I want to be able to communicate with it from an external network, but it must only accept a certain type of external connection; otherwise it is refused.
Also, what happens if many external clients try to enter and keep being refused? Would that consume power as well? (Let's say you get 1,000,000 unauthorized calls and they are all refused.)
this is more complicated to answer in a YT comment. maybe reach out on discord
@@latentvision OK, will do ^^ (By PM, or just by mentioning your name in a Discord room?)
Hi again, where did you find the 10th line of code of the init file in the custom nodes? I don't understand very well how the routing is happening; I actually have no idea what the code in the init means, haha. I would like to learn more. Btw, your voice is awesome!
This is hard to answer in a YT comment, maybe chime in on Discord!
This is brilliant! Is there any way to connect this to a real-time camera?
yes with the browser camera API
Thanks for sharing! I am currently at the end of a programming course and I wanted to integrate generative AI into my final project, so this came out at a perfect time! Do you have any other resources you recommend checking out? Update: I did finish my project :)
the best way is to read the actual comfyui code. I'm not aware of any API documentation
Amazing! Is there a way to include uploading a reference image for img2img?
Yeah, that would be possible; I need to implement it in the dungeon... when I have time...
Brilliant!
Cool, thanks Matteo! My wishlist: develop an easier way to manage workflows (more like reusable workflows for ComfyUI).
Yeah, that is something I have had in mind for a looong time!
@@latentvision I have a few ideas. It's probably one of the most wanted features, having developed a few workflows with LLMs. You explained things for mere mortals 😁👏
you mean like saving and organizing workflows?
@@PaulFidika Yes, to reuse multiple nodes to create a workflow. It's kind of like the grouping feature, but for creating workflows. Grouping is already very useful.
@@banzai316 So maybe like taking an entire workflow and turning it into a single node, with its own inputs/outputs/widgets? Like collapsing a workflow into just one node? I've thought of something along those lines, but I don't think it exists yet.
Do you recommend a no-code app for creating a UI app from a ComfyUI workflow API?
to speed up the development I'd recommend svelte
@@latentvision I mean no-code apps like WeWeb, Glide, Softr, etc., that don't require coding knowledge.
Thanks for everything
awesome!
I keep getting the error "Prompt outputs failed validation"; any ideas? I tried searching your Discord but no luck.
From the Comfy Dungeon? It's hard to troubleshoot in a YT comment.
@@latentvision The problem was that the model name was very slightly different and I didn't notice. I already had the Proteus model installed, but mine was named Proteus_v03 and the code called for ProteusV0.3. I think the difference came from downloading the model from CivitAI versus Hugging Face: different sites, same model, yet named differently. But it is working wonderfully!
I use similar things with Gradio because I know Python and it does the same thing in less code.
Awesome, TY.
👀 🤯 phenomenal
Hah, time to reimplement a1111 on top of comfy!