"It's Time to D-D-D-D DUEL!"... sorry... Awesome concept, beautifully explained
10 місяців тому+24
"I like to tinker with ComfyUI". Aren't we all? :) You seriously makes this magical. I've already figured a bunch of things out, but this is next level. That frame generation alone was mind-blowing. Also you've shown yet again something I didn't know: that you can enter formulas as color value. Thank you!
I said the same thing to my self about the color value formula! 🤩 I was trying to figure out how to direct Comfy to use a specific hex color in a persons shirt. Not sure if this could be used in that way though but wow!
This is one of the best tutorials (for any subject) I've ever seen. It was very helpful to see the entire process from scratch, without skipping ahead. And the result is fantastic. Looking forward to more of your videos.
I love your tutorials so much as you always push the boundaries what one can do with ComfyUI. Especially since you translate basically traditional composting workflows for example that usually you'd do manually and incorporate them into these automated pipelines. This really shows the immense potential for real world applications, product ideas etc. Very awesome and thanks a lot for that!
The ComfyUI MVP makes another must-see tutorial! The amount of USEFUL WORKFLOW knowledge in 25 minutes is like an intense cardio workout. Love these goal oriented workflows, which is why ComfyUI exists and does best. Thank you as always for the inspiration!
Wow, this is truly stunning, especially the creation of dynamic cards. As a Chinese ComfyUI creator, you have given me a lot of inspiration. I continue to support all of your tutorials and video creations; they are magnificent!👍👍👍
This is awesone I've been working on some trading cards a deck of cards and you've made n life so much easier in terms of the production. Also you've lessened the steps that I need to now go to Photoshop to make adjustments. Thanks for sharing this has been truly invaluable.
Amazing work, really. All these automated tools out there that just do the work for you, while this type of video helps you understand way more what is going on. I wish my brain was this big to understand it all and keep up with the pace. One day.
Man this is some workflow, so much to learn from this video. Was a lot of fun following along. The Segment results for the masks aren't the best, I get some nice results using the "Image Remove Background (Alpha)" node from the WAS Node Suite. Especially arround the branches for the frame.
As always full with information in simple way 👍👍👍. We wait for in-depth prompt tutorial video 🙏. Hopefully next video will be on in-depth prompt tutorial video 🙏🙏. Please make this video
Mateo I just had to come down into the comments and thank you for all that you do for the SD community. This video in particular blew my mind, you rock 👊
3:24. im using your workflow and i got it back down to just the same nodes to get to the border. but after selecting the mask it looks like it just send the mask to the preview image. no plant stuff is showing up. if i remove the load image part with the mask it shows a plant thing. guess i need to learn more stuff before i try this :(
I was having a similar problem. I was using "Load Image" and "Load Image as Mask" simultaneously though. I still haven't figured out how to get that to work (tried 0.99 denoise, switching VAE decode to VAE decode for inpainting). Finally gave up and did it exactly as seen in the tutorial (with the rough/hand-drawn built-in maskeditor) and it started behaving like his. Not sure if what I tried first has a bug or if there's something I don't understand at play (I had both mask and image at same resolutions, used alpha/opacity in the mask... it works when its just the mask :/) EDIT: It was a bug. My alpha channels got translated to RGB after reload or something. Just had to reupload the transparent photo to the mask node.
amazing tutorial! You sir are a Legend! Always learn a lot from your videos. Am looking forward to the possible follow up implementing the animated video to the card as per the bonus stage! Or if you are not able to do one (I know you're pretty busy!) can you point me to any other resources that can help me achieve that? Thanks!
Love it! Will try it this weekend. Thanks for the tutorial :). In your next tutorial on this you should show how to create a foil overlay on the card too.
Mate this is incredible...what about a new tutorial for this grat trading cards stuff? Is anything changed in your workflows with the recent growth of comfy?
Thank you for this tutorial - you are a great teacher - it was perfect for learning. Started with a simple idea, built upon established skills from previous videos, and contained lots of repetition to allow us to get a hang of how to do it on our own. Once we got to the icon I paused the video to see if I could do it on my own before you explained it, and it felt like things were really clicking. Question - when you were editing the plant mask at 2:39, how did you get your mask color to be transparent gray? Mine is black, which then makes it impossible to see over top of the black center of the image.
Your tutorials are amazing! I'm just having trouble with my Draw Text node, it says font "Undefined" and I don't know how to fix that and can't seem to use another text node because it doesnt have teh mask out output
@@latentvision Thanks Mateo, you are my ComfyUI hero, I'm learning a lot thanks to you. I'm trying to use this text node to print some of the parameters into the images, is there any way to do that? I already tried the %Load Checkpoint.ckpt_name% as an input but it is not working.
Matteo really loved it😍😍😍😍😍. just imagining the variety of cards I can do. What if I wan't to use my face in the cards. please can you make that workflow 🥺
Thank you for this tutorial. It is unique and it gave me a new perspective on using Comfyui. That's being said, I couldn't find where to put fonts or how to install it. Is there a custom node for it? I'll appreciate if you can explain. Thank you!
Hi Matteo, love your video and this tutorial is amazing, but I have a problem replicating it, when I use the Draw Text node, the updated one, it does not hallow me to use trasparent background, and if i put everything to #00000000 (text and background) it doesn't show anything. What can I do? It give me an error if I put a non existing color in background. Thx for the answer, love this type of video :D
Tried to use this flow on a 4Gig card and it puttered out i'm afraid at the groundingdinosegment node for the shield (said ran out of memory), though it had made it through the groundingdino for the paper ok. until i can upgrade hardware, and for those stuck on lower end, any advice or alternative? do i just need to break out the flows into separate pieces? Also, not sure how to configure the font as it just shows 'undefined' or null for me, but will look around.
@@latentvision well i tried it this morning and it processed through ok. i found that i had incorrectly imported the wrong image at your video timestamp 12:24 (had imported the yellow graphic there by mistake in stead of the circle with dots, thats what i get for not just following the video, too excited and trying to rush ahead haha) so i dont know if that was part of the issue. thanks for this fun flow and video.
How much vram do you have? Having all the segmenters and controlsnets and upscalers and ipadaptors withing a single workflow causes me to just run out of memory (T4). I know I can and should save each resulting image separately and compose in a separate workflow (perhaps even not using comfyUI) but I wonder how much vram you need to run all this.
Matteo you are the best ever ,, please iam struggling getting good result for my product photos using ip or instant lorqs . Even with the workflows of changing the background it's adding arround the product and hard to upscaled , is this possible in comfy to have good product photography for my products ?
@@latentvision can you make a video for this ?? It will be very useful. And unique , I am following all Ai creators. Nobody makes such content in product or fashion photography uses of AI
I tried to run this and get "Error occurred when executing KSampler: 'NoneType' object has no attribute 'shape'" message... any idea what the cause might be? I got all the png files loaded, got all the models in the right folders... everything updated....
i think the issue is the openpose controlnet model... if i chose "t2iadapter_openpose_sd14v1.pth" it works... but with openposexl2.safetensors it doesnt...
Really amazing what you do Mateo👍👍. Thank you for this vid 👍. I have question that hopefully someone can answer: I just upgraded to a new laptop with RTX 4090 16GB. Now my comfyUI monitor (Crystools node) shows VRAM usage at 99%, and GPU usage at 0%. 1) is this how it's supposed to be? 2) if not, what can I do to have the GPU run as well? Thanks to anyone that can help me with this.
I tried using the new nodes, "IPAdapter" and even " IPAdapter Advanced" like instructed in your "IPAdapter v2 video", but it doesn't seem to work, like it literallty seems like the workflow is ignoring ipadapter, I change the weight_type, or the preset with the same seed to try and test things out and it doesn't even generate a different image. Please help? :( @Latent Vision
Can you show us how to automate the image creation by some rules. i want to make some cards. I want to bring images and text (from a dataset or folder or txt...) and click a button and just let it do its thing. If that's possible to do I would love to learn :)
Great tutorial your skill is extraordinary! If it would be possibly can someone talk me through fixing my 'Draw Text' node as it appears with red border and I have been unable to fix it. TIA😍
Thanks for your reply. I uninstalled upgraded reinstalled to no affect, turns out I needed a font in the node folder (I had put one there just never noticed there were 2 needed). @@latentvision
"It's Time to D-D-D-D DUEL!"... sorry... Awesome concept, beautifully explained
"I like to tinker with ComfyUI". Aren't we all? :) You seriously makes this magical. I've already figured a bunch of things out, but this is next level. That frame generation alone was mind-blowing. Also you've shown yet again something I didn't know: that you can enter formulas as color value. Thank you!
I said the same thing to myself about the color value formula! 🤩
I was trying to figure out how to direct Comfy to use a specific hex color in a person's shirt. Not sure if it could be used that way, though, but wow!
This is one of the best tutorials (for any subject) I've ever seen. It was very helpful to see the entire process from scratch, without skipping ahead. And the result is fantastic. Looking forward to more of your videos.
These tutorials on specific subjects are great. I think there's a deeper lesson in realizing that this isn't just for TikTok videos.
This is so good, wtf. These tutorials are way beyond what I thought was possible with ComfyUI.
I’m a big fan of bonus stages in retro games, cool that you’ve implemented it as part of your content :)
His content is the entire makeup of the bonus stage
I love your tutorials so much, as you always push the boundaries of what one can do with ComfyUI. Especially since you take traditional compositing workflows that you'd usually do manually and incorporate them into these automated pipelines. This really shows the immense potential for real-world applications, product ideas, etc. Very awesome, and thanks a lot for that!
The ComfyUI MVP makes another must-see tutorial! The amount of USEFUL WORKFLOW knowledge in 25 minutes is like an intense cardio workout. Love these goal-oriented workflows, which are what ComfyUI exists for and does best. Thank you as always for the inspiration!
Wow, this is truly stunning, especially the creation of dynamic cards. As a Chinese ComfyUI creator, you have given me a lot of inspiration. I continue to support all of your tutorials and video creations; they are magnificent!👍👍👍
I wanted to do exactly that! Thank you so much! I would never have done it as well as you did. Very much appreciated!
Insane video, this is the kind of content I'm looking for, practical uses! Thank you so much!
Best on the internet... Learnt a lot from your videos. Thanks a ton !!! God bless
This is awesome.
I've been working on some trading cards (a deck of cards) and you've made my life so much easier in terms of production. You've also cut down on the steps where I need to go into Photoshop to make adjustments.
Thanks for sharing, this has been truly invaluable.
This is wonderful! Thanks, Matteo
Amazing work, really. All these automated tools out there that just do the work for you, while this type of video helps you understand way more what is going on. I wish my brain was this big to understand it all and keep up with the pace. One day.
As always just full of useful information that can be used in various ways. What a treasure trove my friend. Thank you. =]
Amazing tutorial! This workflow brings so many other ideas to try next.
Man this is some workflow, so much to learn from this video. Was a lot of fun following along.
The Segment results for the masks aren't the best; I get some nice results using the "Image Remove Background (Alpha)" node from the WAS Node Suite, especially around the branches for the frame.
As always, full of information presented in a simple way 👍👍👍. We're waiting for an in-depth prompt tutorial video 🙏. Hopefully the next video will be one 🙏🙏.
Please make this video
Mateo I just had to come down into the comments and thank you for all that you do for the SD community. This video in particular blew my mind, you rock 👊
You sir are a brilliant teacher
The video I'd been waiting on for such a long time! Thank you!
This is so awesome ❤ It made me feel like a complete stranger to ComfyUI - I didn't realize something like this was possible before
What a great, practical use of the technology. Beautifully explained too, Matteo. Love it! ❤
It's amazing how you handle this tool! I really enjoy every video you make. It's so helpful.
This is much appreciated. Thank you so much, Mateo!
Matteo, you're truly a legend! Your videos have made the steep learning curve much more accessible.
Outstanding work! I had so much fun following along and making a card, and I think it came out swell. Your videos are the best.
Man, you are simply amazing. Would love to watch full ComfyUI video course made by you. Many thanks!
omg! thank you so much for the content!!
This is Godlike level!
Love the community, thanks for the knowledge you share.
Oh my gahd, what a tutorial. Wish I could do this with the same ease as you. I have so many questions; I think I need to check out that Discord of yours.
End result looked incredible!
I think I need a course to understand what is going on here... So impressed
Love your workflows and tutorials
Amazing and super inspiring work looking forward for your next videos :)
thanks a lot, this video specifically will help me a lot
Matteo, the great master!! Awesome video!! ❤️🇲🇽❤️
Wonderful work again. Thank you!
Great detail, thank you
3:24. I'm using your workflow and I pared it back down to just the same nodes to get to the border, but after selecting the mask it looks like it just sends the mask to the preview image; no plant stuff is showing up. If I remove the Load Image part with the mask, it shows a plant thing. Guess I need to learn more before I try this :(
I was having a similar problem. I was using "Load Image" and "Load Image as Mask" simultaneously, though. I still haven't figured out how to get that to work (tried 0.99 denoise, switching VAE Decode to VAE Decode for Inpainting). Finally gave up and did it exactly as seen in the tutorial (with the rough/hand-drawn built-in mask editor) and it started behaving like his. Not sure if what I tried first has a bug or if there's something I don't understand at play (I had both mask and image at the same resolution, used alpha/opacity in the mask... it works when it's just the mask :/)
EDIT: It was a bug. My alpha channels got translated to RGB after reload or something. I just had to reupload the transparent photo to the mask node.
Amazing tutorial! You, sir, are a legend! I always learn a lot from your videos. I'm looking forward to the possible follow-up implementing the animated video on the card as per the bonus stage! Or, if you are not able to do one (I know you're pretty busy!), can you point me to any other resources that can help me achieve that? Thanks!
Very interesting and a whole lot more original!!! Subbed and liked.
Amaaaaazing 🎉🎉
Amazing stuff as usual, thank you for sharing!
Love it! Will try it this weekend. Thanks for the tutorial :). In your next tutorial on this you should show how to create a foil overlay on the card too.
*Wizards of the Coast taking notes*
nice one thanks for sharing
Mateo is a wizard
Mate, this is incredible... what about a new tutorial for this great trading cards stuff? Has anything changed in your workflows with the recent growth of Comfy?
oh yeah a lot changed... :D it's impossible to keep up
@@latentvision would you please please please bring a new tutorial about card creation with comfy? :')
Very smart and creative
Matteo, beautiful, thanks for the tutorial +1
amazing! thank you!
You're a wizard! :)
very good
Thank you for this tutorial - you are a great teacher - it was perfect for learning. It started with a simple idea, built upon established skills from previous videos, and contained lots of repetition to allow us to get the hang of how to do it on our own. Once we got to the icon, I paused the video to see if I could do it on my own before you explained it, and it felt like things were really clicking.
Question - when you were editing the plant mask at 2:39, how did you get your mask color to be transparent gray? Mine is black, which then makes it impossible to see over top of the black center of the image.
Thanks!
the mask thing is a new feature, you probably need to update comfyui
That worked. Thanks. @@latentvision
Your tutorials are amazing! I'm just having trouble with my Draw Text node: it says font "Undefined" and I don't know how to fix that, and I can't seem to use another text node because it doesn't have the mask output.
you are right, I'm sorry that is not documented. Put your fonts under ComfyUI/custom_nodes/ComfyUI_essentials/fonts
@@latentvision Thank you, you replied so fast haha, I just figured that out :), amazing workflow, you are a godsend to the space :)
@@latentvision Thanks Mateo, you are my ComfyUI hero, I'm learning a lot thanks to you. I'm trying to use this text node to print some of the parameters into the images, is there any way to do that? I already tried the %Load Checkpoint.ckpt_name% as an input but it is not working.
@@pressrender_ I'm sorry I'm not sure I understand, maybe try to ask in my discord
Hey Mateo, will you be coming out with an InstantID install guide and tutorial?? Love your videos, you are my go-to for everything Comfy.
Matteo really loved it😍😍😍😍😍. just imagining the variety of cards I can do.
What if I want to use my face in the cards? Please can you make that workflow 🥺
oh yeah that would be super simple, just add faceid to the main character generation!
Brilliant!
Hi, thanks for your fantastic tutorials. but HOW do you box select multiple filters? found nothing :(
Very enjoyable tutorial, thank you! Where can I find the Draw Text node? The only one I have is the CR Draw Text, and it doesn't have Mask output.
thanks! the Draw Text is in ComfyUI Essentials
Never mind, I just had to update your Essentials node pack.
Thank you for this tutorial. It is unique, and it gave me a new perspective on using ComfyUI. That being said, I couldn't find where to put fonts or how to install them. Is there a custom node for it? I'd appreciate it if you could explain. Thank you!
they are under custom_nodes/ComfyUI_essentials/fonts
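In case it helps anyone scripting their setup, here's a minimal sketch of that step in Python. It assumes ComfyUI lives at ~/ComfyUI and uses a hypothetical MyCardFont.ttf sitting next to the script, so adjust the paths to your own install:

```python
# Minimal sketch (assumptions: ComfyUI is installed at ~/ComfyUI, and
# MyCardFont.ttf is a hypothetical font file in the current working directory).
from pathlib import Path
import shutil

comfy_dir = Path.home() / "ComfyUI"
fonts_dir = comfy_dir / "custom_nodes" / "ComfyUI_essentials" / "fonts"
font_file = Path("MyCardFont.ttf")

fonts_dir.mkdir(parents=True, exist_ok=True)          # create the folder if it's missing
shutil.copy2(font_file, fonts_dir / font_file.name)   # drop the .ttf where Draw Text looks for fonts
print(f"Copied {font_file.name} to {fonts_dir}")
# Restart ComfyUI (or refresh the browser tab) so the Draw Text node picks up the new font.
```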
Hi Matteo, love your video and this tutorial is amazing, but I have a problem replicating it: when I use the Draw Text node, the updated one, it does not allow me to use a transparent background, and if I put everything to #00000000 (text and background) it doesn't show anything. What can I do? It gives me an error if I put a non-existent color in the background. Thanks for the answer, love this type of video :D
Tried to use this flow on a 4GB card and it puttered out, I'm afraid, at the GroundingDINO segment node for the shield (said it ran out of memory), though it had made it through the GroundingDINO for the paper okay. Until I can upgrade hardware, and for those stuck on lower-end cards, any advice or alternatives? Do I just need to break the flow out into separate pieces? Also, I'm not sure how to configure the font, as it just shows 'undefined' or null for me, but I'll look around.
you can try other segmentation models. A very light one is CLIPSeg, but there are many you can try
@@latentvision Well, I tried it this morning and it processed through okay. I found that I had imported the wrong image at your video timestamp 12:24 (I had imported the yellow graphic there by mistake instead of the circle with dots; that's what I get for not just following the video, too excited and trying to rush ahead haha), so I don't know if that was part of the issue. Thanks for this fun flow and video.
How much VRAM do you have? Having all the segmenters and ControlNets and upscalers and IPAdapters within a single workflow causes me to just run out of memory (T4). I know I can and should save each resulting image separately and compose them in a separate workflow (perhaps even not using ComfyUI), but I wonder how much VRAM you need to run all this.
24GB. there are more efficient segmentation models now anyway
Matteo, you are the best ever. Please, I am struggling to get good results for my product photos using IPAdapter or instant LoRAs. Even with the background-changing workflows, it adds stuff around the product and it's hard to upscale. Is it possible in Comfy to get good product photography for my products?
yes, of course it's possible! It's not something I can explain in a YT comment though :(
@@latentvision Can you make a video on this?? It would be very useful and unique. I follow all the AI creators; nobody makes such content on product or fashion photography uses of AI.
Great project!
How do you place the parchment paper inside the climbing plant frame, with the frame overlapping the paper???
AMAZINGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGGG
I have the Draw Text node; however, there are no fonts. How does one add new fonts?
you're awesome
I tried to run this and get "Error occurred when executing KSampler: 'NoneType' object has no attribute 'shape'" message... any idea what the cause might be? I got all the png files loaded, got all the models in the right folders... everything updated....
I think the issue is the OpenPose ControlNet model... if I choose "t2iadapter_openpose_sd14v1.pth" it works... but with openposexl2.safetensors it doesn't...
Do you know why "draw text" would not be defined for me? I can't get to the font
update the extension. Inside the extension directory there's a "fonts" folder. Put your fonts there!
Really amazing what you do Mateo👍👍. Thank you for this vid 👍.
I have a question that hopefully someone can answer:
I just upgraded to a new laptop with an RTX 4090 16GB. Now my ComfyUI monitor (Crystools node) shows VRAM usage at 99% and GPU usage at 0%.
1) is this how it's supposed to be?
2) if not, what can I do to have the GPU run as well?
Thanks to anyone that can help me with this.
Thanks for sharing. How did you get so good at ComfyUI? It seems like you've been doing this for years already :o
at one point it just "clicks" in your head... comfyui is like pointers in C++, when you get it, it's like magic
How can I get the Draw Text node?
In my case the node was not missing but until I added the fonts there was a red outline (just in case this helps?)
How do I add the actual fonts? @@bwheldale
How can you import new fonts? @@bwheldale
Hi! What node should we use to replace the Apply IPAdapter?
I tried using the new nodes, "IPAdapter" and even "IPAdapter Advanced", as instructed in your "IPAdapter v2 video", but it doesn't seem to work. It literally seems like the workflow is ignoring the IPAdapter: I change the weight_type or the preset with the same seed to try and test things out, and it doesn't even generate a different image. Please help? :( @Latent Vision
IPAdapter Advanced
Tutorial for handling breaking changes due to the big IPAdapter update -> ua-cam.com/video/oC_BDjbw9Jo/v-deo.html
Can you show us how to automate the image creation with some rules?
I want to make some cards. I want to bring in images and text (from a dataset, folder, or txt...), click a button, and just let it do its thing.
If that's possible, I would love to learn :)
yeah that's an interesting topic, maybe I'll do a PILL about it, thanks for the suggestion
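Until that PILL exists, here's one rough way to sketch it: export the card workflow with "Save (API Format)" and queue it once per row of a spreadsheet through ComfyUI's local /prompt API. Everything below (card_workflow_api.json, cards.csv, the node ids "12" and "27") is a placeholder, not the actual workflow from the video:

```python
# Rough sketch: queue one card per CSV row via ComfyUI's /prompt endpoint.
# Assumptions: card_workflow_api.json was exported with "Save (API Format)",
# cards.csv has "name" and "prompt" columns, and node ids "12"/"27" are placeholders.
import copy
import csv
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"   # default local ComfyUI server

with open("card_workflow_api.json") as f:
    base_workflow = json.load(f)             # workflow graph in API format

with open("cards.csv", newline="") as f:
    for row in csv.DictReader(f):
        wf = copy.deepcopy(base_workflow)
        wf["12"]["inputs"]["text"] = row["prompt"]   # positive prompt node (placeholder id)
        wf["27"]["inputs"]["text"] = row["name"]     # Draw Text node for the card title (placeholder id)
        payload = json.dumps({"prompt": wf}).encode("utf-8")
        req = urllib.request.Request(COMFY_URL, data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:    # each POST queues one generation
            print(row["name"], "queued:", resp.status)
```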
Hi handsome ❤
Great tutorial, your skill is extraordinary! If possible, can someone talk me through fixing my 'Draw Text' node? It appears with a red border and I have been unable to fix it. TIA 😍
you just need to upgrade the extension (comfyui essentials)
Thanks for your reply. I uninstalled, upgraded, and reinstalled to no effect; turns out I needed a font in the node folder (I had put one there, I just never noticed there were two needed). @@latentvision
How do you duplicate nodes while maintaining their connections?
CTRL+SHIT+V (check my basics tutorial)
@@latentvision haha! Yours is better Maestro… especially without the F 😂
Can this workflow be used with SDXL?
I guess so, yeah, apart for the animation of course. It will require some minimal refactoring though. At the very least in the upscaling
But can I do everything else with SDXL and just the animation with 1.5? @@latentvision
Anyone know why he used canny for the controlnet for the parchment paper as opposed to lineart?
they would both work, canny is faster and in this case more than enough
How wonderful
Nice... You should make a (paid) video course on all of this...
Great stuff, but the Discord invites are invalid
mh weird, seems to be working. try latent.vision/discord
5555 the original is Magic: The Gathering? Haven't seen that in a while 🙂
tell me about it 😄
Ok. On another note. Did you know that the subreddit r/comfyui allows bullying?
sorry what?
fuckin amazing