Your clever inpainting workflow is the best I have seen so far and produces excellent results. Great work, thanks for sharing!
thanks for your kind words 🙏😊
yes, I think what you said is great. I am using mimicpc which can also achieve such effect. You can try it for free. In comparison, I think the use process of mimicpc is more streamlined and friendly.
@@Huang-uj9rt and costs a lot of money as far as i saw
Every time I visit your videos, I get excited about your new well-rounded workflows!
thanks !! great to hear!
This was exactly the workflow I was looking for! Very good work, PixelEasel. Thank you so much for sharing! BR Five-Birds
I can't imagine a more complicated way of altering a label
🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣🤣
43 likes and no alternative links. Poser.
Yeah it’s easy to get carried away and make big workflows for tasks you could do with photoshop in five minutes!
You all just don’t understand the power of this setup 🤣
Amazing tutorial and great lessons for inpainting, thank you!
Why can't I change the text? It doesn't let me change the text prompt in U-NAI Get Text.
Well organized and presented! Thank you very much for your informative workflow and for always uploading it. Appreciated! 🙂
thanks for commenting!!!
such a great vid again! you go through the nodes so well.
thanks! It's such a great comment 😉
I don't know how you came up with this workflow, but it works so well. Even when using my LORAs it looks good, which failed on my previous attempts!
Nice ! I use ComfyUI-Inpaint-CropAndStitch , which reduces nodes quite massively.....
The speech synthesizer has come a long way. I remember a year ago the recap channels were unbearably bad. Great workflow too.
Thanks for this tutorial !
how do i use this for general inpainting and not specifically for text? struggling to find an inpainting workflow that doesnt affect the general image quality
What a legend! Amazing work, subbed!
Thanks for the great video! However, I encountered an issue while running your flow. The image isn't being combined, and the blue section's preview mask and compare image are both empty.
I have the same issue. The last part is not working.
I was able to get this workflow to complete by changing the two "Preview Bridge" nodes in the Composite Group. I changed the 'block' value from "if_empty_mask" to "never." Execution halts if the mask in the bridge is empty.
@@scebadoff Thank you! This solution is simple and amazing! You saved me some headache today
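If you'd rather patch the downloaded workflow file than click through both nodes, the same fix can be scripted. This is only a sketch, under the assumption that the exported workflow JSON stores each node's widget settings in a `widgets_values` list (the usual ComfyUI export layout) and that the node type is named `PreviewBridge` — check the names in your own file first:

```python
import json

def unblock_preview_bridges(path_in, path_out):
    """Rewrite every Preview Bridge node so it no longer blocks on an empty mask."""
    with open(path_in) as f:
        wf = json.load(f)
    for node in wf.get("nodes", []):
        # Assumed node type name; verify against your exported JSON.
        if node.get("type") == "PreviewBridge":
            node["widgets_values"] = [
                "never" if v == "if_empty_mask" else v
                for v in node.get("widgets_values", [])
            ]
    with open(path_out, "w") as f:
        json.dump(wf, f, indent=2)
```

Changing the widget in the ComfyUI interface works just as well; the script only saves re-clicking if you re-download the workflow.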
we want a breakdown for the workflow, step by step .. start from opening comfyui and go on!
I'll take it into consideration!
I think you are best to look for ComfyUI tutorial videos first. This is an advanced inpainting workflow and can seem complex for beginners. But once you wrap your head around how ComfyUI works, and how most people build their workflows, this workflow will make sense.
How do I install the nodes needed for this workflow? They show up red.
via comfyui manager
Great vid again! How do I resize with "resize and fill" mode like in Photoshop?
Hi, first of all, thank you very much for this mega tutorial. The results are really great right from the start and very cleverly structured.
I have a question, where exactly can I set it to create several examples of the inpainting at once, for example, currently it only creates one example based on the prompt.
And another question is, I'm trying to change the front view of a car by generating a new front grill and lights, but it always creates a strange logo on the front grill. Is there something like a negative prompt to avoid this?
Many thanks in advance
This is what we need! But it would be good to update, considering that controlnets have already appeared
Works nicely! So glad fp8 is good and more manageable ... Appreciate you, bro! Apologies if you're not a bro. But there's a good chance you are.
thanks! 50%, you are right
Amazing tutorial and great lesson for inpainting, thank you. How can I use this WF to change color of a specific object?
A huge thanks, you are fabulous!
Flu cream, does it help if you have the flu or give you the flu?
Hey, so that's really cool but some issues that I had is the final image has artifacts, such as an extra hand following the shape of the face where I painted the mask. What I suggest is adding a canny, depth or openpose to have more control
Absolutely. I mentioned it in the video... We'll wait for ControlNet to be released (at least in beta) and then add it
Awesome workflow, what about OutPaint?
working on it 💪
Hi, can you do the same video but for OUTPAINTING? I tried so many workflows that never worked well, and since I discovered your channel with that unique inpaint workflow which worked great, I wonder if you have such techniques for outpainting :) thanks in advance :)
This is great! Works like a charm. But how do I save the new image? lol
Great workflow! Thank you for your video. The only thing is that when I queue the prompt, the process freezes when it comes to the SAMModelLoader node. Any thoughts on that?
Maybe it's not the correct version, or the SAM model file is corrupt. Try to re-download it and update ComfyUI.
Do you know if it is possible to use the XLabs control nets alongside this technique?
Yeah, I wonder about that too. The additional ControlNet may require more VRAM, but it could get better results; I'm not sure though.
The seg2.1 broke a lot of workflows and yours was one.
Thanks for the video. Could this workflow be adapted for the checkpoint version of the Flux, while also running inside the portable version of ComfyUI (without the Manager component)? Right now with this workflow, I'm missing like 20+ nodes and most of the workflow is in red, so I don't even know where to start from to get it working..
did you try to update Comfy? it should help to find the missing nodes
@@PixelEasel Yeah, I'm doing that regularly, and I have the latest changes right now. The thing is that the ComfyUI portable version cannot even search available nodes, and when adding new ones I need to git clone into the custom_nodes folder each of them manually. So now I would have to search git repos for all these 20+ missing nodes, and clone them one by one. Which would be tedious and a no-go for me.
Don't ask why I downloaded the portable version :) Maybe because I just needed to extract the zip and run it. Anyways, waiting for the new revamped ComfyUI version, maybe something will be improved regarding this functionality. If you don't have any other idea..
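If manual cloning really is the only option, a small script can at least generate all the clone commands in one go from a list of repo URLs. A minimal sketch — the repo URL below is a hypothetical placeholder, and you would still need to look up the actual repositories for your missing node packs (a web search for each red node's name usually turns them up):

```python
def clone_commands(repo_urls, custom_nodes_dir="ComfyUI/custom_nodes"):
    """Build 'git clone' commands targeting the custom_nodes folder."""
    cmds = []
    for url in repo_urls:
        name = url.rstrip("/").split("/")[-1]  # use the repo name as the folder name
        cmds.append(f"git clone {url} {custom_nodes_dir}/{name}")
    return cmds

# Hypothetical example list; substitute the real repos for your missing nodes.
repos = ["https://github.com/example/some-missing-node-pack"]
for cmd in clone_commands(repos):
    print(cmd)  # review the commands, then run them in a shell
```

Printing the commands instead of running them keeps the script safe to try; once the list looks right, paste the output into a terminal.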
what if i want to change a bodypart like skin color, ?
Hello, Could you please help me understand how to use this workflow to change the hair color of an anime character? For example, let’s say I have a portrait of an anime character with green hair, and I want to change the hair color to red.
I’ve tried applying a mask to the image, focusing only on the hair area, but the result alters the shape of the hair without changing its color - it stays green. I also attempted masking both the hair and face, but while the face gets heavily redrawn, the hair remains green.
In the prompt, I always specified something like "a portrait of a woman with red hair," but the prompt seems to be ignored.
Can this workflow actually be used to change the color of various objects, or is it primarily designed for adding text to objects? Thank you!
Thanks for the tutorial, but I have an error when I run the workflow: "GroundingDinoSAMSegment (segment anything)
'Sam' object has no attribute 'image_size'". I don't understand why; I didn't touch anything.
How do you have the queue button and menu at the top, instead of the classic annoying right side?
update comfy. and in the manager settings, choose menu top
Please, can I run ComfyUI on my Android?
How did you manage to draw this beautiful "Love" in clouds? I tried many times and I always get really bad results. What is the magic prompt you used?
I just wrote: the word "love" written in clouds. If it doesn't work, try changing the seed and playing with the denoise.
@@PixelEasel thank you so much, i will give it a try.
You can do a video using and explaining magic clothing nodes ❤🙏🏻🙏🏻🙏🏻
Is this only for changing labels?
I made it work here and it only changes labels.
And how do you replace objects??? The text part is clear, but where do you select the object and where do you type it in?
Cool!
thanks!
Yes, I think Flux is awesome. I tried Stable Diffusion on Mimicpc, and of course this product also includes popular AI tools such as RVC, Fooocus, and others. I think it handles detail quite well too; I can't get away from detailing images in my profession, and this fulfills exactly what I need for my career.
Thanks for the video! In your workflow, I get an error on a node: SamplerCustomAdvanced - mat1 and mat2 shapes cannot be multiplied (3712x16 and 64x3072)
Do you know what causes this?
How do I install the missing nodes ?
im sayin! When loading the graph, the following node types were not found:
PreviewMask_
SAMModelLoader (segment anything)
GroundingDinoModelLoader (segment anything)
MultiplicationNode
DF_Text
GroundingDinoSAMSegment (segment anything)
Nodes that have failed to load will show as red on the graph.
@@mendthedivide Same!
@@mendthedivide For anyone that experienced the problem of missing nodes type, here is how I fixed it:
Manager > Custom Nodes Manager > Check Missing > install all the ones shown > restart > fixed
@@Thejasonshelby if only that worked, thanks anyways
@@Thejasonshelby Worked for me. Thanks!
Wow! This is a VERY professional Workflow - Subscribed :)
Quick question, kind of embarrassing: I've just started using it and I can't seem to change the prompt in your workflow. I'm stuck with "photo of the word "FLUX" on a white jar, high quality, detailed".
How can I change my prompt? Feel so dumb :p
I think it is described in the video... 🤔
Ended up bricking my ComfyUI install because of the modules installed; proceed with caution if you use ROCm with ComfyUI.
Edit: Had to reinstall PyTorch and a few other dependencies compatible with the PyTorch version I needed, and it appears to have fixed things.
How would I plug in a LoRA correctly? Because I get cutoff masks with LoRAs when I connect them.
Edit: this workflow works beautifully
At the beginning of the video you replaced something and drew a cat, but how do you do that when you're only replacing text here?)) What do I change so I can insert an object?
I can't change a blue dress to a red dress. What's wrong? I use Schnell.
Schnell is weaker in inpainting tasks, which is why I chose to use Dev instead.
Thank you 🙏
thx!
Sadly this workflow doesn't seem to work anymore, it outputs no image at the end. It still previews the corrected area but seems to fail for combining the two images back together. Otherwise neat idea for a complex inpainting workflow.
works fine for me
Works for me too, though at first it did look like it didn't do anything. Then I realized the preview is a compare preview, so I had to slide the mouse across to see before and after... maybe that was your issue too?
How do I install this? Please make a simple guide.
I really love your videos, they're amazing. I am learning a lot with you and I like the way you teach things, but please, please reduce the frequency of the "of course".
of course 😉
Thanks!😍
thx for commenting!
Please create a background change workflow with flux ❤
This workflow can do it, bro; just mask the background, then write a prompt of your imagination.
can you add a node for LoRa?
Yup, no problem. Just insert it (or them) between the UNET loader node and the three nodes it leads to.
can you upload to Replicate?
I need to remove things and/or send hands behind the body, but for some reason it's proving impossible. Could you help me please? :) Thanks.
When someone adds the ability to add and edit by layers, and creates a UI like Photoshop to do all these things ComfyUI can do, it’s going to put Photoshop outta business.
You can use Stable Diffusion, and I think ComfyUI and probably Flux, in Krita. Krita itself is like Photoshop, though without as many features. But you can do all of the things you mentioned and much, much more. I set it up once; it was interesting. But like most of this cutting-edge stuff, it's janky and a hassle, yet still freaking amazing.
A few years down the line will bring so much usability. ComfyUI is incredibly, unnecessarily janky, but it'll take what you mentioned and a lot of work to build up something much more accessible. It's definitely coming though.
Care to share the background song? ^^
Something broke; I spent a while trying to fix it but can't figure it out.
How to invert the mask?
In case anyone is looking for a solution: there is a mask invert node. It's very simple.
can you provide an image to copy your workflow? thanks
you have the link to the workflow in the description
The images still come out gummy...
It seems to be ignoring the mask I draw and does whatever it wants.
No way, I'm back to Fooocus.
good luck!
No matter what I change, all I get is a smudge
I've had the same trouble before, and it's really a bit new to this operation for newbies. So much so that I was a bit disappointed with this so-called new AI before I used mimicpc, but then after I experienced it for free in Mimicpc online, I started to fall madly in love with this technology and it has a huge
For anyone that experienced the problem of missing nodes type, here is how I fixed it:
Manager > Custom Nodes Manager > Check Missing > install all the ones shown > restart > fixed
Where is the Manager section in ComfyUI?? I'm sorry if the question is very basic, I'm new to this.
@@lexwimp It is an add-on that needs to be installed. Definitely worth getting.
Did anyone else figure out how to fix it ?
Please just use your own voice. AI voices are not as good as yours.
It gives me the following error: UNETLoader
'conv_in.weight'
What can I do?