ComfyUI Fundamentals - Masking - Inpainting
- Published 16 Oct 2024
- A series of tutorials about fundamental comfyUI skills
This tutorial covers masking, inpainting and image manipulation.
Discord:
Join the community, friendly people, advice and even 1 on 1 tutoring is available.
/ discord
Workflow: drive.google.c...
Excellent tutorial, thank you! I've learned a lot from this series. "Set Latent Noise mask" is a revelation. I would never have thought to use that instead of the default.
I love the way you explain, you start from the basics and add complexity as you explain in detail. It is also noteworthy the neatness in the nodes, THANK YOU!
You are a life saver, I kept trying and trying and messing around with the mask, and it turns out what I was missing was using a second load image for the mask. That set me back for so long.
Happy to help, once you master masking you can do almost anything in comfyUI
There's a cool technique I recently found that's not in this tutorial.
If you make an image out of three colors (red, green and blue) you can use that single image to make three different masks, using the Image to Mask node and setting the appropriate channel. :D
Despite it being a nightmare to install the nodes today for some reason (it doesn't want to install automatically through the ComfyUI manager, and the written instructions aren't great), once I finally got it loading, this is absolutely amazing and trounces other ways of masking, positioning, and inpainting. Thanks a lot for the heads up about these very cool nodes!
Wow, I would never have figured that out myself. Thanks!
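To illustrate the trick: here is a rough plain-Python sketch of the channel extraction (assuming the Image to Mask node simply takes the chosen channel's 0.0-1.0 values as the mask; pixel tuples stand in for ComfyUI's tensors).

```python
def channel_to_mask(image, channel):
    """Pull one color channel out of an RGB image as a mask.

    `image` is rows of (r, g, b) floats in 0.0-1.0; the selected
    channel's values become the mask, so a pure-red region gives a
    solid mask on the "red" channel and nothing on the others.
    """
    idx = {"red": 0, "green": 1, "blue": 2}[channel]
    return [[pixel[idx] for pixel in row] for row in image]

# A tiny 1x3 image: one pure red, one pure green, one pure blue pixel.
img = [[(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]]
red = channel_to_mask(img, "red")      # masks only the first pixel
green = channel_to_mask(img, "green")  # masks only the second pixel
blue = channel_to_mask(img, "blue")    # masks only the third pixel
```

So one painted image really does carry three independent masks, one per channel.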
This is the one! Seen a few tutorials here and there and read a lot of Reddit posts that didn’t make a whole lot of sense but this is solid! Thank you!
Happy to help :D
Thanks for the video, you really helped me understand the different approaches and the pros/cons! I'm going to watch the Masquerade video now
Glad it was helpful :D
Thank you for this. This was a great tutorial.
The best tutorial on this topic. Bravo and thank you !
This is exactly what I was looking for. Thank you!
Quick thing I found out: "IMAGE" inputs/outputs only have R, G and B channels, while "MASK" only works with alpha (or, at least, has a single channel).
So "Load Image" has two outputs: "Image" (RGB) and "Mask" (A); it splits the channels that way. So if you try to convert the "Load Image" output to a mask as alpha, you get an error.
Following that reasoning, nodes like "Mix Color By Mask" use "image" as input (and not mask), so we have more freedom. They also only have r, g and b options and not alpha, because "image" data in ComfyUI doesn't carry it; on the other hand, if they used "mask" they would need a mask (one channel only) already processed beforehand.
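The split described above can be sketched in plain Python. This is only a rough model of what Load Image appears to do (I believe ComfyUI also inverts the alpha, so transparent/erased pixels become the masked area, but treat that detail as an assumption):

```python
def split_rgba(pixels):
    """Split RGBA pixels into an RGB "IMAGE" and a single-channel "MASK".

    The RGB channels become the IMAGE output and the alpha channel
    becomes the MASK output; the IMAGE side keeps no alpha, which is
    why IMAGE data in ComfyUI has only R, G and B.
    Values are floats in 0.0-1.0.
    """
    image = [(r, g, b) for (r, g, b, a) in pixels]
    # Assumed inversion: fully transparent (a=0.0) pixels become 1.0
    # in the mask, i.e. the area to be inpainted.
    mask = [1.0 - a for (_, _, _, a) in pixels]
    return image, mask

# Two pixels: opaque red, fully transparent white.
pixels = [(1.0, 0.0, 0.0, 1.0), (1.0, 1.0, 1.0, 0.0)]
image, mask = split_rgba(pixels)
```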
That different node to set a latent noise mask is a gem. I wasn't happy with ComfyUI inpainting, but that should improve the results.
However, I still think inpainting in A1111 is better.
I'd love it if there were more tools in the ComfyUI inpainting window, like brush blur, transparency and such. I love EasyDiffusion for that reason.
I agree, so I use ComfyUI for faster generation and then A1111 for inpainting.
At the 17:04 mark, you were about to explain how to paste characters from different renders into one picture. I wish you would have shown examples! If you ever find the time, a demo of that would be great. Thank you!
I'll be making a tutorial on 'compositing' for that process, I think. However, that doesn't help you right now.
Grab the Masquerade nodes pack and use Crop by Mask and Paste by Mask; there are other tools which can be used in conjunction with these to direct where to paste masked stuff.
Hopefully that's enough to work with for you to figure it out.
@@ferniclestix Thank you; I'll experiment with that for now! I'm eagerly anticipating the new tutorial. I truly love the content you've been releasing. You're among the few teachers who can be technical without being overly complicated. Kudos for consistently releasing tutorials, especially given the fast pace of updates; it must be challenging to keep up. So, thanks once more! 🤝
Thank you!
Very useful!
Also thanks for including the json file!
What if I want to edit the image I already inpainted, how do I do that? For example, the image you have at the end: what if I want to change that one, maybe change the nest, and keep the dragon? How can I do this? It doesn't work for me.
You would want to use image load nodes to re-run it, maybe, or add some more samplers and masks. You can pretty much extend the workflow by building it out and connecting the different outputs.
It helps if you build the workflow yourself, but basically you can bring in a finished image using 'image load', then do a VAE encode and send this to your samplers instead of 'empty latent'.
The VAE inpaint was screwing me up. Thank you for showing us the right way!
Well yes, I made that mistake too, then I set up a controlnet for inpainting, and that didn't work, and then I found your video!
Thank you, liked and subscribed.
:D happy to help
I just realized that "pipes" like kpipeloader, which I just saw on the Impact Pack custom nodes tutorial page, might run your bus a little more efficiently for you. And thanks for these vids, they help a lot!
Yes, Impact Pack's pipes are cool! However, I find using a split-up bus more flexible for my purposes... plus it's easier to make a tutorial where people can see all the things I'm doing. If I fill my workflow with custom nodes, it creates a barrier to learning, so I generally avoid too many custom nodes if I can :) Thanks for the advice though! There are some amazing nodes in Impact Pack.
@@ferniclestix I figured you had a special reason, lol. I'm still very new obviously, but now I can inpaint in comfy! Thanks again! 🖌
Wow, thx for the correct method for inpainting in comfyui
I love how we went from "here's some text, go figure it out" to planning and customizing processes used to optimize the whole pipeline. One could almost say it feels like Factorio from the future, which I love! The idea behind it is already a good base to build on: stuff like native Linux support, modularity, and people being able to mod the software to their own liking. It builds community. Where we were sharing blueprints in Factorio, we're now sharing PNGs to make images. What a wonderful time to be alive!
only until the corps find a way of ruining it. screeeeeeee, capitalism screeee! :P
Thanks again. Your ComfyUI tutorial series is very informative and valuable.
I'm trying to figure out how to use the 'Mix Color by Mask' node, but I'm having some trouble locating it. I've searched in the manager tab, but can't find anything with that name. Any guidance on where I can find this node or how to use it would be greatly appreciated!
You would need the Masquerade node pack, which this node belongs to, in order to use it.
Very instructive video! Does ComfyUI have anything similar to "Inpaint Conditional Mask Strength"?
I'm unfamiliar with what you're referring to.
For most things in A1111 there are similar things in ComfyUI; however, ComfyUI's inpainting works differently than A1111's, so they don't really behave the same way.
Another life-saver! Thanks so much!
I have downloaded your workflow; it's very useful to me, many thanks. Will you make a hand-fixing tutorial? It's very hard for me to fix hands. Please arrange a hand-fixing tutorial, thank you again.
Try the face restore tutorial; one of the nodes there can do hands, although you may have to use clipseg to find the hands.
This was literally the best tutorial. Thank you
great tutorial, clipseg looks like a useful node, thank you
I'll be doing an updated tutorial using something better than clipseg soon :D
I look forward to it@@ferniclestix
I can't find "Mix Color by Mask". Where can I find it? Thanks for making this great video series.
It belongs to the Masquerade node pack.
@@ferniclestix Thanks
Excellent tutorial, thanks! Sadly, "Image Blend by Mask" is missing in my ComfyUI... I have searched for it on the web, and in the manager, with no luck... Where could I find it? Thanks again!
I'm fairly sure it is a WAS suite node.
Great video. Thanks a lot.
3:25 Because it's not a mask in a pixel image format (blue). It's only the values of the alpha channel (green), I think.
Alpha channels are still stored as a black-and-white pixel image; it's black and white with no RGB, which is the difference.
You should be able to Ctrl-Shift-V and paste with wiring intact for your bus.
Doesn't work on reroute nodes, unless there has been a recent update I'm unaware of.
Ah, you're right, they patched it, nice!
I'm trying to do a group photo using roop, and I was able to draw my friend's face, but I am having trouble drawing a long beard on one guy without it targeting the wrong person. Any advice on how I could fix that?
Reactor lets you pick which face to replace; it's 'roop-like'. I think I cover it in the face restoration tutorial.
Wow great video very helpful!
Very useful! Thank you.
Thanks for this. Can you explain the setup in comfyui for inpainting models?
Inpainting model setups just use the inpainting model in the checkpoint loader, really. You could use VAE for inpainting or Set Latent Noise Mask; it's really up to you. Generally I find inpainting models less useful than normal ones, but it really depends what I'm doing.
I keep trying to incorporate this into an img2img workflow (so no starting ksampler; an image loader and a mask loader instead) but the results aren't coming out at all. I can tell it's trying to affect the masked area, but it's doing so in undesirable ways; it doesn't seem to take into account the big picture, which you talk about and provide a solution for in a non-img2img way. So I'm not sure why it doesn't work in img2img (or I've probably got something wrong in my flow).
Make sure you are masking correctly and that everything is plugged in correctly.
For the most part, img2img is no different from standard, but you have to make sure you treat it as the first step in the workflow (the loaded image should replace the starting sampler at 1.0 denoise).
@@ferniclestix So blessed to get a reply! So far it's been interesting to compare VAE inpainting vs Set Latent Noise. I've got a flow that runs both options in parallel with the same input image and mask to compare; sometimes they're very close, and sometimes they're not, and so far I don't know of a pattern; it just depends on the seed and other variables, I guess. I need to review your vids some more around cleaning up images. Sometimes the masking lines are noticeable even though the generation is good and just needs cleaning up, and I believe you mention being able to do that by sending it back through another ksampler or similar. I just need more time to dive into the videos and play around.
I probably should do an in-depth on getting good results from inpainting, which is kind of a skill you have to learn. But it's really dependent on your method of approach, and with so many different ways of doing it... kinda hard to tailor a good tutorial for it.
@@ferniclestix Understandable, we don't have time to do everything. Thanks for what you can give time to.
Really useful ! Thanks
Very helpful, thanks! Unfortunately, I cannot find the Clipseg node in the search window after I installed it with the manager, which shows that it is installed. Do you have any clue why?
Unfortunately, I cannot help with debugging nodes.
My advice: head to the GitHub page of the node in question and make a bug report.
I would love to help, but there are simply too many different possible setups for a ComfyUI install, and as a result I'm not really able to devote time to this on top of my work and tutorials.
About clipseg though: like some other nodes, it relies on external modules to do its magic, and when those are broken it will often break the node. Recently, I think the clipseg and blip implementations have been a little glitchy.
Second possible issue:
make sure you restart the server after an install and reload your ComfyUI tab, or it won't pick up new nodes.
@@ferniclestix Thanks for your suggestions! Today, after uninstalling and reinstalling it several times over some days, clipseg seems to show up in the search bar... perhaps they fixed something.
Yes, there are people working behind the scenes all over the place to get these kinds of things sorted out. It's a good idea to keep an eye on the GitHub pages of nodes you use, or better yet, find one of the places where all of us AI artists hang out and chat. The ComfyUI reddit is a great place to get info.
Finally, someone who addresses these topics! Thank you! For some reason, this inpainting process has the same issues for me as you had with VAEEncodeForInpaint. In your example, VAEEncodeForInpaint works surprisingly well anyway. I still get inpaint areas where the subject of the new prompt is not even visible with the Latent Noise Mask, at least when the new subject has a much different size (like mouse vs. elephant). I feel like I'd have to crop & upscale the masked area first and then put it back into position (with something like WAS nodes). However, I haven't figured out yet how to do this with latents only.
With latent noise mask, if you lower the denoise amount, your result should conform more to the original and becomes more likely to fit within the masked area.
If you use vaeencodeforinpaint, it gets greyer and greyer the lower the denoise.
Latents are bad for image-type processes like cropping and such; it's better to go to images for that.
nice tutorial thank you
Great tutorial, but I can't find the CLIPseg node. Is this a custom install? Where can I find it?
Oops, just got to the end of the video and found my answer... great tutorial, thank you!
:D Must have missed that issue; I try to mention important nodes near the start. Eh, I'll do better in future :)
This is great but why can't I find a single tutorial/workflow for outpainting? Literally can find 0 discussions on ComfyUI + outpainting. Is it just as simple as adding extra whitespace to the image in Paint and painting a mask over that area? Because I tried that and it did nothing. Please help.
I mean, you would crop your image onto a larger canvas, then mask the white space and sample it using inpainting... basically. This assumes you don't have an outpainting node or special workflow.
I may cover this in my compositing tutorial if I have time; currently it's going to be over 20 minutes.
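The manual approach described above can be sketched like this (a minimal grayscale sketch, assuming the usual convention that mask 1.0 = repaint and 0.0 = keep; real workflows would do the padding in an image editor or with a pad node):

```python
def pad_for_outpaint(image, pad, fill=0.5):
    """Grow a grayscale image's canvas and build the matching mask.

    The new border is filled with a neutral value and only the border
    is masked, so an inpainting sampler repaints just that area while
    the original pixels stay put.
    """
    h, w = len(image), len(image[0])
    new_w = w + 2 * pad
    padded, mask = [], []
    for row in range(h + 2 * pad):
        inside_rows = pad <= row < pad + h
        img_row, mask_row = [], []
        for col in range(new_w):
            inside = inside_rows and pad <= col < pad + w
            img_row.append(image[row - pad][col - pad] if inside else fill)
            mask_row.append(0.0 if inside else 1.0)
        padded.append(img_row)
        mask.append(mask_row)
    return padded, mask

img = [[1.0]]  # a single white pixel
padded, mask = pad_for_outpaint(img, pad=1)
# padded is now 3x3 with the original pixel centered;
# mask is 1.0 everywhere except that center pixel.
```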
Thank you very much for this tutorial :)
How are you able to see the rendering going on in the KSampler? Mine doesn't do that at all. I am running super low VRAM; could this be the reason?
ua-cam.com/video/hdWQhb98M2s/v-deo.html from the basic introduction tutorial
You are amazing THANK YOU SO MUCH!!!!!! WORKS PERFECTLY@@ferniclestix
Thank you so much!!
What ksampler do you use in order to have an output while generating?
It's a command-line argument: open the bat file that starts ComfyUI and add --preview-method auto to the end of your command line. Restart the server and now your samplers will have a preview.
@@ferniclestix thanks
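For reference, the edit is just appending the flag to whatever launch command your bat file (or shell script) already uses; the file name and plain `python main.py` invocation below are illustrative, yours may differ:

```shell
# run_comfyui.bat / run_comfyui.sh (name illustrative)
python main.py --preview-method auto
```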
Wow, how did you discover such a useful and important trick about the Set Latent Noise Mask node? I have a few questions:
- How did you make the KSampler node show preview images like that? I can only use Save/Preview Image nodes to see the final image.
- How do I invert a mask? For example, I want to keep subject 1 and change everything else.
Thank you very much for sharing.
There is an invert node in... WAS suite? It lets you invert an image; you can convert a mask to an image, invert it, then plug it in again and it will do the opposite of what was masked.
Alternatively, using the word "background" in clipseg can be successful.
how to do live previews: ua-cam.com/video/hdWQhb98M2s/v-deo.html
@@ferniclestix Nice thank you again I should try them asap.
@@ferniclestix I just found out that there are Invert Image Node and Invert Mask Node and they are working great (I think that they are default nodes of ComfyUI). Thank you very much.
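The inversion itself is simple to picture; here is a tiny sketch of what an Invert Mask node presumably does with the 0.0-1.0 mask values:

```python
def invert_mask(mask):
    """Invert a mask so everything outside the original selection is
    selected instead: masked areas (1.0) become unmasked (0.0) and
    vice versa. Partial (greyscale) values flip the same way.
    """
    return [[1.0 - v for v in row] for row in mask]

mask = [[1.0, 0.0], [0.25, 0.75]]
inverted = invert_mask(mask)  # -> [[0.0, 1.0], [0.75, 0.25]]
```

So "keep subject 1, change everything else" is just: mask subject 1, then invert before feeding the mask onward.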
Inpainting video for hands, please. How can I use "Set Latent Noise Mask" if it needs a "samples" input? I mean, the normal way would be to input an image with a drawn mask on it that goes straight to a VAE Encode (for Inpainting), which accepts pixels/vae/mask inputs. Do I have to redo the whole image process with a fixed seed and feed that into a Set Latent Noise Mask node?
I'd completely remove the 'VAE Encode (for Inpainting)' node, because it's not needed and not fit for purpose. Instead, just use the Set Latent Noise Mask node.
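Conceptually, what the noise mask buys you can be sketched as a per-element blend (a simplified mental model, not ComfyUI's actual implementation; real latents are multi-channel tensors and the node itself just attaches the mask for the sampler to use):

```python
def apply_noise_mask(original, denoised, mask):
    """Blend per element: keep the original latent where the mask is
    0.0 and take the freshly denoised latent where the mask is 1.0.
    Greyscale mask values mix the two proportionally.
    """
    return [d * m + o * (1.0 - m) for o, d, m in zip(original, denoised, mask)]

orig = [0.5, 0.5, 0.5]   # original image's latent values
deno = [1.0, 1.0, 1.0]   # freshly denoised values
mask = [0.0, 1.0, 0.5]   # keep / repaint / half-blend
blended = apply_noise_mask(orig, deno, mask)
```

This is why the unmasked region survives untouched while only the masked region gets regenerated.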
thanks for workflow
Enjoy
Great tutorial, thank you.
I've downloaded the workflow and I am following the tutorial, but I can't seem to get the workflow to actually inpaint; it just renders a new image every time. No clue why.
Check the mode below the seed on your sampler; you'll want to run the one you are inpainting on as fixed, so that the mask doesn't have to be re-built every time.
First of all, thanks for the tutorial. I have the same problem: the ksampler is set to fixed and it still renders a new image every time @@ferniclestix
Huh, that's strange. Are you sure it's plugged into the right places? I may take a while to respond; youtube doesn't like showing comments in the depths of other comments :P If you have more questions, post a fresh comment; I'm more likely to spot it.
Is there any tutorial on inpainting that can soften the mask to make the transition between the masked area and the background seamless? I find the transition between the mask and the outside is hard.
I've found fixes for this kind of stuff, but usually it's related to the models/inpaint method being used. You can do some things like blurring the mask to try and smooth the edges, but this can be unreliable with certain inpaint methods.
@@ferniclestix Thank you
Differential inpainting, which has just been released, may do a great job; I'm going to try it.
The tutorial is just so awesome.
What kind of change would you recommend if I wanted to inpaint using controlnet?
I haven't made much use of controlnet so far in ComfyUI; I used it plenty in A1111 though.
I think because controlnet acts on the conditioning of the image, you should be able to use it in conjunction with this without any problem. Just make sure it's applying to the ksampler and it will probably influence the output correctly.
@@ferniclestix Hello, here is an issue that I am facing. You can use this workflow when you are directly feeding the latent from a KSampler to the Set Latent Noise Mask node.
But if I am using a pre-existing image and I plug it in using VAE Encode, then it's not working at all; it just gives me the same image back. What may be the cause?
www.reddit.com/r/comfyui/comments/15ldwds/can_anyone_help_me_with_this/ I was helping someone with similar issues; have a look in there and you may find a solution.
@@ferniclestix 😂😂 I am actually the guy named darkmeme 9. I accidentally replied to your post. Also, I have no issue with image-to-image, but the moment I use inpainting in it is when the issue happens.
Could you do an in-depth tutorial on clipseg? Every time I use it with cut/paste-by-mask, it creates this fuzzy mask that pastes with non-100% opacity, making the thing I cut out with clipseg look "ghostly", which I don't want, and I'm struggling to figure out how to fix it.
I mean, with clipseg there's not a huge amount there: put in a keyword, set the sensitivity settings, and output a mask. My advice: pull preview nodes from all your clipseg outputs and see if they look unusual, off-colored and such.
It could also be a downstream issue somewhere, not related to clipseg.
If you want to get in touch via reddit, I'll take a look at your workflow.
Many thanks!!😀
Very well done, thank you.
This works very nicely! But it is also quite slow for larger images, even when the mask is very small. I suspect that it denoises the full image and then leaves only the mask. Is there a way to constrain denoising only to the masked area, plus some padding for additional info (like it can be done in a1111)? I imagine facedetailer nodes do something like that, because they operate much faster with smaller masks.
I think I use the Impact pack detailer in my inpainting for artists video which just denoises a selected area.
Great vids you have here, help a lot , thanks!
Just wondering if you can do a video about Roop / face swap in ComfyUI.
This will be the topic of my next video, although I haven't tried roop specifically, so I'll have to do some more research.
It'll probably take me a couple of days.
Annnnd just finished it: ua-cam.com/video/FShlpMxbU0E/v-deo.html
Been watching it, thank you very much, really appreciate it. Will wait for more great videos 🤟🤟🤟
Looks good, but in your shared workflow I don't have any image in my ksampler like in your video.
ua-cam.com/video/hdWQhb98M2s/v-deo.html I show how to set that up in this tutorial
I am having trouble finding where to put the masking.json. I usually use the PNG images to load workflows.
Click Load in the ComfyUI interface and go find the masking.json.
Thank you !@@ferniclestix
thank you
Thanks for addressing this. But how can I inpaint using a latent noise mask on a PNG I created earlier?
Image Load to load the PNG, VAE Encode to latent, Set Latent Noise Mask, then the sampler. Easy.
@@ferniclestix alright, thanks!
How to inpaint at full resolution?
Hopefully the reply on reddit makes sense. I'm working on an example image atm to show how it might be achieved.
What GPU are you using? Seems like fast prompts.
A 2080 with 8GB, plus 42GB of system RAM. The graphics card is dedicated to SD, as there is no monitor plugged in and it isn't being used by Windows; this makes it quite fast. Additionally, these are 512x512 images and there is no upscaling.
Thank you, your tutorials are actually the best. I'm trying to put something into an image but it doesn't seem to be working. I'd love a tutorial on an img2img masking sort of thing, like putting a dragon in my backyard for example 🤣
I cover img2img masking in the compositing video and artist inpainting video :D hope they can help.
Yes thank you I watched it and achieved what I wanted :)
Thanks a Lot!
How to upload my photo to this workflow?
Use an Image Load node to replace the first sampler, basically.
Thank you so much, this is incredibly helpful. I don't understand why they did not take the open-source code from Blender's nodes, with years of polish behind it and tons of functionality and extensions available for it. Instead there is this absurdly unconventional interaction (ctrl to select and shift to drag? Really? Wow guys!..) I'm amazed at the utter lack of documentation in this age of writing documentation...
There's so much I admire and respect about StabilityAI, and then I still can't see the full file name of a file on huggingface on mobile? Come on! There are paid professionals with budgets and timelines behind these tools, right? I don't understand why I can't even find a document explaining how to code a node for ComfyUI. Honestly quite absurd.
I get that this is early development, but these seem like very strange priorities.
I don't do coding, but I believe I've seen someone reference a node template, if that helps; no idea what it is though. Someone was using ChatGPT to make nodes.
As for the Blender nodes: ComfyUI is mainly made in Python, not C (as Blender is), so it'd be difficult to take and/or use code from Blender itself. But I do agree Blender's nodes are better, and ComfyUI could definitely have taken more inspiration from them, because everything in Comfy just feels messy. Reroute nodes are awful, IMO.
i love you
We miss your videos
You're pretty bad at baby dragons :D... but your tutorial is very well explained and interesting. Thanks a lot !
informative!
Wow!...all that for inpaint? Insane but nonetheless great tutorial, keep up the good work.
Dude your audio is too low even on full volume without headphones. Please up them decibels so i hear it on speakers.
I'll see what I can do; I've got a microphone on order. I'd like to add, it's as loud as it goes lol.
Do you know how to avoid artifacts being created around a masked-out subject, let's say when inpainting a background? For me, VAE Encode for Inpaint is working but adding artifacts, while SetLatentNoiseMask is not really inpainting much of the background. Maybe too little noise?
For me it's usually a matter of using Set Latent Noise Mask and setting the denoise lower. Also, making use of greyscale masking can really help; unfortunately, ComfyUI uses binary masking, which is what causes the artifacting issues.
I'll look into greyscale masking. Right now I am trying to generate backgrounds around a masked-out subject. I think SetLatentNoiseMask is struggling to have enough noise to inpaint the larger area. I also tried injecting more noise, but still not much luck. VAE Encode for Inpaint has enough noise for larger areas but definitely too many artifacts. Would you suggest A1111 for this type of inpainting? What are the fundamental differences? @@ferniclestix
ComfyUI is fine.
The thing is, in ComfyUI you do have to do some extra steps after inpainting to fix the inpainted area; this usually involves a low-level denoising pass.
I've seen people attempt to denoise the area around the inpainted area and all kinds of stuff; you can get very complex. Generally speaking, though, a simple 0.10 denoise pass tends to fix it.