EASY Inpainting in ComfyUI with SAM (Segment Anything) | Creative Workflow Tutorial
- Published 5 Feb 2025
- Ready to take your image editing skills to the next level? Join me on this journey as we uncover mind-blowing inpainting techniques you won't believe exist! Learn how to extract elements with surgical precision using Segment Anything, say goodbye to manual mask editing, and hello to cutting-edge technology from Meta.
** Links from the Video Tutorial **
Segment Anything Website: segment-anythi...
ComfyUI-Impact-Pack: github.com/ltd...
Interactive SAM: github.com/ltd...
Workflow: www.patreon.co...
** Let me be EXTREMELY clear: I don't want you to feel obligated to join my Patreon just to access this workflow. My Patreon is there for those who genuinely want to support my work. If you're interested in the workflow, feel free to watch the video - it's not that long, I promise! 🙏
❤️❤️❤️Support Links❤️❤️❤️
Patreon: / dreamingaichannel
Buy Me a Coffee ☕: ko-fi.com/C0C0...
When you are so addicted to ComfyUI that you forget you're watching a YouTube video and try to click and drag the page down to the comment section... 🤦♂
This looks great! I will come back when I am ready to fully embrace ComfyUI. Invoke has nodes now, so I'm uncommitted. And thank you for NOT adding music to your videos while the instruction is happening.
Awesome video. No filler, just the goods.
I love this guy. Thank you for starting from blank.
Great walkthrough. But instead of just 1 result, how would you inpaint with a batch result of multiple images?
Very clear. Thank you for sharing. :O)
Great tutorial, simple and easy to understand, thank you so much!
I can't find the first version of SAM anymore (which is way simpler to use)... only V2 now... Any idea where I could find it?
Hi, thank you, it is a nice tutorial. I would like to use inpainting to show a model/character showcasing a product in hand or some other way. I look forward to trying this; doing it in a video would also be excellent.
Good clip, thanks. It's funny that the Load Image node is not to be found under Loaders if you don't have that node search plugin installed...
I played around successfully with CLIPSeg's txt2mask; your approach looks much more interesting.
Thank you. Is it possible to apply a consistent character created with IPAdapter and ControlNet instead of a text prompt in inpainting?
This is going to be so useful. Thank you! 😊
Thank you!❤️
When I use the SAM detector it doesn't detect anything. Any suggestions?
What about using the VAE encoder for inpainting?
Nice and simple, thanks. Subscribed. Could you explain how to save this workflow so I can pull it out every time it's needed, please? And any other simple additional workflows such as ControlNet, image-to-image, etc.? Thanks, and keep them coming.
Thanks! Do you mean the templates? Just press CTRL and select all the nodes you need with the mouse, then right-click in an empty space of the board and select "Save Selected as Template". You can then recall those nodes any time by right-clicking and going to "Node Templates" -> "name of your template".
@@DreamingAIChannel thank you. I'll give it a go.
Any image you create using the workflow will have the complete workflow layout in its metadata, so you can just drag and drop the image from a folder into ComfyUI and everything will be exactly as it was when you hit Queue Prompt.
You can create an image with this specifically in mind using the most basic settings (so that you won't have to overwrite a bunch of stuff every time you want to use it) and keep it in a folder somewhere to pull out at a moment's notice! (See the extraction sketch just below this thread.)
@@helloofthebeach super useful thanks
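If you want to pull that embedded workflow out of a PNG programmatically rather than by drag-and-drop, here's a minimal Python sketch. It assumes the standard ComfyUI behavior of storing the graph as JSON in the PNG's text chunks, under "workflow" (the full UI graph) and "prompt" (the API-format graph); the filename is just an example.

```python
import json
from PIL import Image  # pip install pillow

def extract_workflow(png_path):
    """Read the ComfyUI graph embedded in a generated PNG."""
    info = Image.open(png_path).info   # PNG text chunks land in .info
    workflow = info.get("workflow")    # full UI graph (what you drop on the canvas)
    prompt = info.get("prompt")        # API-format graph, if present
    return (json.loads(workflow) if workflow else None,
            json.loads(prompt) if prompt else None)

graph, api_graph = extract_workflow("ComfyUI_00001_.png")
if graph:
    print(f"{len(graph['nodes'])} nodes in the saved workflow")
```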
Hello, I'm really enjoying your videos!
When I install it, I get this message: "ERROR: To modify pip, please run the following command:
C:\Users\PC\AppData\Local\Programs\Python\Python310\python.exe -m pip install -r requirements.txt" — what should I do? 😭
Great tutorial! Thank you!
What a great video!
Thank you!
10 months later, what is the best inpainting solution for ComfyUI? Thanks!
Outpainting please
Great video, thanks
Thanks!!
My workflow appears to be the same as yours, but when I run the prompt the masked area doesn't change at all. What might I be missing?
same here
thanks, you help me a lot.
That's like Photoshop with extra steps.
Except it's free... and it also works on Linux
Damn how did you get your node lines to look so angular and clean?
Hi! Here I've explained how to get the straight lines: ua-cam.com/video/AjwfswzLmxU/v-deo.html
@@DreamingAIChannel thanks love
I tried this, and some other tutorials, for fixing hands. The hands remain garbled. Please help?
Hands are a different story. Maybe the model you are using is not good with hands. Try a different model or a Lora to help with the hands.
Never mind, I found it; I didn't know it was built into ComfyUI. To change this, just go to Settings, and at the bottom under "Link Render Mode" select "Straight".
Hi, thank you so much for creating this awesome tutorial. Is there any way to tell it to generate more than one result? (like the Empty Latent Image node)
Uhm, I don't know if something exists that can pass the same image through more than once, simulating a batch. But you can for sure set more than one queue with the native setting in the main menu!
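For anyone who wants to script that instead of clicking Queue Prompt repeatedly, a rough sketch of queuing the same graph several times through ComfyUI's HTTP API follows. It assumes a local instance on the default port (8188) and a workflow exported with "Save (API Format)"; the sampler node id "3" is hypothetical, so look up the right id in your own exported JSON.

```python
import json
import random
import urllib.request

# Workflow exported from ComfyUI via "Save (API Format)"
with open("inpaint_workflow_api.json") as f:
    workflow = json.load(f)

for _ in range(4):  # queue four runs of the same graph
    # Vary the sampler seed so each queued run differs.
    # "3" is a hypothetical node id -- check your exported JSON.
    workflow["3"]["inputs"]["seed"] = random.randint(0, 2**32 - 1)
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=data)
    print(urllib.request.urlopen(req).read().decode())
```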
what would be the best way to use this to correct badly generated hands? When I try to use this workflow, it's still generating bad hands, even if the masking and everything else is working perfectly xD
Well, "hands" are the black sheep of stable diffusion models.... I would simply recommend you to change model, I have seen that models like CetusMix WhaleFall2 many times does it pretty well
Is there a way to do inpainting in ComfyUI using Automatic1111's technique, in which you can apply a resolution only to the mask and not to the whole image, to improve the quality of the result?
I'm actually looking for something like that myself, the problem is that I haven't yet found something capable of reproducing the same results as A1111. As soon as I find it, I'll let you know!
@@DreamingAIChannel Any luck so far? Very curious about this.
Great stuff. I saw an alternate way where you can use a word instead of dots, e.g. "T-shirt", to get SAM to select the T-shirt and change its colour.
Has anyone asked yet... what are you using to make the AI-generated voice-over? It's very good, but not perfect 😉
(don't be saying... damn! that's my real voice, dude!) 😁
Thanks! But THAT'S MY RE... nope, it's not 🤣 I'm using more than one tool (Bark + VITS + a custom script/batch) and A LOT of patience! I'll make a video about it someday, like if I ever reach my 1000 subs or so! Yes, I'm aware you can use words to detect the part you want to inpaint, but I don't believe that's an efficient way to work!
You're referring to the CLIPSeg node. It does work well, but this method may provide better control.
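For the curious, that word-based masking looks roughly like this outside the node graph, using the Hugging Face CLIPSeg checkpoint (the same model family the node wraps). This is a sketch, not the node's exact settings; the image path and prompt are placeholders.

```python
import torch
from PIL import Image
from transformers import CLIPSegProcessor, CLIPSegForImageSegmentation

processor = CLIPSegProcessor.from_pretrained("CIDAS/clipseg-rd64-refined")
model = CLIPSegForImageSegmentation.from_pretrained("CIDAS/clipseg-rd64-refined")

image = Image.open("photo.png").convert("RGB")
inputs = processor(text=["a t-shirt"], images=[image], return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # low-resolution relevance map

probs = torch.sigmoid(logits).squeeze()
mask = Image.fromarray((probs.numpy() * 255).astype("uint8"))
mask = mask.resize(image.size)   # scale the mask back up to the image size
mask.save("tshirt_mask.png")     # usable as an inpaint mask
```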
Please, for all that is holy, the new gods and the old: please make a tutorial on how to create custom addons/nodes. I've been searching and haven't yet found good docs.
🤣 Yes, I know there are no docs whatsoever, and that's one of the reasons why I haven't made a tutorial yet; I like to explain things when I'm pretty sure of what I'm doing. In custom node programming, on the other hand, I pretty much always reverse engineer the already-made nodes. Once I have more knowledge, I will try to make a tutorial for it!
3050
Awesome! Can you share your JSON workflow file?
How did you get your node lines to be straight like that as you create the connections? Thanks
Simply find "Link Render Mode" in the ComfyUI settings and set it to "Straight".
You know, in A1111 the inpaint is such that only a portion of the image is inpainted while nothing is done to the rest of the image, but in Comfy the whole image is rendered even though only a small part is changed.
Is it possible to do A1111-like inpainting in Comfy?
Uhm, I haven't tried it yet, but I think it's totally possible that someone made an extension for A1111 that uses SAM.
@@DreamingAIChannel I am sorry, but that's not what I meant. In A1111, you can set inpaint to "masked only" to render only that region, whereas in Comfy it renders the whole image, even though we inpaint a small area.
@@musicandhappinessbyjo795 I also thought it was weird that a tiny masked area took just as long to render as a full image even though the rest of the image stays the same, hope there's a way to fix that
Sorry, I read your message too quickly! The way ComfyUI does inpainting is quite different from the way A1111 does it, but I know you can use the VAEEncodeForInpainting node, which more clearly limits the inpainting to the area you want to redo; I haven't tried it yet, though. Otherwise, I think you can refer to this thread on Reddit for more info (www.reddit.com/r/comfyui/comments/15rvnlh/masked_content_and_inpaint_area_from/)
@@DreamingAIChannel thanks, I will have a look.
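For context, what A1111's "inpaint only masked" mode does is roughly: crop a padded region around the mask, upscale it, inpaint just that crop at full working resolution, then scale it back down and composite it over the original. Below is a minimal PIL sketch of that outer logic, where run_inpaint() is a hypothetical stand-in for whatever diffusion inpainting call you use.

```python
from PIL import Image

def inpaint_masked_only(image, mask, run_inpaint, pad=32, work_res=512):
    """A1111-style 'inpaint only masked': crop, upscale, inpaint, composite.

    image: RGB PIL image; mask: L-mode PIL image, white = area to redo.
    run_inpaint(crop, crop_mask): hypothetical inpainting callable.
    """
    # 1. Padded bounding box around the masked area, for context
    l, t, r, b = mask.getbbox()
    l, t = max(l - pad, 0), max(t - pad, 0)
    r, b = min(r + pad, image.width), min(b + pad, image.height)
    box = (l, t, r, b)

    # 2. Crop and upscale so the model works at its native resolution
    crop = image.crop(box).resize((work_res, work_res), Image.LANCZOS)
    crop_mask = mask.crop(box).resize((work_res, work_res), Image.LANCZOS)

    # 3. Inpaint only the crop (the expensive diffusion step)
    result = run_inpaint(crop, crop_mask)

    # 4. Scale back down and paste over the original through the mask
    result = result.resize((r - l, b - t), Image.LANCZOS)
    out = image.copy()
    out.paste(result, (l, t), mask.crop(box))
    return out
```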
Unfortunately, this is not quite what I was looking for. How do you get it to actually draw something new? For example, I have an artwork with a carpet in it, and I want inpaint to let me select an area, type "a person sitting on the carpet" into it, and have the network draw that person in the same style as the rest of the art. So far I can't reproduce the results I got when I used Automatic.
Hi! You are talking about normal inpainting, so you simply have to exclude the SAM part, do "Open in MaskEditor" on the loaded image, and draw the mask there manually. However, I can't guarantee that the results will be the same as A1111's, because things are handled a little differently. There is a thread on Reddit about this (www.reddit.com/r/comfyui/comments/15rvnlh/masked_content_and_inpaint_area_from/).
@@DreamingAIChannel Personally, I can't get anything at all to work with inpainting in ComfyUI, unlike in Automatic.
also, music is great!
I installed the ComfyUI Impact Pack but the SAM detector does not seem to be working. Are there any fixes you would suggest for this?
Hi! Do you have any errors?
@@DreamingAIChannel after a long day of carefully reinstalling more packages, no I don't have more errors
@@jeterpilled_memester Perfect! I'm glad to hear it!
@@DreamingAIChannel you may want to add to your video description that you have to download SAM; otherwise, excellent guide
@@jeterpilled_memester oh, so the problem was that you didn't have the SAM models? Weird! 🤔 I never downloaded SAM; Impact-Pack did everything when I installed it, I just followed the steps on the main page of the repository!
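For reference, the point-prompt selection the SAM detector node performs looks roughly like this in plain Python with Meta's segment-anything package. A sketch assuming you have a checkpoint such as sam_vit_b_01ec64.pth on disk (as noted above, the Impact Pack normally downloads its own copy); the click coordinates are placeholders.

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (vit_b is the smallest variant)
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("photo.png").convert("RGB"))
predictor.set_image(image)

# One positive click (label 1) on the object, like the dots in the video
masks, scores, _ = predictor.predict(
    point_coords=np.array([[320, 240]]),  # (x, y) pixel of the click
    point_labels=np.array([1]),
    multimask_output=True,  # SAM proposes three candidate masks
)
best = masks[np.argmax(scores)]  # keep the highest-scoring proposal
Image.fromarray((best * 255).astype(np.uint8)).save("mask.png")
```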
The whole point of comfy is to be able to reproduce workflows by simply importing a json or photo w/ metadata. So tired of seeing tutorials and guides that don't have the file attached.
"You know nothing, Jon Snow."
@@DreamingAIChannel Dude just attach the fricking file, it takes two seconds, and it saves us 10 minutes.