ComfyUI Inpaint Anything workflow
- Published Aug 6, 2024
- Comfy-UI Workflow for Inpainting Anything
This workflow is adapted to change very small parts of an image while still getting good results in terms of detail and the composite of the new pixels into the existing image.
Used here:
ControlNet
Segment Anything
IP-Adapter
Image Composite masked
masks
and more..
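The "Image Composite masked" step listed above can be illustrated with a minimal numpy sketch. This is not the actual ComfyUI node implementation, just an assumed illustration of how masked compositing blends newly generated pixels back into the original image:

```python
import numpy as np

def composite_masked(original, generated, mask):
    """Blend generated pixels into the original only where mask > 0.

    original, generated: float arrays of shape (H, W, 3) in [0, 1]
    mask: float array of shape (H, W) in [0, 1]; soft edges feather the seam
    """
    m = mask[..., None]  # broadcast the mask over the color channels
    return generated * m + original * (1.0 - m)

# Tiny example: replace the left half of a black image with white pixels
orig = np.zeros((4, 4, 3))
gen = np.ones((4, 4, 3))
mask = np.zeros((4, 4))
mask[:, :2] = 1.0
out = composite_masked(orig, gen, mask)
```

A soft (feathered) mask gives values between 0 and 1 at the edges, which is what makes the new pixels blend smoothly instead of showing a hard seam.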
#comfyui #stablediffusion #ipadapter #mask #controlnet #segment
follow me: /pixeleasel
workflow
drive.google.com/file/d/1-hd4...
Segment anything
github.com/storyicon/comfyui_...
was-node-suite-comfyui
github.com/WASasquatch/was-no...
juggernaut model
civitai.com/models/133005/jug...
IP-Adapter GitHub
github.com/cubiq/ComfyUI_IPAd...
Excellent video, and thanks for sharing the workflow
thanks! excellent comment 😉
wonderfully explained. nice
thanks! wonderful comment!
thanks for the explanation and the method. I always had problems changing small areas beautifully; hopefully now I can do something better.
you're welcome! I'll be glad to see the results
very well explained, awesome workflow. 10/10 video. extra points for sharing the workflow for free so I don't need to pause a billion times to recreate it myself and can just look at the workflow to learn it.
thanks!!! 10/10 comment !
Thank you great video 👌
thanks!
Thanks a lot, very useful.
you're very welcome 🙏
Great ! 😊
thx!
incredible work mate
thanks man!
Thanks bro... subscribed :)
thanks!
nice :))
thanks 😊!
Thank you for the workflow and tutorial, I'm new to comfy. Since u're going back to the basics, would love to learn about masks and segments and how it relates to each other. Also would really appreciate a way to automatically mask a face and only apply changes to that. Thank you again for all the content:)
thanks for commenting! I will cover those topics soon
They should give an option where we can select that part of the area just by dragging the cursor the way we do cropping in Photoshop so that particular selected part only gets changes
this is the "problem" with open source programs. There's no such thing as 'they'
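The drag-to-select idea from the comment above can be approximated today by building a rectangular mask from two corner coordinates. A minimal sketch (the function name and signature are my own, not an existing ComfyUI node):

```python
import numpy as np

def rect_mask(height, width, x0, y0, x1, y1):
    """Build a binary mask covering the rectangle dragged from
    (x0, y0) to (x1, y1), in pixel coordinates."""
    mask = np.zeros((height, width), dtype=np.float32)
    # min/max lets the user drag in any direction
    mask[min(y0, y1):max(y0, y1), min(x0, x1):max(x0, x1)] = 1.0
    return mask

# A 4x3 pixel selection inside an 8x8 image
m = rect_mask(8, 8, 2, 2, 6, 5)
```

Feeding such a mask into the inpainting path restricts changes to the selected region, which is effectively what a crop-style drag selection would do.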
Wow, the best part is that it keeps the original image size. That's the first solution I've found like this. It would be great if you built a solution for outpainting like this. I know I can almost use it for outpainting: enlarge the image, mask the new part, etc. But a real outpainting solution where I can set "expand to the right by x/y pixels", without needing manual IP-Adapter input, would be great. One step further would be to incrementally expand if the area is too big for a certain pixel limit, since system and model limits get exceeded.
Just an idea. Keep going!
Thanks! I'm working on an outpainting workflow... I hope I'll finish it soon
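The "enlarge image and mask the new part" approach described in the comment above can be sketched in numpy. This is an assumed illustration of the manual outpainting prep step, not the author's workflow:

```python
import numpy as np

def expand_right(image, extra):
    """Pad an (H, W, 3) image with `extra` blank columns on the right and
    return the padded image plus a mask marking only the new region."""
    h, w, c = image.shape
    padded = np.zeros((h, w + extra, c), dtype=image.dtype)
    padded[:, :w] = image          # keep the original pixels untouched
    mask = np.zeros((h, w + extra), dtype=np.float32)
    mask[:, w:] = 1.0              # inpaint only the freshly added columns
    return padded, mask

img = np.ones((4, 6, 3))
padded, mask = expand_right(img, 2)
```

An incremental version, as the commenter suggests, would simply call this in a loop with a small `extra` until the target width is reached, re-running inpainting on each new strip.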
I was thrilled to finally find someone combining control net with differential diffusion, as no one else seemed to have covered it! However, despite spending hours trying to modify your node to blend character to the scenery, I couldn't get it to work due to my lack of knowledge. Is there a way to mask an existing background photo and seamlessly integrate a character into it using openpose and differential diffusion?
thanks! if I understood correctly, I think u can use this workflow ua-cam.com/video/k76f8aVgS4c/v-deo.htmlsi=QNEtQSwoo5okPtuw
to achieve what u are looking for
Thanks! Have you tried SEGS nodes from Impact Pack? You can inpaint only masked area less complicated way (MASK to SEGS node)
thanks! I will check it out..
Thanks for your wonderful work, can I ask where you downloaded the safetensor of the clip vision model used in your ipadapter advanced node?
thanks! from the ip adapter repo. the link in the description
can you make a tutorial for control net inpainting on video to video??
thanks! yes i will do one on vfx
I have a few photos where I just want to add a smile to the face. But for it to look real, the expression should change in a way that the face itself does not change. Is it possible to make a tutorial about this?
check this one ua-cam.com/video/VwEcGIBwsyw/v-deo.html
Hello! I've downloaded the workflow, and (tried) to install the models. The blip image captioning and the two prompt generators don't seem to be working correctly. First they wouldn't install through git, so I downloaded them manually and uploaded them to my GDrive (I use colab). But now it keeps saying it can't detect __init__.py. Also the Derfuu_ComfyUI_ModdedNodes doesn't want to install either. I'm unfortunately not well versed at all in python and git so I don't know what to do... Thank you again for your help!
hi. the prompt nodes are just a simple input text, you can use any input text node instead
amazing tutorial; however, it is a little unfriendly to beginners who haven't used the nodes you mentioned in the video. is it possible to deconstruct this complex workflow in short episodes?
still, thanks for your video.
thanks! ill do my best to make it clearer next vid
I want to use this workflow so badly, but Derfuu doesn't load up, even though it has been installed. I'm using the ComfyUI Mobile version. Is that the issue?
Hi, Derfuu_ComfyUI_ModdedNodes cannot be opened at present. It worked before, but it doesn't work these days. Can you please help resolve the problem? Thanks :)
i think you can replace it with any other input text
Thanks for the great workflow!
I want to try it as soon as possible too, but I get the following error and can't try it, please help!
When loading the graph, the following node types were not found:
Derfuu_ComfyUI_ModdedNodes 🔗.
Nodes that have failed to load will show as red on the graph.
thanks! try to update comfy... and if it still doesn't work for you , write to me, and we'll think of another solution
@@PixelEasel im getting the same thing too. please help
thanks for the tutorial, I am getting this error
Error occurred when executing KSamplerAdvanced:
'ModuleList' object has no attribute '1'
tried with different settings and even checked in the community.
bypassed controlnet and got the result
thanks for sharing!
Hey, I loaded your workflow and installed everything, but two nodes are missing: "text" and "ShowTextForGPT". Can u help me find these? Both are part of the first group in your workflow
for the text, you can use any input text node. and the same for the show text, u can try pythongosssss
Thanks for sharing! But how do I resolve an out-of-memory error? 🙏 My video card is an RTX 4060
sounds weird. Check if you haven't set the resolution too high
thanks. If I set the mask area resolution to 768×768 instead of 1024×1024, will that resolve the problem?
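Lowering the resolution as discussed above is the usual fix for VRAM errors. A small sketch of picking a memory-friendly size (the helper is my own; the multiple-of-8 rounding is an assumption based on what SD latent sizes typically expect):

```python
def fit_resolution(width, height, max_side=768, multiple=8):
    """Scale a resolution down so its longer side is at most max_side,
    rounding each side down to a multiple of `multiple`."""
    scale = min(1.0, max_side / max(width, height))
    w = max(multiple, int(width * scale) // multiple * multiple)
    h = max(multiple, int(height * scale) // multiple * multiple)
    return w, h

big = fit_resolution(1024, 1024, max_side=768)    # downscaled
small = fit_resolution(640, 480, max_side=768)    # already fits, unchanged
```

Since the workflow composites the result back into the original, only the masked crop needs to run at the reduced resolution; the rest of the image keeps its full size.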
I get
Error occurred when executing ImageResize+:
'bool' object has no attribute 'startswith'
try to delete it and reload.
I'm getting an ugly, low-quality face. I'm using Pony; is there a way to fix the face quality?
u can always use reactor, but u shouldn't get distorted images
do you create workflows on request?
yes. you can send me an email to gophoto101@gmail.com
@@PixelEasel I've sent you an email
The author (Derfuu) deleted the file?
which one?
@@PixelEasel Thanks! Already solved the problem.
The reason was the package 'Derfuu_ComfyUI_ModdedNodes' in
ComfyUI-Windows-11-Portable; I could not update this package. But in ComfyUI on MX Linux everything worked. I've been looking for a good build for SDXL inpainting for a long time, and I found it: yours!
@@alexk1072 Derfuu isn't working for me in Portable. Are you saying you have to use a different installation of Comfy?
@@jasondulin7376 Yes. In MX Linux, for example, install ComfyUI according to the Git instructions. I just mechanically moved the Derfuu_ComfyUI_ModdedNodes folder from ComfyUI-Linux to ComfyUI-Windows-Portable and everything worked, although mixlab keeps offering to help. )) Perhaps mixlab is to blame. Also FaceSwap and comfyui-reactor-node don't work in ComfyUI-Windows-Portable, but they work fine in ComfyUI-Linux )))
Hello Pixel, I have a problem with my custom workflow and I need your help for a task. Please send me contact info.
I hope I can help... gophoto101@gmail.com
@@PixelEasel done, I have already emailed
Big thanks for sharing your knowledge, but for me it's strange: the masked area stays empty, the main subject doesn't show.
Sorry, my bad... I didn't use the right ControlNet model: control-lora-depth-rank256.safetensors