💥The secret of easy Flux inpainting in ComfyUI - forget about Stable Diffusion
- Published Feb 9, 2025
- In this video, I’ll be showing you a simple workflow for Flux inpainting in ComfyUI. You can even combine this inpainting method with the optimized GGUF models I covered in my previous videos to achieve faster execution and higher-quality results. If you haven’t installed those models yet, make sure to check out the previous tutorial, where I explain how to download and set up GGUF models in ComfyUI.
---
In this tutorial, I’ll walk you through the inpainting process using Flux with an easy-to-follow workflow, perfect for customizing images. You’ll be able to effortlessly change clothes, hairstyles, or other elements in your pictures with a simple brush tool, eliminating the need for complicated steps found online.
Complete guide for beginners (watch the videos below one by one):
1 - Install ComfyUI and Flux locally: • Install FLUX locally i...
2 - Guide for low-end systems for Flux: • Install Flux 1.0 Dev 2...
3 - How to create AI images with your own face: • I Tried Flux Lora trai...
In this video:
How to install and set up three essential custom nodes for ComfyUI.
A full breakdown of each node’s function and how to connect them for inpainting.
How to switch between default Flux models and optimized GGUF models for better performance on lower-end systems.
A step-by-step guide to masking and applying changes to images using Flux, including tweaks for blending and smoothing edges.
---
Links:
Download the new text encoder for Flux: huggingface.co...
Learn how to set up ComfyUI: • Install FLUX locally i...
Installing GGUF models for Flux: • Install Flux locally i...
Download workflow: drive.google.c...
---
Thanks for watching! If this video helped, don’t forget to give it a thumbs up. Make sure to subscribe to the channel and hit the notification bell so you won’t miss any of my upcoming tutorials. Have any questions? Drop them in the comments below, and I’ll do my best to help!
Click the link below to stay updated with the latest tutorials about AI 👇🏻:
www.youtube.com/@Jockerai?sub_confirmation=1
Very good video! So simple and amazing. Thank you
THANK YOU! The step by step, full setup and explanation is incredible! Everyone else just goes "here look at this massive node graph, and it does this" but no setup, no explanation of nodes... you are amazing. Please keep making this type of content, it's the best! 🙏
You're welcome my friend. ✨
Thank you for explaining what each node does. I'm completely clueless when it comes to ComfyUI, and this helps out very much. Looking forward to more content from this channel!
This is fantastic! It's great that you show how to build the workflow. It is very helpful! Thank you for the enlightenment! 🌟
Great detail and I liked that you showed how to build the workflow! Well done! 😀
@@GenoG thank you mate✨😉
Hi mate, awesome content. I like the fact that you explain in a little more detail than the other YouTubers.
Thank you for your keen eye, and I appreciate you sharing this beautiful thought with me!
Thank you this was exactly what I needed :)
You are amazing!!! A great teacher. You will explode on the internet!!!
Thank you so much, that was an uplifting comment ✨❤
Thanks for the workflow, it worked nicely
Waiting for more :)
You're welcome bro
thank god, something that works!! thank you!!
@@clflover you're welcome bro 😉
Five golden stars. Thank you!
@@CsokaErno thank you ✨😍
Thank you man, you made this easy and I understood it perfectly. Just one thing: it would help if you explained more about what each node does and when it's best used. Overall though, this is the best tutorial I have seen about AI. Thanks again!
@@rafedalwani Thank you so much for your uplifting message. I wish I could explain everything in detail, but that would take us off topic and make the video too long. So, I'll just give a brief explanation.
Thank you for explaining this! Have you ever tried combining ReActor Fast Face Swap & Face Booster with inpainting?
Great lesson thanks!👍
Thx a lot !!! You made my day !!!
You're welcome. Happy to hear that🔥
Thank you for your great content. I'm learning a lot from you. But I'm having problems using this workflow... for some reason, the inpainted area is generated at a smaller scale than the original. For example, with the photograph from your video, the body would be replaced with a smaller one, looking pretty weird with the original head. I also tried it with people in the background, but the result was really strange, generating smaller people instead. Any idea what I'm doing wrong? Thanks again.
Watched, liked, subscribed.
You're the MVP! 🙌🔥 Appreciate the support!
Excellent! Can you do one for background removal and lighting?
Yes, you can do that task using the same method you saw in the video
thanks guru take love
Where do you learn about all these nodes and how to use them? Do you have any good resources for people like me just starting and wanting to understand the how and why behind everything and get a good foundation of learning?
Honestly, I haven't found any resources with that level of detail; I've learned this by myself, through studying AI. But you can start with ChatGPT. Simply ask it whatever you want, just not specifically about ComfyUI: ask about the foundations of AI image generation and learn the basics, for example what the "first noise" is, what "text embeddings" or "image embeddings" are, what the "VAE encode and decode process" is, etc. These are the basics, and they help a lot in understanding the function of each node and what's behind it. I'm doing my best to create a course teaching these concepts and more behind the nodes.
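To make those basics concrete, here is a rough conceptual sketch in Python. It is not real ComfyUI code; every name in it (text_encoder, unet, vae, scheduler) is an illustrative placeholder for the pieces a txt2img graph wires together:

```python
import torch

# Conceptual sketch only: the objects below stand in for real models.
def generate(text_encoder, unet, vae, scheduler, prompt, seed=0):
    cond = text_encoder(prompt)                       # the "text embeddings"
    g = torch.Generator().manual_seed(seed)
    latents = torch.randn(1, 4, 64, 64, generator=g)  # the "first noise", in latent space
    for t in scheduler.timesteps:                     # the sampler loop
        noise_pred = unet(latents, t, cond)           # predict noise, steered by the prompt
        latents = scheduler.step(noise_pred, t, latents)  # remove a little noise each step
    return vae.decode(latents)                        # "VAE decode" back to pixel space
```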
Is there a technical reason you used the sampler with separated components instead of KSampler?
Hi Jocker, I just watched your three-month-old tutorial, but I'm afraid the Differential Diffusion node was removed in the latest version of ComfyUI. I've searched on Google but can't find it anywhere. Can you help me? Is there a way to replace it with something newer? Thank you
Great video! Do you know if there's a quick way for this: say it rendered that shirt but also gave him a belt, and you want to use that rendered image to mask and re-render the belt out? Or do you just need to go to your Windows folder, pick up the newly rendered image, and put it back into the inpaint?
It's better to render two separate images, as you said
Hi, thanks for the tutorial, this works great. How does the diffusion model know the position of the shirt (and other inpainted things) without any ControlNet like OpenPose?
@@erans Inpainting is an example of img2img. You brush some areas of an image, but the AI still scans the whole image to see what it's about, then makes changes only to the brushed areas.
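Per sampling step, it looks roughly like this conceptual sketch (again placeholder names, not actual ComfyUI or Flux code). The hard 0/1 mask blend shown here is the classic inpainting trick; the Differential Diffusion node used in the video generalizes it with a soft per-pixel mask:

```python
import torch

# Conceptual sketch: one inpainting step with a hard 0/1 mask.
def inpaint_step(unet, scheduler, latents, init_latents, mask, t, cond):
    # The model denoises the FULL latent image, so it "sees" the whole scene.
    noise_pred = unet(latents, t, cond)
    latents = scheduler.step(noise_pred, t, latents)
    # Outside the mask, paste back the original image (re-noised to step t),
    # so only the brushed area is actually rewritten.
    noised_init = scheduler.add_noise(init_latents, torch.randn_like(init_latents), t)
    return mask * latents + (1 - mask) * noised_init
```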
Your workflows have been amazing. I usually get some errors, but with some small tweaking it gets even better!
Is there a way to add a LoRA to this? Flux doesn't support NSFW, so with some LoRAs we can adjust the images as needed.
Thank you! Yes, you can: just watch my video titled "Multi-Lora", and you can add the Power Lora Loader node to use LoRAs even for inpainting
great !!!
How would you update the workflow to use a LoRA?
Hi, how can I inpaint jewellery based on a trained LoRA, or with a zero-shot method like PuLID?
Is there any way to upload an image here instead of text?
Awesome video! Thank you! But my processing stops at "Attempting to release mmap (234)", just 0% without any movement. Can you help with that?
Hi there. Thanks for the video. Don't even try these recommendations with your Mac M2 Max: it takes a loooooot of time
AI image generation and Macs are just enemies 😕
Nice work
@@motion_time Are you a Persian speaker?
@@Jockerai Yes
And I find it really fascinating that an Iranian has become this professional in such a new technology
I really like your work
By the way, we're looking for a specialist for a job opening in AI, and ComfyUI in particular
Let me know if you have free time
@@motion_time please send me a message on Telegram: @graphixm
Thanks for the amazing video!
I had some pretty good results using your workflow, then I realised the guidance was 3.6 instead of 3.5. I switched it and started getting awful results (a head that doesn't match the body; right now I just got a lamp instead of the head). I also tried 2.0, and same thing: awful results that match neither the prompt nor the image. Switched back to 3.6 and got good results again. Isn't that weird? Are you actually able to change the guidance and still get good results? Or maybe I'm just being crazy and it's about the seed?
@@banished8622 you're welcome my friend ✨
Actually, all ControlNet workflows for Flux are still being improved. You have to make many attempts to get a good result. As for guidance, I have no idea why that happens at 3.5; I've had both good and bad results with it.
@@Jockerai Yeah, I kept trying again and again. I think it has more to do with the seed than with the guidance, actually
@@banished8622 yes, I think it does.
Hi, in my Load VAE node I don't have the ae.safetensors option. May I know how to add it?
How do I add a LoRA? 🤔
@@CraftBlack in ComfyUI, search for Power Lora Loader and add it to your workflow. Link the Load Diffusion Model node and the DualCLIP node to it
Could I be doing something wrong? I just spent hours testing the workflow, trying numerous combinations of models, and even matched the ones you used exactly, to no avail. It keeps repeating the error: CLIPTextEncode 'NoneType' object has no attribute 'device'.
Check DualCLIPLoader again: 1) VIT... 2) t5-v1... 3) type: flux!
How can I purchase the face swap workflow in this video?
@@valorantacemiyimben Which face swap workflow?
How can I fix this error?
mat1 and mat2 shapes cannot be multiplied (1x1280 and 768x3072)
Thanks for the great video!
@@hatimunfiltered what is the size of your image?
@@Jockerai I made it work. I realized what my mistake was... the type was sdxl instead of flux in the clip loader, my bad lol
@@hatimunfiltered I've made this mistake several times, enjoy experimenting 😎😁
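For anyone else hitting this: the error is a generic PyTorch shape mismatch; the wrong CLIP type feeds embeddings of the wrong width into the next layer. A minimal repro, where the dimensions come from the error message above and the comments are just one plausible reading of them:

```python
import torch

a = torch.randn(1, 1280)    # embeddings sized for one model family
b = torch.randn(768, 3072)  # a weight matrix expecting a different width
torch.matmul(a, b)          # RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x1280 and 768x3072)
```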
Just need a LoRA node ;)
Yes, it can be added, which is covered in the "Multi-Lora" video ;) : ua-cam.com/video/-Xf0CggToLM/v-deo.html
@@Jockerai 🤙
I tried this to add text to my image, and no go. It changed the image but added no text.
Can you please make a tutorial on how to use a Flux LoRA model trained on Fal in a locally installed ComfyUI? The model trained on Fal doesn't resemble the subject when using the LoRA in ComfyUI, even with the trigger word.
I haven't tested Fal-trained LoRAs yet, but you can use different nodes to try that. Watch my video titled Flux Multi-Lora
Any way of making it img2img inpainting? Like, I add an image, mask it, then add another image as a prompt for the AI to replace the masked part?
@@andrino2012 the best way for this is not to add a second image; just write a prompt describing the second image and put that in the prompt node
@@Jockerai I've used Krea's enhancing feature to change the background before, and if I could achieve anything similar with this workflow it would be amazing.
Btw, thanks man, love this video and will keep watching your new ones!
@@andrino2012 you can change background with this workflow.
You're welcome bro ✨😉
Where do I add a LoRA?
Where are the UNet loader, DualCLIPLoader (GGUF), and Load VAE folders, guys?
@@valorantacemiyimben they're all in the main ComfyUI folder, under the models folder
If someone wanted to make a comic with ComfyUI, what workflow should they use to capture the characters separately?
You need an appropriate prompt for that. Use the phrase "character sheet" in your prompt
@@Jockerai is there any good workflow for that? I'm desperately looking around to find one
@@TheOneWithFriend you can use my workflow in this video : ua-cam.com/video/txDFK-RcUq4/v-deo.html
and use this LoRA for comic : civitai.com/models/210095/the-wizards-vintage-comic-book-cover
Does a partial denoise work? Like say, 0.70?
Yes, in the BasicScheduler node you can set a lower denoise, but 0.7 is very low and the prompt probably won't work well. Set it around 0.85-1.0
Hi, I got a SamplerCustomAdvanced "mat1 and mat2 shapes cannot be multiplied (2016x16 and 64x3072)" error. Can you help me resolve this issue?
Check the Load Diffusion Model node to see if it's set to Flux or SDXL, and set it to Flux
Hi, I just want to know what your PC specs are. I'm about to buy a new laptop with an RTX 4050; how much time do you think it will take to generate an image?
@@rishabhp1762 the time to generate an image depends on many factors: size, models, LoRAs, etc. But in general it takes 88 seconds with the Q8 GGUF Flux model for a 1024x1024 image on the RTX 3060 12GB that I have.
@@Jockerai ok thank you
@@rishabhp1762 you're welcome
Whatever you do, do not get anything with less than 12GB of VRAM
I'm getting harsh edges, what should I do?
@@karankatke increase the mask blur
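Conceptually, mask blur just feathers the hard edge of the mask so the inpainted pixels fade into the original ones instead of ending abruptly. A standalone sketch with Pillow (the file names are placeholders):

```python
from PIL import Image, ImageFilter

mask = Image.open("mask.png").convert("L")              # hard-edged inpaint mask
soft = mask.filter(ImageFilter.GaussianBlur(radius=8))  # bigger radius = softer transition
soft.save("mask_feathered.png")
```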
It's working, but I have a problem: it's very slow on Flux Dev fp8, and it only happens with inpainting (around 16 min). When I do txt2img it's 40 seconds. Am I doing something wrong? My GPU is a Radeon RX 7900 XT
@@Hecbertgg I will make a video tomorrow, and it's even faster
This isn't meant to detail faces, right? I tried detailing a face like with Fooocus' detailed inpainting, but I get results that look equally low-res.
@@rick-deckard sometimes you get good results and sometimes not
I have a problem with the "SamplerCustomAdvanced" node: "Allocation on device". How can I fix it? Thanks
@@minhnguyen-jg6gu what is your system configuration?
I am a beginner, so if I ask a stupid question, please excuse me. In your workflow I can paint on the foreground object, but as soon as I try to paint on the background, e.g. a bottle with glasses, nothing happens and I don't get an error message. Am I doing something wrong? I would be very grateful for an answer
@@wolfgangterner7277 it's totally OK to ask questions, feel free to do so.
What do you mean by "nothing happens"?
@@Jockerai
If I try to create a bottle with two glasses and I've painted a mask in the background, nothing changes in my picture. Only if I paint on the foreground object, e.g. changing the color of a jacket, does anything happen
@@wolfgangterner7277 you have to try changing the prompt or increasing the Flux guidance, and try multiple times to get the right result
thanks for the tip, now everything works
@@wolfgangterner7277 happy to hear that 🤩😉
1:50 The KJNodes pack seems conflicted 🤔
@@technicusacity yes, I know. I've updated all of my custom nodes and some conflicts still remain. It doesn't cause any disruption to our work with ComfyUI
@@Jockerai Just a bit annoying. Sadly ComfyUI doesn't indicate which modules the conflict arose from, and the workflow works strangely. I tried to describe a plane flying over a city, but the result was disappointing: the plane was drawn, but the merging of the original image and the background under the mask didn't happen. A blimp, though, was successfully inserted 🙄
RG3 nodes no longer show up :(
It's working, but it doesn't seem to follow the prompt instructions exactly
Prompt outputs failed validation
UnetLoaderGGUF:
- Value not in list: unet_name: 'flux1-dev-Q4_K_S.gguf' not in []
DualCLIPLoaderGGUF:
- Required input is missing: clip_name1
- Required input is missing: clip_name2
VAELoader:
- Required input is missing: vae_name
What is the solution to this problem?
Make sure you download all the models you need and place them in the proper locations. Then, in ComfyUI, select them in every node
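For reference, this is the typical layout in a default ComfyUI install (exact GGUF filenames depend on what you downloaded):

```
ComfyUI/
└── models/
    ├── unet/   ← GGUF diffusion models, e.g. flux1-dev-Q4_K_S.gguf
    ├── clip/   ← the two text encoders selected in DualCLIPLoader
    └── vae/    ← ae.safetensors
```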
He's Turkish ^^
@@PrensCin English please
Turns out he isn't; that backfired
Unfortunately, it's not working for me. I'm using exactly the same models, settings, etc., but my results are horrible. I keep trying and trying, everything exactly like yours, and OMG lol, a complete nightmare of a result: four hands, smaller, etc.
GGUF is faster? What? In my tests GGUF is slower
@@p_p if your GPU is 16GB or higher, it's possible it runs slower
@@Jockerai ah ok, makes sense. Yeah, 3090
Lol, is it "GGUF", not "goof"? Think PNG :D lol. First time I've heard this.
Spelling out four letters is much harder than saying a simple "goof" 🤩🤩😎 Although there's no specific rule for pronouncing abbreviations... ;)
Inpainting still sucks; it never really gives you what you want, it has a mind of its own. Just look at the suit you put on the guy: it's terrible, no suit fits that tight
@@researchandbuild1751 there's a new inpainting method which I'll make a video about; indeed, it will be V2.
I mentioned it in my last Short
Hey man, how can I contact you? Can you share your email?
@@valentynshumakher5842 you can email me. The email is in the channel info, but here it is: jockerai.yt@gmail.com