💥The secret of easy Flux inpainting in ComfyUI - forget about Stable Diffusion

  • Published Feb 9, 2025
  • In this video, I’ll be showing you a simple workflow for Flux inpainting in ComfyUI. You can even combine this inpainting method with the optimized GGUF models that I covered in my previous videos to achieve faster execution and higher-quality results. If you haven’t installed these models yet, make sure to check out the previous tutorial where I explain how to download and set up GGUF models in ComfyUI.
    ---
    In this tutorial, I’ll walk you through the inpainting process using Flux with an easy-to-follow workflow, perfect for customizing images. You’ll be able to effortlessly change clothes, hairstyles, or other elements in your pictures with a simple brush tool, eliminating the need for complicated steps found online.
    Complete Guide for Beginners (watch the videos below one by one):
    1-Install ComfyUI and Flux Locally : • Install FLUX locally i...
    2-Guide for Low-end systems for Flux : • Install Flux 1.0 Dev 2...
    3-How to Create AI images with your own face : • I Tried Flux Lora trai...
    In this video:
    How to install and set up three essential custom nodes for ComfyUI.
    A full breakdown of each node’s function and how to connect them for inpainting.
    How to switch between default Flux models and optimized GGUF models for better performance on lower-end systems.
    A step-by-step guide to masking and applying changes to images using Flux, including tweaks for blending and smoothing edges.
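The masking, blending, and edge-smoothing step described above can be sketched in miniature. This is a conceptual illustration only, not ComfyUI's actual node code; every name here is made up, and the "image" is a 1-D list of pixel values for simplicity. The idea: generated pixels replace the original only inside the mask, and a blurred (feathered) mask softens the seam at the mask edge.

```python
# Conceptual sketch of inpaint compositing (illustrative, not ComfyUI code).

def box_blur_1d(mask, radius):
    """Feather a 1-D mask with a simple box blur (toy stand-in for mask blur)."""
    n = len(mask)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(mask[lo:hi]) / (hi - lo))
    return out

def composite(original, generated, mask, feather=1):
    """Blend generated pixels into the original using a feathered mask:
    m=1 keeps the generated pixel, m=0 keeps the original, in between blends."""
    soft = box_blur_1d(mask, feather)
    return [o * (1 - m) + g * m for o, g, m in zip(original, generated, soft)]
```

With `feather=0` the seam is a hard cut; increasing the feather radius widens the transition band, which is exactly why raising the mask blur hides harsh edges.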
    ---
    Links:
    Download the new text encoder for Flux: huggingface.co...
    Learn how to set up ComfyUI : • Install FLUX locally i...
    Installing GGUF models for Flux : • Install Flux locally i...
    Download workflow : drive.google.c...
    ---
    Thanks for watching! If this video helped, don’t forget to give it a thumbs up. Make sure to subscribe to the channel and hit the notification bell so you won’t miss any of my upcoming tutorials. Have any questions? Drop them in the comments below, and I’ll do my best to help!

COMMENTS • 132

  • @Jockerai
    @Jockerai  4 months ago +1

    Click the link below to stay updated with the latest tutorials about AI 👇🏻 :
    www.youtube.com/@Jockerai?sub_confirmation=1

    • @jonrich9675
      @jonrich9675 3 months ago

      Very good video! So simple and amazing. Thank you

  • @AberrantArt
    @AberrantArt 21 days ago

    THANK YOU! The step by step, full setup and explanation is incredible! Everyone else just goes "here look at this massive node graph, and it does this" but no setup, no explanation of nodes... you are amazing. Please keep making this type of content, it's the best! 🙏

    • @Jockerai
      @Jockerai  21 days ago +1

      You're welcome my friend. ✨

  • @generated.moment
    @generated.moment 2 months ago +1

    Thank you for explaining what each node does. I am completely clueless when it comes to ComfyUI, and this helps out very much. Looking forward to more content from this channel!

  • @myworld342
    @myworld342 3 months ago

    This is fantastic! It's great that you show how to build the workflow. It is very helpful! Thank you for the enlightenment! 🌟

  • @GenoG
    @GenoG 4 months ago +1

    Great detail and I liked that you showed how to build the workflow! Well done! 😀

    • @Jockerai
      @Jockerai  4 months ago

      @@GenoG Thank you, mate ✨😉

  • @guillaumegaudin596
    @guillaumegaudin596 3 months ago

    Hi mate, awesome content. I like the fact that you explain in a bit more detail than the other YouTubers.

    • @Jockerai
      @Jockerai  3 months ago

      Thank you for your keen eye, and I appreciate you sharing this beautiful thought with me!

  • @IsZomg
    @IsZomg 24 days ago

    Thank you this was exactly what I needed :)

  • @RobertaMontagnini
    @RobertaMontagnini 4 months ago

    You are amazing!!! A great teacher. You're going to blow up on the internet!!!

    • @Jockerai
      @Jockerai  4 months ago

      Thank you so much, that was an uplifting comment ✨❤

  • @Naomi-b9d
    @Naomi-b9d 4 months ago

    Thanks for the workflow, it worked nicely.
    Waiting for more :)

    • @Jockerai
      @Jockerai  4 months ago

      You're welcome bro

  • @clflover
    @clflover 4 months ago +1

    Thank god, something that works!! Thank you!!

    • @Jockerai
      @Jockerai  3 months ago

      @@clflover you're welcome bro 😉

  • @CsokaErno
    @CsokaErno 4 months ago +1

    Five golden stars. Thank you!

    • @Jockerai
      @Jockerai  4 months ago

      @@CsokaErno thank you ✨😍

  • @rafedalwani
    @rafedalwani 3 months ago

    Thank you man, you made this easy and I understood it perfectly. Just one thing: it would help if you explained more about what each node does and when it is best used. Overall, though, this is the best tutorial I have seen about AI. Thanks again!

    • @Jockerai
      @Jockerai  3 months ago

      @@rafedalwani Thank you so much for your uplifting message. I wish I could explain everything in detail, but that would take us off topic and make the video too long. So I just give a brief explanation.

  • @Minimalici0us
    @Minimalici0us 17 days ago

    Thank you for explaining this! Have you ever tried combining ReActor Fast Face Swap & Face Booster with inpainting?

  • @myta6op402
    @myta6op402 3 months ago

    Great lesson, thanks! 👍

  • @davidvideostuff
    @davidvideostuff a month ago

    Thx a lot !!! You made my day !!!

    • @Jockerai
      @Jockerai  a month ago

      You're welcome. Happy to hear that🔥

  • @joseduenas9006
    @joseduenas9006 2 months ago +1

    Thank you for your great content. I'm learning a lot from you. I'm having problems using this workflow... for some reason, the inpainted area is generated at a smaller scale than the original. For example, if I were using the photograph from your video, the body would be replaced with a smaller one, looking pretty weird with the original head. I also tried it with some people who were in the background, but the generated result was really strange, generating smaller people instead. Any idea what I am doing wrong? Thanks again.

  • @luridape7486
    @luridape7486 2 months ago

    Watched, liked, subscribed.

    • @Jockerai
      @Jockerai  2 months ago +1

      You're the MVP! 🙌🔥 Appreciate the support!

  • @dumitruploscar5303
    @dumitruploscar5303 4 months ago +1

    Excellent! Can you do one for background removal and lighting?

    • @Jockerai
      @Jockerai  3 months ago

      Yes, you can do that task using the same method you saw in the video.

  • @TwentyFourZap24
    @TwentyFourZap24 2 months ago

    Thanks guru, take love

  • @AberrantArt
    @AberrantArt 21 days ago

    Where do you learn about all these nodes and how to use them? Do you have any good resources for people like me just starting and wanting to understand the how and why behind everything and get a good foundation of learning?

    • @Jockerai
      @Jockerai  21 days ago +1

      Honestly, I haven't found any resources with this level of detail; I learned it by myself, by studying AI. But you can start by using ChatGPT. Simply ask it whatever you want, but not specifically about ComfyUI. Ask it about the foundations of AI image generation and learn the basics, for example: what is the "first noise", what are "text embeddings" or "image embeddings", what is the "VAE encode and decode process", etc. These are basics that help a lot in understanding the function of each node and what is behind it. I'm doing my best to create a course teaching the concepts behind the nodes.
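The concepts named in this reply can be put in toy form. This is a purely illustrative sketch, not real diffusion code: a real VAE is a neural network, while here "encode" and "decode" are simple invertible scalings, and the "first noise" is just seeded Gaussian samples.

```python
# Toy illustrations of "VAE encode/decode" and "first noise" (not real models).
import random

def vae_encode(pixels, scale=8):
    """Toy "VAE encode": compress image values into a smaller latent."""
    return [p / scale for p in pixels]

def vae_decode(latent, scale=8):
    """Toy "VAE decode": map the latent back to image values."""
    return [z * scale for z in latent]

def first_noise(n, seed):
    """Toy "first noise": the seeded random starting point of generation.
    The same seed always yields the same noise, hence reproducible images."""
    rng = random.Random(seed)
    return [rng.gauss(0, 1) for _ in range(n)]
```

The seed-determinism of `first_noise` is the toy version of why fixing the seed in a sampler reproduces the same image.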

  • @JacopoMarrone-r1n
    @JacopoMarrone-r1n 26 days ago

    Is there a technical reason that you used a sampler with separated components instead of KSampler?

  • @DreamFilmVFX
    @DreamFilmVFX 17 days ago

    Hi Jocker, I just watched your 3-month-old tutorial, but I'm afraid the Differential Diffusion node was removed in the latest version of ComfyUI. I've searched on Google but can't find it anywhere. Can you help me? Is there a way to replace it with something newer? Thank you.

  • @sl4ddu
    @sl4ddu 3 months ago

    Great video! Is there a quick way to handle this: for example, it rendered that shirt but also made a belt for him, and you want to use that rendered image to mask and re-render the belt out? Or do you just need to go to your Windows folder, pick up the new rendered image, and put it back into the inpaint?

    • @Jockerai
      @Jockerai  3 months ago

      It is better to render two separate images, as you said.

  • @erans
    @erans 4 months ago

    Hi, thanks for the tutorial, this works great. How does the diffusion model know the position of the shirt (and other inpainting targets) without any ControlNet like OpenPose?

    • @Jockerai
      @Jockerai  4 months ago +1

      @@erans Inpainting is one example of img2img. You brush some areas of an image, but the AI will still scan the whole image to see what the image is about, then make changes to the brushed areas.
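The reply above can be sketched as toy code. This is an illustration of the idea only, not ComfyUI's implementation: the hypothetical `regenerate` function receives global context computed from the whole image (here just its mean, standing in for the model "scanning" everything), but only the masked entries are replaced.

```python
# Toy sketch of the img2img/inpainting idea: whole-image context,
# masked-only regeneration. All names are illustrative.

def toy_inpaint(image, mask, regenerate):
    """Re-generate only masked pixels; `regenerate(pixel, context)` gets a
    global summary of the full image, mimicking whole-image conditioning."""
    context = sum(image) / len(image)  # stand-in for global image context
    return [regenerate(px, context) if m else px
            for px, m in zip(image, mask)]
```

Because unmasked entries pass through untouched, only the brushed region changes, yet the regeneration can still depend on everything in the picture.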

  • @mayankgupta2937
    @mayankgupta2937 4 months ago

    Your workflows have been amazing. I usually get some errors, but with some small tweaking it gets even better!
    Is there a way to add a LoRA to this? As Flux doesn't support NSFW, with some LoRAs we can adjust the images as needed.

    • @Jockerai
      @Jockerai  4 months ago

      Thank you. Yes, you can: just watch my video titled "Multi-Lora", and you can add the Power Lora Loader to use LoRAs even for inpainting.

  • @ronbere
    @ronbere 3 months ago

    great !!!

  • @RFsalman
    @RFsalman 2 months ago +1

    How would you update the workflow to use a LoRA?

  • @MrDebranjandutta
    @MrDebranjandutta 2 months ago +1

    Hi, how can I inpaint jewellery based on a trained LoRA or a zero-shot method like PuLID?

  • @Djonsing
    @Djonsing 17 days ago

    Is there any way to upload an image here instead of text?

  • @andrewshaaa
    @andrewshaaa 2 months ago

    Awesome video! Thank you! But my processing stops at "Attempting to release mmap (234)" and just sits at 0% without any movement. Can you help with that?

  • @dreamcatcher973
    @dreamcatcher973 3 months ago

    Hi there. Thanks for the video. Don't even try to use these recommendations on your Mac M2 Max - it takes a very long time.

    • @Jockerai
      @Jockerai  3 months ago

      AI image generation and Macs are just enemies 😕

  • @motion_time
    @motion_time 4 months ago

    Nice work

    • @Jockerai
      @Jockerai  4 months ago

      @@motion_time Are you a Persian speaker?

    • @motion_time
      @motion_time 4 months ago

      @@Jockerai Yes.
      And it's really fascinating to me that an Iranian has become so professional in such a new technology.
      I really like your work.
      By the way, we're looking for a specialist for a job opening in AI, and ComfyUI in particular.
      Let me know if you have free time.

    • @Jockerai
      @Jockerai  4 months ago

      @@motion_time Send me a message on tel-egram please: @graphixm

  • @banished8622
    @banished8622 3 months ago

    Thanks for the amazing video!
    I had some pretty good results using your workflow, then I realised the guidance was 3.6 instead of 3.5. I switched it and started getting awful results (a head that doesn't match the body; just now I got a lamp instead of the head). I also tried 2.0 and, again, awful results that match neither the prompt nor the image. Switched back to 3.6 and got good results again. Isn't that weird? Are you actually able to change the guidance and still get good results? Or maybe I'm just being crazy and it's about the seed?

    • @Jockerai
      @Jockerai  3 months ago +1

      @@banished8622 You're welcome, my friend ✨
      Actually, all ControlNet workflows for Flux are still being improved. You have to make many attempts to get a good result. As for guidance, I have no idea why that happens at 3.5; I have had both good and bad results with it.

    • @banished8622
      @banished8622 3 months ago

      @@Jockerai Yeah, I kept trying again and again; I think it actually has more to do with the seed than with the guidance.

    • @Jockerai
      @Jockerai  3 months ago

      @@banished8622 Yes, I think it does.

  • @nuristiqlalzulfarizbinabdu7170
    @nuristiqlalzulfarizbinabdu7170 3 months ago

    Hi, in my Load VAE node I don't have the ae.safetensor option. May I know how to add it?

  • @CraftBlack
    @CraftBlack 3 months ago

    How do I add a LoRA? 🤔

    • @Jockerai
      @Jockerai  3 months ago

      @@CraftBlack In ComfyUI, search for PowerLoraLoader and add it to your workflow. Connect the Load Diffusion Model node and the DualCLIP node to it.

  • @Tobias-gb1hd
    @Tobias-gb1hd 2 months ago +1

    Could I be doing something wrong? I just spent hours testing the workflow, trying numerous combinations of models, and even matched the ones you used exactly, to no avail. It keeps repeating the error: CLIPTextEncode 'NoneType' object has no attribute 'device'.

    • @ag.4937
      @ag.4937 24 days ago

      Check the DualCLIPLoader again: 1) VIT... 2) t5-v1... 3) type: flux!

  • @valorantacemiyimben
    @valorantacemiyimben 3 months ago

    How can I purchase the face swap workflow in this video?

    • @Jockerai
      @Jockerai  3 months ago

      @@valorantacemiyimben Which face swap workflow?

  • @hatimunfiltered
    @hatimunfiltered 4 months ago

    How can I fix this error?
    mat1 and mat2 shapes cannot be multiplied (1x1280 and 768x3072)
    Thanks for the great video!

    • @Jockerai
      @Jockerai  4 months ago

      @@hatimunfiltered What is the size of your image?

    • @hatimunfiltered
      @hatimunfiltered 4 months ago

      @@Jockerai I made it work. I realized what my mistake was... the type was sdxl instead of flux in the CLIP loader, my bad lol

    • @Jockerai
      @Jockerai  4 months ago

      @@hatimunfiltered I have made this mistake several times myself; enjoy the learning experience 😎😁
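The error in this thread comes down to a plain matrix-shape mismatch: with the CLIP loader type set to sdxl, the text encoder emits embeddings of a different width than the Flux model's input layer expects, so the first matrix multiplication fails. A minimal sketch of the shape rule behind the message (the dimensions are taken from the comment above, not from Flux internals):

```python
# Sketch of the matmul shape rule that produces the
# "mat1 and mat2 shapes cannot be multiplied" error.

def matmul_shape(a_shape, b_shape):
    """Return the result shape of a @ b for 2-D shapes, or raise an error
    phrased like PyTorch's when the inner dimensions disagree."""
    if a_shape[1] != b_shape[0]:
        raise ValueError(
            f"mat1 and mat2 shapes cannot be multiplied "
            f"({a_shape[0]}x{a_shape[1]} and {b_shape[0]}x{b_shape[1]})")
    return (a_shape[0], b_shape[1])
```

A (1x1280) embedding against a (768x3072) weight fails exactly this inner-dimension check, which is why switching the loader type to flux (matching the expected width) fixes it.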

  • @AInfectados
    @AInfectados 4 months ago +1

    Just needs a LoRA node ;)

    • @Jockerai
      @Jockerai  4 months ago +1

      Yes, it can be added, which is covered in the "Multi-Lora" video ;) : ua-cam.com/video/-Xf0CggToLM/v-deo.html

    • @AInfectados
      @AInfectados 4 months ago

      @@Jockerai 🤙

  • @generalawareness101
    @generalawareness101 2 months ago

    I tried this to add text to my image and it was a no-go. It changed the image, but added no text.

  • @user-nb6kx3qn3p
    @user-nb6kx3qn3p 4 months ago

    Can you please make a tutorial about how to use a Flux LoRA model trained on Fal in a locally installed ComfyUI? The model trained on Fal doesn't produce a resemblance when using the LoRA in ComfyUI, even with the trigger word.

    • @Jockerai
      @Jockerai  4 months ago +1

      I haven't tested Fal-trained LoRAs yet, but you can use different nodes to test that. Watch my video titled Flux Multi-Lora.

  • @andrino2012
    @andrino2012 3 months ago

    Any way of making it an img2img inpaint? Like, I add an image, mask it, then add another image as a prompt for the AI to replace the masked part?

    • @Jockerai
      @Jockerai  3 months ago +1

      @@andrino2012 The best way for this is not to add a second image; just write a prompt describing the second image and add that to the prompt node.

    • @andrino2012
      @andrino2012 3 months ago

      @@Jockerai I've used Krea's enhancing feature to change the background before, and if I could achieve anything similar with this workflow it would be amazing.
      Btw, thanks man, I love this video and will keep watching your new ones!

    • @Jockerai
      @Jockerai  3 months ago +1

      @@andrino2012 you can change background with this workflow.
      You're welcome bro ✨😉

  • @Biaju_Project
    @Biaju_Project 3 months ago +2

    Where do I add a LoRA?

  • @valorantacemiyimben
    @valorantacemiyimben 3 months ago

    Where are the UNet loader, DualCLIPLoader (GGUF), and Load VAE folders, guys?

    • @Jockerai
      @Jockerai  3 months ago

      @@valorantacemiyimben They are all in the main ComfyUI folder, inside the models folders.

  • @TheOneWithFriend
    @TheOneWithFriend 4 months ago

    If someone needed to make a comic with ComfyUI, what workflow should they use? To capture the characters separately?

    • @Jockerai
      @Jockerai  4 months ago

      You need an appropriate prompt for that. Use the phrase "character sheet" in your prompt.

    • @TheOneWithFriend
      @TheOneWithFriend 4 months ago

      @@Jockerai Is there any good workflow for that? I'm desperately looking around to find one.

    • @Jockerai
      @Jockerai  4 months ago

      @@TheOneWithFriend You can use my workflow in this video : ua-cam.com/video/txDFK-RcUq4/v-deo.html
      and use this LoRA for comics : civitai.com/models/210095/the-wizards-vintage-comic-book-cover

  • @DarioToledo
    @DarioToledo 4 months ago

    Does a partial denoise work? Like, say, 0.70?

    • @Jockerai
      @Jockerai  4 months ago

      Yes, in the Basic Guider node you can set lower denoise values, but 0.7 is very low and the prompt will probably not work well. Set it around 0.85-1.0.
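For intuition, a partial denoise value in an img2img setup can be read as the fraction of sampling that is redone. This is a sketch under a common convention, not the Basic Guider's actual code: denoise d with N scheduler steps runs roughly round(d*N) steps, starting from a partially noised version of the input, so low values keep more of the original and follow the prompt less.

```python
# Illustrative sketch: how a denoise value maps to sampler steps executed.

def steps_to_run(total_steps, denoise):
    """Approximate number of sampler steps executed for a denoise value in
    [0, 1]; denoise=1.0 regenerates from pure noise (all steps run)."""
    if not 0.0 <= denoise <= 1.0:
        raise ValueError("denoise must be between 0 and 1")
    return round(total_steps * denoise)
```

Under this reading, denoise 0.7 on a 20-step schedule leaves only 14 steps to impose the prompt, which matches the advice to stay around 0.85-1.0 when you want the prompt to take effect.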

  • @nuristiqlalzulfarizbinabdu7170
    @nuristiqlalzulfarizbinabdu7170 3 months ago

    Hi, I got a SamplerCustomAdvanced "mat1 and mat2 shapes cannot be multiplied (2016x16 and 64x3072)" error. Can you help me resolve this issue?

    • @Jockerai
      @Jockerai  3 months ago

      Check the Load Diffusion Model node to see whether it is set to Flux or SDXL. Set it to Flux.

  • @rishabhp1762
    @rishabhp1762 4 months ago

    Hi, I just want to know what your PC specs are. I'm about to buy a new laptop with an RTX 4050; how much time do you think it will take to generate an image?

    • @Jockerai
      @Jockerai  4 months ago

      @@rishabhp1762 The time to generate an image depends on many factors like size, models, LoRAs, etc., but in general it takes 88 seconds with the Q8 GGUF Flux model for a 1024x1024 image on the RTX 3060 12GB that I have.

    • @rishabhp1762
      @rishabhp1762 4 months ago

      @@Jockerai ok thank you

    • @Jockerai
      @Jockerai  4 months ago

      @@rishabhp1762 you're welcome

    • @amxaas4450
      @amxaas4450 4 months ago

      Whatever you do, do not get anything with less than 12GB of VRAM.

  • @karankatke
    @karankatke 3 months ago

    I'm getting harsh edges, what should I do?

    • @Jockerai
      @Jockerai  3 months ago

      @@karankatke Increase the mask blur.

  • @Hecbertgg
    @Hecbertgg 4 months ago

    It's working, but I have a problem: it's very slow on Flux Dev fp8, and only with inpainting (around 16 min). When I do txt2img it's 40 seconds. Am I doing something wrong? My GPU is a Radeon RX 7900 XT.

    • @Jockerai
      @Jockerai  4 months ago

      @@Hecbertgg I will make a video tomorrow about a method that's even faster.

  • @rick-deckard
    @rick-deckard 3 months ago

    This isn't meant to detail faces, right? I tried detailing a face like with Fooocus's detailed inpainting, but I get results that look equally low-res.

    • @Jockerai
      @Jockerai  3 months ago

      @@rick-deckard Sometimes you get good results and sometimes not.

  • @minhnguyen-jg6gu
    @minhnguyen-jg6gu 3 months ago

    I have a problem with the node
    "SamplerCustomAdvanced":
    "Allocation on device". How can I fix it? Thanks.

    • @Jockerai
      @Jockerai  3 months ago

      @@minhnguyen-jg6gu What is your system configuration?

  • @wolfgangterner7277
    @wolfgangterner7277 4 months ago

    I am a beginner, so if I ask a stupid question, please excuse me. In your workflow I can paint on the foreground object, but as soon as I try to paint on the background, e.g. a bottle with glasses, nothing happens and I don't get an error message. Am I doing something wrong? I would be very grateful for an answer.

    • @Jockerai
      @Jockerai  4 months ago

      @@wolfgangterner7277 It's totally OK to ask questions, feel free to do so.
      What do you mean by "nothing happens"?

    • @wolfgangterner7277
      @wolfgangterner7277 4 months ago

      @@Jockerai
      If I try to create a bottle with two glasses and I have painted a mask in the background, nothing changes in my picture. Only if I paint on the foreground object, e.g. change the color of a jacket, does something happen.

    • @Jockerai
      @Jockerai  4 months ago

      @@wolfgangterner7277 You have to try changing the prompt or increasing the Flux guidance, and try multiple times to get the right result.

    • @wolfgangterner7277
      @wolfgangterner7277 4 months ago

      Thanks for the tip, now everything works.

    • @Jockerai
      @Jockerai  4 months ago

      @@wolfgangterner7277 Happy to hear that 🤩😉

  • @technicusacity
    @technicusacity 4 months ago

    1:50 The KJNodes pack seems conflicted 🤔

    • @Jockerai
      @Jockerai  4 months ago

      @@technicusacity Yes, I know. I updated all of my custom nodes and some conflicts still remain. It doesn't cause any disruption to our work with ComfyUI.

    • @technicusacity
      @technicusacity 4 months ago +1

      @@Jockerai Just a bit annoying. Sadly, ComfyUI doesn't indicate in which modules the conflict arose, and the workflow works strangely. I tried to describe the flight of a plane over a city, but the result was disappointing: the plane was drawn, but the merging of the original image and the background under the mask does not occur. A blimp, however, was successfully inserted 🙄

  • @steve-w1w9s
    @steve-w1w9s 3 months ago

    RG3 nodes no longer show up :(

  • @mchalst
    @mchalst 3 months ago

    It's working, but it seems it doesn't follow the prompt instructions exactly.

  • @Quraan-114
    @Quraan-114 4 months ago +1

    Prompt outputs failed validation
    UnetLoaderGGUF:
    - Value not in list: unet_name: 'flux1-dev-Q4_K_S.gguf' not in []
    DualCLIPLoaderGGUF:
    - Required input is missing: clip_name1
    - Required input is missing: clip_name2
    VAELoader:
    - Required input is missing: vae_name
    What is the solution to this problem?

    • @Jockerai
      @Jockerai  4 months ago +1

      Make sure you download all the models you need and place them in the proper locations. Then select them in every node in ComfyUI.

  • @PrensCin
    @PrensCin 4 months ago

    This guy is Turkish ^^

    • @Jockerai
      @Jockerai  4 months ago

      @@PrensCin English please

    • @xyzxyz324
      @xyzxyz324 4 months ago

      Turns out he isn't; you got ahead of yourself.

  • @cyberbol
    @cyberbol 3 months ago

    Unfortunately, it's not working for me. I'm using exactly the same models, settings, etc., but my results are horrible. I keep trying and trying, everything exactly like yours, and OMG, lol, a complete nightmare of a result: four hands, smaller, etc.

  • @p_p
    @p_p 4 months ago

    GGUF is faster? What? In my tests GGUF is slower.

    • @Jockerai
      @Jockerai  4 months ago

      @@p_p If your GPU has 16GB or more, it is possible for GGUF to run slower.

    • @p_p
      @p_p 4 months ago

      @@Jockerai Ah, OK, that makes sense. Yeah, a 3090.

  • @FarleyTheCoder
    @FarleyTheCoder 4 months ago

    Lol, is it "G-G-U-F", not "goof"? Think PNG :D lol. First time I've heard this.

    • @Jockerai
      @Jockerai  4 months ago +1

      Spelling out four letters is much harder than saying a simple "goof" 🤩🤩😎 Although there's no specific rule for pronouncing abbreviations... ;)

  • @researchandbuild1751
    @researchandbuild1751 2 months ago

    Inpainting still sucks; it never really gives you what you want, it has a mind of its own. Just look at the suit you put on the guy - it's terrible, no suit fits that tight.

    • @Jockerai
      @Jockerai  2 months ago

      @@researchandbuild1751 There is a new inpainting method which I will make a video about. Indeed, it will be V2,
      which I mentioned in my last Short.

  • @valentynshumakher5842
    @valentynshumakher5842 12 days ago

    Hey man, how can I contact you? Can you share your email?

    • @Jockerai
      @Jockerai  12 days ago

      @@valentynshumakher5842 You can email me. The email is in the channel info, but here it is: jockerai.yt@gmail.com
