How To Edit Any Image With FLUX Dev and FLUX Schnell in ComfyUI - Inpaint/Outpaint & Background Removal

  • Published 1 Feb 2025

COMMENTS • 23

  • @TheSneakyRobot
    @TheSneakyRobot  3 months ago +1

    ComfyUI Workflows And Models: 😁
    Image manipulation Workflow:
    openart.ai/workflows/6rTs9au6d3EXBHijCPwW
    Low Vram GGUF_NF4_FP8-16 Workflow:
    openart.ai/workflows/VOrcINUbEg3Akv7ZQO5Y
    Flux Upscale:
    • ua-cam.com/video/8M4OEGxACQk/v-deo.html
    Install ComfyUI:
    • ua-cam.com/video/Ad97XIxaBak/v-deo.html
    Hyper flux and Workflow introduction:
    • ua-cam.com/video/G62irea95gU/v-deo.html
    Flux Controlnet:
    • ua-cam.com/video/kcq81n9qsiQ/v-deo.html
    Model list:
    NF4 Model:
    huggingface.co/lllyasviel/flux1-dev-bnb-nf4/blob/main/flux1-dev-bnb-nf4-v2.safetensors
    Flux Dev GGUF:
    huggingface.co/city96/FLUX.1-dev-gguf/tree/main
    Flux Dev fp8:
    huggingface.co/XLabs-AI/flux-dev-fp8/blob/main/flux-dev-fp8.safetensors
    T5 GGUF Model:
    huggingface.co/city96/t5-v1_1-xxl-encoder-gguf/blob/main/t5-v1_1-xxl-encoder-Q4_K_M.gguf
    T5 FP8 Model:
    huggingface.co/stabilityai/stable-diffusion-3-medium/tree/main
    NB: you'll need to log in or sign up and agree to the terms of stabilityai to download the fp8 model (see the download sketch after this list).
    Clip VIT:
    huggingface.co/zer0int/CLIP-GmP-ViT-L-14/blob/main/ViT-L-14-TEXT-detail-improved-hiT-GmP-TE-only-HF.safetensors
    Flux hyper model Lora:
    huggingface.co/ByteDance/Hyper-SD
    Flux dev add detail lora:
    huggingface.co/Shakker-Labs/FLUX.1-dev-LoRA-add-details
    jasperai/Flux.1-dev-Controlnet-Upscaler:
    huggingface.co/jasperai/Flux.1-dev-Controlnet-Upscaler/blob/main/diffusion_pytorch_model.safetensors
    jasperai/Flux.1-dev-Controlnet-Depth:
    huggingface.co/jasperai/Flux.1-dev-Controlnet-Depth/blob/main/diffusion_pytorch_model.safetensors
    jasperai/Flux.1-dev-Controlnet-Surface-Normals:
    huggingface.co/jasperai/Flux.1-dev-Controlnet-Surface-Normals/blob/main/diffusion_pytorch_model.safetensors

    Controlnet Depth:
    huggingface.co/Shakker-Labs/FLUX.1-dev-ControlNet-Depth/blob/main/diffusion_pytorch_model.safetensors
    Controlnet Canny:
    huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny/blob/main/diffusion_pytorch_model.safetensors
    Mistoline Controlnet:
    huggingface.co/TheMistoAI/MistoLine_Flux.dev/blob/main/mistoline_flux.dev_v1.safetensors
    Mistoline GitHub:
    github.com/TheMistoAI/MistoControlNet-Flux-dev
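    NB: since the stabilityai repo above is gated, here is a rough Python sketch of fetching a gated file with huggingface_hub after accepting the licence on the model page; the exact filename and target folder are assumptions, so check the repo's file list first:
      # hypothetical download helper, assumes: pip install huggingface_hub
      from huggingface_hub import login, hf_hub_download
      login()  # paste a Hugging Face access token from an account that accepted the licence
      path = hf_hub_download(
          repo_id="stabilityai/stable-diffusion-3-medium",
          filename="text_encoders/t5xxl_fp8_e4m3fn.safetensors",  # assumption: verify against the repo's file list
          local_dir="ComfyUI/models/clip",  # assumption: wherever your text encoders live
      )
      print("saved to", path)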
    NB: The NF4 checkpoint loader, Mistoline node and the perfection styler nodes have to be manually installed. Here are the steps:
    Installing NF4 checkpoint loader:
    Open the Manager, click "Install via Git URL", paste the following URL, click OK, and wait until it's done installing.
    --github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4.git
    Installing perfection styler:
    Open the Manager, click "Install via Git URL", paste the following URL, click OK, and wait until it's done installing.
    --github.com/TripleHeadedMonkey/ComfyUI_MileHighStyler.git
    Installing Mistoline:
    Open the Manager, click "Install via Git URL", paste the following URL, click OK, and wait until it's done installing.
    --github.com/TheMistoAI/MistoControlNet-Flux-dev.git
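    If the Manager route gives trouble, the same three node packs can usually be installed manually by cloning them into ComfyUI's custom_nodes folder. A minimal Python sketch of that manual install, assuming ComfyUI sits in ./ComfyUI and git/pip are available (restart ComfyUI afterwards so the new nodes are picked up):
      # manual install of the three custom-node packs listed above
      import subprocess
      import sys
      from pathlib import Path

      custom_nodes = Path("ComfyUI") / "custom_nodes"  # assumption: adjust to your ComfyUI install path
      repos = [
          "https://github.com/comfyanonymous/ComfyUI_bitsandbytes_NF4.git",
          "https://github.com/TripleHeadedMonkey/ComfyUI_MileHighStyler.git",
          "https://github.com/TheMistoAI/MistoControlNet-Flux-dev.git",
      ]
      for url in repos:
          target = custom_nodes / url.split("/")[-1].removesuffix(".git")
          if not target.exists():
              # same effect as "Install via Git URL" in the Manager
              subprocess.run(["git", "clone", url, str(target)], check=True)
          req = target / "requirements.txt"
          if req.exists():  # install the pack's Python dependencies, if it ships any
              subprocess.run([sys.executable, "-m", "pip", "install", "-r", str(req)], check=True)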

    • @mostafamostafa-fi7kr
      @mostafamostafa-fi7kr 14 days ago

      Hi, I tried your node and I get this error about a missing node. It just says:
      workflow>Flux_Scheduler_Custom2
      I don't know what to do.

  • @metairieman55
    @metairieman55 1 month ago

    Nice explanation of a great design. I always prefer the on/off switches, but you added another gem with the model loaders section off to the left along with the switches, too. Plus the strategically placed ones around the modules, a concept others should use!

  • @baheth3elmy16
    @baheth3elmy16 3 months ago

    Very nice video! Lots of work put into it.

  • @andrino2012
    @andrino2012 3 months ago +1

    Damn bro, that's just perfect!

  • @jemini421
    @jemini421 3 months ago

    Brilliant 🤩

  • @generalawareness101
    @generalawareness101 2 months ago

    I can't get inpainting text onto an image.

  • @aelendys4401
    @aelendys4401 3 months ago

    Awesome! +1 subscriber!
    As a total newbie in ComfyUI, I must say it's a bit complicated, but a big thank you for your very clear explanations! Just to be sure: we can enable or disable each node group (Object Remover, SAM2 Inpainting (AUTO), etc.), but if I want to use SAM2 Inpainting (AUTO), the "Inpainting (MANUAL)" group is still necessary (because that's where the image is loaded), right?

  • @Wambalfa
    @Wambalfa 3 months ago

    After installing the NF4 checkpoint loader via git, the node is still red. Even a complete restart doesn't help 🥲

  • @andrzejsomiany1987
    @andrzejsomiany1987 3 months ago

    Hi, thank you for your workflow, it's great.
    I got an error that reads:
    ImpactSwitch
    Node 1404 says it needs input input1, but there is no input to that node at all
    What could be the cause?

  • @mayankgupta2937
    @mayankgupta2937 3 months ago

    The auto inpaint won't execute; I get the error below:
    ImpactSwitch
    Node 1404 says it needs input input1, but there is no input to that node at all
    For SAM2 it's able to recognize and mask, but there is no output image.

  • @naserazmoon2670
    @naserazmoon2670 3 months ago

    Hello, I would like to know if it is possible to replace a masked area of an image with my own image. For example, could I replace a painted-over area of clothing with my own clothing?

  • @rifz42
    @rifz42 3 months ago

    Subbed for including the workflow file! :) Thanks.
    For future videos, please don't add music. Tutorials should not have music; it's so distracting from what is being said.

  • @startplayertwo8649
    @startplayertwo8649 3 months ago

    Hey, can you make a workflow for the Flux style-aligned reference sampler?

  • @leesangin6270
    @leesangin6270 3 months ago

    What kind of GPU do you use, mate?

  • @AInfectados
    @AInfectados 3 months ago

    Why not include the CONTROLNET INPAINT model?

    • @TheSneakyRobot
      @TheSneakyRobot  3 months ago +1

      It's still in beta. I tried it and tested it for a couple of days, and it's not as good as I hoped, but they said they will be releasing a better version soon. I will incorporate it as soon as they do.

  • @profitsmimetiques8682
    @profitsmimetiques8682 3 months ago

    The text-to-image prompt box does not work for me in the complete workflow.
    Failed to validate prompt for output 137:
    * SDXLPromptStylerPreview 155:
    - Value not in list: style: '{'content': 'base', 'preview': 'K:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-Prompt-Preview\\style-preview\\base.png'}' not in (list of length 107)
    Output will be ignored
    Failed to validate prompt for output 138:
    Output will be ignored
    Using pytorch attention in VAE
    Using pytorch attention in VAE
    Prompt executed in 0.47 seconds
    Everything is downloaded and up to date.
    UPDATE: OK, so in the prompt styler you must select "base" or it won't start.
    Now I have this message: "Node 1404 says it needs input input1, but there is no input to that node at all"... but input 1 is selected, which is the text-to-image.
    Thanks for your work, but it's honestly frustrating: there's always something not working, and I waste hours just trying to make it work. Here I've disabled everything except text-to-image, but this error keeps happening.
    Also, you should remove or change the prompt generator; it's not usable since it takes forever to load. Searge LLM is better, but I guess both this and the image-to-prompt should be removed. Everybody is using ChatGPT for both prompting and image recognition; it's faster and more customizable, so keeping them just makes the workflow complex and not really useful.

    • @mayankgupta2937
      @mayankgupta2937 3 months ago

      Did you find any answer for "Node 1404 says it needs input input1, but there is no input to that node at all... but it's selected as input 1, which is the text-to-image"?