FLUX TOOLS - Run Local - Inpaint, Redux, Depth, Canny

  • Published 23 Nov 2024

COMMENTS •

  • @OlivioSarikas  2 days ago +1

    #### Links from my Video ####
    Get my Shirt with Code "Olivio" here: www.qwertee.com/
    blackforestlabs.ai/flux-1-tools/?ref=blog.comfy.org
    huggingface.co/black-forest-labs/FLUX.1-Canny-dev-lora
    huggingface.co/black-forest-labs/FLUX.1-Depth-dev
    huggingface.co/black-forest-labs/FLUX.1-Redux-dev
    huggingface.co/black-forest-labs/FLUX.1-Fill-dev
    comfyanonymous.github.io/ComfyUI_examples/flux/

    • @LouisGedo  2 days ago

      👋 hi

    • @Riker20  1 day ago

      I hate the spaghetti program

    • @jonrich9675  1 day ago

      Where do the folders go? + maybe do a Forge version?

  • @stefanoangeliph  1 day ago +1

    I have been testing the Depth lora, but the output is very far from the input image. Does not seem to work as the controlnet depth does. Even in your video, the 2 cars have similar position, but they are not sharing the same "depth": the input red car is seen from a higher position than the output one. In my test (a bedroom) the output image sometimes is "reversed". Is this expected? Does it mean that these two Canny and Depth are far from how ControlNet works?

  • @ericpanzer8159  2 days ago +6

    I would recommend lowering your Flux guidance and trying DEIS/SGM_uniform or Heun/Beta to reduce the plastic skin appearance. The default guidance for Flux in sample workflows is *way* too high. For example, 3.5 is the default, but 1.6-2.7 yields superior results.

    • @jorolesvaldo7216  4 hours ago

      Yeah, but just clarifying that it is usually better for REALISTIC prompts. With vector, anime and flatter styles, keep guidance higher (like 3.5) in order to avoid unwanted noise. Just in case someone reading this gets confused

    • @ericpanzer8159  1 hour ago

      @@jorolesvaldo7216 your point is well taken! And the opposite is true for painting styles. Flux is weird these ways :P
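
(Editor's note: the guidance advice in the exchange above can be sketched as a small, purely hypothetical helper. The style names and number ranges are the commenters' suggestions, not official Flux documentation; treat them as starting points to experiment with, and note that the "painting" entry is one reading of the follow-up reply.)

```python
# Hypothetical helper capturing the FluxGuidance ranges suggested in this thread.
def suggested_flux_guidance(style: str) -> float:
    """Return a starting FluxGuidance value for a given prompt style."""
    ranges = {
        "realistic": (1.6, 2.7),  # lower guidance reduces plastic-looking skin
        "anime": (3.5, 3.5),      # flatter styles: keep guidance higher to avoid noise
        "vector": (3.5, 3.5),
        "painting": (1.6, 2.7),   # per the follow-up, painterly styles also like it lower
    }
    lo, hi = ranges.get(style, (3.5, 3.5))  # default to the stock workflow value
    return (lo + hi) / 2
```

Plug the returned value into the FluxGuidance node, then adjust by eye from there.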

  • @yapyh2872  9 hours ago

    Do you think there will be a GGUF version of the regular Flux model with the inpainting feature in the future, for low-VRAM users?

  • @asdfwerqewsd  1 day ago +1

    Are there going to be GGUF versions of these models?

  • @KDawg5000  2 days ago

    Finally playing w/this a bit. I wish the depth map nodes would keep the same resolution as the input image. I'm sure I could just use some math nodes to do that, but seems like it would be automatic, or a checkbox on the node. This matters in these setups because the input controlnet image (depth/canny) drives the size of the latent image, thus the size of your final image.
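
(Editor's note: the "math nodes" idea above can be sketched like this. It assumes dimensions should snap to a multiple of 16, a common requirement for Flux latents; the function names are made up for illustration.)

```python
# Compute a latent-friendly size from the input image, so the depth/canny
# preprocessor output (and thus the final image) keeps the input resolution.
def snap_to_multiple(value: int, multiple: int = 16) -> int:
    """Round a dimension to the nearest multiple (never below one multiple)."""
    return max(multiple, round(value / multiple) * multiple)

def latent_size_from_input(width: int, height: int) -> tuple[int, int]:
    """Return the width/height to feed the preprocessor and the empty latent."""
    return snap_to_multiple(width), snap_to_multiple(height)
```

In ComfyUI this maps to feeding the input image's width/height through math nodes into the preprocessor's resolution input and the empty latent node.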

  • @tukanhamen  12 hours ago

    I'm getting the "shapes cannot be multiplied" error for some reason, and I don't know why; I have everything set up properly.

  • @therookiesplaybook  17 hours ago

    Is there a way to adjust the depth map so Comfy doesn't take it so literally? And how do you set up a batch of images so you don't have to do one at a time?

  • @FlyingCowFX  1 day ago

    I am seeing very grainy results with the Flux Fill model for inpainting; I wonder if it's my settings or the model.

  • @therookiesplaybook  1 day ago +2

    What am I missing? The output image doesn't match the input at all when I do it.

    • @stefanoangeliph  1 day ago

      Same here... Depth and Canny seem not to work like a controlnet. I am confused.

    • @therookiesplaybook  17 hours ago

      @@stefanoangeliph I updated Comfy and it's working now.

  • @geyck  2 days ago +2

    Can you do OpenPose yet for Flux-Forge?

  • @Zegeeye  1 day ago

    To make the REDUX model work, you have to add a node to control the strength.

  • @gimperita3035  1 day ago

    I'm using SD 3.5 L for the Ultimate Upscaler - with a detailer Lora - and it works fantastic!

  • @zebmac  1 day ago

    Great video! Redux, Depth and Canny (have not tried Fill yet) work with the Pixelwave model too.

  • @tats5850  1 day ago

    Thank you for the video. The inpaint looks promising. Do you think the 24GB inpainting model will work with a 4060 Ti (16GB of VRAM)?

  • @ian2593  1 day ago +1

    Mine threw up an error when running through Canny Edge but not with Depth Anything. If I disconnect it, run the process once and then reconnect and run again, it works. It says I'm trying to run conflicting models the first time, even though everything exactly matches what you're running. Just letting others who might have the same issue know what to do.

    • @SenshiV  1 day ago

      Got this too, and your tip helped, thanks.

  • @shinycgi7825  5 hours ago

    FIX (output not matching the input at all): if it's not working for you, add a small CN chain (Load ControlNet + Apply ControlNet) between the Pix2Pix node and the KSampler. Feed the depth or canny image to both the Pix2Pix node AND the ControlNet as usual. That way it will work 100%; just play around with the settings. You're welcome.

  • @David.Charles.  2 days ago +3

    I just noticed Olivio has a mouse callus. It is a true badge of honor.

  • @user-hi3ke6qh7q  2 days ago

    Those tabs and that mask shape was wild. Thanks for the info :)

  • @AlexeySeverin  2 days ago

    Thanks for sharing! That's great news! Let's see if native control nets work better... As it usually happens with FLUX, some things just don't seem to make a lot of sense... Like what on Earth is with Flux Guidance 10? Or 30??! Also, why do we need a whole 23GB separate model just for the inpainting (which we can already do with the help of masking and differential diffusion anyways). Why? So many questions, Black Forest Labs, so many questions...

    • @AlexeySeverin  2 days ago

      I edited my reply because I realized there's also a Lora for depth, so, my bad. But the rest is still valid, why does Flux have to be so wild?? :)))

  • @Skettalee  2 days ago +1

    Great video and I want you to know that I really like your shirt!

    • @OlivioSarikas  2 days ago +1

      Thank you :) I put a link to it in my info :)

    • @Skettalee  2 days ago

      @@OlivioSarikas Seen that, gonna get me one too!

  • @Osama-xs8cl  1 day ago

    Hello Olivio, what is the minimum GPU VRAM that can run Flux in ComfyUI?

  • @researchandbuild1751  1 day ago

    How did you know you need a visual CLIP model?

  • @mikrobixmikrobix  1 day ago

    I have problems installing many nodes (Depth Anything). Which version of Python do you use? I have 3.12 included with Comfy, and I often have this exact problem.

    • @OlivioSarikas  1 day ago +1

      Comfy is self-contained, meaning it comes with the correct Python it needs. However, if you have run it for a long time, I would rename the Comfy folder and download it fresh. You need to reinstall all custom node packs and move the models over, but it is worth it.

    • @mikrobixmikrobix  1 day ago

      @@OlivioSarikas Hmm... it's a new installation and it gives me the error "AttributeError: module 'pkgutil' has no attribute 'ImpImporter'". GPT says it's because I should use Python 3.10.

    • @OlivioSarikas  1 day ago +1

      @@mikrobixmikrobix Best to ask in my Discord. I'm not good at tech support and often ask there myself.
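
(Editor's note: the "rename and reinstall" advice in this thread can be sketched as below. The paths and folder layout are assumptions about a typical ComfyUI install; adjust to your own setup, and back up before moving anything.)

```python
# Copy model files from an old ComfyUI tree into a fresh checkout, preserving
# the models/ subfolder structure (checkpoints, loras, clip_vision, ...).
import shutil
from pathlib import Path

def migrate_models(old_root: str, new_root: str) -> list[str]:
    """Copy every file under old_root/models into new_root/models."""
    copied = []
    old_models = Path(old_root) / "models"
    new_models = Path(new_root) / "models"
    for src in old_models.rglob("*"):
        if src.is_file():
            dst = new_models / src.relative_to(old_models)
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)  # preserves timestamps
            copied.append(str(dst))
    return copied
```

Custom node packs still need to be reinstalled through the Manager so their Python dependencies are set up against the fresh environment.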

  • @alpaykasal2902  2 days ago

    GREAT t shirt.... and episode, as always.

  • @jaywv1981  2 days ago

    Using that same workflow for inpainting, I'm getting an error that it's missing the noise input.

  • @AdvancExplorer  2 days ago

    Does it work with GGUF Flux models?

  • @FrankWildOfficial  2 days ago +1

    Can we use the inpainting model together with a LoRA trained on the regular dev model?
    This would be a game changer, because then 2 consistent unique characters in one image would be possible 🥳

    • @Elwaves2925  2 days ago

      I don't know but it's definitely worth a try. Just a pity it requires the full model.

    • @OlivioSarikas  2 days ago +1

      I haven't tried it, but I don't see why it shouldn't work.

    • @Darkwing8707  2 days ago

      @@Elwaves2925 It doesn't. You can convert it to fp8 yourself or grab it off of civitai.

  • @jiexu-j9w  2 days ago

    Does Redux work with the GGUF Q4 version? As I only have 8GB VRAM.

  • @middleman-theory  2 days ago

    My dawg, that shirt. Love it.

  • @mateuszpaciorek7219  2 days ago

    Where can I find all the workflows that you're using in this video?

  • @KK47..  2 days ago

    Thank you Again, OV

  • @CHATHK  2 days ago

    6:11 I'm not sure what's wrong, but the Redux output image comes out blurry.

  • @bause6182  2 days ago

    Can you run this with 12gb vram with gguf q4 flux ?

  • @LydianMelody  2 days ago

    I need that shirt :O (edit: oh hello link! Thanks!!!)

  • @486DX  2 days ago

    2:10 What is "fp8_e4m3fn_fast" and where can I download it?

    • @OlivioSarikas  2 days ago

      Did you update your ComfyUI? For me it was just there.
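
(Editor's note: "fp8_e4m3fn" is not a separate download but a weight dtype option: an 8-bit float with 1 sign bit, 4 exponent bits and a 3-bit mantissa, where the "fn" variant has no infinity and only one NaN pattern; the "_fast" suffix, as I understand it, enables a faster fp8 computation path on supported GPUs. A small decoder as an illustration:)

```python
# Decode one float8_e4m3fn byte into a Python float (exponent bias 7,
# maximum finite value 448, no infinities, single NaN pattern 0x7F/0xFF).
def decode_e4m3fn(byte: int) -> float:
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0x0F
    mant = byte & 0x07
    if exp == 0x0F and mant == 0x07:
        return float("nan")                   # the only NaN pattern in e4m3fn
    if exp == 0:
        return sign * (mant / 8) * 2 ** -6    # subnormal numbers (and zero)
    return sign * (1 + mant / 8) * 2 ** (exp - 7)
```

With only 256 representable values, fp8 halves the memory of fp16 weights at some cost in precision, which is why it's offered as a loading option for large models like Flux.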

  • @CHATHK  2 days ago

    On time!!

  • @Gli7chSec  2 days ago +10

    I just want video generation in forge FLUX

  • @bobobaba2080  1 day ago

    I get this error while loading CLIP vision: "CLIPVisionLoader: Error(s) in loading state_dict for CLIPVisionModelProjection", even though I downloaded this file (siglip-so400m-patch14-384.safetensors, 3.4 GB) and this file (sigclip_vision_patch14_384.safetensors, 836 MB) and placed them in my ComfyUI\models\clip_vision directory. Anyone know what I should do?

  • @FusionDeveloper  2 days ago

    Great video.

  • @forgottenwisdoms  2 days ago

    Easiest way to run Flux on Mac in Comfy?

  • @Showbiz_Music_Official  2 days ago

    What about Forge integration?

  • @Kvision25th  1 day ago

    Flux is so all over the place :/ guidance 30 :D

  • @blutacckk  2 days ago

    Would my 3070 8gb be able to run flux?

    • @OlivioSarikas  2 days ago +1

      I was told yes. You might need a GGUF model though, which has to go into the unet folder and needs the UNET loader. But better to ask in my Discord.

    • @CHATHK  2 days ago

      @@OlivioSarikas What about a 3080 Ti 12GB?

  • @MillennialKiwiGamer  8 hours ago

    comfyui AGAIN

  • @thedevilgames8217  2 days ago

    Why is everything Comfy?

    • @OlivioSarikas  2 days ago +1

      Because it gets everything first and it's the best UI to try new things.