FLUX ControlNet EASY Workflow for ComfyUI GGUF Models

  • Published 11 Jan 2025

COMMENTS • 100

  • @rookandpawn • 2 months ago

    You inspire me to make tutorial vids that are meant for everyone. Great job! Subscribed, love your channel!

    • @goshniiAI • 2 months ago

      One of the best comments I've read. Thank you for your motivation. I appreciate it, and keep up your amazing work.

  • @hollyj.3718 • 4 months ago +6

    Please never stop making videos like this, I've watched dozens of videos and this is the ONLY one I can 100% fully understand. Everything about Flux right now is so hard to keep up with and know what to install, what to wait on, etc. Thank you!!

    • @goshniiAI • 4 months ago

      Thank you very much for the encouraging words! I'm glad to hear that the tutorial was clear and helpful for you.
      With all of the AI updates and configurations available, it can be overwhelming.

  • @fungus98 • 4 months ago +3

    Great tutorial! I love it when people who know what they are doing walk you through the entire process instead of talking you through a pre-loaded workflow.

    • @goshniiAI • 4 months ago +1

      Many thanks for that. I truly appreciate that you found the process useful.

  • @yetbog • 4 months ago +3

    Your tutorials are really good and helpful; you deserve more subscribers. Great video, man!

    • @goshniiAI • 4 months ago

      Thank you so much, I appreciate the support.

  • @lockos • 4 months ago +2

    Your video is very clear, easy to understand, detailed, and well explained. GG man, keep on working like this. I'm subscribing!

    • @goshniiAI • 4 months ago

      Thank you so much for your support, kind words, and for subscribing!

  • @Toledo43 • 4 months ago +1

    I was using ComfyUI installed by Pinokio, but it was giving an error in the custom nodes. I just had to install the portable version of ComfyUI and everything worked. Great video, thank you very much :)

    • @goshniiAI • 4 months ago +1

      I'm glad you found a solution with the portable version! This could really help others facing the same issue. Thank you for sharing your experience.

  • @SavetheRepublic • 4 months ago +1

    Great video, love how you broke it down.

    • @goshniiAI • 4 months ago

      Your encouraging feedback means a lot. Thank you!

  • @MaxSelichkin • 4 months ago

    Thank you, kind person! A very useful and easy-to-follow video!

    • @goshniiAI • 3 months ago +1

      I am glad you found it useful and easy to understand. Thank you for your feedback.

  • @JoeBurnett • 4 months ago +3

    Great video! Thank you!

    • @goshniiAI • 4 months ago

      You are welcome, and thank you for the compliment!

  • @kaziahmed • 1 month ago +1

    I can't get the XLabs sampler to work; I'm getting this error: "XlabsSampler, Allocation on device."
    The Flux GGUF model works standalone, but the XLabs sampler is not working. Is there a way to fix that, or an alternative?

    • @goshniiAI • 1 month ago

      That error usually relates to memory allocation: the GPU is running out of VRAM. Try restarting ComfyUI, and close unnecessary applications to free up GPU memory. You can also reduce the image resolution or the number of steps in the sampler.
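
      If that isn't enough, ComfyUI's low-VRAM launch flags may also help. A minimal sketch, assuming the Windows portable build (adjust the paths to your install):

        rem run ComfyUI with aggressive model offloading to system RAM
        .\python_embeded\python.exe -s ComfyUI\main.py --lowvram
        rem if allocation still fails, --novram offloads even more (slower)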

    • @slobodanblazeski0 • 25 days ago

      @goshniiAI I'm having the same problem: RTX 2070 with 8GB VRAM. GGUFs work alone, but not with ControlNet.

  • @AgustinCaniglia1992 • 4 months ago +1

    Nice, thank you!

    • @goshniiAI • 4 months ago

      I appreciate your feedback; you are welcome.

  • @Lukasz_Stan • 3 months ago

    Great and simple tutorial. Would this workflow be suitable for photorealistic architecture?

    • @goshniiAI • 3 months ago +1

      Absolutely! But you'll need to use the MLSD model for ControlNet, which isn't available for Flux yet; hopefully it will be soon.

  • @Valket • 4 months ago +1

    Why does the positive prompt have two parts at 6:00, one for the clip_l and one for the t5? I don't get it.
    Sorry for so many questions; I'm starting out and it's overwhelming.
    Also, I have seen that Flux doesn't do negative prompts.

    • @goshniiAI • 4 months ago +2

      No worries, things will become much clearer with time. By providing the same prompt in both the clip_l and t5xxl boxes, you combine the strengths of two encoders: one tuned for image-text alignment and another for a deeper understanding of language. This can lead to more precise, detailed, and contextually rich image creation, making it an effective technique for prompts that need both visual precision and rich linguistic context.
      FLUX's training does not yet focus on the "negative conditioning" strategy; the model was built around generating high-quality images from positive prompts alone. Negative prompts are likely on the roadmap but have yet to be established as the model improves and adds new features over time.
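
      For illustration, the two boxes in the CLIPTextEncodeFlux node would simply carry the same text (the prompt below is only an example):

        clip_l: "a woman eating ice cream, sharp focus, natural light"
        t5xxl:  "a woman eating ice cream, sharp focus, natural light"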

    • @Valket • 4 months ago

      @goshniiAI Hey, thanks a lot :>

  • @psznt • 1 month ago

    This is awesome, thanks. Do you have a workflow for img2img?

    • @goshniiAI • 1 month ago

      Thank you so much! Currently, I don't have a specific workflow for img2img, but I'll consider exploring it in future videos. Thanks for the great suggestion!

  • @madballdesign • 25 days ago

    So with the GGUF version of the base model, a normal ControlNet workflow will give strange results, right? Must I use the XLabs loader?

  • @chuanruhsieh7618 • 2 months ago

    I get the error message: "AIO_Preprocessor No operator found for `memory_efficient_attention_forward` with inputs: query : shape=(1, 1740, 16, 64) (torch.float32) key : shape=(1, 1740, 16, 64) (torch.float32) value : shape=(1, 1740, 16, 64) (torch.float32) attn_bias : p : 0.0 `decoderF` is not supported because: xFormers wasn't build with CUDA support ..."

  • @popular75 • 3 months ago +1

    I found that the InstantX canny is not compatible with the XLabs canny.

  • @Prajwal____ • 2 days ago

    How do I use a depth image I created in Blender? Thank you so much!!

    • @goshniiAI • 2 days ago +1

      You should load the depth image into the Load Image node the same way, but without using the preprocessor node. Hopefully, I will upload a video about that in an upcoming post.
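
      In other words, the wiring would look roughly like this (a sketch only; node names follow the video's workflow, and the preprocessor is skipped because the render is already a depth map):

        Load Image (Blender depth render) --> Apply Flux ControlNet (image input)
        (no AIO_Preprocessor node in between)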

  • @uxairehsan • 4 months ago

    At 5:10, in the DualCLIPLoader (GGUF) clip_name2 field:
    where can we download clip_l.safetensors, and where should we save the file?
    And at 6:12, Load VAE: where can we download ae.sft, and in which location should we save it?

    • @goshniiAI • 4 months ago

      Hello there, and thank you for making me aware of that. The process for finding the clip_l model and the Load VAE file, and installing them, can be seen in this video: ua-cam.com/video/P1uDOhUTrqw/v-deo.htmlsi=tnmCyg-cO3XppAxo
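
      For reference, these files usually live in the standard ComfyUI model folders. A sketch of the common layout (your install path may differ):

        ComfyUI/models/clip/clip_l.safetensors                 <- CLIP-L text encoder
        ComfyUI/models/clip/t5-v1_1-xxl-encoder-Q4_K_S.gguf    <- T5 encoder (GGUF builds)
        ComfyUI/models/vae/ae.sft                              <- Flux VAE
        ComfyUI/models/unet/flux1-dev-Q4_0.gguf                <- GGUF UNet (ComfyUI-GGUF loader)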

    • @uxairehsan • 4 months ago

      @goshniiAI I did it and am still getting the error.

  • @heno02 • 4 months ago

    Great tutorial, but when changing to your workflow using the GGUF quant models, my generation time went up from 4 minutes / 13 minutes for Flux Schnell / Flux Dev FP8 respectively to 54 minutes per generation... (using a 64GB Ryzen 9 with NVMe and a 2070 Super GPU).

    • @goshniiAI • 4 months ago +1

      GGUF quant models are designed to help with VRAM efficiency, but they can sometimes come with a trade-off in speed. You could try experimenting with different samplers. Another tip is to double-check your batch size and ensure there are no heavy processes running in the background that could slow things down.

  • @javier-medel • 4 months ago

    Excellent video. Could you upload a video on how to use Florence-2 in ComfyUI? Thank you!

    • @goshniiAI • 4 months ago

      I will look into your suggestion. Thank you very much for your compliment.

  • @gardentv7833 • 4 months ago +1

    Thanks a lot!

    • @goshniiAI • 4 months ago

      You're very welcome.

  • @kiransurwade3576 • 4 months ago +1

    I have a potato PC with a GTX 1060 6GB card... which GGUF model should I use for good ControlNet output?

    • @goshniiAI • 4 months ago +1

      I recommend the quantised model (flux1-dev-Q4_0) with the dual CLIP in FP8.
      That should provide a good balance of performance and output quality.
      You can also keep your batch sizes low and the resolution moderate to reduce VRAM pressure.

  • @netneo4038 • 4 months ago

    I'm quite the newbie to AI generation. I have a 3080 Ti and the images are taking a very, very long time to generate; I'm using the FluxDevF16 UNet and the FP8 clip. Should I be changing the UNet to a smaller one to speed up generation times? If so, which one would be a good step down? Thanks for all the help!

    • @goshniiAI • 4 months ago +1

      Your 3080 Ti is an excellent card, but the F16 UNet is heavy, and with FLUX models even loading a different UNet takes a long time. One option is to use a GGUF model, which offers a good speed-to-quality ratio, so you can increase speed without losing too much detail.

  • @braedongarner • 4 months ago

    It looks great when I don't have the depth ControlNet hooked up. As soon as I do, it makes the image look like a thousand tiny blurry squares. Any idea what could be causing this?

    • @goshniiAI • 4 months ago

      If the ControlNet strength is too high, it can overwhelm the original image. Try lowering the weight slightly to see if it balances things out. Also, make sure the preprocessor matches the depth model you're using; a mismatch can sometimes cause distortions like the blurry squares.

    • @braedongarner • 4 months ago

      @goshniiAI Got it, I'll try that. Thanks for getting back to me so quickly on this ✊

    • @goshniiAI • 4 months ago +1

      @braedongarner You're very welcome!

  • @twalling • 4 months ago

    I get this error each time I try to queue the prompt: "upsample_bicubic2d_out_frame" not implemented for 'BFloat16'. Any ideas how to fix it?

    • @goshniiAI • 4 months ago

      Hello there! You might want to try switching to FP16 or FP8 if your hardware can handle it, or ensure you're using the latest GGUF model that's compatible with your setup.
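
      One way to force that is at launch time; a minimal sketch, assuming the Windows portable build:

        rem force fp16 computation instead of bf16 (portable build paths assumed)
        .\python_embeded\python.exe -s ComfyUI\main.py --force-fp16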

  • @zombieploios • 3 months ago

    Flux Dev NF4 v2 or Flux Dev GGUF: which one do you think offers more quality? I'm particularly interested in generating good-quality hands with canny or line art.

    • @goshniiAI • 3 months ago +1

      In order of best quality: 1. the Flux Dev FP8 model, 2. GGUF quantised models, and 3. Flux NF4.

    • @zombieploios • 3 months ago

      @goshniiAI Legend! Thank you 🙏

  • @BabylonBaller • 4 months ago

    Was wondering if the GGUF models can be used with LoRAs. I'd like to ControlNet an image but use the custom character I have in a LoRA.

    • @goshniiAI • 4 months ago

      Yes, without a doubt! Your custom character LoRA can be used with GGUF models, and the details can be adjusted with ControlNet. To load the GGUF model and the LoRA at the same time, just check that your ComfyUI workflow is wired correctly.
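
      As a sketch, the model path in the workflow would chain roughly like this (standard ComfyUI nodes; the LoRA filename is a placeholder):

        Unet Loader (GGUF) --> LoraLoaderModelOnly (my_character.safetensors) --> XLabs Sampler (model input)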

    • @BabylonBaller • 4 months ago

      @goshniiAI Thank you!

  • @wrillywonka1320 • 3 months ago

    I sure would love to use Flux, but my i9 / Nvidia 4070 Ti isn't strong enough... apparently.

    • @goshniiAI • 3 months ago +1

      That setup is really powerful. You might need to make some tweaks to boost the performance, or try using quantized GGUF models. Keep trying.

    • @wrillywonka1320 • 3 months ago

      @goshniiAI What is a quantized GGUF? You should make a video on the tweaks that can be made. I've come across others with the same issue.

  • @chouawarasteven • 4 months ago

    Bro, I keep getting an insufficient memory error, meanwhile I have an RTX 3050 with 4GB VRAM.
    Should I just give up?

    • @goshniiAI • 4 months ago

      Don't give up just yet! The RTX 3050 with 4GB VRAM can be a bit limiting, but there are ways to make it work.
      Try using a lower-resolution input or a smaller model that's more manageable for your GPU. You could also experiment with more heavily quantised models and FP8, which can help ease the load on your VRAM. It might take some tweaking, but you can get results without sacrificing too much quality. Stay strong!

  • @zachary3603 • 4 months ago

    How is this working with the image-to-image strength set to 0? Mine says "out of index" on the XLabs sampler; I needed to change it to 1. What would you say the ideal value is?

    • @betortas33 • 4 months ago

      Make sure the node connected to the latent image input of the XLabs sampler is Empty Latent Image and not EmptySD3LatentImage. That may fix it.
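
      That is, the latent input should be wired like this (a sketch using the node names mentioned above):

        Empty Latent Image  --> latent_image (XLabs Sampler)   works
        EmptySD3LatentImage --> latent_image (XLabs Sampler)   can throw "out of index"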

    • @zachary3603 • 4 months ago +1

      @betortas33 Genius, thanks very much for your help. Worked a treat!

    • @goshniiAI • 4 months ago +1

      Thank you for the additional information.

  • @KINGLIFERISM • 4 months ago

    Has anyone tried this without the XLabs KSampler? XLabs was giving me errors.

    • @goshniiAI • 4 months ago

      If there are any issues, consider updating your nodes or your ComfyUI setup to the most recent version.
      You can also run the queue prompt again after the error; I've discovered that it sometimes renders the prompt on the second attempt.

  • @arron122 • 4 months ago

    Which of the GGUF models should I use on a 4060 Ti 16GB? The results I get are still pretty poor on my end using the non-GGUF models. Maybe I just have to give it more time.

    • @goshniiAI • 4 months ago

      I used flux1-dev-Q4_0 in the video. If you haven't already, give it a try.

    • @arron122 • 4 months ago

      @goshniiAI What GPU are you running the model on?

    • @goshniiAI • 4 months ago

      @arron122 I'm running ComfyUI on an Nvidia RTX 3060.

    • @lateralus1073 • 4 months ago

      @goshniiAI I can't seem to get the FP16 to run; it seems stuck at CLIPTextEncodeFlux even after 10 minutes, but FP8 finishes in under 2 minutes. Using a 3080. How are you able to run this with FP16 on a 3060??

    • @ShubzGhuman • 4 months ago

      @lateralus1073 Seems like he has more VRAM. In mine I used t5-v1_1-xxl-encoder-Q6_K.gguf; now gonna try t5-v1_1-xxl-encoder-Q4_K_S.gguf.

  • @Lucas-uk6fj • 4 months ago

    Where can I find pictures of eating ice cream?

    • @goshniiAI • 4 months ago

      Thank you for mentioning it; the link is right here: tinyurl.com/49zdpkv5

  • @ValleStutz • 4 months ago +1

    What's your opinion of FLUX? Tbh, I don't get the hype. I still prefer SDXL models + a tiled upscaler: better quality, more detail, and yet much more controllable.

    • @goshniiAI • 4 months ago +2

      You're right, Valle: SDXL models and tiled upscalers offer excellent quality and control. Still, I've found that FLUX prompting produces some truly amazing visual results, and I believe that once additional nodes have been built, it will become much more controllable. The processing time is something to think about, but using quantised models can help you overcome that problem.

  • @Exagerardo • 4 months ago

    THE XLABS SAMPLER DOESN'T WORK WELL (OUT OF RANGE)

    • @goshniiAI • 3 months ago

      Thank you for pointing this out! This can happen due to particular model settings or conflicting parameters. An easy fix could be changing the ControlNet strength or trying a different sampler.

  • @rogersnelson7483 • 4 months ago

    USELESS for me. 64 minutes for 1 image. FLUX is just too hardware-intensive for almost anything. 8GB is just not enough to do anything except a basic create-image workflow. I've been testing for about 3 weeks; it's 10 to 100 times slower than SDXL depending on what you are doing.

    • @goshniiAI • 4 months ago +1

      The hardware requirements can make it challenging for more intensive workflows. I understand your frustration.
      If you're open to it, experiment with different settings; using lower resolutions with SD 1.5 still works great.
      Another option is to use the GGUF model, which delivers a good speed-to-quality ratio.

    • @Valket • 4 months ago

      I am just starting out, but 8 gigs of VRAM is not bad: 1024 pixels takes one minute (using the Schnell FP8). Should I try the SDXL stuff?

    • @sirjcbcbsh • 4 months ago

      What GPU are you using?

    • @goshniiAI • 4 months ago

      @sirjcbcbsh I have an RTX 3060.

    • @sirjcbcbsh • 3 months ago

      @goshniiAI I have an RTX 4070 Ti Super with 8GB VRAM and 16GB memory. Which version of Flux would you recommend that doesn't take like 20 minutes per image?

  • @SPOONCYBER • 2 months ago

    Hello,
    Nice work, but I have a problem with UnetLoaderGGUF, with this error:
    "UnetLoaderGGUF: `newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead."
    Do you have a solution to this problem?

    • @goshniiAI • 2 months ago

      You can fix this from the command terminal by installing an older version of NumPy; anything under 2.0 should help.
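
      A minimal sketch of that downgrade, assuming the Windows portable build (in a regular venv, plain `pip` works the same way):

        rem pin NumPy below 2.0 inside the embedded Python
        .\python_embeded\python.exe -m pip install "numpy<2"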

  • @SOLOLEVELING_666 • 4 months ago +1

    NICE!!!!

    • @goshniiAI • 4 months ago

      I appreciate it!!! Thank you!

  • @adastra231 • 4 months ago

    I keep getting this error: "AIO_Preprocessor
    [Errno 2] No such file or directory: 'C:\\Users\\jackp\\Downloads\\StabilityMatrix-win-x64\\Data\\Packages\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\\ckpts\\depth-anything\\Depth-Anything-V2-Large\\.cache\\huggingface\\download\\depth_anything_v2_vitl.pth.a7ea19fa0ed99244e67b624c72b8580b7e9553043245905be58796a608eb9345.incomplete'". I've downloaded every DepthV2 model, and the preprocessor isn't working.

    • @goshniiAI • 4 months ago

      The ".incomplete" extension on the file indicates the extension or model didn't download or install properly. Try deleting the file and redownloading or reinstalling, letting every command run to completion.
      Also, double-check that every required model is placed in the right folder.
      Alternatively, you can use the Depth Midas preprocessor or the Depth Anything preprocessor, which still work fine.
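
      For example, on Windows the stalled download can be cleared so it is fetched fresh (the folder comes from the error message above; adjust it to your install):

        rem remove the stalled .incomplete download, then run the preprocessor again
        del /q "C:\Users\jackp\Downloads\StabilityMatrix-win-x64\Data\Packages\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts\depth-anything\Depth-Anything-V2-Large\.cache\huggingface\download\*.incomplete"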