Flux ControlNet - How to guide for ComfyUI

  • Published 25 Jan 2025

COMMENTS • 65

  • @sebastiankamph
    @sebastiankamph  3 months ago +7

    Free guide and workflow here www.patreon.com/posts/114035831

    • @jonrich9675
      @jonrich9675 2 months ago

      is there no Flux anime yet?

  • @emimix
    @emimix 3 months ago +6

    The best and easiest Flux ControlNet I've seen so far! And you're offering the tutorial and workflow for free too. Thank you!

    • @sebastiankamph
      @sebastiankamph  3 months ago

      You're very welcome! Share it with a friend 😊💫

  • @faustoserone635
    @faustoserone635 3 months ago +10

    I learned more about ComfyUI in this video than in 50 other tutorials. Thanks!

  • @Dave_AI
    @Dave_AI 3 months ago +7

    Quick tip: If you have Rgthree nodes enabled (which you will have if you download this workflow), go into the rgthree node settings, and enable "Show fast toggles in group headers." This will give you little bypass/mute icons in each group header, eliminating the need to use the Fast Bypasser nodes. Trust me, it's better.
    Great video Sebastian.

    • @sebastiankamph
      @sebastiankamph  3 months ago

      Great tip!

    • @KDawg5000
      @KDawg5000 3 months ago +1

      @@sebastiankamph Another idea is to use Rgthree's "Power Lora Loader" to clean up the nodes. (I also like to use the anything anywhere nodes to eliminate a lot of noodles)

    • @spiritform111
      @spiritform111 2 months ago

      nice! thanks!

  • @spiritform111
    @spiritform111 2 months ago +1

    the fast bypasser trick is very clever.

  • @simbarules777
    @simbarules777 3 months ago +2

    Thank you, much appreciated and awesome how you took the time to explain every detail ❤

  • @willmobar
    @willmobar 3 months ago

    Thank you, Sebastian, you are an awesome tutor! I am just starting and learning a lot.

  • @Kino-f5q
    @Kino-f5q 1 month ago

    Thx! It's simple, clear and understandable.

  • @vindyyt
    @vindyyt 3 months ago +4

    Thanks for the vid 😁.
    I wonder when we're going to get a functional CN for ForgeUI.

  • @Maria-o2e2t
    @Maria-o2e2t 1 month ago +1

    Does this work with the new Flux tools? Should I use the LoRA or the checkpoint version? Should these be placed in the controlnet folder?

    • @tiowillsan
      @tiowillsan 1 day ago

      C:\Stable DIffusion\ComfyUI_windows_portable\ComfyUI\models\xlabs\controlnets

  • @jriker1
    @jriker1 1 day ago

    Why wouldn't I see ControlNet Union Pro in the manager? Haven't looked, but I'm sure I can find it manually; your instructions mention it, so I thought I'd ask.

  • @ayron419
    @ayron419 2 months ago

    Question around 5:00 when choosing which model to download. Can you briefly elaborate on what constitutes "beefy" in this case? I.e., I have an OK processor and a 3070 Ti, which some might consider beefy. However, I believe VRAM is what matters on GPUs for AI workloads, correct? So are you referring mainly to the amount of VRAM? I.e., though my 3070 Ti might outperform a 3060 in video games, the 3060 has more VRAM and as such may be more viable/beefy?

  • @jriker1
    @jriker1 1 day ago

    Tried a few strength settings, but the resulting output doesn't seem to match the source pose at all. Thoughts? Just implemented this, made sure all the pieces were linked, uploaded a pose of a person, and ran it. The output was totally different.
    EDIT: SoftEdge seems to work better than LineArt. Though if I enable one of my LoRAs, it goes back to doing nothing specific related to the image.

  • @kallamamran
    @kallamamran 3 months ago +1

    First 😁 Finally a good video for Flux CN!!!

    • @sebastiankamph
      @sebastiankamph  3 months ago +1

      Thanks! I think so at least. The way it's set up, it works really well for me.

  • @Martin_Level47
    @Martin_Level47 1 month ago

    Great vid - again 🙂 Is there a reason why you don't use the AIO Aux Preprocessor instead of making all these groups? In the AIO you can just pick whatever preprocessor you want to use, and it will download automatically.

  • @sallar7
    @sallar7 2 months ago

    great tutorial! thank you.
    btw, can the same workflow be used for videos instead of images (vid2vid projects)? if yes, how?

  • @electrolab2624
    @electrolab2624 3 months ago

    Clear and friendly as ever! I don't see the point of ControlNet for Flux: use a denoise of 0.08 for instance, base_shift 0.5 - the trick is the max_shift! Fluxuate it between 0.7 and up to 5 or more, depending.. This max_shift is the agent of change (the flux, if you will)! It's like Flux has this built in already 😅. Granted, the original image color will persist this way.. But hey. And as always.

    • @sebastiankamph
      @sebastiankamph  3 months ago

      Interesting, so you're saying it's like an img2img with a ControlNet-light functionality built in?

    • @electrolab2624
      @electrolab2624 3 months ago

      @@sebastiankamph Yeah! 😄 - One must try it to see!
      Almost no denoise (even 0.001 worked) - use max_shift as the agent of change..
      The image can retain much of the original if max_shift is low (like 1.5)
      and 'dream' up much change (as the prompt says) if max_shift is high (like 5).
      (If the sampler is ddim + scheduler ddim_uniform, it's able to retain the most - but I think it will work on Euler/simple too) - do try! - And as always.
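
  A note on the max_shift discussion above: in ComfyUI, Flux's sampling shift is interpolated between base_shift and max_shift based on image size, and that shift bends the timestep schedule toward the high-noise end, which is why raising it produces more change. A rough Python sketch of the idea (modeled on ComfyUI's ModelSamplingFlux node; the constants and exact formulas here are assumptions, not ComfyUI's verbatim code):

      import math

      # Hypothetical sketch of how max_shift bends Flux's timestep schedule.
      def flux_shift(width, height, base_shift=0.5, max_shift=1.15):
          # Interpolate from the latent token count: small images sit near
          # base_shift, large ones near max_shift (reference counts assumed).
          tokens = (width // 16) * (height // 16)
          x1, x2 = 256, 4096
          m = (max_shift - base_shift) / (x2 - x1)
          return m * tokens + (base_shift - m * x1)

      def shifted_t(t, mu):
          # Higher mu drags every timestep toward the high-noise end,
          # i.e. more freedom for the sampler to change the image.
          return math.exp(mu) / (math.exp(mu) + (1.0 / t - 1.0))

      mu = flux_shift(1024, 1024, base_shift=0.5, max_shift=5.0)
      print([round(shifted_t(t, mu), 3) for t in (0.25, 0.5, 0.75)])

  At max_shift 5, nearly every step lands in the high-noise region (hence "dreaming" big changes), while low values keep the schedule close to the original image.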

  • @dariovh8678
    @dariovh8678 3 months ago

    Great video! Which workflow do you recommend if you are using 3D software to obtain the depth map and toon shader (similar to line art/canny)?

  • @undoriel
    @undoriel 3 months ago +2

    I'm running on an RTX 4070 with Q5 and two LoRAs and it just freezes at KSampler :|

    • @3dParadox
      @3dParadox 2 months ago

      Sure you didn't mistake it for long load times? XD

  • @BibhatsuKuiri
    @BibhatsuKuiri 3 months ago +1

    Can we add a LoRA to our own portrait? Like my portrait in some fantasy style or something. I tried a lot, but the face structure always changes a bit.

    • @OfficialTrickertrent
      @OfficialTrickertrent 3 months ago

      You need a LoRA of the person you want plus the style one you want to use. 2 LoRAs.

  • @paafuglify
    @paafuglify 1 month ago

    How would you add a reference image to guide the result's style alongside the prompts?

  • @TheSORCERER-p9l
    @TheSORCERER-p9l 2 months ago

    If we have a trained LoRA, can that be used in this workflow? I assume yes, but I'm just trying to put all the pieces together.

  • @PunxTV123
    @PunxTV123 3 months ago

    What is the button for disabling or enabling the section?

  • @AlfianZaidi-d6z
    @AlfianZaidi-d6z 1 month ago

    What is the best model setup for my laptop with an RTX 3060 6GB?

  • @Lawliet2017
    @Lawliet2017 3 months ago +1

    Great, thank you for the tutorial! I work with Forge, but I'm testing Comfy for this CN+Flux workflow.
    But I got this error:
    UnetLoaderGGUF
    `newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead.
    Can you suggest any solution, please?

    • @b4gu3tt3
      @b4gu3tt3 2 months ago

      it's because UnetLoaderGGUF has to be installed with numpy

    • @se7ensvault
      @se7ensvault 2 months ago

      @@b4gu3tt3 It did not
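
  For anyone hitting the UnetLoaderGGUF/NumPy error above: the message means a custom node (here, the GGUF loader) is calling a NumPy 1.x method that NumPy 2.0 removed, so updating the ComfyUI-GGUF node through the Manager, or pinning numpy below 2.0, are commonly reported fixes. The underlying API change, as a minimal sketch:

      import numpy as np

      arr = np.arange(4, dtype=np.int32)

      # NumPy 1.x allowed this; it was removed from ndarray in NumPy 2.0:
      #   swapped = arr.newbyteorder(">")

      # NumPy 2.0 replacement, exactly as the error message suggests:
      swapped = arr.view(arr.dtype.newbyteorder(">"))
      print(swapped.dtype)  # >i4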

  • @rogerdupont8348
    @rogerdupont8348 1 month ago

    Hello!
    Thanks for the video, super helpful as usual.
    What if I want to change the pose of an existing image using ControlNet? It would help a lot for my comic book!
    Thanks :)

  • @kleetaru929
    @kleetaru929 2 months ago

    Hey Sebastian, I'm just starting out because about 1.5 years ago my graphics card couldn't handle this. Back then you used Stable Diffusion with Automatic1111. What would you recommend now: ComfyUI, or is there something better to start with?

  • @envigraphy
    @envigraphy 2 months ago

    I keep getting some unknown error when it progresses to the DualCLIPLoader, despite having all the correct models in the correct places.

  • @willbe5426
    @willbe5426 2 months ago

    I get this error all the time:
    UnetLoaderGGUF
    `newbyteorder` was removed from the ndarray class in NumPy 2.0. Use `arr.view(arr.dtype.newbyteorder(order))` instead.

  • @ainaopeyemi339
    @ainaopeyemi339 3 months ago

    I loveeeeeeeeeeeeee thisssssss

  • @mostafamostafa-fi7kr
    @mostafamostafa-fi7kr 3 months ago

    great video

  • @emrahonemli
    @emrahonemli 2 months ago

    Thank you :)

  • @DEVIANT...
    @DEVIANT... 2 months ago

    Where are the ControlNets for Flux for Forge?

  • @mauvaissigne
    @mauvaissigne 3 months ago

    I don't quite understand what the difference is between Flux and SDXL. I think that is what the alternative is called.

    • @arothmanmusic
      @arothmanmusic 3 months ago +1

      Flux is a different model entirely. SD and SDXL were released by Stability AI. Flux is from Black Forest Labs, which was started by people who left Stability AI.

    • @mauvaissigne
      @mauvaissigne 3 months ago

      @@arothmanmusic which do you use?
      Also, I have a slightly unrelated question, if you have the time to help me.
      I ran a LoRA training node/workflow and the output_dir is models/loras, but I cannot find the result. Any suggestions? The data path (text files for the pictures) I can find, and those are in the right folder, but I am lost finding the actual LoRA model. I am running ComfyUI with an SDXL checkpoint for the LoRA training.
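
  On the lost-LoRA question above: some training nodes resolve a relative output_dir against their own folder rather than ComfyUI/models/loras, which is a common reason the file seems to vanish. One way to hunt for it, assuming a standard ComfyUI layout (the root path below is a placeholder to adjust):

      from pathlib import Path

      root = Path("ComfyUI")  # placeholder: point this at your ComfyUI install
      newest = sorted(root.rglob("*.safetensors"),
                      key=lambda p: p.stat().st_mtime, reverse=True)
      for p in newest[:10]:
          print(p)  # a freshly trained LoRA should appear near the top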

  • @DishiKar
    @DishiKar 2 months ago

    The manager button just fucked off!!!! what???

  • @sereinnat9832
    @sereinnat9832 3 months ago

    Anyone here getting a blank image output? Any fix?

  • @jasonstetsonofficial
    @jasonstetsonofficial 3 months ago +1

    i just want ForgeUI

  • @user-oleg-ger
    @user-oleg-ger 3 months ago

    😀👏

  • @AndreyJulpa
    @AndreyJulpa 3 months ago

    is it possible to make a girl's face not change much?

  • @ProvenFlawless
    @ProvenFlawless 3 months ago +3

    Remember that Stable Diffusion A1111 is our true daddy.

    • @jibcot8541
      @jibcot8541 3 months ago

      ComfyUI is so much better and gets all the cool stuff on release day.

  • @KK47tv
    @KK47tv 2 months ago

    How do I get the model to look exactly like me? I subbed to your Patreon and I'm still lost lol.

  • @KDawg5000
    @KDawg5000 3 months ago

    Thank you for the workflow. All of the controlnets worked for me except for DWPose. I'm getting this error: "DWPreprocessor: 'NoneType' object has no attribute 'get_providers'".
    EDIT: I'm trying to dig in... I'm reading this in the terminal:
    .... wholebody.py", line 41, in __init__
    print(f"Failed to load onnxruntime with {self.det.get_providers()}.
    Please change EP_list in the config.yaml and restart ComfyUI")
    EDIT2: When I set the detector and estimator to *.torchscript.pt it works. Not sure what's going on. (shrug)
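
  On the DWPose error above: the 'NoneType' message suggests the onnxruntime session was never created, so there is nothing to call get_providers() on. A quick diagnostic (not part of the workflow) is to check what execution providers onnxruntime actually offers on your machine:

      import onnxruntime as ort

      # If this list is empty or missing CUDAExecutionProvider, the ONNX
      # DWPose models may fail to initialize; the *.torchscript.pt variants
      # run through PyTorch instead and sidestep onnxruntime entirely.
      print(ort.get_available_providers())

  That is presumably why switching the detector and estimator to the *.torchscript.pt files worked.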

  • @TinaSmith-u5x
    @TinaSmith-u5x 2 months ago

    I can never get past this. I researched it and tried several fixes, but can't get past it: [Errno 2] No such file or directory: 'C:\\ComfyUI_windows_portable_nvidia\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-tbox\\..\\..\\models\\annotator\\LiheYoung\\Depth-Anything\\.cache\\huggingface\\download\\checkpoints\\depth_anything_vitl14.pth.6c6a383e33e51c5fdfbf31e7ebcda943973a9e6a1cbef1564afe58d7f2e8fe63.incomplete'