Flux ControlNet (Depth, Canny, HED) - Works 100%

  • Published Sep 11, 2024
  • Flux ControlNet supports 3 models:
    1- Canny
    2- HED
    3- Depth (Midas)
    Each ControlNet is trained at 1024x1024 resolution. However, it is recommended to generate images at 1024x1024 for Depth and at 768x768 for Canny and HED for better results.
    Install the x-flux-comfyui custom node:
    github.com/XLa...
    After the first launch, the ComfyUI/models/xlabs/loras and ComfyUI/models/xlabs/controlnets folders will be created automatically.
    Download the Flux ControlNet model collection:
    huggingface.co...
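    A minimal download sketch using the huggingface_hub package; the repo id and file pattern below are assumptions, so check them against the actual Hugging Face page linked above before running:

    # Hedged sketch: fetch the XLabs Flux ControlNet collection into the folder
    # that x-flux-comfyui creates on first launch.
    # The repo id "XLabs-AI/flux-controlnet-collections" is an assumption; verify it.
    from huggingface_hub import snapshot_download

    snapshot_download(
        repo_id="XLabs-AI/flux-controlnet-collections",  # assumed repo id
        allow_patterns=["*.safetensors"],                # keep only model weights
        local_dir="ComfyUI/models/xlabs/controlnets",    # folder created by the custom node
    )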
    🤯 Get my FREE comfyui tutorials with workflows: openart.ai/wor...
    • CG TOP TIPS - AI MUSIC
    / @cgtoptips
    ------------------------------------
    🌍 SOCIAL
    / cgtoptips
    📧 cg.top.tips@gmail.com
    ------------------------------------
    #ComfyUI
    #Flux
    #FluxControlNet

COMMENTS • 36

  • @marcihuppi 27 days ago +3

    Error occurred when executing XlabsSampler:
    'ControlNetFlux' object has no attribute 'load_device'
    I already did a git pull to update ComfyUI... any other ideas?
    Thanks in advance ♥

  • @roylow1292 23 days ago +2

    VAE Decode error. Error occurred when executing VAEDecode:
    Given groups=1, weight of size [4, 4, 1, 1], expected input[1, 16, 128, 128] to have 4 channels, but got 16 channels instead
    File "D:\ComfyUI-aki-v1.3\execution.py", line 316, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "D:\ComfyUI-aki-v1.3\execution.py", line 191, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    File "D:\ComfyUI-aki-v1.3\execution.py", line 168, in _map_node_over_list
    process_inputs(input_dict, i)
    File "D:\ComfyUI-aki-v1.3\execution.py", line 157, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    File "D:\ComfyUI-aki-v1.3\nodes.py", line 284, in decode
    return (vae.decode(samples["samples"]), )
    File "D:\ComfyUI-aki-v1.3\comfy\sd.py", line 322, in decode
    pixel_samples[x:x+batch_number] = self.process_output(self.first_stage_model.decode(samples).to(self.output_device).float())
    File "D:\ComfyUI-aki-v1.3\comfy\ldm\models\autoencoder.py", line 199, in decode
    dec = self.post_quant_conv(z)
    File "D:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    File "D:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
    File "D:\ComfyUI-aki-v1.3\comfy\ops.py", line 93, in forward
    return super().forward(*args, **kwargs)
    File "D:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
    File "D:\ComfyUI-aki-v1.3\python\lib\site-packages\torch\nn\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,

  • @VaradRane-p2q 27 days ago +1

    Can we use this with inpainting techniques? Is there any workflow for it in ComfyUI?

  • @antoinesaal2372 18 days ago

    Help; error: No module named 'controlnet_aux' (for img2img).
    (My ComfyUI and custom_nodes are updated. I watched different ControlNet tutorials.)
    (I'm using Flux1-dev-Q4_K_S + ComfyUI + ControlNet.)
    There is an error at the "Canny Edge" node: No module named 'controlnet_aux'.
    I installed everything I need (ComfyUI Manager, XLabs, ControlNet auxiliary models, ControlNet Canny), so I don't understand why this doesn't work.
    If you have a solution :)

  • @eltalismandelafe7531 23 days ago

    In the Canny Edge node you have set the resolution to 768. My image is 1280 x 720; how can I set the Canny Edge resolution to 1280 x 720, or otherwise get a 1280 x 720 image?

  • @dameguy_90 28 days ago +3

    Thank you very much for the tutorial, but I am not getting the same quality as yours, just very, very poor quality pictures. And my Flux model keeps drawing only animation-style images, not realistic ones. Is there a solution?

    • @Huang-uj9rt 27 days ago +2

      Yes, I also tried it after watching this video and got a terrible image, not as good as the one I got after running Flux on mimicpc. I think I'm going to be a big fan of mimicpc from now on; it has all the popular AI tools that I can try for free!

    • @CgTopTips 27 days ago +2

      The difference is likely in your settings. Please go to the x-flux-comfyui folder and try the company's pre-built workflows for Canny, Depth, and HED with the default settings (a small sketch for locating those workflow files follows this thread).

    • @plainpixels 24 days ago

      Still seems to suck unless you use the same type of images as their examples

    • @adriands8207 20 days ago

      @CgTopTips What settings do you mean? We are just following all the steps and settings in the video, but the results are awful.
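      Following up on the reply above about the pre-built workflows: a small sketch to list the example workflow files shipped with x-flux-comfyui (the custom_nodes path and subfolder layout are assumptions; adjust them to your install):

      # Hedged sketch: print any workflow JSON files bundled with the
      # x-flux-comfyui custom node so one can be loaded with default settings.
      from pathlib import Path

      node_dir = Path("ComfyUI/custom_nodes/x-flux-comfyui")  # assumed checkout location
      for workflow in sorted(node_dir.rglob("*.json")):
          print(workflow.relative_to(node_dir))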

  • @anagnorisis2024 24 days ago

    Is there a workflow based on this where I can input an image and do a style or composition transfer?

  • @senoharyo 28 days ago

    Very nice, I tried it last night. Do these ControlNets work with the Flux checkpoint workflow? :)

  • @VazgenAkopov1976 27 days ago +1

    It's a pity, but it DOESN'T WORK with 32 gigabytes of RAM and 8 gigabytes of video card memory! (((

    • @CgTopTips 27 days ago +2

      Yes, you need at least 12 GB 😕

  • @cameochan7405 21 days ago

    AttributeError: 'DoubleStreamBlock' object has no attribute 'processor'
    What does this mean, please?

  • @yaahooali 28 days ago

    Thank you

  • @user-lt2lk6vf7x 21 days ago

    Very good.

  • @kevinwang7340 27 days ago

    The workflow is fine, but the results are very far off from your samples; it seems like it does not interpret the inputs well and gives weird images.

  • @shadowheg 27 days ago

    It doesn't work with Dev fp32 on 32 GB RAM and a 4080 with 12 GB VRAM, but it works with NF4. The result was unsuccessful, though :(

  • @CasasYLaPistola 28 days ago

    Thanks for the video. One question: does it only work with the Dev model? Does it not work with the Schnell model?

    • @CgTopTips 28 days ago +1

      Yes, both the Schnell and Dev models work fine

  • @CGFUN829 28 days ago

    Thanks, what resolution do you recommend when doing animation using SD 1.5, Depth, and Canny?

  • @brunocandia9671 27 days ago

    ThX!

  • @kallamamran 28 days ago +1

    Thanks, but... !!! Exception during processing!!! Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument mat1 in method wrapper_CUDA_addmm)

    • @tetsuooshima832 27 days ago

      I had the exact same error, but today ComfyUI has been updated to support Flux controlnets, so hopefully we don't need this anymore

  • @wowforeal 27 days ago +1

    Does it work with NF4?

  • @RiiahTV 28 days ago

    It's like that!

  • @joneschunghk 27 days ago

    You are installing "requirements.txt" to your python, not python_embeded of comfyui.

    • @CgTopTips 27 days ago

      Follow the x-flux-comfyui installation instructions on the GitHub page.

    • @davoodice 27 days ago

      @CgTopTips Yes, your way is not for ComfyUI portable.

  • @davoodice 27 days ago

    The x-flux package is not installed correctly. You installed x-flux in the standalone Python, not in the ComfyUI portable Python.
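    For anyone on the portable build, a hedged sketch that runs pip through ComfyUI's embedded interpreter instead of the system Python; the folder names below are assumptions based on a default Windows portable layout, so adjust them to your install:

    # Hedged sketch: install the x-flux-comfyui requirements with ComfyUI's
    # embedded Python so they land in python_embeded, not the system Python.
    # All paths below are assumptions for a default Windows portable install.
    import subprocess
    from pathlib import Path

    root = Path(r"C:\ComfyUI_windows_portable")  # assumed install location
    embedded_python = root / "python_embeded" / "python.exe"
    requirements = root / "ComfyUI" / "custom_nodes" / "x-flux-comfyui" / "requirements.txt"

    subprocess.run(
        [str(embedded_python), "-m", "pip", "install", "-r", str(requirements)],
        check=True,  # raise if pip fails so the problem is visible
    )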

  • @MilesBellas 28 days ago +1

    It works!?! 😊

    • @CgTopTips 28 days ago +1

      I'm glad you were able to get a result. This workflow needs more VRAM to avoid the "CUDA out of memory" error.

  • @ismgroov4094 27 days ago

    ❤😅