ComfyUI ControlNet Tutorial (Control LoRA)

  • Published 21 Dec 2024

COMMENTS • 55

  • @cyberspider78910 · 6 months ago

    Brilliant, no-fuss work. Keep it up, bro. With this quality of tutorials, you will outgrow any major channel...

  • @tonikunec · 1 year ago +2

    Wow, wow, wow... I gotta say, that's a brilliant explanation and showcase of different takes on image generation workflow in ComfyUI. I've followed your tutorials closely and learned much about the different aspects of AI image generation in ComfyUI and using various nodes. Keep up the good work!

  • @simonmcdonald446 · 1 year ago +4

    The colour blending worked very badly for me; I got no extra colour on the B/W photos whatsoever, just some strange moiré effects. The Canny and depth map parts were very enjoyable and informative, though.

  • @sergetheijspartner2005 · 8 months ago

    I like how you go over each setting in detail. It might be lengthy and boring to some people, but damnit, I needed all that info months ago; other creators just pass over the settings quickly or don't talk about them at all 👍👍👍

  • @Mehdi0montahw · 1 year ago +1

    Can you provide an explanation of converting any image into a drawing for coloring? An explanation aimed at coloring book makers, covering adding inscriptions over the pictures, and especially the required picture quality and line clarity.

    • @controlaltai · 1 year ago +2

      Well, you can just generate a drawing directly for coloring books; it's not actually required to take an existing image and then convert it to a drawing. AI comes in handy in two ways.
      One: you physically draw or sketch something, and instead of spending hours or days finishing it, you use the AI to convert your drawing into art.
      Two: you use the AI to generate the drawing itself for a coloring book, for commercial or personal use. This can be done via Stable Diffusion, Blue Willow or MidJourney.

    • @Mehdi0montahw · 1 year ago +1

      @controlaltai I did that, but I want clear drawing lines and a solution to the problem of black shadows inside the drawings. Any advice?

    • @controlaltai · 1 year ago +1

      @Mehdi0montahw The prompt depends on the platform; which platform are you using? For Blue Willow, I have made a tutorial here: ua-cam.com/video/K4fZW6dS9DY/v-deo.htmlsi=hpbyhwiw6BAA8qJZ. For Stable Diffusion, check out Civitai: civitai.com/search/models?sortBy=models_v3&query=Coloring%20book

    • @Mehdi0montahw · 1 year ago

      @controlaltai I only use Stable Diffusion and ComfyUI and focus on them. Thank you. I will try the model you suggested, and if you come up with a way to remove the blackness in the majority of the produced images, please share it with us.

    • @controlaltai · 1 year ago +1

      @Mehdi0montahw Sure, will do. 👍

  • @hylee2356 · 5 months ago

    Thanks for the tutorial, but is there a way to use it on Mac? I downloaded everything, but Mac cannot run the .bat file, and there are no "preprocessor" nodes in my ComfyUI. Do you have any ideas about this?

    • @controlaltai · 5 months ago

      You're welcome, and sorry, I don't have any idea about Mac. Check out the GitHub repository for Mac support.

  • @itanrandel4552 · 6 months ago

    Excuse me, do you have a tutorial on how to make a batch of multiple depth or soft edge images per image?

    • @controlaltai · 6 months ago +1

      You just connect the Load Image node to the required preprocessors and the save nodes. A standalone batch sketch of the same idea is below.
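
      A minimal standalone sketch of the same batching idea, assuming the controlnet_aux package (which wraps the same MiDaS annotator that ComfyUI's depth preprocessor node uses); the folder paths are placeholders:

      from pathlib import Path

      from PIL import Image
      from controlnet_aux import MidasDetector

      # Load the MiDaS depth annotator once and reuse it for every image.
      midas = MidasDetector.from_pretrained("lllyasviel/Annotators")

      src = Path("input_images")  # placeholder: folder of source images
      dst = Path("depth_maps")    # placeholder: output folder
      dst.mkdir(exist_ok=True)

      for img_path in sorted(src.glob("*.png")):
          image = Image.open(img_path).convert("RGB")
          depth = midas(image)  # the depth map comes back as a PIL image
          depth.save(dst / f"{img_path.stem}_depth.png")

      Swapping in another detector from the same package (for example a soft edge one) gives the matching soft edge batch.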

  • @andresz1606 · 8 months ago

    Great tutorial. Any ideas on how to load JSON files for OpenPose? The Apply ControlNet node obviously does not accept JSON files, only images.

    • @controlaltai · 8 months ago +1

      Thank you. I have not tried it, but maybe this helps: cmu-perceptual-computing-lab.github.io/openpose/web/html/doc/md_doc_02_output.html#autotoc_md40
      Refer to the JSON output section, and this:
      github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/include/openpose/flags.hpp
      for the keypoint_scale flag. One way to rasterize such a JSON file back into a pose image is sketched below.
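
      Since Apply ControlNet only takes images, here is a hedged sketch of rasterizing an OpenPose JSON export back into a pose image. It assumes the standard flat "pose_keypoints_2d" [x, y, confidence] layout from the docs above, and the limb list is illustrative rather than the full official skeleton:

      import json

      from PIL import Image, ImageDraw

      # Partial, illustrative limb pairs (indices into the keypoint list).
      LIMBS = [(1, 0), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7), (1, 8)]

      def render_pose(json_path, width, height, out_path, conf_min=0.1):
          with open(json_path) as f:
              data = json.load(f)
          canvas = Image.new("RGB", (width, height), "black")
          draw = ImageDraw.Draw(canvas)
          for person in data.get("people", []):
              kp = person["pose_keypoints_2d"]  # flat [x0, y0, c0, x1, y1, c1, ...]
              pts = [(kp[i], kp[i + 1], kp[i + 2]) for i in range(0, len(kp), 3)]
              for a, b in LIMBS:  # draw a limb only if both joints are confident
                  if pts[a][2] >= conf_min and pts[b][2] >= conf_min:
                      draw.line([pts[a][:2], pts[b][:2]], fill="white", width=4)
              for x, y, c in pts:  # mark the joints themselves
                  if c >= conf_min:
                      draw.ellipse([x - 4, y - 4, x + 4, y + 4], fill="red")
          canvas.save(out_path)

      render_pose("pose.json", 1024, 1024, "pose.png")  # placeholder paths and size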

  • @answertao · 8 months ago

    I can't find the ColorCorrect node, and I've also searched in the Manager. How can I get this node?

    • @controlaltai · 8 months ago

      Check for the ComfyUI post-processing nodes pack; it's there. Refer here: 22:53

    • @answertao · 8 months ago

      @controlaltai OH!!!! Thanks a million for adding sunshine to my day!

  • @spraygospel5539 · 10 months ago

    I had an error when running the first depth ControlNet workflow. When the program tries to run KSampler (Advanced), it throws this error:
    'ModuleList' object has no attribute '1'
    Can you help me fix it?

    • @spraygospel5539 · 10 months ago

      Problem solved. It was because the checkpoint I used was from SD 1.5. The checkpoint used for this tutorial must be based on SDXL, because the ControlNet used is also SDXL.

    • @controlaltai · 10 months ago

      Hi, yes. I am using the Control LoRAs from Stability AI for the ControlNet; they are all SDXL-based. The checkpoint is also SDXL base. A quick way to check which family a checkpoint file belongs to is sketched below.
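
      A hedged way to check a checkpoint's family before queueing, assuming a .safetensors file in the original single-file layout (the key prefixes below are the commonly seen ones; the path is a placeholder):

      from safetensors import safe_open

      def guess_base_model(path):
          # Read only the tensor names, not the weights themselves.
          with safe_open(path, framework="pt", device="cpu") as f:
              keys = list(f.keys())
          if any(k.startswith("conditioner.embedders.1") for k in keys):
              return "SDXL (second text encoder present)"
          if any(k.startswith("cond_stage_model.transformer") for k in keys):
              return "SD 1.5 (single CLIP text encoder)"
          return "unknown layout"

      print(guess_base_model("ComfyUI/models/checkpoints/my_model.safetensors"))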

  • @abiodunshonibare832 · 9 months ago

    Hello, please help: when I tried queueing a prompt with the MiDaS depth map, I got this error (Error occurred when executing MiDaS-DepthMapPreprocessor:
    PytorchStreamReader failed reading zip archive: failed finding central directory). I have tried to find out how to resolve this but haven't been able to.

    • @controlaltai · 9 months ago

      No idea what this error is. Have you downloaded the depth map model correctly and placed it in the ControlNet folder?

    • @abiodunshonibare832 · 8 months ago

      @controlaltai Sorry, I have tried searching; where can I download the depth map model, and which folder do I put it in? I have tried several ways to search for the MiDaS model but haven't been able to find it.

    • @controlaltai · 8 months ago

      @abiodunshonibare832 Here is the link to the depth model: huggingface.co/stabilityai/control-lora/tree/main/control-LoRAs-rank256 This will go in the following folder: ComfyUI_windows_portable\ComfyUI\models\controlnet

    • @abiodunshonibare832 · 8 months ago

      @controlaltai Thank you for your response. I found out what the problem was yesterday: when loading the MiDaS depth model for the first time, it downloads the model first, and that initial download after I clicked Queue Prompt wasn't complete, hence the error. I had to delete the incomplete file and run it again; then it worked (a quick completeness check for such downloads is sketched below).
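
      That diagnosis matches the error text: torch .pt checkpoints are zip archives, so a truncated download has no central directory. A small hedged check, with the cache path a placeholder for wherever the preprocessor stored the file:

      import zipfile

      # Placeholder path: wherever the MiDaS .pt file was auto-downloaded to.
      path = "custom_nodes/comfyui_controlnet_aux/ckpts/dpt_hybrid-midas-501f0c75.pt"

      if zipfile.is_zipfile(path):
          print("central directory found - the download looks complete")
      else:
          print("truncated or corrupt - delete the file and let it re-download")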

  • @dkamhaji · 1 year ago +1

    Great video, thank you. I have a question: in your canny/depth setup with two image sources, which image/ControlNet becomes the main image and which becomes the accent?
    How do you define the accent?
    In your example it was the canny, with "a pirate" defined in the positive prompt. Would there be a time when the depth would act as the accent and the canny as the main?
    Could you explain how, with this setup, you organize the main and the accent?

    • @controlaltai · 1 year ago

      Hi, the first image node, going through the pass-through via the ImageScaleToTotalPixels node, becomes the main and the other becomes the accent (secondary).
      For the image, you define the primary by making it pass through this node.
      For the ControlNet, it is more about the control weight.
      In the same example, where image 1 via CN1 is primary and image 2 is secondary, if you give CN2 the same weight as CN1 you will still have a very strong outline superimposed over image 1; this will be the canny. That's why I reduced the weight of CN2, so it can blend very well.
      The results would be very different if you used a latent image as a source instead of passing the image through the ImageScaleToTotalPixels node. The same weighting idea is sketched below with a different toolkit.
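
      The main/accent weighting can be written down outside ComfyUI. A hedged sketch using the diffusers multi-ControlNet API (not this video's workflow, just the weight idea, using the public SDXL model IDs; control image files are placeholders):

      import torch
      from diffusers import ControlNetModel, StableDiffusionXLControlNetPipeline
      from diffusers.utils import load_image

      depth_cn = ControlNetModel.from_pretrained(
          "diffusers/controlnet-depth-sdxl-1.0", torch_dtype=torch.float16)
      canny_cn = ControlNetModel.from_pretrained(
          "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)

      pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
          "stabilityai/stable-diffusion-xl-base-1.0",
          controlnet=[depth_cn, canny_cn], torch_dtype=torch.float16,
      ).to("cuda")

      # Pre-made control images (placeholder files).
      depth_map = load_image("depth.png")
      canny_edges = load_image("canny.png")

      image = pipe(
          "a pirate",
          image=[depth_map, canny_edges],
          # First ControlNet at full weight (main), second reduced (accent),
          # mirroring the lowered CN2 strength described above.
          controlnet_conditioning_scale=[1.0, 0.55],
      ).images[0]
      image.save("blend.png")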

  • @enriqueicm7341 · 1 year ago +1

    It was a very helpful tutorial; thank you for dedicating the time to the explanation. You help us a lot!

    • @controlaltai · 1 year ago

      Thank you for the support! You are welcome. If you need something specific, let me know. Taking requests from members; will start making specific workflow tutorials.

  • @danilsi6431 · 1 year ago +1

    Your channel has very detailed and informative lessons👍. I would support you financially, but transactions from my country are blocked because we are unsuccessfully trying to destroy the entire civilized world😏. So just a huge thank you for your hard work🙏

    • @controlaltai · 1 year ago

      Ohh, I totally get what you are saying; I never agree with these blocking behaviours. Thank you so much!! Your words of support just made my day. 🫶🏼 If you need anything specific, let me know. We are always looking for ideas from and for the users who watch the channel. 🙂

    • @danilsi6431 · 1 year ago +2

      @controlaltai I am still learning the basics and can hardly offer anything specific. It's a shame that professionals like you, deeply versed in the topic of AI, are not popular enough compared to channels with all sorts of nonsense. So I just expressed my gratitude for sharing your vast knowledge with us, gave you a like 👍, and wrote a comment to promote your channel on the UA-cam platform.

  • @VendavalVendavesco · 10 months ago

    Thanks, but I have this problem when I try to fix problems in the "python_embeded" folder; it says this:
    Fatal Python error: init_fs_encoding: failed to get the Python codec of the filesystem encoding
    Python runtime state: core initialized
    ModuleNotFoundError: No module named 'encodings'
    Current thread 0x00002900 (most recent call first):
    I have updated ComfyUI and the Python dependencies.

    • @controlaltai · 10 months ago

      Very hard to diagnose this without knowing the environment you are running it on. Simple solution: download the latest portable version of ComfyUI into a new folder and run it from there, to see if the problem is with the existing environment. If the new ComfyUI folder works, just port the models and output folders over and manually reinstall each custom node.

    • @VendavalVendavesco · 10 months ago

      @controlaltai OK, thanks, I'm going to try, but where is it, or what is the latest version?

    • @controlaltai · 10 months ago

      Whatever version is there on GitHub, download that. For Python, I recommend version 3.11 only.

    • @VendavalVendavesco · 10 months ago

      @controlaltai Thanks. Is it possible to have consistent characters in ComfyUI? Because I want to make a comic with consistent characters.

    • @controlaltai · 10 months ago

      Yes, consistent characters are very much possible in ComfyUI. You just need to get the workflow right.

  • @Mehdi0montahw · 1 year ago +2

    We require a professional episode on converting images to lineart while completely removing the black and gray parts.

    • @controlaltai · 1 year ago +3

      I will try and see if I am able to do so. In the meantime, one possible approach is sketched below.

    • @Mehdi0montahw · 1 year ago +2

      @controlaltai Thank you for your response and your interest in your followers' requests.
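
      One hedged approach: run a lineart preprocessor, then hard-threshold the result so every pixel is pure black or white, dropping gray shading. The controlnet_aux detector and the cutoff of 128 are assumptions to tune:

      from PIL import Image, ImageOps
      from controlnet_aux import LineartDetector

      lineart = LineartDetector.from_pretrained("lllyasviel/Annotators")

      img = Image.open("photo.png").convert("RGB")  # placeholder input
      lines = lineart(img).convert("L")             # grayscale line drawing

      # Keep only strong lines: pixels above the cutoff become white, the
      # rest black. Raise the cutoff to drop more gray shadow regions.
      bw = lines.point(lambda p: 255 if p > 128 else 0)

      # The detector typically returns white lines on black; invert for a
      # printable black-on-white coloring page.
      page = ImageOps.invert(bw)
      page.save("coloring_page.png")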

  • @Genoik · 1 year ago

    Could you share the configuration with an image?

    • @controlaltai · 1 year ago

      Hi, all workflows are shared with channel members via a community post. I hope you understand. Thank you! 🙏

  • @simonmcdonald446 · 1 year ago +1

    Would also love to see more on SEGS in ControlNet...

    • @controlaltai · 1 year ago +2

      Will put that on the to-do list. Thanks!

  • @river...47 · 9 months ago

    Hello G. Seth, when I run the workflow that you show us in the video, this error appears in the terminal; could you please help me resolve it? It appears when it reaches the KSampler (Advanced) node:
    ERROR:root:!!! Exception during processing !!!
    ERROR:root:Traceback (most recent call last):
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1402, in sample
    return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\nodes.py", line 1338, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Impact-Pack\modules\impact\sample_error_enhancer.py", line 9, in informative_sample
    return original_sample(*args, **kwargs) # This code helps interpret error messages that occur within exceptions but does not have any impact on other operations.
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 703, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 601, in sample
    pre_run_control(model, negative + positive)
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\samplers.py", line 452, in pre_run_control
    x['control'].pre_run(model, percent_to_timestep_function)
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\controlnet.py", line 296, in pre_run
    comfy.utils.set_attr_param(self.control_model, k, self.control_weights[k].to(dtype).to(comfy.model_management.get_torch_device()))
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 301, in set_attr_param
    return set_attr(obj, attr, torch.nn.Parameter(value, requires_grad=False))
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\ComfyUI\comfy\utils.py", line 295, in set_attr
    obj = getattr(obj, name)
    File "C:\Users\river\AI\ComfyUI\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\nn\modules\module.py", line 1688, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
    AttributeError: 'ModuleList' object has no attribute '1'
    Prompt executed in 12.69 seconds

    • @controlaltai · 9 months ago

      Hard to diagnose like this without checking the workflow. Can you check the ControlNet model and checkpoint? Both should be SDXL.

  • @DJVARAO · 1 year ago +1

    Impressive!

  • @pastuh · 1 year ago +1

    I was trying to work in VR, and it obviously needs a UI specific to VR.
    I would imagine it like a sorcerer's game, where you see ingredients and throw everything into one pot 😅

  • @IcarusOLucido · 10 months ago

    d