ComfyUI: Imposing Consistent Light (IC-Light Workflow Tutorial)

  • Published Feb 4, 2025

COMMENTS • 240

  • @controlaltai  6 months ago +16

    Update (Oct 28, 2024): The BiRef ZhoZhoZho nodes do not import after the latest ComfyUI update. The fix is simple:
    1. Install the node via Comfy Manager. Restart and you will get a failed import error.
    2. Close Comfy in the browser.
    3. Download the fixed zip I made: drive.google.com/file/d/1oHO_m5DoUWkU7ViMg3P9dQqWUH06tkCX/view?usp=sharing
    4. Replace all contents of the zip in: ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO
    5. Restart ComfyUI. The node should now work.
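
    The replace step above can be sketched as a few lines of Python; the zip filename and install path are assumptions, so point them at your own download and ComfyUI folder:

    ```python
    import zipfile
    from pathlib import Path

    def replace_node_contents(zip_path: Path, node_dir: Path) -> int:
        """Extract every file from the fix zip over the node folder,
        overwriting files with the same name and keeping the rest."""
        node_dir.mkdir(parents=True, exist_ok=True)
        with zipfile.ZipFile(zip_path) as zf:
            zf.extractall(node_dir)
            return len(zf.namelist())  # number of files written

    # Example (both paths are assumptions for a Windows portable install):
    # replace_node_contents(Path("fixed_birefnet.zip"),
    #                       Path(r"ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO"))
    ```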
    Update (Sep 5, 2024): The Preview Bridge node in the latest update has a new option called "block". Ensure that it is set to "never" and not "if_empty_mask". This allows the Preview Bridge node to pass the image on to the H/L Frequency node as shown in the video and transfer the details. If set to "if_empty_mask" you will not get any preview; it will show as a black output. I asked the dev to update the node so that the default behavior is always "never", and he has done so. Update the node again to the latest version.
    Comfy Update (Aug 27, 2024): If you are getting a KSampler error, you need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, along with IC-Light and Layered Diffusion. Everything works as shown in the video. No change in workflow.
    IC-Light is based on SD 1.5, but all generations are at SDXL resolution, then 4x upscaled. I hope you find the tutorial helpful. Please note: at 5:17 the layered diffusion custom node is needed, even though none of its nodes are used; otherwise you will get an error as follows:
    RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead
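
    That RuntimeError is PyTorch's shape check on the UNet's first convolution: IC-Light feeds the model a latent with extra concatenated channels (8 instead of SD 1.5's 4), so an unpatched conv layer rejects it. A minimal shape-only reproduction, assuming PyTorch is installed (this is illustrative, not the actual node code):

    ```python
    import torch

    # Stock SD 1.5-style conv_in: expects 4 latent channels.
    unpatched = torch.nn.Conv2d(4, 320, kernel_size=3, padding=1)
    # IC-Light-style latent: 4 image + 4 condition channels concatenated.
    latent = torch.zeros(2, 8, 64, 64)

    try:
        unpatched(latent)
    except RuntimeError as err:
        print(err)  # "... expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead"

    # A conv widened to 8 input channels (what the patch effectively provides) accepts it:
    patched = torch.nn.Conv2d(8, 320, kernel_size=3, padding=1)
    print(patched(latent).shape)  # torch.Size([2, 320, 64, 64])
    ```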

    • @williamsaton8812  6 months ago

      thx sooooo much

    • @ivanivan9301  6 months ago +1

      Thank you so much, I followed the video for 2 days and finally managed to make it, awesome tutorial! 👍

    • @ismgroov4094  6 months ago +1

      sir help me!!!!! i bought your workflow :( ... plz

    • @controlaltai  6 months ago

      Already replied to you.

    • @eme4117  5 months ago

      @@ismgroov4094 Where did you buy it?

  • @esuvari  6 months ago +4

    Oh MY GOD! This is incredible! The first two random images I tried off the top turned out amazing, first try. You're the most underrated SD channel on youtube, thank you for this amazing work. Can't wait to get my hands dirty with this. Wish you the best.

    • @yuvish00  6 months ago

      Hi, did you get the error: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 8, 104, 152] to have 4 channels, but got 8 channels instead ?

    • @esuvari  6 months ago

      Nope, it worked on mine

    • @TwixTed  4 months ago

      I can't seem to get the BiRefNet ZHO node to show up. I have the ones by dznodes. How can I get the one by ZHO? Could you please help?

  • @CerebricTech  5 months ago

    It's amazing. Even though this is still rocket science to me, this is the most detailed product-video explanation I've seen so far.
    Thanks.

  • @kobe5113  6 months ago +2

    honestly this is too good, thank you so much

    • @kobe5113  6 months ago

      really really well done

  • @GoodArt  6 months ago +1

    that was just the coolest video I've ever seen. comfy rules.

  • @rcj1337  5 months ago

    Impressive stuff, amazing work!

  • @amirmhmdart  18 days ago

    Amazing.

  • @jd38  6 months ago

    Yes, finally! thank you for this tutorial

  • @jjagdishwar  6 months ago

    Love this. Thank you so much

  • @PrithivThanga  6 months ago

    Must be a worthy one. will test and post here..

  • @yanggary  3 months ago +1

    🙏 Thanks for sharing this tutorial. ❓ Question: is there a way to add effects in front of the main object without distorting the product (i.e. a smoke effect in front of the main object)?
    Btw, BiRef ZhoZhoZho is not functional again.

    • @controlaltai  3 months ago

      Hi, that is difficult with accurate lighting. However, you can do it with custom nodes in ComfyUI with pseudo effects, or you can use Photoshop for further edits. Comfy will obviously require a different workflow, not shown here.
      For the BiRef ZhoZhoZho, I wrote a note here:
      Update (Sep 14, 2024): The BiRef ZhoZhoZho nodes do not import after the latest ComfyUI update. You can fix it by following the GitHub instructions here: github.com/ZHO-ZHO-ZHO/ComfyUI-BiRefNet-ZHO/issues/21. I have verified the fix on my end and it works. Here are direct instructions: drive.google.com/file/d/1NMkK7fYpXog0fv1P2QmKzfYg9v1HPHZM/view?usp=drivesdk
      I just checked it 5 minutes ago and it's loading on my system, everything updated. If that is too much of a headache, the Layer Style node also has a BiRef node; use that one and avoid ZhoZhoZho, as the dev never updates or fixes anything once broken. I should start avoiding his custom nodes in tutorials.

  • @caseyj789456  4 months ago

    Very impressive render quality!

  • @Goger_  8 days ago

    Do you plan to create a similar workflow, or modify this one, so that Flux or SDXL can be used to generate backgrounds? As far as I know there was a demo version of IC-Light that works with Flux.

    • @controlaltai  8 days ago

      Not yet released. Once it's released for Flux, the entire workflow will need to be modified and I will make a new video. SDXL requires minor changes, but the person who released the model was working on a Flux release, not an SDXL one.

    • @Goger_  8 days ago

      @controlaltai Thanks for the info, so I'll wait for the release. Anyway, thanks for the great workflow ❤️

  • @andreydcua8123  3 months ago

    First of all, thank you very much for the great tutorial!
    Can you give any recommendations to preserve (or restore) the original colors of the product?

    • @controlaltai  3 months ago +1

      Hi, use the color match tool. Transfer sensitive color details via masking.

  • @oohlala5394  5 months ago +2

    Thank you for this tutorial. However, I don't understand why we need to segment the image again at 16:51. We already have the mask and the image with the new composition (product size and placement) as output of the "ImageBlendAdvance V2" node. Why are we repeating the segmentation process? The resulting images and masks of the new segmentation seem to me to be the same as the outputs from the "ImageBlendAdvance V2" . Sorry to ask about that. I'm a sub, and thoroughly enjoy your tutorials.

    • @controlaltai  5 months ago +1

      Hi, there are multiple reasons. First, we blend the object with a grey background in this node; the output is only the mask, there is no transparent image, and we need the transparent PNG again. Second, this mask is not that good for some objects, as the node fails to mask them properly after resizing. In testing, 1 out of 10 times it caused an issue. Since I had to use the transparent PNG anyway, I thought we should give options for masking and getting the mask again.

    • @oohlala5394  5 months ago

      @@controlaltai thanks

  • @dankazama09  5 months ago

    Magnific 👌

  • @thedillion  2 days ago

    Can you share the workflow? 🙏🏻

  • @Catwholovesfish  4 months ago

    In the video at 13:56, I cannot connect the mask output of either BiRefNet model to the Mask Segment node. I tried different product images with the same result. Is that because no mask is generated? I played back your video but cannot find the solution.

    • @controlaltai  4 months ago

      What do you mean you cannot connect the mask? Explain the error. Connecting the mask is dragging out a noodle line and connecting it to the switch input.

    • @Catwholovesfish  4 months ago

      @@controlaltai No error. The "mask" output of BiRefNet will not drag out a noodle line to the switch input.

    • @controlaltai  4 months ago

      The node must not be installed. Here is the fix to install the node:
      The BiRef ZhoZhoZho nodes do not import after the latest ComfyUI update. You can fix it by following the GitHub instructions here: github.com/ZHO-ZHO-ZHO/ComfyUI-BiRefNet-ZHO/issues/21. I have verified the fix on my end and it works. Here are direct instructions: drive.google.com/file/d/1NMkK7fYpXog0fv1P2QmKzfYg9v1HPHZM/view?usp=drivesdk

  • @alexanderpina5913  2 months ago +1

    The ZHO BiRefNet module is no longer available for download via ComfyUI Manager.

    • @controlaltai  2 months ago +1

      Use the BiRefNet from the Layer Style custom node. It's basically the same.

  • @MohamedAli-hz1cn  8 days ago

    How do I download the workflow?

  • @IcaroDiniz  3 months ago

    DOPE!

  • @dankazama09  5 months ago

    Can we have this kind of workflow with Flux? This video deserves more views. Good work sir/ma'am!

    • @controlaltai  5 months ago +1

      Hi, no we can't, unfortunately. The IC-Light model was trained on SD 1.5. It's not supported on anything but SD 1.5-based or fine-tuned SD 1.5 checkpoints.

  • @hootanbr6745  1 month ago

    how can i get this workflow?

  • @mymood4200  1 month ago

    Thank you for this tutorial!
    Can you help me? I got this error:
    KSampler
    The new shape must be larger than the original tensor in all dimensions
    What does that mean?

    • @controlaltai  1 month ago +1

      Something is wrong with the model. Ensure the checkpoint is SD 1.5 and not Flux or SDXL.

    • @mymood4200  1 month ago

      @@controlaltai thank

  • @ivanivan9301  5 months ago

    Hello, thank you for the course. I've built the whole workflow, but I just can't make the product look transparent when placing transparent glass products such as perfume bottles and wine glasses; the background doesn't show through the glass at all. I've watched the tutorial again and again and didn't find where to set the transparency of the product. May I ask where to set it? Looking forward to your answer, thanks!

    • @controlaltai  5 months ago

      There is no transparency setting. If you want clear, see-through glass objects, you need to switch from here to Photoshop and do it manually. Hence we put a bokeh background in the prompt.

    • @ivanivan9301  5 months ago

      @@controlaltai Thanks for the reply, I'll give it a try

  • @chakhmanmohamed9436  3 months ago

    Can we have something like flare when we have both the product shot and the background? Thanks a lot.

    • @controlaltai  3 months ago

      What does flare mean? With the workflow you get the product lighting only; do the rest of the post-processing outside of Comfy. This would still save time.

  • @FlowFidelity  6 months ago

    at 30:00 you mention copying the negative prompt from CivitAI, could you expound on this? Thanks!

    • @controlaltai  6 months ago

      Well, all I did was open sample images from Juggernaut Aftermath and check the negatives used, then copy and paste them. That's what I meant.

    • @FlowFidelity  6 months ago

      @@controlaltai ooooh that makes sense. Thanks

  • @TimesNewRomanAI  1 month ago

    Any advice on installing Ollama on RunPod with ComfyUI? By the way, is there a JSON file to download the workflow?

    • @controlaltai  1 month ago +1

      Hi, expand the post, the workflow link is given... Here it is again.
      Ollama might not work on RunPod; I have no experience with that. Instead of Ollama you can install a Gemini node or any Claude AI node that uses an API; an API-based LLM would easily work on RunPod. For the instructions, use the same ones I have given in the workflow. All you have to do is replace Ollama with your own API custom node (whatever custom node you choose) and ensure the connections are all correct.
      Workflows: drive.google.com/file/d/1WH5Exmzij-shWnQ7MQOagE_jTTL8pEtx/view?usp=sharing
      Workflow Images: drive.google.com/file/d/1z_9my1bzxNEEArGWsFzHWlINl60BCtdY/view?usp=sharing
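
      Before swapping nodes, it can help to verify Ollama is even reachable from the pod. A small sketch against Ollama's documented /api/generate endpoint; the host, port, and model tag are assumptions for a default local install:

      ```python
      import json
      import urllib.request

      def describe_image_prompt(subject: str) -> dict:
          """Build the request body that Ollama's /api/generate expects."""
          return {"model": "llava:latest",  # assumed vision model tag
                  "prompt": f"Describe a product photo of {subject} for an SD prompt.",
                  "stream": False}

      def query_ollama(body: dict, host: str = "http://localhost:11434") -> str:
          req = urllib.request.Request(f"{host}/api/generate",
                                       data=json.dumps(body).encode(),
                                       headers={"Content-Type": "application/json"})
          with urllib.request.urlopen(req) as resp:
              return json.loads(resp.read())["response"]

      # query_ollama(describe_image_prompt("a perfume bottle"))  # needs a running Ollama server
      ```

      If the call fails with a connection error, the pod cannot reach Ollama and an API-based LLM node is the easier path.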

    • @TimesNewRomanAI  1 month ago

      @@controlaltai Thank you very much. Ill try

  • @josephmorgans6812  6 months ago

    Great work, thank you!
    Is it possible to edit/change the background & product (STRING) prompts?

    • @controlaltai  6 months ago

      Yeah, you can use custom conditioning. A switch is given in the workflow. Copy and paste from the Ollama generation into the custom text condition, then set the switch to 2.

    • @josephmorgans6812  6 months ago

      @@controlaltai Thank you for your quick reply. Sadly it doesn't seem to work for me; the final image doesn't change.

    • @controlaltai  6 months ago

      Send me your current workflow with the prompt, the reference background, and the product image to mail @ controlaltai . com (without spaces). The workflow is complicated; obviously something was missed. I will have a look and revert to you via email.

  • @wencho3616  1 month ago

    Do you know how to fix this: "Sizes of tensors must match except in dimension 1. Expected size 104 but got size 112 for tensor number 1 in the list."? I keep getting this error in KSampler when trying to edit using another picture.

    • @controlaltai  1 month ago

      There is some issue with the models. Use SD 1.5 checkpoint and not SDXL or Flux (they are not compatible). If you still get that error, then check if the ic_light models are correct and the IP adapter models are correct.
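
      That message is PyTorch's torch.cat shape check leaking through the sampler: two latents being concatenated disagree in a non-concat dimension, which is what happens when models from different families (or differently sized encodes) meet. A minimal reproduction with made-up sizes, assuming PyTorch is installed:

      ```python
      import torch

      a = torch.zeros(1, 4, 104, 152)   # latent at one resolution
      b = torch.zeros(1, 4, 112, 152)   # latent encoded at a different resolution

      try:
          torch.cat([a, b], dim=1)      # channel concat: heights 104 vs 112 clash
      except RuntimeError as err:
          print(err)                    # "Sizes of tensors must match except in dimension 1 ..."

      # With matching spatial sizes the concat goes through:
      print(torch.cat([a, torch.zeros(1, 4, 104, 152)], dim=1).shape)  # torch.Size([1, 8, 104, 152])
      ```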

    • @wencho3616  1 month ago

      I already checked and everything I use is SD 1.5, but it still has the same problem in the KSampler.

    • @controlaltai  1 month ago

      You can email the workflow to mail @ ControlAltAI . Com (without spaces). I cannot help further without looking at the workflow. Make sure to include your input product photo and reference image you are using.

  • @TimesNewRomanAI  1 month ago

    Hello and Happy New Year to all. I have loaded juggernaut_aftermath.safetensors from Hugging Face
    into the Checkpoints folder, but I can't get it to appear as an option.
    I also get this error:
    'down_blocks.3.resnets.0.norm1', which I understand has to do with the wrong model.
    Can you help me?
    By the way, I'm running ComfyUI on RunPod. The checkpoints folder doesn't show its contents, so how do I delete the models I don't need?

    • @controlaltai  1 month ago

      Happy New Year to you. Unfortunately I don't know how to do this on RunPod. Check their documentation for the same.

  • @TheGold72  3 months ago

    Hi, thanks for the good tutorial, but the ResAdapter for ComfyUI failed to import as well. How can I fix it?

    • @controlaltai  3 months ago

      Hi, what’s the import fail error?

  • @DanielPartzsch  6 months ago

    Very nice. What exactly is the difference between the old IC-Light models and the ones you've used here? Do they yield better results? Thanks.

    • @controlaltai  6 months ago +1

      Actually this one is older; it came out first, I think. I started with Kijai's, all respect to him for his work, but I was not getting the results I wanted. I switched to an entirely different approach, since the way this one works is different, was impressed with the results, and just kept building the workflow from there. I don't have a side-by-side comparison, as the nodes and the method are both different, so I can't be sure either is better; I never went back to the Kijai one to try to get it working the way I wanted.

  • @SaoirseChen-v8b  6 months ago

    Thank you for the incredible workflow. I have an issue: in the KSampler before the details and color adjust parts, the image became totally black at 60%, and ColorMatch got an error (stack expects a non-empty TensorList). Do you have any clues?

    • @controlaltai  6 months ago +1

      Hi, not until I see what you have done with the workflow; it's quite complex to identify the issue. Mail me and I can have a look and see if I can troubleshoot it: mail @ controlaltai . com (without spaces)

  • @ronsha  4 months ago

    Two questions (:
    1. Should I update Ollama when it says there is a new update?
    2. Is there an option to create the background in focus? Almost every photo with the product has a blurry background; it feels like the photo was taken with a low-aperture lens.
    Thanks!

    • @controlaltai  4 months ago +1

      The background is as per the prompting and checkpoint. You need to change the prompts and play around with the checkpoint/LoRA to get your desired results. The workflow remains the same.
      Yes, update Ollama; it's always best to keep it at the latest version. It will not negatively affect the workflow.

    • @ronsha  4 months ago

      @@controlaltai thank you! I'll try it.

  • @JuanPabloJaramillo-k6u  2 months ago

    Can this workflow be used with ComfyUI on Google Colab?

    • @controlaltai  2 months ago

      I don't have any idea about Google Colab; I've never played with it. All the work done for clients is usually local.

  • @francoisfilliat764  4 months ago

    Thanks a lot for this amazing workflow! Any chance of using Flux to generate the background, even though I know IC-Light is not compatible at this stage? Maybe by "IC-lighting" the image generated by Flux?

    • @controlaltai  4 months ago

      Welcome. Unfortunately, no. We have to pass the image through IC-Light, which is SD 1.5; when you do that in the KSampler it will degrade again. IC-Light has to be compatible with Flux for the generation to blend properly with a Flux background.

    • @francoisfilliat764  4 months ago

      @@controlaltai Thank you for your quick reply. I'll wait for IC Light to be compatible with Flux ! ☺

  • @design38  6 months ago

    Hi, great tutorial, by the way! I have a slight problem. The resulting image of a black product is different from the original. For example, if the product is black running shoes and the background is green scenery, the result will make the shoes appear green. I also tried a black bag, and it turned white. The details are still there, but this result is after the KSampler. Probably something to do with the IPAdapter or IC-Light?

    • @controlaltai  6 months ago

      Hi, send me the workflow and the images to mail @ controlaltai . com (without spaces); without looking and testing myself, I can't troubleshoot.

  • @Howcouldieverstop  23 days ago

    Get my money! Hey I’m a photographer, and I’ve been watching your videos. I want to naturally composite backgrounds into photos I’ve actually taken, but when generating images in SDXL and Flux environments using the T2I method, the quality of the backgrounds isn’t very good. I’m considering using MidJourney to create backgrounds and naturally matching the product with them. Do you have any related videos, or is this something you plan to cover?

    • @controlaltai  22 days ago

      Hi, you will require a custom workflow. Midjourney won't solve the problem; blending will require too much post-processing. IC-Light is only supported by SD 1.5; you cannot use Flux or SDXL with IC-Light. We can achieve a natural blend with Flux but the lighting will be pseudo. Again, it would be a custom workflow. There won't be a YouTube tutorial for this as it's a very specific use case.

  • @Andrew-hi4lk  5 months ago

    This is amazing!
    Any ideas about this error?
    Error occurred when executing KSampler:
    Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 8, 104, 152] to have 4 channels, but got 8 channels instead

    • @Andrew-hi4lk  5 months ago

      Never mind! I see that the custom node ComfyUI-layerdiffuse (layerdiffusion) is required and this resolves the error :)

  • @agusdor1044  6 months ago

    Hi, I'm trying to use this WF but I only have a 6GB GPU. I've tried on various online platforms and even locally with the ComfyCloud node (which allows you to work locally but with a cloud GPU for generations), but I haven't been able to use the WF successfully with any of these alternatives. Could you tell me if you know whether this WF could be used with a service like RunPod or something similar? Ty!!!

    • @controlaltai  6 months ago

      The workflow can be used in the cloud or locally; it does not matter where you run it, 6 GB of VRAM won't do. There are a lot of things happening here and a lot of models getting loaded. 24 GB is recommended, but you can try 12 GB at a bare minimum. I haven't tested that, as I don't have 12 GB hardware.

  • @KashifRashid  6 months ago

    I have installed everything but I can't find the Switch (Any) node in my search. What am I doing wrong?

    • @KashifRashid  6 months ago

      Ok, figured that one out, lol. Had to update ComfyUI from outside, not via the manager.

  • @FlowFidelity  6 months ago

    Thank you. How does one install the VITMatte detail model? PyMatting is working for me in Ultra, but I seem to be missing VITMatte.

    • @controlaltai  6 months ago

      I have explained in the video. Check from 6:28

    • @FlowFidelity  6 months ago

      @@controlaltai Thank you! That's what I was looking for!

    • @FlowFidelity  6 months ago

      @@controlaltai BTW are you on LinkedIn ? I did a post about this tutorial and would love to tag you. Thanks again for the great tutorial!

    • @controlaltai  6 months ago

      Hi, no, I went off LinkedIn years back. That's fine, feel free to share.

    • @FlowFidelity  6 months ago

      @@controlaltai not gonna lie I originally skipped that section, and really wish I had not. Going back through it now :)

  • @blackbear8398  5 months ago

    Hi, is there a way I can make the background less cartoonish? I've already tried many checkpoints but they give the same result. How do I make a realistic background? I already use a realistic image for the background image, though.

    • @controlaltai  5 months ago +1

      Hi, you can see the background images in the video; they are not cartoonish. So it's the prompting or the checkpoint. I cannot tell unless I look at the workflow.

    • @blackbear8398  5 months ago +1

      @@controlaltai After some experimenting, I added this to the prompt: {describe the image in extreme detail Include "atmosphere, mood & tone and lighting". Write the description as if you are a product photographer. include the word "hyper realistic" and "shot on dslr" and "shot using 12mm lens" and "aperture f 1.2" and "lifelike texture" and "macro shot" and "faded color grading" and "slow shutter" and "long exposure" in the description} and it worked. Thanks bro for this awesome workflow.

    • @controlaltai  5 months ago +1

      Great 👍 We need a better LLM vision model. Llama 3.1 is far better but has nothing for vision atm. Your prompt instruction is very interesting; I will try it out, thanks.

  • @LinhLe-ib9gi  5 months ago

    I'm getting an error on the Switch node. It says: node 29 says it needs input input0, but there is no input to that node at all. Help me.

    • @controlaltai  5 months ago

      I cannot understand which node 29 you are talking about. Visually check which node the error is coming from, along with the cmd error. That will help me understand what the issue is.

    • @LinhLe-ib9gi  5 months ago

      @@controlaltai Error on the Switch node (Impact Pack). Error occurred when executing ImpactSwitch:
      Node 5 says it needs input input0, but there is no input to that node at all
      File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 294, in execute
      execution_list.make_input_strong_link(unique_id, i)
      File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\comfy_execution\graph.py", line 94, in make_input_strong_link
      raise NodeInputError(f"Node {to_node_id} says it needs input {to_input}, but there is no input to that node at all")

    • @controlaltai  5 months ago

      Can you email me a screenshot of the workflow and zoom in on the node which has the error, I need to look at what is going on. mail @ controlaltai . com (without spaces).
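
    For anyone hitting this "needs input input0" error: the Impact Pack switch inputs are numbered from 1 (input1, input2, ...), so a select value of 0 asks for a nonexistent "input0". A toy sketch of the selection logic, assuming 1-based inputs (this is illustrative, not the actual Impact Pack code):

    ```python
    def impact_switch(select: int, **inputs):
        """Pick the input named input{select}; fails for select=0 because
        switch inputs are numbered starting at 1."""
        key = f"input{select}"
        if key not in inputs:
            raise ValueError(f"needs input {key}, but there is no input to that node")
        return inputs[key]

    print(impact_switch(1, input1="ollama prompt", input2="custom prompt"))  # ollama prompt
    # impact_switch(0, input1="ollama prompt")  # raises: needs input input0 ...
    ```

    This is also why setting the ImpactInt feeding a switch from 0 to 1 resolves the error further down this thread.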

  • @赵毅-b9y  5 months ago

    Hello, I cannot find VAE Encode ArgMax in my ComfyUI. Which plugin do I need to download?

    • @controlaltai  5 months ago

      Hello, check the video for the custom node requirements. It's part of the main IC-Light custom node, as shown.

    • @赵毅-b9y  5 months ago +1

      ​@@controlaltai Thank you, I have found a solution. The version I downloaded had an issue, so I couldn't find it

  • @Lifejoy88  4 months ago

    Hi, where I can download your workflow (json file)?

    • @controlaltai  4 months ago

      Hi, the workflow is only made available to paid channel members. You don't need to become a paid member, though; everything is shown in the video to recreate the workflow from scratch.

  • @KeenHendrikse  6 months ago

    Hey, can anyone advise where I can find the image blend advanced v2 node?

    • @controlaltai  6 months ago +1

      LayerStyle Custom Node. Check video custom node requirements.

  • @ForYoutube-s3o  3 months ago

    For some reason custom masking for text does not work properly. I am wondering what the issue might be?

    • @controlaltai  3 months ago

      What part of custom masking is not working? After making a custom mask, you have to switch to the custom mask in the mask selection switch.

    • @ForYoutube-s3o  3 months ago

      @@controlaltai masking for the version b.

    • @controlaltai  3 months ago

      Explain, I cannot understand what is version b. There is no “version b” in the workflow.

    • @ForYoutube-s3o  3 months ago

      @@controlaltai Image B in the image comparer.

    • @controlaltai  3 months ago

      Check the Preview Bridge node for masking: before the masking switch, the Preview Bridge node's "if empty mask" option should be set to "never".

  • @agusdor1044  5 months ago

    Hi! I've successfully loaded the workflow in a cloud instance. Everything is up and running, but I'm encountering the same error that others have reported:
    Error occurred when executing KSampler: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 104, 152] to have 4 channels, but got 8 channels instead.
    I'm running the workflow with a 24GB GPU and 64GB RAM.
    I've selected and downloaded the correct ldm version of IC-Light.
    All nodes are installed and updated (including LayerDiffusion).
    I've tried all the weight_dtype settings in IC-Light, but I keep getting the same error.
    Do you know what might be causing this?

    • @controlaltai  5 months ago

      Hi, there is an issue with the latest ComfyUI; it broke the IC-Light node. The developer is working on a fix; you have to use the legacy front end or wait till the developer fixes it.

    • @mohammadjavadnazari7941  5 months ago

      @@controlaltai fixed now!

    • @controlaltai  5 months ago +1

      Yeah, checked and updated the pinned comment. For anyone else seeing this: "Comfy Update (Aug 27, 2024): If you are getting a KSampler error, you need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, along with IC-Light and Layered Diffusion. Everything works as shown in the video. No change in workflow."

    • @agusdor1044  5 months ago

      @@controlaltai ALL WORKING NOW YASSSSSSS!

  • @RafiSpaceOnline  4 months ago

    Hello,
    I can't import the ComfyUI-BiRefNet-ZHO node. I tried installing it manually and through ComfyUI Manager, but the import failed.
    To be honest, I bought a membership of your channel because of this workflow, and the workflow doesn't work for me...
    Can you please help me install BiRefNet-ZHO?

    • @controlaltai  4 months ago

      @RafiSpaceOnline-j5b Hi, the BiRef ZhoZhoZho nodes do not import after the latest ComfyUI update. You can fix it by following the GitHub instructions here: github.com/ZHO-ZHO-ZHO/ComfyUI-BiRefNet-ZHO/issues/21. I have verified the fix on my end and it works.
      Here are direct instructions: drive.google.com/file/d/1NMkK7fYpXog0fv1P2QmKzfYg9v1HPHZM/view?usp=drivesdk

    • @sairammw  4 months ago

      Facing the same problem; no matter how many times I install or change it to "myutils", it says the node is not installed.

    • @controlaltai  4 months ago

      @@sairammw The instructions require a change in the code, not just a rename; please read carefully:
      After you rename, you should open dataset.py and make the changes given in the screenshot. This is an issue with the node, and the node dev has not bothered to integrate this simple fix. Other people have made this solution; all he has to do is merge it on GitHub. For now we have to do it manually.
      drive.google.com/file/d/1NMkK7fYpXog0fv1P2QmKzfYg9v1HPHZM/view?usp=drivesdk

    • @sairammw  4 months ago

      @@controlaltai Yes, I had edited data.py too, the 10th line... but I'm still facing the same problem.

  • @ImagindeDash  6 months ago

    Thank you for the tutorial, but I'm getting this error: Error occurred when executing LayerUtility: ImageBlendAdvance V2:
    'NoneType' object is not iterable

    • @controlaltai  6 months ago

      Not sure what this error is; some connections could be wrong. Ensure the background and layer inputs are correct.

    • @ronshalev1842  6 months ago +1

      Hi, did you manage to fix that? I have the same error

    • @controlaltai  6 months ago +1

      You can email me the workflow. I can have a look at it for you. mail @ controlaltai . com (without spaces)

    • @ImagindeDash  6 months ago +1

      @@ronshalev1842 Hi, I fixed the error by changing the value in the ImpactInt node from 0 to 1.

    • @ronshalev1842
      @ronshalev1842 6 months ago

      @@ImagindeDash Thank you, that did the trick!

  • @sagarsinghvi2766
    @sagarsinghvi2766 5 months ago +1

    Can you share the workflow for us to download?

    • @agusdor1044
      @agusdor1044 5 months ago

      You have to be a channel member, or build it yourself by following the video.

  • @affanyanuar
    @affanyanuar 4 months ago

    Got an error on BiRefNet-ZHO. Any solution?

    • @controlaltai
      @controlaltai  4 months ago

      Please give more details about the error.

    • @affanyanuar
      @affanyanuar 4 months ago

      @@controlaltai When loading the graph, the following node types were not found:
      ComfyUI-BiRefNet-ZHO
      Author:ZHO-ZHO-ZHO

    • @affanyanuar
      @affanyanuar 4 months ago

      I already installed it via ComfyUI Manager and manually; it's still the same issue.

    • @controlaltai
      @controlaltai  4 months ago

      @@affanyanuar Hi,
      Please find fix here:
      github.com/ZHO-ZHO-ZHO/ComfyUI-BiRefNet-ZHO/issues/21

  • @SanchezGodsent
    @SanchezGodsent 6 months ago

    I have this problem: Error occurred when executing LayerUtility: HLFrequencyDetailRestore:
    images do not match

    • @controlaltai
      @controlaltai  6 months ago +1

      Yeah, if you don't paint a manual mask you get that error: an empty image gets passed on, which is a mismatch. I mentioned this in the video. Connect your image to the Preview Bridge, get the error, then manually mask, save in the Preview Bridge, and run Queue Prompt again. It should go through.
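A tiny sketch of the precondition behind "images do not match": the detail-restore step pastes with a mask, so the mask must exist and match the image size. The helper name is an illustrative assumption, not part of the LayerStyle API.

```python
def mask_ok(image_size, mask_size):
    """True when a mask exists and matches the image dimensions.

    An empty Preview Bridge mask (None), or a mask left over from a
    different resolution, fails this check, which is what surfaces as
    'images do not match' downstream.
    """
    if mask_size is None:  # no manual mask painted yet
        return False
    return tuple(image_size) == tuple(mask_size)
```

For example, `mask_ok((1024, 1024), None)` is False until you paint and save a mask in the Preview Bridge.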

  • @bizonlarheryerde
    @bizonlarheryerde 6 months ago

    Is your channel’s membership option turned on? I can’t see it anywhere.

    • @controlaltai
      @controlaltai  6 months ago +1

      Yes, here is the link:
      ua-cam.com/channels/gDNws07qS4twPydBatuugw.htmljoin

  • @CharlesPrithviRaj
    @CharlesPrithviRaj 6 months ago

    Couldn't find the BiRefNetUltra node; which custom node is it from?

    • @controlaltai
      @controlaltai  6 months ago +1

      That's from the LayerStyle custom node.

  • @赵毅-b9y
    @赵毅-b9y 5 months ago

    May I ask what caused this error?
    Error occurred when executing KSampler:
    Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead

    • @controlaltai
      @controlaltai  5 months ago

      Make sure you have the Layered Diffusion custom node installed.

    • @赵毅-b9y
      @赵毅-b9y 5 months ago

      @@controlaltai It is installed, but this problem occurs when it reaches the KSampler, which is very frustrating

    • @controlaltai
      @controlaltai  5 months ago

      @@赵毅-b9y Make sure you downloaded the correct IC-Light apply models; these are the ldm models, not the standard ones.

    • @controlaltai
      @controlaltai  5 months ago

      Hi, issue seems to be fixed: "Comfy Update (Aug 27, 2024): If you Are getting KSampler Error, You need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, IC-Light and Layered Diffusion. Everything works as shown in the video. No change in workflow."

    • @kunalpuri9492
      @kunalpuri9492 2 months ago

      @@controlaltai Sadly, even after grabbing the right models and updating everything, this KSampler issue seems to have resurfaced. That's a pity. This looks like a great workflow that's plagued by node inconsistencies. No fault of the workflow.
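For context on the recurring "expected ... to have 4 channels, but got 8 channels" message, here is a rough sketch. The shapes are taken from the error text; the conditioning detail (IC-Light concatenating a second 4-channel latent onto the noise latent) is stated as an assumption about how the node feeds the UNet, which is why the "_unet_ldm" weights plus the Layered Diffusion node are needed to widen the input layer.

```python
import numpy as np

# Stock SD 1.5 conv_in weights: 320 filters over 4 latent channels (3x3 kernel),
# matching "weight of size [320, 4, 3, 3]" in the error message.
conv_in_weight = np.zeros((320, 4, 3, 3))

# IC-Light concatenates a 4-channel conditioning latent onto the
# 4-channel noise latent along the channel axis.
noise_latent = np.zeros((2, 4, 64, 64))
cond_latent = np.zeros((2, 4, 64, 64))
unet_input = np.concatenate([noise_latent, cond_latent], axis=1)

print(unet_input.shape[1])      # 8 -> what the error reports as the input
print(conv_in_weight.shape[1])  # 4 -> what un-patched weights accept
```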

  • @SteMax-d6z
    @SteMax-d6z 5 months ago

    At the upscale part, with ImpactInt = 2, the product image gets bigger than the background image. I don't know why; sir, help

    • @controlaltai
      @controlaltai  5 months ago

      Are you building the workflow from scratch? Double-check the video; the background has to be upscaled as well.

    • @djodjogri
      @djodjogri 15 days ago

      You have to upload a bigger image than the one you created. Change the image size in Photoshop.

  • @cinematicfilm6559
    @cinematicfilm6559 3 months ago

    Does it work on humans also?

    • @controlaltai
      @controlaltai  3 months ago

      No, it changes the face, unless you use a trained LoRA

    • @cinematicfilm6559
      @cinematicfilm6559 3 months ago

      @@controlaltai So where do I plug in that LoRA if I have it?

    • @controlaltai
      @controlaltai  3 months ago

      @cinematicfilm6559 After the checkpoint; you have to connect the CLIP as well. Note that the LoRA should be trained on SD 1.5.

  • @JD-ls5vt
    @JD-ls5vt 5 months ago

    Which version of the UI was used for this? I keep getting the "but got 8 channels instead" error. Even with the required Layered Diffusion node and the correct "fc-ldm" model, the issue persists. Bypassing the IC-Light Apply node lets the flow complete execution.

    • @controlaltai
      @controlaltai  5 months ago

      The old one, as the Comfy update came out on Aug 15 and this was posted on July 25. I will check in a few hours and get back to you; if it's broken in the latest version, I will update the workflow and make a post. Try putting the model in the layered diffusion folder instead of unet and see if that works. The 8-channel error is highly unlikely to be a Comfy update issue, but I will re-check.

    • @JD-ls5vt
      @JD-ls5vt 5 months ago

      @@controlaltai Thanks for the reply. The ldm model only seems to be recognized in the unet or diffusion_models folder. I'm using ComfyUI: 2611[8ae23d](2024-08-23),
      Manager: V2.50.2

    • @controlaltai
      @controlaltai  5 months ago +1

      The error is with the IC-Light node. I will post an updated workflow, as the Impact Pack Switch also malfunctions after the new update. The IC-Light dev is working on a fix: github.com/huchenlei/ComfyUI-IC-Light-Native/issues/44; will let you know once it's pushed.

    • @agusdor1044
      @agusdor1044 5 months ago

      @@controlaltai I'm interested in this fix too, thank you!

    • @controlaltai
      @controlaltai  5 months ago

      Hi, Issue has been fixed: "Comfy Update (Aug 27, 2024): If you Are getting KSampler Error, You need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, IC-Light and Layered Diffusion. Everything works as shown in the video. No change in workflow."

  • @ismgroov4094
    @ismgroov4094 6 months ago

    I have an error, sir: "Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 8, 104, 152] to have 4 channels, but got 8 channels instead"

    • @controlaltai
      @controlaltai  6 months ago

      Is the Layered Diffusion custom node installed?

    • @ismgroov4094
      @ismgroov4094 6 months ago

      @@controlaltai I did, sir

    • @ismgroov4094
      @ismgroov4094 6 months ago

      @@controlaltai There is something wrong with the "IC Light Apply" node... please help me.

    • @controlaltai
      @controlaltai  6 months ago +1

      Choose the fc model, not fbc, and download the correct models from the link. These are the ldm versions of the models, not the ones on Kijai's GitHub.

    • @ismgroov4094
      @ismgroov4094 6 months ago

      @@controlaltai Thanks sir, I solved it! ❤️🙏🏻🥹

  • @packshotstudio2118
    @packshotstudio2118 5 months ago

    How do you paste with connections?

    • @controlaltai
      @controlaltai  5 months ago +1

      Ctrl + Shift + V

    • @packshotstudio2118
      @packshotstudio2118 5 months ago

      @@controlaltai Hey, I started supporting your channel and downloaded the workflow, but at the end (Image Comparer) it's not generating an image; I'm getting a black screen. Also, I have two red boxes, on the IPAdapter and Load CLIP Vision nodes. Do you know why this might be happening?

    • @controlaltai
      @controlaltai  5 months ago

      Hi, thank you. It's probably the wrong IP Adapter model selected. Send me a screenshot of the following via email: the checkpoint group, IC-Light group, and IP Adapter group, along with a cmd screenshot of the error when the box turns red. I need to see what is happening to troubleshoot it. mail @ controlaltai . com (without spaces).

    • @packshotstudio2118
      @packshotstudio2118 5 months ago

      @@controlaltai Thank you, I sent the message. Thanks for your help

    • @controlaltai
      @controlaltai  5 months ago

      One more thing: the Preview Bridge node was updated. Ensure that the "block" option in it is set to "never".

  • @SteMax-d6z
    @SteMax-d6z 5 months ago

    Thanks so much.
    I updated ComfyUI, but I'm still getting a KSampler error: TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'.
    Sir, help me!

    • @controlaltai
      @controlaltai  5 months ago

      Not sure what that error is. Check the checkpoint; you are supposed to use an SD 1.5 checkpoint only.

    • @SteMax-d6z
      @SteMax-d6z 5 months ago

      @@controlaltai If I bypass the Load ResAdapter, it works; I don't know why

    • @SteMax-d6z
      @SteMax-d6z 5 months ago

      But if "Load and Apply IC-Light" is not bypassed and I bypass the Load ResAdapter, it doesn't work

    • @SteMax-d6z
      @SteMax-d6z 5 months ago

      @@controlaltai I use the same checkpoint, Juggernaut

    • @controlaltai
      @controlaltai  5 months ago

      Juggernaut has both SDXL and SD 1.5 checkpoints; reconfirm you are using the SD 1.5 checkpoint, not SDXL

  • @FrauPolleHey
    @FrauPolleHey 4 months ago

    Hi!
    I tried everything to install the LayerStyle nodes, without success. Can anyone help here, please?
    (IMPORT FAILED) ComfyUI Layer Style
    (IMPORT FAILED) ComfyUI-BiRefNet-ZHO
    I tried to install manually and with the Manager; same result :(

    • @controlaltai
      @controlaltai  4 months ago +1

      @FrauPolleHey I cannot help without looking at the cause of the failed import; I need to look at your system. Typically it will tell you which import or dependency installation failed, and you then have to do it manually. Send me an email with the entire cmd boot-up text after a clean boot-up, and I will try to help via email. mail @ controlaltai . com (without spaces)

    • @FrauPolleHey
      @FrauPolleHey 4 months ago

      @@controlaltai Sent, thank you

  • @nasrulacown6066
    @nasrulacown6066 5 months ago

    Hi, I have followed your instructions. It works on the first run, but when I change the background image and change the resolution in the SDXLResolution node, I get an "images do not match" error. Those were the only things I changed. This is the error message: Error occurred when executing LayerUtility: HLFrequencyDetailRestore:
    images do not match
    File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_LayerStyle\py\hl_frequency_detail_restore.py", line 73, in hl_frequency_detail_restore
    ret_image.paste(background_image, _mask)
    File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 1847, in paste
    self.im.paste(im, box, mask.im)

    • @controlaltai
      @controlaltai  5 months ago +1

      "Images do not match" occurs because you have not created a mask in the Preview Bridge. Whenever you pass the image to HL Frequency, it needs to be masked. If you added the switch as in the video, change the switch to number 2; if using 1, mask manually and then queue the prompt.

    • @nasrulacown6066
      @nasrulacown6066 5 months ago

      @@controlaltai Wow, thanks man. I switched the mask-for-detail switch to number 2 and it works. I see, so that is the error you were talking about in the video.

  • @TimesNewRomanAI
    @TimesNewRomanAI 1 month ago

    Has anyone received this error:
    ICLightApply
    Cannot copy out of meta tensor; no data!

    • @controlaltai
      @controlaltai  1 month ago +1

      Not sure what the error is; I've never heard of it. I suggest you first try this locally, then read the documentation on how to set up RunPod

    • @TimesNewRomanAI
      @TimesNewRomanAI 1 month ago

      @@controlaltai The funny thing is I'm running on a pod because I don't have a PC to run it locally

    • @controlaltai
      @controlaltai  1 month ago

      Unfortunately, I have never worked with RunPod, so I can't advise. Please look at some other tutorials on how to load checkpoints, workflows, etc. on RunPod. Once loaded, if there are workflow issues, I can help you with that. Most of the clients (companies) I work with have me do the workflows locally

  • @smatbootes
    @smatbootes 5 months ago

    Hello all! :) I have an issue when I execute:
    "Error occurred when executing IPAdapterModelLoader:
    invalid IPAdapter model C:\ComfyUI_windows_portable\ComfyUI\models\ipadapter\iclight_sd15_fc_unet_ldm.safetensors
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 316, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 191, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 168, in _map_node_over_list
    process_inputs(input_dict, i)
    File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 157, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 657, in load_ipadapter_model
    return (ipadapter_model_loader(ipadapter_file),)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\utils.py", line 147, in ipadapter_model_loader
    raise Exception("invalid IPAdapter model {}".format(file))"
    Could someone help me?

    • @controlaltai
      @controlaltai  5 months ago +1

      "invalid IPAdapter model C:\ComfyUI_windows_portable\ComfyUI\models\ipadapter\iclight_sd15_fc_unet_ldm.safetensors" Hi, this is not an IP Adapter model; it's the IC-Light model, and you have put it in the IP Adapter folder. Recheck the video for where the IC-Light models go.
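A small sketch of the fix described above: move the IC-Light weights out of the ipadapter folder into the unet folder. The helper and the folder layout are assumptions based on the paths in the error; adjust for your install.

```python
import shutil
from pathlib import Path

def move_model(filename: str, wrong_dir: Path, right_dir: Path) -> Path:
    """Move a model file that landed in the wrong models folder.

    e.g. move_model("iclight_sd15_fc_unet_ldm.safetensors",
                    Path("ComfyUI/models/ipadapter"),
                    Path("ComfyUI/models/unet"))
    """
    right_dir.mkdir(parents=True, exist_ok=True)
    dst = right_dir / filename
    shutil.move(str(wrong_dir / filename), str(dst))
    return dst
```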

  • @ameerziadi4253
    @ameerziadi4253 6 months ago

    Can you share the workflow?

    • @controlaltai
      @controlaltai  6 months ago +2

      Ready-made JSON files are for paid channel members only. You can just build the workflow by following the tutorial; nothing is hidden.

    • @SanchezGodsent
      @SanchezGodsent 6 months ago

      @@controlaltai Where is this private channel?

    • @controlaltai
      @controlaltai  6 months ago

      YouTube Join Membership

  • @FlowFidelity
    @FlowFidelity 6 months ago

    Well, this is where I stop tonight: "Error occurred when executing UNETLoader:
    ERROR: Could not detect model type of: C:\ComfyUI_windows_portable\ComfyUI\models\unet\IC-Light\iclight_sd15_fc.safetensors". Got to retrace the steps again, I guess

    • @controlaltai
      @controlaltai  6 months ago +1

      Okay, so you have downloaded the wrong models. Check the models in the requirements or the description; you have to download the layered-diffusion version of the models. Here is the link:
      huggingface.co/huchenlei/IC-Light-ldm/tree/main
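As a sketch, the ldm weights can be fetched straight into the unet models folder. The fc filename appears elsewhere in this thread; the fbc filename is assumed from the repo listing, so verify both against the Hugging Face page before downloading:

```shell
# Paths assume a default ComfyUI layout; adjust for your install.
cd ComfyUI/models/unet
wget https://huggingface.co/huchenlei/IC-Light-ldm/resolve/main/iclight_sd15_fc_unet_ldm.safetensors
wget https://huggingface.co/huchenlei/IC-Light-ldm/resolve/main/iclight_sd15_fbc_unet_ldm.safetensors
```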

    • @FlowFidelity
      @FlowFidelity 6 months ago

      @@controlaltai Ahhh, I was thinking that could be it. Thank you so much for your patience. Now I can sleep :)

  • @ZiadHayes
    @ZiadHayes 15 days ago

    OMG, I will never watch a one-hour video made with an AI voice

    • @controlaltai
      @controlaltai  15 days ago +1

      Noted. This is a technical tutorial aimed at helping users with ComfyUI. If the AI voice isn't your preference, the focus is still on the content and learning; feel free to skip if it's not for you.

  • @CerebricTech
    @CerebricTech 5 months ago

    It's amazing. Even though this is still rocket science for me, it is the most detailed product-workflow video I've seen so far.
    Thanks.

  • @ronshalev1842
    @ronshalev1842 5 months ago

    Is it possible to control the background blur in the resulting image?

    • @controlaltai
      @controlaltai  5 months ago

      Yes, with prompting. In the video tutorial I use "dof", which is depth of field. You can add "clear, sharp" in the positive prompt and "dof" in the negative. Once you have a clear background, you can add depth of field using a blur node from the LayerStyle nodes.