Update (Oct 28, 2024): The BiRef ZhoZhoZho nodes do not import after the latest ComfyUI update. The fix is simple (a scripted sketch of steps 3-4 follows the list):
1. Install the node via Comfy Manager. Restart and you will get a failed import error.
2. Close Comfy in the browser.
3. Download the fixed zip I made: drive.google.com/file/d/1oHO_m5DoUWkU7ViMg3P9dQqWUH06tkCX/view?usp=sharing
4. Replace all contents of ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO with the contents of the zip.
5. Restart ComfyUI. The node should now work.
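For reference, a minimal Python sketch of steps 3-4, assuming the zip was saved next to the script as BiRefNet-ZHO-fixed.zip (a hypothetical filename) and ComfyUI sits at the path below:

import shutil
import zipfile

# Steps 3-4 sketch: wipe the broken node folder, then unpack the fixed
# files in its place. Both paths are assumptions -- adjust to your install.
node_dir = r"ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO"
shutil.rmtree(node_dir, ignore_errors=True)   # clear the failed install
with zipfile.ZipFile("BiRefNet-ZHO-fixed.zip") as z:
    z.extractall(node_dir)                    # drop in the fixed contents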
Update (Sep 5, 2024): The Preview Bridge node in the latest update has a new option called "Block". Ensure that it is set to "never" and not "if_empty_mask". This allows the Preview Bridge node to pass the image on to the H/L Frequency node as shown in the video and transfer the details. If set to "if_empty_mask" you will not get any preview; it will show as a black output. I asked the dev to update the node so that the default behavior is always "never", and he has done so. Update the node again to the latest version.
Comfy Update (Aug 27, 2024): If you are getting a KSampler error, you need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, along with IC-Light and Layered Diffusion. Everything works as shown in the video. No change in workflow.
IC-Light is based on SD1.5, but all generations are at SDXL resolution, then 4x upscaled. I hope you find the tutorial helpful. Please note: at 5:17 the Layered Diffusion custom node is needed even though none of its nodes are used; otherwise you will get an error as follows:
RuntimeError: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 64, 64] to have 4 channels, but got 8 channels instead
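For anyone wondering where the 8 comes from: IC-Light concatenates its conditioning latent with the usual 4-channel noise latent, while the stock SD1.5 UNet input convolution only accepts 4 channels; the Layered Diffusion node patches that convolution to accept the wider input. A toy PyTorch reproduction of the mismatch and of the patch idea (an illustration only, not the node's actual code):

import torch
import torch.nn as nn

# Stock SD1.5 UNet input conv: 4 latent channels in, 320 feature maps out.
conv_in = nn.Conv2d(4, 320, kernel_size=3, padding=1)

# IC-Light concatenates its conditioning latent with the noise latent,
# so the sampler hands the UNet 8 channels instead of 4:
latent = torch.randn(2, 8, 64, 64)
try:
    conv_in(latent)
except RuntimeError as e:
    print(e)  # "...expected input[2, 8, 64, 64] to have 4 channels, but got 8..."

# The patch widens conv_in, keeping the original weights and
# zero-initializing the new input channels:
patched = nn.Conv2d(8, 320, kernel_size=3, padding=1)
with torch.no_grad():
    patched.weight[:, :4] = conv_in.weight
    patched.weight[:, 4:] = 0.0
    patched.bias.copy_(conv_in.bias)
print(patched(latent).shape)  # torch.Size([2, 320, 64, 64])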
thx sooooo much
Thank you so much, I followed the video for 2 days and finally managed to make it, awesome tutorial! 👍
sir help me!!!!! i bought your workflow :( ... plz
Already replied to you.
@@ismgroov4094 Where did you buy it?
Oh MY GOD! This is incredible! The first two random images I tried off the top turned out amazing, first try. You're the most underrated SD channel on youtube, thank you for this amazing work. Can't wait to get my hands dirty with this. Wish you the best.
Hi, did you get the error: Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 8, 104, 152] to have 4 channels, but got 8 channels instead ?
Nope, it worked on mine
I can't seem to get the BiRefNet ZHO node to show up. I have the ones by dznodes. How can I get the one by ZHO? Could you please help?
It's amazing. Even though this is still rocket science for me, this is the most detailed product explanation video I've seen till now.
Thanx.
honestly this is too good, thank you so much
really really well done
that was just the coolest video I've ever seen. comfy rules.
Impressive stuff, amazing work!
Thank You!!
Yes, finally! thank you for this tutorial
🙏 Thanks for sharing this tutorial ❓ Question, is there a way to add effects in front of the main object without distorting the product (i.e., a smoke effect in front of the main object)?
Btw, BiRef Zhozhozho is not functional again.
Hi, that is difficult with accurate lighting. However, you can do it with custom nodes in ComfyUI with pseudo effects, or you can use Photoshop for further edits. Comfy will obviously require a different workflow, not shown here.
For the BiREF ZhoZhoZho, i wrote a note here:
Update (Sep 14, 2024): The BiRef ZhoZhoZho nodes do not import after the latest ComfyUI update. You can fix it by following the GitHub instructions here: github.com/ZHO-ZHO-ZHO/ComfyUI-BiRefNet-ZHO/issues/21. I have verified the fix on my end and it works. Here are direct instructions: drive.google.com/file/d/1NMkK7fYpXog0fv1P2QmKzfYg9v1HPHZM/view?usp=drivesdk
I just checked it 5 minutes ago and it's loading on my system, everything updated. If that is too much of a headache, the Layer Style node also has a BiRefNet node; use that one and avoid ZhoZhoZho, as the dev never updates or fixes anything once broken. I should start avoiding his custom nodes in tutorials.
First of all, thank you very much for the great tutorial!
Can you give any recommendations to preserve (or restore) the original colors of the product?
Hi, use the color match tool. Transfer color-sensitive details via masking.
Thank you for this tutorial. However, I don't understand why we need to segment the image again at 16:51. We already have the mask and the image with the new composition (product size and placement) as output of the "ImageBlendAdvance V2" node. Why are we repeating the segmentation process? The resulting images and masks of the new segmentation seem to me to be the same as the outputs from the "ImageBlendAdvance V2" . Sorry to ask about that. I'm a sub, and thoroughly enjoy your tutorials.
Hi, there are multiple reasons. First, we blend the object with a grey background in this node. It's only the mask; there is no image. We need the PNG image to be transparent again. Second, the mask from this node is not that good for some objects, as it fails to mask them properly after resizing. In testing, it caused an issue 1 out of 10 times. Since I had to use the transparent PNG anyway, I thought we should give options for masking and getting the mask again.
@@controlaltai thanks
Must be a worthy one. will test and post here..
Very impressive render quality!
Love this. Thank you so much
thank you for this tutorial!
Can you help me? I got this error:
KSampler
The new shape must be larger than the original tensor in all dimensions
What does that mean?
Something is wrong with the model. Ensure the checkpoint is SD1.5 and not Flux or SDXL.
The ZHO BiRefNet module is no longer available for download via ComfyUI Manager
Use the BiRefNet from the Layer Style custom node. It's basically the same.
At 13:56 in the video, I cannot connect the mask after both BiRefNet models to the Mask Segment node. I tried different product images with the same result. Is that because no mask was generated? I played back your video but cannot find the solution.
What do you mean you cannot connect the mask? Explain the error. Connecting the mask is dragging out a noodle line and connecting it to the switch input.
@@controlaltai No error. I cannot drag a noodle line from the BiRefNet "mask" output to the switch input.
The node must not be installed. Here is the fix to install the node:
The BiRef ZhoZhoZho nodes do not import after the latest ComfyUI update. You can fix it by following the GitHub instructions here: github.com/ZHO-ZHO-ZHO/ComfyUI-BiRefNet-ZHO/issues/21. I have verified the fix on my end and it works. Here are direct instructions: drive.google.com/file/d/1NMkK7fYpXog0fv1P2QmKzfYg9v1HPHZM/view?usp=drivesdk
Do you know how to fix this? "Sizes of tensors must match except in dimension 1. Expected size 104 but got size 112 for tensor number 1 in the list." I keep getting this error in KSampler when trying to edit using another picture.
There is some issue with the models. Use SD 1.5 checkpoint and not SDXL or Flux (they are not compatible). If you still get that error, then check if the ic_light models are correct and the IP adapter models are correct.
Thanks a lot for this amazing workflow! Any chance of using Flux to generate the background, even though I know IC Light is not compatible at this stage? Maybe by "IC Lighting" the image generated by Flux?
Welcome. Unfortunately, no. We have to pass the image via IC Light, which is SD 1.5. When you do that in the KSampler it will degrade again. IC Light has to be compatible with Flux to have this generation blend properly with a Flux background.
@@controlaltai Thank you for your quick reply. I'll wait for IC Light to be compatible with Flux ! ☺
Hello, thank you for the course. I've built the whole workflow, but I just can't make the product look transparent when placing transparent glass products such as perfume bottles and wine glasses; it can't show the background content through the glass at all. I've watched the tutorials again and again and didn't find where to set the transparency of the product. May I ask where to set it? Looking forward to your answer, thanks!
There is no transparency setting. If you want see-through glass on clear objects, you need to switch from here to Photoshop and do it manually. Hence we put a bokeh background in the prompt.
@@controlaltai Thanks for the reply, I'll give it a try
Very nice. What exactly is the difference between the old IC Light models and the ones you've used here? Do they yield better results? Thanks
Actually this one is older; it came out first, I think. I started with kijai's, all respect to him for his work. However, I was not getting the results that I wanted. I switched to an entirely different approach, since this one works in a different way, was impressed with the results, and just went on building the workflow from there. I don't have a side-by-side comparison, as the nodes and method applied are both different. So I cannot be sure if either is better, as I never went back to the kijai one and tried to get it working the way I wanted.
how can i get this workflow?
great work, thank you !
Is it possible to edit / change the background & product (STRING) prompts?
Yeah, you can use custom conditioning. A switch is given in the workflow. Copy and paste from the Ollama generation to the custom text condition, then set the switch to 2.
@@controlaltai Thank you for your quick reply. Sadly it doesn't seem to work for me; the final image doesn't change.
Send me your current workflow with the prompt, the reference bg, and the product image to mail @ controlaltai . com (without spaces). The workflow is complicated; obviously something was missed. Will have a look and revert to you via email.
at 30:00 you mention copying the negative prompt from CivitAI, could you expound on this? Thanks!
Well, all I did was open sample images from Juggernaut Aftermath and check the negatives used, then copy and paste them. That's what I meant.
@@controlaltai ooooh that makes sense. Thanks
Can we have something like flare, when you have both the product shot and the background? Thanks a lot
What does flare mean? Using the workflow you get the product lighting only. Do the rest of the post-processing outside Comfy. This would still save time.
Hi, great tutorial, by the way! I have a slight problem. The resulting image of a black product is different from the original. For example, if the product is black running shoes and the background is green scenery, the result will make the shoes appear green. I also tried a black bag, and it turned white. The details are still there, but this result is after the KSampler. It probably has something to do with the IPAdapter or IC Light?
Hi, send me the workflow and the images to mail @ controlaltai . com (without spaces); without looking and testing myself, I can't troubleshoot.
Thank you for the incredible workflow. I have an issue: when generating with the KSampler, before the details and color adjust parts, the KSampler image becomes totally black at 60%, and ColorMatch throws an error (stack expects a non-empty TensorList). Do you have any clues?
Hi, not until I see what you have done with the workflow. It's quite complex to identify the issue. If you mail me, I can have a look and see if I can troubleshoot it. mail @ controlaltai . com (without spaces)
Two questions (:
1. Should I update Ollama when it says there is a new update?
2. Is there an option to create the background in focus? Almost every photo with the product has a blurry background. It feels like the photo was taken by a lens with a low aperture number.
Thanks!
The background is as per the prompting and checkpoint. You need to change the prompts and play around with the checkpoint and LoRA to get your desired results. The workflow remains the same.
Yes, update Ollama. It's always best to keep it at the latest version. It will not negatively affect the workflow.
@@controlaltai thank you! I'll try it.
Hi, thanks for the good tutorial, but the ResAdapter for ComfyUI import failed as well. How can I fix it?
Hi, what’s the import fail error?
Can this workflow be used correctly with comfyui using Google Colab?
I don't have any idea about Google Colab; never played with it. All the work done for clients is usually local.
Magnific 👌
DOPE!
This is amazing!
Any ideas about this error?
Error occurred when executing KSampler:
Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 8, 104, 152] to have 4 channels, but got 8 channels instead
Never mind! I see that the custom node ComfyUI-layerdiffuse (layerdiffusion) is required and this resolves the error :)
Hi, is there a way that I can make the background less cartoonish? I already tried many checkpoints but they give the same result. How do I make a realistic background? I already use a realistic image for the background image, though.
Hi, you can see the background images in the video. They are not cartoonish. So it's the prompting or the checkpoint. I cannot tell unless I look at the workflow.
@@controlaltai after some experimenting, I added this to the prompt {describe the image in extreme detail Include "atmosphere, mood & tone and lighting". Write the description as if you are a product photographer. include the word "hyper realistic" and "shot on dslr" and "shot using 12mm lens" and "aperture f 1.2" and "lifelike texture" and "macro shot" and "faded color grading" and "slow shutter" and "long exposure" in the description} and it worked. Thanks bro for this awesome workflow.
Great 👍 We need a better LLM vision model. Llama 3.1 is far better but has no vision capability atm. Your prompt instruction is very interesting; will try it out, thanks.
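If you want to test an instruction like that outside ComfyUI first, here is a rough sketch against Ollama's local REST API. The model name "llava", the image filename, and the default port are assumptions; substitute whatever vision model the workflow's Ollama node actually uses:

import base64
import requests

# Encode the background image for Ollama's vision API.
with open("background.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

instruction = (
    'describe the image in extreme detail. Include "atmosphere, mood & tone '
    'and lighting". Write the description as if you are a product photographer.'
)

# Single non-streaming generation request to a local Ollama instance.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llava", "prompt": instruction,
          "images": [image_b64], "stream": False},
    timeout=300,
)
print(resp.json()["response"])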
Hi, I'm trying to use this WF but I only have a 6GB GPU. I've tried on various online platforms and even locally with the ComfyCloud node (which allows you to work locally but with a cloud GPU for generations), but I haven't been able to use the WF successfully with any of these alternatives. Could you tell me if you know whether this WF could be used with a service like RunPod or something similar? Ty!!!
The workflow can be used in the cloud or locally. It does not matter where you run it; 6 GB VRAM won't do. There are a lot of things happening here, and a lot of models getting loaded. 24 GB is recommended, but you can try this at 12 GB at a bare minimum. I haven't tested it, as I don't have 12 GB hardware.
For some reason custom masking for text does not work properly. I am wondering what might be the issue?
What part of custom masking is not working? After making a custom mask, you have to switch to the custom mask in the mask selection switch.
@@controlaltai masking for the version b.
Explain; I cannot understand what "version b" is. There is no "version b" in the workflow.
@@controlaltai Image B in the image comparer
Check the Preview Bridge node for masking before the masking switch; the Preview Bridge node's block option should be set to "never", not "if_empty_mask".
Thank you for the tutorial but I'm getting this error: Error occurred when executing LayerUtility: ImageBlendAdvance V2:
'NoneType' object is not iterable
Not sure what this error is; it could be that some connections are wrong. Ensure the background and layer are correct.
Hi, did you manage to fix that? I have the same error
You can email me the workflow. I can have a look at it for you. mail @ controlaltai . com (without spaces)
@@ronshalev1842 Hi, I fixed the error by changing the value in the ImpactInt node from 0 to 1.
@@ImagindeDash Thank you that did the work!
Thank you. How does one install the VITMatte detail model? PyMatting is working for me in Ultra, but I seem to be missing VITMatte.
I have explained in the video. Check from 6:28
@@controlaltai Thank you! That's what I was looking for!
@@controlaltai BTW are you on LinkedIn ? I did a post about this tutorial and would love to tag you. Thanks again for the great tutorial!
Hi, no, I went off LinkedIn years back. That's fine, feel free to share.
@@controlaltai not gonna lie I originally skipped that section, and really wish I had not. Going back through it now :)
Hi, where I can download your workflow (json file)?
Hi, Workflow is only made available for paid channel members. You don't need to become a paid member. Everything is shown in the video to recreate the workflow from scratch.
I have an error in the Switch node. It says: node 29 says it needs input input0, but there is no input to that node at all. Help me
I cannot understand what node 29 you are talking about. Visually see which node the error is coming from, along with the cmd error. That will help me understand what the issue is.
@@controlaltai Error node: Switch (Impact Pack). Error occurred when executing ImpactSwitch:
Node 5 says it needs input input0, but there is no input to that node at all
File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\execution.py", line 294, in execute
execution_list.make_input_strong_link(unique_id, i)
File "C:\Stable Diffusion\ComfyUI_windows_portable\ComfyUI\comfy_execution\graph.py", line 94, in make_input_strong_link
raise NodeInputError(f"Node {to_node_id} says it needs input {to_input}, but there is no input to that node at all")
Can you email me a screenshot of the workflow and zoom in on the node which has the error, I need to look at what is going on. mail @ controlaltai . com (without spaces).
Hello, I cannot find VAE Encode ArgMax in my ComfyUI. Which plugin do I need to download?
Hello, check the video for the custom node requirements. It's part of the main IC Light custom node, as shown.
@@controlaltai Thank you, I have found a solution. The version I downloaded had an issue, which is why I couldn't find it.
hi! I've successfully loaded the workflow in a cloud instance. Everything is up and running, but I'm encountering the same error that others have reported:
Error occurred when executing KSampler: Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 104, 152] to have 4 channels, but got 8 channels instead.
I'm running the workflow with a 24GB GPU and 64GB RAM.
I've selected and downloaded the correct ldm version of IC Light.
All nodes are installed and updated (including LayerDiffusion).
I've tried all the weight_dtype settings in IC Light, but I keep getting the same error.
Do you know what might be causing this?
Hi, there is an issue with the latest ComfyUI; it broke the IC Light node. The developer is working on a fix. You have to use the legacy front end or wait till the developer fixes it.
@@controlaltai fixed now!
Yeah, check the updated pinned comments. For anyone else seeing this: "Comfy Update (Aug 27, 2024): If you are getting a KSampler error, you need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, along with IC-Light and Layered Diffusion. Everything works as shown in the video. No change in workflow."
@@controlaltai ALL WORKING NOW YASSSSSSS!
I have installed everything but I can't find the Switch (Any) node in my search. What am I doing wrong?
Ok.. figured that one out.. lol. Had to update ComfyUI from outside, not the manager.
May I ask what caused this error
Error occurred when executing KSampler:
Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead
Make sure you have the Layered Diffusion custom node installed.
@@controlaltai It is installed, but after running, this problem occurs on the KSampler, which is very frustrating
@@赵毅-b9y Make sure you downloaded the correct IC Light apply models; these are the ldm models and not the standard ones.
Hi, the issue seems to be fixed: "Comfy Update (Aug 27, 2024): If you are getting a KSampler error, you need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, along with IC-Light and Layered Diffusion. Everything works as shown in the video. No change in workflow."
@@controlaltai sadly, even after grabbing the right models and updating everything, this KSampler issue seems to have resurfaced. That's a pity. This looks like a great workflow that's plagued by node inconsistencies. No fault of the workflow.
I have this problem: Error occurred when executing LayerUtility: HLFrequencyDetailRestore:
images do not match
Yeah, if you don't put a manual mask you get that error. It passes on an empty image, which is a mismatch. I have mentioned this in the video. So connect your image to the Preview Bridge, get the error, then manually mask, save it in the Preview Bridge, and run Queue Prompt again. It should go through.
At the upscale part, with ImpactInt = 2, the product image gets bigger, bigger than the bg image. I don't know why. Sir, help
Are you building the workflow from scratch? Double-check the video. The bg has to be upscaled.
Can we have this kind of workflow with Flux? This video deserves more views. Good work sir/ma'am!
Hi, no, we can't unfortunately. The IC Light model was trained on SD 1.5. It's not supported on anything but SD 1.5-based or fine-tuned SD 1.5 checkpoints.
Which version of the UI was used for this? I keep getting the "but got 8 channels instead" error. Even with the required Layered Diffusion node and the correct "fc-ldm" model, the issue persists. Bypassing the IC Light apply node lets the flow complete execution.
The old one, as the new Comfy came out on Aug 15 and this was posted on July 25. I will check it in a few hours and get back to you. If it's broken in the latest version, I will update and make a post. Try putting the model in the layered diffusion folder instead of unet and see if that works. The channel 8 error is highly unlikely to be a Comfy update issue. I will recheck though.
@@controlaltai Thanks for the reply; the ldm model only seems to be recognized in the unet or diffusion_models folder. I'm using ComfyUI: 2611[8ae23d](2024-08-23)
Manager: V2.50.2
The error is with the IC Light node. I will post an updated workflow, as the Impact Pack Switch also malfunctions after the new update. The IC Light dev is working on a fix: github.com/huchenlei/ComfyUI-IC-Light-Native/issues/44. Will let you know once it's pushed.
@@controlaltai im interested in this fix too! tyyy
Hi, the issue has been fixed: "Comfy Update (Aug 27, 2024): If you are getting a KSampler error, you need to update ComfyUI to "ComfyUI: 2622[38c22e](2024-08-27)" or higher, along with IC-Light and Layered Diffusion. Everything works as shown in the video. No change in workflow."
Got an error on BiRefNet ZHO? Any solution?
Give more details of the error please.
@@controlaltai When loading the graph, the following node types were not found:
ComfyUI-BiRefNet-ZHO
Author:ZHO-ZHO-ZHO
I already installed it via Comfy Manager and manually; it's still the same issue
@@affanyanuar Hi,
Please find fix here:
github.com/ZHO-ZHO-ZHO/ComfyUI-BiRefNet-ZHO/issues/21
Is your channel’s membership option turned on? I can’t see it anywhere.
Yes, here is the link:
ua-cam.com/channels/gDNws07qS4twPydBatuugw.htmljoin
Hello,
I can't import the ComfyUI-BiRefNet-ZHO node. I tried to install manually and through ComfyUI Manager, but the import failed.
To be honest, because of this workflow I bought a membership to your channel... and the workflow doesn't work for me...
Can you please help me install BiRefNet-ZHO?
@RafiSpaceOnline-j5b Hi, the BiRef ZhoZhoZho nodes do not import after the latest ComfyUI update. You can fix it by following the GitHub instructions here: github.com/ZHO-ZHO-ZHO/ComfyUI-BiRefNet-ZHO/issues/21. I have verified the fix on my end and it works.
Here are direct instructions: drive.google.com/file/d/1NMkK7fYpXog0fv1P2QmKzfYg9v1HPHZM/view?usp=drivesdk
Facing the same problem. No matter how many times I install or change it to "myutils", it says the node is not installed
@@sairammw The instructions require a change in the code, not just a rename. Please read carefully:
After you rename, you should open dataset.py and make the changes given in the screenshot. This is a node issue and the node dev hasn't bothered to integrate this simple fix. Other people have made this solution; all he has to do is merge it on GitHub. For now we have to do it manually.
drive.google.com/file/d/1NMkK7fYpXog0fv1P2QmKzfYg9v1HPHZM/view?usp=drivesdk
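A small sketch that automates the rename-plus-edit described above. The folder and file names (utils renamed to myutils, dataset.py) come from this thread and the linked issue; the exact import lines in your copy may differ, so compare against the screenshot in the instructions before trusting a blind rewrite:

import os
import re

# Rename the node's "utils" package (it shadows ComfyUI's own utils)...
node_dir = r"ComfyUI\custom_nodes\ComfyUI-BiRefNet-ZHO"
os.rename(os.path.join(node_dir, "utils"),
          os.path.join(node_dir, "myutils"))

# ...then point dataset.py's imports at the renamed package.
dataset_py = os.path.join(node_dir, "dataset.py")
with open(dataset_py, encoding="utf-8") as f:
    src = f.read()
src = re.sub(r"\bfrom utils\b", "from myutils", src)
src = re.sub(r"\bimport utils\b", "import myutils", src)
with open(dataset_py, "w", encoding="utf-8") as f:
    f.write(src)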
@@controlaltai yes, I edited dataset.py too, the 10th line... but I am still facing the same problem
Hey, can anyone advise where I can find the ImageBlendAdvance V2 node?
LayerStyle Custom Node. Check video custom node requirements.
Can you share the workflow for us to download?
You have to be a channel member or build it yourself by watching the video.
Couldn't find the BiRefNetUltra node; which custom node is it from?
That's from LayerStyle custom node.
thx so much.
I updated ComfyUI, but I'm still getting a KSampler error: TypeError: calculate_weight() got an unexpected keyword argument 'intermediate_dtype'
sir help me!!!!!
Not sure what that error is. Check the checkpoint; you are supposed to use an SD1.5 checkpoint only.
@@controlaltai I bypassed the Load ResAdapter and it works; I don't know why
but if "Load and Apply IC Light" is not bypassed and I bypass the Load ResAdapter, it doesn't work
@@controlaltai I use the same checkpoint, Juggernaut
Juggernaut has SDXL and SD1.5 checkpoints; reconfirm you are using the SD1.5 checkpoint and not SDXL
Does it work on humans also?
No, it changes the face, unless you use a trained LoRA
@@controlaltai So where do I plug in that LoRA if I have it?
@cinematicfilm6559 After the checkpoint; you have to connect the CLIP as well. Note that the LoRA should be trained on SD 1.5.
Hi!
I tried everything to install the Layer Style nodes, without success. Can anyone help here please?
(IMPORT FAILED) ComfyUI Layer Style
(IMPORT FAILED) ComfyUI-BiRefNet-ZHO
I tried to install manually and with manager, same :(
@FrauPolleHey I cannot help without looking at the cause of the failed import; I need to look at your system. Typically it will tell you which import or dependency install failed; you then have to do it manually. Send me an email with the entire cmd boot-up text after a clean boot-up. I will try and help via email. mail @ controlaltai . com (without spaces)
@@controlaltai sent, thank you
I have an error, sir: "Given groups=1, weight of size [320, 4, 3, 3], expected input[1, 8, 104, 152] to have 4 channels, but got 8 channels instead"
Is the layered diffusion custom node installed?
@@controlaltai I did, sir
@@controlaltai There is something wrong with the "IC Light Apply" node.. plz help me.
Choose the fc model and not fbc. Download the correct models from the link. These are the ldm versions of the models and not what's given on kijai's GitHub.
@@controlaltai Thx sir, I solved it! ❤️🙏🏻🥹
Hi, I have followed your instructions. It's working for the first run, but when I change the background image and change the resolution in the SDXLResolution node, I get an "images do not match" error. I don't know what the problem is, but these were the only things that I changed. This is the error message: {Error occurred when executing LayerUtility: HLFrequencyDetailRestore:
images do not match
File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 152, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 82, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\execution.py", line 75, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_LayerStyle\py\hl_frequency_detail_restore.py", line 73, in hl_frequency_detail_restore
ret_image.paste(background_image, _mask)
File "C:\Users\User\Desktop\ComfyUI_windows_portable_nvidia\ComfyUI_windows_portable\python_embeded\Lib\site-packages\PIL\Image.py", line 1847, in paste
self.im.paste(im, box, mask.im)
Images do not match because you have not created a mask in the Preview Bridge. Whenever passing the image to HL Frequency you need to have it masked. If you added the switch like in the video, change the switch to no. 2. If using 1, mask manually and then queue the prompt.
@@controlaltai Wow, thanks man, I switched the mask for detail to no. 2 and it works. I see, so that is the error you've been talking about in the video.
How do you paste with connections?
Ctrl+Shift+V
@@controlaltai Hey, I started supporting your channel and downloaded the workflow, but at the end (Image Comparer), it's not generating an image; I'm getting a black screen. Also, I have two red boxes on the IP Adapter and in Load CLIP Vision. Do you know why this might be happening?
Hi, thank you. It's probably the wrong IP adapter selected. Send me a screenshot of the following via email: checkpoint group, IC Light group, IP Adapter group, along with a cmd screenshot of the error when the box is red. I need to see what is happening to troubleshoot it. mail @ controlaltai . com (without spaces).
@@controlaltai Thank you, I sent the message - thank you for your help
There is another thing: the Preview Bridge node was updated. Ensure that the block option in it is set to never.
Hello all ! :) I have an issue when I execute
"Error occurred when executing IPAdapterModelLoader:
invalid IPAdapter model C:\ComfyUI_windows_portable\ComfyUI\models\ipadapter\iclight_sd15_fc_unet_ldm.safetensors
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 316, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 191, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 168, in _map_node_over_list
process_inputs(input_dict, i)
File "C:\ComfyUI_windows_portable\ComfyUI\execution.py", line 157, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\IPAdapterPlus.py", line 657, in load_ipadapter_model
return (ipadapter_model_loader(ipadapter_file),)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_IPAdapter_plus\utils.py", line 147, in ipadapter_model_loader
raise Exception("invalid IPAdapter model {}".format(file))"
could someone help me?
"invalid IPAdapter model C:\ComfyUI_windows_portable\ComfyUI\models\ipadapter\iclight_sd15_fc_unet_ldm.safetensors" Hi, this is not an IP Adapter model. Its the ic light model, you have put it in the ip adapter folder. Recheck the video on where the ic light models go.
Can you share the workflow?
Ready-made JSON files are for paid channel members only. You can just build the workflow by following the tutorial; nothing is hidden.
@@controlaltai Where is this private channel?
YouTube Join Membership
Well, this is where I stop tonight: "Error occurred when executing UNETLoader: ERROR: Could not detect model type of: C:\ComfyUI_windows_portable\ComfyUI\models\unet\IC-Light\iclight_sd15_fc.safetensors". Got to retrace the steps again, I guess.
Okay, so you have downloaded the wrong models. Check the models in the requirements or check the description. You have to download the layered diffusion version of the model. Here is the link:
huggingface.co/huchenlei/IC-Light-ldm/tree/main
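If you'd rather fetch it from a script, here is a sketch using huggingface_hub; the filename matches the one quoted elsewhere in this thread and the target folder follows the video, but verify both against the link above:

from huggingface_hub import hf_hub_download

# Download the ldm-format IC-Light model into ComfyUI's unet folder.
hf_hub_download(
    repo_id="huchenlei/IC-Light-ldm",
    filename="iclight_sd15_fc_unet_ldm.safetensors",
    local_dir=r"ComfyUI\models\unet\IC-Light",
)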
@@controlaltai ahhh, I was thinking that could be it. Thank you so much for your patience. Now I can sleep :)
Is it possible to control the image background blur result?
Yeah, with prompting. In the video tutorial I use dof, which is depth of field. You can add "clear, sharp" in the positive and dof in the negative. Once you have a clear bg, you can add depth of field using a blur node from the Layer Style nodes; a rough sketch of the idea follows.
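As an outside-of-Comfy illustration of that last step (plain PIL instead of the Layer Style blur node; the filenames are placeholders): blur the whole render, then paste the sharp product back through its mask so only the background picks up the depth of field.

from PIL import Image, ImageFilter

# Fake depth of field: blur the full image, then restore the product
# area from the sharp original using the product mask (white = product).
image = Image.open("render.png")
mask = Image.open("product_mask.png").convert("L")

blurred = image.filter(ImageFilter.GaussianBlur(radius=8))
blurred.paste(image, (0, 0), mask)  # keep the product sharp
blurred.save("render_dof.png")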