COMMENTS •

  • @leolis78
    @leolis78 3 months ago

    Great video!! Please make more videos about product photography.

    • @kaziahmed
      @kaziahmed 3 months ago +1

      Definitely, will do! 🙌

  • @baheth3elmy16
    @baheth3elmy16 3 months ago

    I really like your videos!!!!!

    • @kaziahmed
      @kaziahmed 3 months ago

      Thank you 🙌🏽🙌🏽

  • @dollarproduction24
    @dollarproduction24 3 months ago

    Superb strategies ❤

  • @farhandhanani791
    @farhandhanani791 22 days ago

    Getting this error in my ComfyUI while running it on RunPod. Can you please guide me?
    Warning: Missing Node Types
    When loading the graph, the following node types were not found:
    easy imageRemBg
    ImageResize+
    UltimateSDUpscale
    UpscaleModelLoader
    easy ipadapterApply
    Nodes that have failed to load will show as red on the graph.

  • @SejalDatta-l9u
    @SejalDatta-l9u 3 months ago

    Excellent video, Kazi
    A few quick questions:
    1. How would you size/scale the image to be in proportion to the background scene?
    I tried a simple positive prompt: dark city street of london, street lamps
    The outcome was my model being the same size as the street lights.
    2. How can you incorporate your image so it seamlessly blends into your background without it looking like a cut out?
    Have you managed to incorporate depth, XYZ positioning, shadows, and/or dimensions (e.g. 3D)?
    I'd be keen to see your workflow.
    Thank you and keep up the good work!

    • @kaziahmed
      @kaziahmed 3 months ago +1

      I have experienced the scale and proportion issue. I haven't figured out a way to tackle that yet; I will do some more experiments this week and try to improve the workflow.
      As for blending the background, you've got to tweak the settings a bit: use mask blur, adjust the sigma values, etc. The workflow isn't perfect yet for production-level work; it's very basic right now, meant for experimenting with ideas and brainstorming. You brought some valid points into the conversation, thank you for that! 🙌🏽
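
      For illustration, here's a minimal sketch of that mask-blur blending done outside ComfyUI, assuming Pillow is installed; the file names and blur radius are hypothetical placeholders:

      # Feather a hard cutout edge so the subject blends into the scene
      # instead of looking pasted on (this is the "mask blur" knob).
      # Assumes both images share the same dimensions.
      from PIL import Image, ImageFilter

      fg = Image.open("product_cutout.png").convert("RGBA")   # subject on transparency
      bg = Image.open("relit_background.png").convert("RGBA")

      soft_mask = fg.getchannel("A").filter(ImageFilter.GaussianBlur(radius=6))
      Image.composite(fg, bg, soft_mask).save("blended.png")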

    • @SejalDatta-l9u
      @SejalDatta-l9u 3 months ago +1

      @kaziahmed Excellent answers, I appreciate your candidness.
      Yes, please look into it. I'm not designing anything for production - I'm just a bit of a perfectionist :)

  • @agartajewelry705
    @agartajewelry705 3 months ago

    Thanks ❤

    • @kaziahmed
      @kaziahmed 3 months ago

      You're welcome :)

  • @oskarwreiber5854
    @oskarwreiber5854 3 days ago

    How would I do it if I want the foreground object to have some changes too? E.g., in your shoe example, I might want to add a bit of dirt and dust to the shoes, or change their color. How would that be done in this workflow? Right now I can't make any changes to the foreground object.

    • @kaziahmed
      @kaziahmed 3 days ago

      That can be achieved by adding a custom mask to the input image.
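
      As a rough sketch of that idea, assuming Pillow: paint a mask in any image editor (white = area to change) and bake it into the input image's alpha channel, since ComfyUI's LoadImage node derives its MASK output from alpha, with transparent pixels becoming the masked region. The file names are hypothetical:

      from PIL import Image, ImageOps

      photo = Image.open("shoes.png").convert("RGBA")
      paint = Image.open("dirt_mask.png").convert("L")   # white = repaint this area

      # Invert so the painted (white) regions end up transparent,
      # which LoadImage then exposes as the mask.
      photo.putalpha(ImageOps.invert(paint))
      photo.save("shoes_with_mask.png")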

  • @ismgroov4094
    @ismgroov4094 3 months ago

    thx sir!

    • @kaziahmed
      @kaziahmed 3 months ago

      you're welcome 🙌🏽🙌🏽

  • @Cartooonita
    @Cartooonita 1 month ago

    How do I do it if I already have a background?

  • @darkmatter9583
    @darkmatter9583 1 month ago

    How do I save your workflow on GitHub, create it, and also upload it to Hugging Face? Thanks

  • @agartajewelry705
    @agartajewelry705 3 months ago +1

    The workflow link has just a screenshot of the workflow, but no download option for the workflow file. Can you please check?

    • @kaziahmed
      @kaziahmed 3 months ago +3

      As I mentioned in the video, the PNG file has the JSON embedded in it. Simply drag and drop the image into your ComfyUI and it will work. 🙌🏽🙌🏽
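
      If you'd rather pull the JSON out yourself, here's a minimal sketch assuming Pillow; ComfyUI stores the graph in a PNG text chunk named "workflow", and the file name below is hypothetical:

      import json
      from PIL import Image

      img = Image.open("workflow.png")
      workflow = json.loads(img.info["workflow"])   # KeyError if nothing is embedded

      with open("workflow.json", "w", encoding="utf-8") as f:
          json.dump(workflow, f, indent=2)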

  • @agartajewelry705
    @agartajewelry705 3 months ago

    The portable ComfyUI you installed already has Python in its package, so is there a special reason you're also installing Python separately?

    • @kaziahmed
      @kaziahmed 3 months ago +1

      Some users had an issue with Python not being added to the PATH environment variable; that's why I showed that.
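
      A quick way to check that, runnable with any Python install: compare the interpreter you're running with the one Windows resolves from PATH.

      import shutil, sys

      print("running interpreter:", sys.executable)           # e.g. the portable build's Python
      print("first python on PATH:", shutil.which("python"))  # None means python isn't on PATH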

  • @skozaa1
    @skozaa1 3 months ago

    Hi buddy, thanks for your effort. But I couldn't see the workflow JSON file anywhere; there is only a PNG image in the link you gave. Did I miss the JSON?

    • @kaziahmed
      @kaziahmed 3 months ago +2

      The PNG file has the JSON embedded in it. Just drag and drop it into ComfyUI.

    • @skozaa1
      @skozaa1 3 months ago

      @@kaziahmed Oh okayyy, I understand. Thank you bro :)

    • @kaziahmed
      @kaziahmed 3 months ago

      @@skozaa1 You're welcome bro!

  • @steventapia_motiondesigner
    @steventapia_motiondesigner 3 months ago

    Thanks for the workflow, Kazi. I'm getting a black halo around my image after the detailer is applied. Any way to get rid of this black halo?

    • @kaziahmed
      @kaziahmed 3 months ago +1

      You have to adjust the mask and blur. Also try adjusting the sigma value.

    • @steventapia_motiondesigner
      @steventapia_motiondesigner 3 months ago

      @@kaziahmed Thanks! I'll try that!

  • @Adi3DPro
    @Adi3DPro 3 months ago

    Error occurred when executing KSampler:
    Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead
    File "L:\-work\-ai\ComfyUI\ComfyUI\execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI
    odes.py", line 1371, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI
    odes.py", line 1341, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\sample.py", line 43, in sample
    samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\samplers.py", line 795, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\samplers.py", line 697, in sample
    return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\samplers.py", line 684, in sample
    output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\samplers.py", line 663, in inner_sample
    samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\samplers.py", line 568, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\python_embeded\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\k_diffusion\sampling.py", line 635, in sample_dpmpp_2m_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\samplers.py", line 291, in __call__
    out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\samplers.py", line 650, in __call__
    return self.predict_noise(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\samplers.py", line 653, in predict_noise
    return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\samplers.py", line 277, in sampling_function
    out = calc_cond_batch(model, conds, x, timestep, model_options)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\samplers.py", line 224, in calc_cond_batch
    output = model_options['model_function_wrapper'](model.apply_model, {"input": input_x, "timestep": timestep_, "c": c, "cond_or_uncond": cond_or_uncond}).chunk(batch_chunks)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\custom_nodes\ComfyUI-IC-Light-Native\ic_light_nodes.py", line 116, in wrapper_func
    return existing_wrapper(unet_apply, params=apply_c_concat(params))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\custom_nodes\ComfyUI-IC-Light-Native\ic_light_nodes.py", line 108, in unet_dummy_apply
    return unet_apply(x=params["input"], t=params["timestep"], **params["c"])
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\model_base.py", line 113, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\python_embeded\Lib\site-packages\torch
    n\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\python_embeded\Lib\site-packages\torch
    n\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 852, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\ldm\modules\diffusionmodules\openaimodel.py", line 50, in forward_timestep_embed
    x = layer(x)
    ^^^^^^^^
    File "L:\-work\-ai\ComfyUI\python_embeded\Lib\site-packages\torch
    n\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\python_embeded\Lib\site-packages\torch
    n\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\ComfyUI\comfy\ops.py", line 80, in forward
    return super().forward(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\python_embeded\Lib\site-packages\torch
    n\modules\conv.py", line 460, in forward
    return self._conv_forward(input, self.weight, self.bias)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    File "L:\-work\-ai\ComfyUI\python_embeded\Lib\site-packages\torch
    n\modules\conv.py", line 456, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

    • @Adi3DPro
      @Adi3DPro 3 months ago +1

      I am getting this error, any help bro?

    • @kaziahmed
      @kaziahmed 3 months ago

      @@Adi3DPro You need to install the extra nodes like KJNodes and LayerDiffuse. The links are in the video description. Also make sure you download the correct model files for IC Light.
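
      For context, a minimal reproduction of that error, assuming PyTorch: IC-Light concatenates extra conditioning channels onto the latent, so the UNet's first convolution must be the patched one; a stock SD 1.5 input conv (weight shape [320, 4, 3, 3]) only accepts 4 channels. The shapes below mirror the traceback:

      import torch

      conv = torch.nn.Conv2d(4, 320, kernel_size=3, padding=1)  # stock SD 1.5 input conv

      conv(torch.randn(2, 4, 128, 128))   # plain 4-channel latent: works
      conv(torch.randn(2, 8, 128, 128))   # latent + IC-Light concat: RuntimeError,
      # "expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead"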

  • @Coalbanksco
    @Coalbanksco 3 months ago

    My implementation of your workflow keeps dying on the KSampler node - "Error occurred when executing KSampler: Given groups=1, weight of size [320, 12, 3, 3], expected input[2, 8, 128, 128] to have 12 channels, but got 8 channels instead". Any ideas? I'd love to give this one a try, thanks for making the video for us!

    • @Coalbanksco
      @Coalbanksco 3 months ago

      Never mind! After a bunch of random refreshes it works now - thank you!

    • @kaziahmed
      @kaziahmed 3 months ago +1

      @@Coalbanksco Glad it worked out! I know the setup can be a bit painful, but once all the pieces fall into place it works perfectly :)

    • @alexisnik135
      @alexisnik135 3 months ago

      @@kaziahmed Having this exact same issue as well. At first I thought it was the SDXL model, but it doesn't work on SD 1.5 either.

    • @kaziahmed
      @kaziahmed 3 months ago

      @alexisnik135 Please check the video description. The IC-Light node installation is a bit tricky… you need to install LayerDiffuse and KJNodes. And for the IC-Light models, don't download them from the ComfyUI Manager; use the Hugging Face files link that I provided. It will work.
      Also, the workflow is for SD 1.5.

  • @SPCUTTL
    @SPCUTTL 3 months ago

    Very good video, thank you. I tried to install everything per your explanation, but my workflow stops at background replacement: it is empty. Do you know why?

    • @kaziahmed
      @kaziahmed 3 months ago +2

      You might be missing one of the custom nodes required for IC-Light. Make sure you have KJNodes and LayerDiffuse installed.
      Also, check the file names and locations from my video; it's a bit tricky. A few others on my Facebook had a similar issue, and it was resolved by installing this node: github.com/huchenlei/ComfyUI-layerdiffuse

    • @ImagindeDash
      @ImagindeDash 3 months ago

      @@kaziahmed I had the same problem in other workflows, but this helped me a lot. Thanks a lot bro.

    • @kaziahmed
      @kaziahmed 3 months ago

      @@ImagindeDash You're welcome bro 🙌🏽🙌🏽

  • @Adi3DPro
    @Adi3DPro 3 months ago

    How can I export my ComfyUI workflow as a PNG, so I can load it from the PNG image the way you made your file?

    • @kaziahmed
      @kaziahmed 3 months ago

      I used a custom node, here's the link: github.com/pythongosssss/ComfyUI-Custom-Scripts?tab=readme-ov-file#workflow-images

  • @Adi3DPro
    @Adi3DPro 3 months ago

    Still getting this; I tried again from start to end:
    ---------
    Error occurred when executing KSampler:
    Given groups=1, weight of size [320, 4, 3, 3], expected input[2, 8, 128, 128] to have 4 channels, but got 8 channels instead
    [full traceback identical to the one posted above]

    • @Adi3DPro
      @Adi3DPro 3 months ago

      OK, I got the solution. The problem I was facing was related to ComfyUI-layerdiffuse; I was trying to install it via cmd, but that wasn't working (same error).
      So the solution for me was to install ComfyUI-layerdiffuse manually: download the zip file, extract it, then restart, and yes, it's working great.
      Thanks bro for your help, it's so amazing
      ❤❤❤❤❤❤❤❤❤❤❤❤‍🩹❤‍🩹❤‍🩹❤‍🩹❤‍🩹❤‍🩹

    • @kaziahmed
      @kaziahmed 3 months ago

      Try doing a fresh install of ComfyUI in a separate folder. Then make sure you install all the missing nodes, plus the KJNodes and LayerDiffuse nodes. The others who had this issue were able to solve it simply by doing this.
      Also, for the IC-Light models, use the Hugging Face link I provided.

    • @Adi3DPro
      @Adi3DPro 3 months ago

      @@kaziahmed Thanks, working great!

    • @panonesia
      @panonesia 1 month ago

      How did you fix this issue? I have the same problem as you.

    • @kaziahmed
      @kaziahmed 1 month ago

      @@panonesia You have to install the extra nodes, especially KJNodes, in ComfyUI.