Amazing and brilliant tutorial, and thanks for the workflow!
My pleasure! Thanks for your support.
Another great video! Thank you!
Glad you enjoyed it!
Nice! Keep em coming. Good job. 👌 Thank you
Thanks, will do!
Well done, great job!
Thank you very much!
wow, thank you 🌿
You’re welcome 😊
Hello Wie, this has got to be the mother-of-all background modification workflows. Congratulations on putting this up and thank you so much. In the chapter on Background Generation using Flux - is the subject outline in the Flux image inherited from the original image or are you using text prompts to get the same outlines for the placeholder subject?
I’m glad you liked the workflow! The outline of the subject comes from the Canny edge detection applied to the original image. If you have any more questions or need further clarification, just let me know!
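To illustrate the idea in that reply: the outline is extracted from the original image's pixel gradients, not described by a text prompt. Below is a minimal, pure-Python stand-in for the Canny step (a real Canny detector adds Gaussian smoothing, non-maximum suppression, and hysteresis thresholding; the function name and threshold here are illustrative, not from the workflow).

```python
def edge_map(img, threshold=50):
    """Mark pixels whose gradient magnitude exceeds `threshold`.

    `img` is a 2D list of grayscale values; edges show up wherever
    brightness changes sharply, i.e. along the subject's outline.
    """
    h, w = len(img), len(img[0])
    edges = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]  # horizontal difference
            gy = img[y + 1][x] - img[y - 1][x]  # vertical difference
            if abs(gx) + abs(gy) > threshold:
                edges[y][x] = 1
    return edges

# A bright square on a dark background: edges appear only at the boundary.
img = [[200 if 2 <= x <= 5 and 2 <= y <= 5 else 0 for x in range(8)]
       for y in range(8)]
edges = edge_map(img)
```

Because the map is computed from the source photo itself, the placeholder subject in the Flux image inherits exactly the original silhouette.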
Good day! When I try to run the workflow, I get "list index out of range" from XlabsSampler. Do you know how to fix this issue?
Hi, it's not working for me, I'm getting an error: **Node Type:** XlabsSampler, **Exception Type:** IndexError, **Exception Message:** list index out of range. EDIT: The workflow logic seems correct; it appears to be an issue with the X-Labs Sampler when it receives the latent from the VAE Encode node, as it does not seem to accept the pixels + VAE input. Replacing the ControlNet model with a more stable one, like InstantX ControlNet Union or Misto ControlNet, should solve the problem. Thank you very much for continuing to make great workflows! Regards
Thank you again for your interest in the video and for appreciating the workflow! I’ve tested a variety of images and didn’t encounter any issues with the X-Labs Sampler. However, I did run into errors when I didn’t use “flux-dev” in the “Load Flux ControlNet” node or “flux1-dev-fp8” in the “Load Diffusion Model” node. I appreciate your suggestion to try Instantx ControlNet Union and Misto ControlNet. While I haven’t tested Misto ControlNet yet, I’ve found that Instantx ControlNet Union doesn’t perform as well as X-Labs ControlNet for my needs. If you have any more tips or questions, feel free to share!
@@my-ai-force Thank you for your great work! I've been messing around with your workflow for the past few hours, and it has been a great learning experience! While I managed to get it running with InstantX ControlNet and its nodes, I keep getting this "list index out of range" with X-Labs, even when using flux-dev for the ControlNet and flux1-dev-fp8 for the Diffusion Model.
@@nar98 Thank you so much for catching that issue! I’ve updated the workflow to use the newly updated Xlabs Sampler, which now includes an additional “Denoising Strength” parameter. In the previous version, setting “image_to_image_strength” to 0 worked fine, but with the new Sampler, you can’t set it to 0. It’s recommended to set it to 1 instead. If you have any more questions or need further assistance, feel free to reach out!
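A hedged sketch of why a strength of 0 can surface as "list index out of range" (this is an illustration of the general img2img mechanism, not the actual XlabsSampler source): the number of denoising steps actually run is typically derived from `steps * strength`, so a strength of 0 leaves an empty schedule, and indexing its first element raises exactly this IndexError.

```python
def img2img_schedule(total_steps, strength):
    """Keep only the last `total_steps * strength` denoising steps.

    strength=1.0 denoises from pure noise (full schedule);
    strength=0.0 keeps nothing, producing an empty list.
    """
    timesteps = list(range(total_steps))
    start = total_steps - int(total_steps * strength)
    return timesteps[start:]

full = img2img_schedule(25, 1.0)   # 25 steps: works
empty = img2img_schedule(25, 0.0)  # 0 steps: empty list
# empty[0] would raise IndexError: list index out of range
```

This matches the advice above: with the updated sampler, set the strength to 1 rather than 0.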
When I run the last section (Restore Detail), 9 images come out of the Detail Transfer node, but the Restore Detail node gives the right image. What could be wrong?
Is 8 GB of VRAM good enough for this workflow?
How long does it normally take? For me, with an RTX 3070 and 64 GB of RAM, it takes more than an hour.
Hey, I really want to try your workflow out, but I am facing this error:
Error occurred when executing XlabsSampler:
list index out of range
Change image_to_image_strength to a value greater than zero.
May I know which model you used in "Repaint > Load ControlNet Model"?
I’ve been watching your YouTube videos and using your workflows for my projects. I appreciate the value they bring.
I'm seeking your help with workflows for my project using Flux-1 Dev Full Version:
On my Windows 11 desktop, I'm using the Stability Matrix interface to run ComfyUI.
# ComfyUI Error Report
## Error Details
- **Node Type:** XlabsSampler
- **Exception Type:** IndexError
- **Exception Message:** list index out of range
Thanks! Surinder Singh
I'd love to try out your workflow, but I don't like how long it's taking to generate since I'm on a 3060 12GB. How long is it taking to generate, and on which GPU?
I’d love to be able to use your workflow, but I just loaded it into my Comfy and I’m missing about 40 nodes. It's very daunting to chase after each of these.
I totally get it: installing these nodes can definitely be a bit of a hassle. The good news is that these nodes are quite generic, and you'll likely encounter them in various workflows. Once you get the hang of how they work, you'll find that your understanding of ComfyUI will really take off. Keep at it, and you'll master it in no time!
@@my-ai-force I was able to load all the missing nodes except for two, "Get_with_blank_bg" and "LayerColor : Brightness & Contrast". Any idea how I can get those?
@@my-ai-force Thank you, Mr. Mao, for your encouraging words. I haven’t given up and have managed to get all the nodes I’ll need. Currently I’ve made it as far as the 2nd grouping, where I’m having a problem with the XLabs sampler. I’m getting an error message telling me that the sampler can’t process an image that’s been downsized somewhere to 145x112 (or a multiple of this); it needs sizes divisible by 2. My Canny image to start is 1024x1326, so it seems something is downsizing it before it reaches the sampler, but I can’t figure out where. Anything you can tell me would be greatly appreciated, thank you.
Hi, great video, as are your others, which have been very useful, thank you. I have one issue with this workflow: I cannot generate the background image in the second module and am just left with a grey background. I hope you can shed some light, please? Everything else executes perfectly.
I'm thrilled to hear that the video was helpful for you! 😊 With so many parameters in the workflow, it's easy to accidentally misconfigure something. Have you had a chance to compare your setup with the original workflow to ensure all the parameters are correctly set? If you need any further assistance, feel free to ask!
@@my-ai-force Yes, I checked carefully against the YouTube video that all the settings are the same. I found from experimentation that setting both the Xlabs Sampler img2img and Denoise to 1.0 solved my issue. Everything is working perfectly, thank you. 🙂
I always get OutOfMemory, even with a 4090 card. All settings are the same as your defaults. When it runs the Xlabs Sampler of Generate Background, it shows out of memory. I use a photo size of 768 x 1536. Do you know why?
Can you show which version of ComfyUI you run this workflow on, please?
@@user-wi7vz2io5n Here's the link: github.com/comfyanonymous/ComfyUI/releases. I'm using a 3090, and it works well.
Hello. I saw some workflows on YouTube. I downloaded them but couldn't get them working. If I showed them to you, would you prepare them for me for a fee so that I could use them?
Thank you for your interest! Right now, I'm fully focused on creating videos to provide the best content I can. While I don’t have the capacity to offer additional services at the moment, I’m glad to help with any questions or comments you have right here on the channel. Your understanding and support are much appreciated!
Hi, bro! It looks like you run your Comfy as an application.
When I start my Comfy on PC it runs in Browser. How can I start my Comfy also as an application?
LOL, I’m running ComfyUI in my browser too! I just went full screen to make it easier for you all to see the interface clearly.
Interesting workflow I'd like to try, thanks. Although, I can't get it running:
## Error Details
- **Node Type:** KSampler
- **Exception Type:** RuntimeError
- **Exception Message:** mat1 and mat2 shapes cannot be multiplied (308x2048 and 768x320)
any ideas?
Edit: fixed that, it needs another compatible checkpoint. But for some reason, the background completely changes from Relight to Repaint, and the end result is just some plain, colorless gray stuff.
Maybe you're using the one with the gray background for repainting.
@@my-ai-force I'm using identical settings to what you show 🤷
@my-ai-force Hi, great workflow. All working fine. But the results are very bad quality after all the steps. And whatever correct image size I give, the workflow changes it so that my source person is badly cropped, and the image size and ratio are changed by your workflow. Not sure how to correct that.
Thrilled you've used this workflow! 🎉 I’d love to dive deeper into how the people in the image are cropped after background removal. After removing the background, the individuals are scaled down using the ImageBlend node and then placed onto a custom blank canvas. This process slightly changes the original dimensions you set. To ensure the dimensions remain consistent when entering the SDXL model's latent space, it's best to make both the length and width multiples of 8. This helps maintain the integrity of your image dimensions throughout the workflow. If you have any more questions or need further assistance, feel free to reach out!
@@my-ai-force Thank you for the clarification. I will try with your new instructions.
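The "multiples of 8" advice from that reply can be sketched in a few lines. The helper name below is illustrative (it is not a node in the workflow); it simply rounds a dimension to the nearest multiple of 8 so the image survives the round trip through the SDXL latent space without being cropped or rescaled.

```python
def snap_to_multiple(value, base=8):
    """Round a pixel dimension to the nearest multiple of `base`.

    SDXL latents are downsampled 8x, so widths/heights that are not
    multiples of 8 get silently adjusted somewhere in the pipeline.
    """
    return max(base, round(value / base) * base)

w = snap_to_multiple(1024)  # already a multiple of 8: unchanged
h = snap_to_multiple(1326)  # 1326 is not divisible by 8: snaps to 1328
```

Feeding pre-snapped dimensions (e.g. 1024x1328 instead of 1024x1326) avoids the mid-workflow resize that crops the subject.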
Hi bro
Please help a noob ...
ControlNetLoader 434:
- Value not in list: control_net_name: 'sdxl/diffusers_xl_canny_full.safetensors' not in ['diffusers_xl_canny_full.safetensors', 'flux-canny-controlnet-v3.safetensors']
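For anyone hitting this "value not in list" error: the workflow saved the model name with a subfolder prefix ("sdxl/..."), while this install lists the same file at the root of the ControlNet models folder. A likely fix is simply reselecting the base filename in the node; the small helper below (hypothetical, not part of ComfyUI) shows the base-filename match that identifies the right entry.

```python
import os

def match_model(saved_value, available):
    """Return the available entry whose base filename matches `saved_value`,
    ignoring any subfolder prefix stored in the workflow JSON."""
    base = os.path.basename(saved_value.replace("\\", "/"))
    for name in available:
        if os.path.basename(name.replace("\\", "/")) == base:
            return name
    return None

available = ["diffusers_xl_canny_full.safetensors",
             "flux-canny-controlnet-v3.safetensors"]
fixed = match_model("sdxl/diffusers_xl_canny_full.safetensors", available)
```

Alternatively, moving the file into a `sdxl/` subfolder under the ControlNet models directory would make the saved value valid as-is.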
I got past the xlabs error. I am now stuck at Repaint, around loading: juggernautXL_v9Rdphoto2Lightning.safetensors
Need help, please. Thanks
How did you resolve it? I got the same:
XlabsSampler
list index out of range
You might want to download the "Juggernaut XL Lightning" checkpoint from the link provided in the video description. It’s a great resource to enhance your setup. If you need any help with the download or setup, feel free to ask!
How did you resolve the xlabs sampler error? I'm really eager to know the solution, as I have spent a whole day trying to figure it out, but no luck.
Amazing work, thank you for sharing it. Is there a way to use an existing image as the background instead of generating one?
I'm glad to hear this workflow has been helpful for you! 😊 This workflow starts with an empty background of a size you define. You can then replace this blank background with your own background image and input it into the “ImageBlend” node to overlay it with the original image. From there, you might need to adjust some of the later nodes in the workflow to better align with your goals. If you have any more questions or need further adjustments, feel free to ask!
@@my-ai-force Thank you SO much for the reply! What about relighting the subject to match the light of a background instead of the background matching the light of the subject? Is that an entirely different workflow? I'm still trying to get this workflow to work but I'm getting closer.
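Conceptually, the ImageBlend step described above is a per-pixel alpha composite: the cut-out subject (with the alpha mask produced by background removal) is laid over the chosen background. A minimal pure-Python sketch of that operation, with illustrative names (the actual node exposes this differently):

```python
def alpha_over(fg, alpha, bg):
    """Composite foreground over background per pixel.

    alpha=1 keeps the subject pixel; alpha=0 shows the background.
    All inputs are 2D lists of the same shape (one grayscale channel).
    """
    h, w = len(fg), len(fg[0])
    return [[fg[y][x] * alpha[y][x] + bg[y][x] * (1 - alpha[y][x])
             for x in range(w)] for y in range(h)]

subject  = [[255, 255], [255, 255]]
mask     = [[1, 0], [0, 1]]   # 1 where the subject is opaque
backdrop = [[10, 10], [10, 10]]
out = alpha_over(subject, mask, backdrop)
```

Swapping the blank canvas for your own background image at this point is what lets the rest of the workflow (relight, repaint) run against a real scene instead of a generated one.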
After I updated my ComfyUI Manager today, I can't use it: it can't load the node modules of Layer Style and LayerColor. I've reinstalled them many times.
Could anyone help me? ..... (Q~Q)
Have you tried to manually install these nodes without ComfyUI Manager?
@@my-ai-force Yes, I tried that: I removed the folder at "ComfyUI\custom_nodes\ComfyUI_LayerStyle" and manually installed it again, but it still doesn't work. I will try updating all of ComfyUI later. Thank you.
I fixed it now, I updated everything again, thank you.