Perfect Relighting: Preserve Colors and Details (Stable Diffusion & IC-Light)
- Published 3 Jul 2024
- Finally, a way to relight people with IC-Light without color shifting and losing out on details.
In this episode of Stable Diffusion for Professional Creatives, we finally solve one of the main issues with IC-Light: color shifts!
Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
Workflow: openart.ai/workflows/risunobu...
(install the missing nodes via ComfyUI Manager, or use:)
IC-Light comfyUI github: github.com/kijai/ComfyUI-IC-L...
IC-Light model (fc only, no need to use the fbc model): huggingface.co/lllyasviel/ic-...
Frequency Separation (my first ever custom nodes): github.com/risunobushi/comfyU...
u/SpacePXL nodes: github.com/spacepxl/ComfyUI-I...
Model: most SD 1.5 models work; I'm using epicRealism: civitai.com/models/25694/epic...
Auxiliary controlNet nodes: github.com/Fannovel16/comfyui...
Timestamps:
00:00 - Intro
00:29 - Workflow overview
01:30 - Color Matching options overview
03:03 - In-Depth workflow explanation
06:58 - In-Depth Color Matching options explanation
09:37 - Optional IPAdapter FaceID pass
10:42 - More Examples and tests
13:13 - Limitations
14:37 - Conclusions
15:24 - Outro
#stablediffusion #iclight #stablediffusiontutorial #relight #ai #generativeai #generativeart #comfyui #comfyuitutorial #risunobushi_ai #sdxl #sd #risunobushi #andreabaioni
You can find the workflow here: openart.ai/workflows/risunobushi/relight-people-preserve-colors-and-details/W50hRGaBRUlBT1ReD4EF
Amazing to see that you made the node yourself. More power to you and the community.
I have no idea what's going on in the node, but the result works like a charm. Thank you so much!
Haha, I know the "You Can Ignore This" group is a bit of a tangle, but I promise it's nothing too fancy! Glad it's working for you!
This is without a doubt the best ComfyUI workflow and explanation on YouTube. Thank you so much for sharing; liked and subscribed.
Thank you for the kind words!
Thank you so much for sharing this; I can't wait to play around with it this week. You post some of the most useful SD videos on YouTube.
thank you!
I love your work Sir. Thank you.
Super cool 👍
Bravo!
Thank you Andrea. REALLY useful, as usual. Keep going on, as usual :)
Thank you! Will do!
Thx sir
I really loved your workflow :) I just modified it so it takes on the pixel size of whatever image you put in. I hope that's ok... squares drive me crazy haha
Sure! There are so many ways to resize images; I just default to an X/Y resizer set to square because that's the most common config.
Cool)
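For anyone else who'd rather keep the input's aspect ratio, here's a minimal Pillow sketch of the two resizing strategies discussed above. These are hypothetical helpers for illustration, not the actual ComfyUI resize nodes:

```python
from PIL import Image

def resize_square(img: Image.Image, side: int = 1024) -> Image.Image:
    # Default approach from the workflow: force a fixed square.
    return img.resize((side, side), Image.LANCZOS)

def resize_keep_ratio(img: Image.Image, max_side: int = 1024) -> Image.Image:
    # The commenter's approach: keep the input's aspect ratio, capping the longer side.
    scale = max_side / max(img.size)
    w, h = (round(d * scale) for d in img.size)
    # SD 1.5 works in latent blocks of 8 pixels, so snap dimensions down to multiples of 8.
    return img.resize((max(w - w % 8, 8), max(h - h % 8, 8)), Image.LANCZOS)
```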
I want to know why there's an error in the FaceID part: the clipvision model cannot be found. I downloaded the CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors model and put it in the clipvision folder under models, but the error still hasn't been resolved. I don't know whether the model was downloaded incorrectly or something else is wrong.
Did you download both the ViT-bigG and ViT-H models? Do you have insightface installed properly?
In the color matching image I encountered the error "The size of tensor a (64) must match the size of tensor b (1152) at non-singleton dimension 1", as well as missing face segmentation and face analysis models. How do I deal with this? Thanks
You're most probably not painting the light mask in the light mask group's preview bridge, or you haven't hooked up the Load Image as Mask node to the Grow Mask With Blur node if you're importing a custom light mask.
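For context, here's a rough numpy sketch of what a grow-mask-with-blur step does (dilate the painted region, then feather it), assuming scipy is available. This is an illustration of the idea, not the actual node's code:

```python
import numpy as np
from scipy.ndimage import grey_dilation, gaussian_filter

def grow_mask_with_blur(mask: np.ndarray, grow_px: int = 16, blur_sigma: float = 8.0) -> np.ndarray:
    # mask: float array in [0, 1] with shape (H, W), where painted areas are 1.0
    grown = grey_dilation(mask, size=(2 * grow_px + 1, 2 * grow_px + 1))  # expand the painted region
    return np.clip(gaussian_filter(grown, sigma=blur_sigma), 0.0, 1.0)   # feather the edges
```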
Nearly got this going; however, I have one issue that I can't resolve, in the Face Segmentation node:
Error occurred when executing FaceSegmentation:
'NoneType' object is not subscriptable
File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/execution.py", line 81, in get_output_data
return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/execution.py", line 74, in map_node_over_list
results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/custom_nodes/ComfyUI_FaceAnalysis/faceanalysis.py", line 531, in segment
landmarks = landmarks[-2]
Do you have insightface installed? I know it's a pain to install on Macs.
This seems like a great workflow! I almost got it running, but when the mask is generated, it shows a tiny black square as the preview after "Convert Mask to Image", so the first relit image also shows as a tiny square. I've been playing with the image resize parameters, but it doesn't seem to change anything. Any advice will be appreciated!
Hi! You're most probably either:
- not drawing a mask on the preview bridge node where the light masks are created, or
- importing a custom mask but not connecting its mask output to the grow mask with blur node
If IC-Light doesn't see a light mask, you get a tiny little box
@risunobushi_ai Thanks for the response! Indeed, I got it after I drew the mask! I'm taking my first steps with AI and you were a great help. Thanks for your content! Greetings from Argentina
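If you'd rather import a custom light mask than paint one, here's a hypothetical Pillow/numpy snippet that builds a simple left-to-right gradient mask you can bring in with a Load Image node and route into the Grow Mask With Blur node. The size and filename are placeholders:

```python
import numpy as np
from PIL import Image

# Build a left-to-right gradient light mask (bright on the left, dark on the right)
# and save it as a PNG for a Load Image node.
H, W = 1024, 1024
row = np.linspace(255.0, 0.0, W, dtype=np.float32)
mask = np.tile(row, (H, 1)).astype(np.uint8)
Image.fromarray(mask, mode="L").save("light_mask.png")
```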
Hey, thanks for your great tutorial. I'm totally new to Stable Diffusion and ComfyUI. I'm a VFX compositor using node-based software called Nuke, which is why ComfyUI caught my attention. I'm at the stage of watching many videos these days, so thanks for all of them. A question: instead of JPGs or PNGs, can we work with EXR or DPX files in ComfyUI generally, for inpaint or relight purposes? DPX files are usually 10-16 bit, and EXRs are 16-bit half float. Right now I send a frame from Nuke to Photoshop, do some generative fills, and export back to Nuke. I love generative fill, but control-wise it's not that great. I'm really impressed by ComfyUI/Stable Diffusion and I hope I can use it in my pipeline.
Thanks!
Hey there, thanks for the kind words! Unfortunately, AFAIK, while ComfyUI accepts 32-bit files (and EXR with some custom nodes) and can theoretically output 32-bit files, everything inside of it is processed at 8 bits, as the models are trained at that color depth. That's part of the reason why color matching is so hard: 8 bits just isn't enough to do any meaningful post-processing.
That being said, a viewer reached out, and they have a Nuke tutorial about extracting normal maps from ComfyUI using IC-Light and using them in Nuke; you can find it here: ua-cam.com/video/CwhQ4Dl7Fn8/v-deo.html
@risunobushi_ai Thanks for your answer! You'd even shared a video with Nuke, thanks :) Yeah, I'd actually seen that video, but AOV passes especially must be 32-bit. If I can somehow import 10-16 or 32-bit files into ComfyUI, there should be solutions: I can render the 10-16 bit files in sRGB color space before sending them to ComfyUI, so there won't be any overexposed data unless there are ultra-bright things, and it should work like 8-bit, though the AI-generated parts will be 8-bit quality, I guess. I'll do some tests; I'm still watching many videos before starting. Thanks again for your quick response and your great videos!
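A quick way to see the 8-bit bottleneck described above: a toy numpy example (not from the workflow) showing how a subtle float gradient collapses to a handful of levels after uint8 quantization, which is what causes banding in heavy post work:

```python
import numpy as np

# A subtle shadow ramp stored as float32 vs. the same ramp forced through uint8.
gradient = np.linspace(0.0, 0.05, 4096, dtype=np.float32)
print(np.unique(gradient).size)                        # 4096 distinct values
quantized = np.round(gradient * 255).astype(np.uint8)  # the 8-bit pipeline
print(np.unique(quantized).size)                       # only ~14 values survive -> banding
```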
I get this error: Error occurred when executing FrequencyCombination:
operands could not be broadcast together with shapes (550,3,1000) (544,3,1000)
This one's on me being a bad coder (well, technically not a coder at all) and not having accounted for unusual WxH ratios when scripting the Frequency Separation nodes. I'm going to add an image resize node after the relit image so this gets solved, and update the workflow. Check back in 5 minutes and download it again.
Updated.
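For the curious, the idea behind the fix above: frequency combination keeps the high-frequency detail of the original and the low-frequency light/color of the relit pass, so both images must share the same dimensions. A rough numpy/Pillow sketch, assuming scipy's gaussian_filter as the low-pass; this illustrates the concept, not the actual node code:

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def frequency_combine(original: Image.Image, relit: Image.Image, sigma: float = 5.0) -> Image.Image:
    # Resize the relit image to the original's dimensions first; skipping this
    # step is exactly what produces broadcast errors like the one above.
    relit = relit.resize(original.size, Image.LANCZOS)
    orig = np.asarray(original, dtype=np.float32)
    rel = np.asarray(relit, dtype=np.float32)
    blur = (sigma, sigma, 0)  # blur height and width only, not the channel axis
    high = orig - gaussian_filter(orig, sigma=blur)  # fine detail from the original
    low = gaussian_filter(rel, sigma=blur)           # light and color from the relit pass
    return Image.fromarray(np.clip(low + high, 0, 255).astype(np.uint8))
```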
Wow thank you so much 🫶🏼
Hey man, can you make use of IDM-VTON? It's very good at putting your choice of clothes on AI images, but it does require some refining, and the refining part is what I can't figure out. Please, man, it would help me a lot!
I've seen new zero-shot research from researchers at Google that looks promising, but IDM and the like are not there yet; no amount of refining can fix the missing precision from IDM and other zero-shot VTONs right now. In the future, yeah, but there's a reason why Google and Alibaba are spending big money researching this.
Hi Andrea, I hope you're doing well! I could really use your help with ComfyUI IC-Light. Would it be possible to set up a quick Discord call to discuss it? It won't take much of your time, and I would greatly appreciate it. Thank you so much!
Hey there! Please send me an email at andrea@andreabaioni.com; this week and the coming weeks are packed with calls and deadlines, and I can't do many one-on-ones.
@risunobushi_ai Thank you so much for your quick response, Andrea! I understand you're very busy. I'll send you an email shortly. I really appreciate your willingness to help!