Perfect Relighting: Preserve Colors and Details (Stable Diffusion & IC-Light)

  • Published 3 Jul 2024
  • Finally, a way to relight people with IC-Light without color shifting and losing out on details.
    In this episode of Stable Diffusion for Professional Creatives, we finally solve one of the main issues with IC-Light: color shifts!
    Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi
    Workflow: openart.ai/workflows/risunobu...
    (install the missing nodes via comfyUI manager, or use:)
    IC-Light comfyUI github: github.com/kijai/ComfyUI-IC-L...
    IC-Light model (fc only, no need to use the fbc model): huggingface.co/lllyasviel/ic-...
    Frequency Separation (my first ever custom nodes): github.com/risunobushi/comfyU...
    u/SpacePXL nodes: github.com/spacepxl/ComfyUI-I...
    Model: most 1.5 models, I'm using epicRealism civitai.com/models/25694/epic...
    Auxiliary controlNet nodes: github.com/Fannovel16/comfyui...
    Timestamps:
    00:00 - Intro
    00:29 - Workflow overview
    01:30 - Color Matching options overview
    03:03 - In-Depth workflow explanation
    06:58 - In-Depth Color Matching options explanation
    09:37 - Optional IPAdapter FaceID pass
    10:42 - More Examples and tests
    13:13 - Limitations
    14:37 - Conclusions
    15:24 - Outro
    #stablediffusion #iclight #stablediffusiontutorial #relight #ai #generativeai #generativeart #comfyui #comfyuitutorial #risunobushi_ai #sdxl #sd #risunobushi #andreabaioni
  • Science & Technology

COMMENTS • 40

  • @risunobushi_ai
    @risunobushi_ai  23 days ago +3

    You can find the workflow here: openart.ai/workflows/risunobushi/relight-people-preserve-colors-and-details/W50hRGaBRUlBT1ReD4EF
    Want to support me? You can buy me a coffee here: ko-fi.com/risunobushi

    • @sumeetprashant1
      @sumeetprashant1 22 days ago +1

      Amazing to see that you made the node yourself.
      More power to you and the community!

  • @jbrocktheworld
    @jbrocktheworld 23 days ago +2

    I have no idea what's going on in the node, but the result works like a charm. Thank you so much!

    • @risunobushi_ai
      @risunobushi_ai  23 days ago +1

      Haha, I know the "You Can Ignore This" group is a bit of a tangle, but I promise it's nothing too fancy! Glad it's working for you!

  • @Douchebagus
    @Douchebagus 16 days ago +1

    This is without doubt the best ComfyUI workflow and explanation on YouTube. Thank you so much for sharing; liked and subscribed.

  • @SimonDickerman
    @SimonDickerman 23 days ago +2

    Thank you so much for sharing this, I can't wait to play around with it this week. You post some of the most useful SD videos on YouTube.

  • @destructiveeyeofdemi
    @destructiveeyeofdemi 23 days ago +1

    I love your work Sir. Thank you.

  • @bipinpeter7820
    @bipinpeter7820 23 days ago +1

    Super cool 👍​

  • @yanmotta
    @yanmotta 23 days ago +1

    Bravo!

  • @yotraxx
    @yotraxx 23 days ago +1

    Thank you Andrea. REALLY useful, as usual. Keep going on, as usual :)

  • @ismgroov4094
    @ismgroov4094 23 days ago +1

    Thx sir

  • @ImAlecPonce
    @ImAlecPonce 23 days ago +1

    I really loved your workflow :) I just modified it so it takes on the pixel size of whatever image you put in. I hope that's ok... squares drive me crazy, haha

    • @risunobushi_ai
      @risunobushi_ai  23 days ago

      Sure! There are so many ways to resize images; I just default to an X/Y resizer set to square because that's the most common config.

  • @jahormaksimau1597
    @jahormaksimau1597 23 days ago +1

    Cool)

  • @yunpengwang
    @yunpengwang 22 days ago

    I want to know why there is an error in the FaceID part: the CLIP vision model cannot be found. I downloaded the CLIP-ViT-bigG-14-laion2B-39B-b160k.safetensors model and put it in the clip_vision folder under models, but the error has not been resolved. I'm not sure whether the model download was corrupted or something else is wrong.

    • @risunobushi_ai
      @risunobushi_ai  22 days ago

      Did you download both the ViT-bigG and ViT-H models? Do you have insightface installed properly?

  • @yunpengwang
    @yunpengwang 22 days ago

    In the color matching image I encountered the error "The size of tensor a (64) must match the size of tensor b (1152) at non-singleton dimension 1", plus missing facial segmentation and facial analysis models. How do I deal with this? Thanks

    • @risunobushi_ai
      @risunobushi_ai  22 days ago

      You're most probably not painting the light mask in the light mask group's preview bridge, or you haven't hooked up the Load Image as Mask node to the Grow Mask With Blur node if you're importing a custom light mask.

  • @egarywi1
    @egarywi1 23 days ago

    Nearly got this going; however, I have one issue that I can't resolve, in the Face Segmentation node:
    Error occurred when executing FaceSegmentation:
    'NoneType' object is not subscriptable
    File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/execution.py", line 151, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
    File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/execution.py", line 81, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
    File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/execution.py", line 74, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
    File "/Volumes/Mac Mini Ext 1/StabilityMatrix/Packages/ComfyUI/custom_nodes/ComfyUI_FaceAnalysis/faceanalysis.py", line 531, in segment
    landmarks = landmarks[-2]

    • @risunobushi_ai
      @risunobushi_ai  22 days ago

      Do you have insightface installed? I know it's a pain to install on Macs

  • @andresbares
    @andresbares 6 days ago

    This seems like a great workflow! I almost got it running, but when the mask is generated, it shows a tiny black square as the preview after "Convert mask to image". So the first relit image also shows as a tiny square. I've been playing with the image resize parameters, but it doesn't seem to change anything. Any advice will be appreciated!

    • @risunobushi_ai
      @risunobushi_ai  6 days ago

      Hi! You're most probably either:
      - not drawing a mask on the preview bridge node where the light masks are created, or
      - not importing a custom mask AND connecting the mask output to the grow mask with blur node
      If IC-Light doesn't see a light mask, you get a tiny little box

    • @andresbares
      @andresbares 5 days ago

      @@risunobushi_ai Thanks for the response! Indeed, I got it after I drew the mask! I'm taking my first steps with AI and you were a great help. Thanks for your content! Greetings from Argentina
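The "grow mask with blur" step that keeps coming up in these replies can be sketched in plain Python. This is a hedged approximation of what such a node typically does (dilate, then feather), not the actual ComfyUI implementation; the function and parameter names here are illustrative:

```python
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

def grow_mask_with_blur(mask: np.ndarray, grow_px: int = 16,
                        blur_sigma: float = 8.0) -> np.ndarray:
    """Dilate a binary light mask, then feather its edge with a
    Gaussian blur. A rough stand-in for a grow-mask-with-blur node."""
    grown = binary_dilation(mask > 0.5, iterations=grow_px)
    return gaussian_filter(grown.astype(np.float32), sigma=blur_sigma)

# A mask that was never painted stays empty after growing and
# blurring, which is what leads to the "tiny box" relight result.
empty = np.zeros((64, 64), dtype=np.float32)
assert grow_mask_with_blur(empty).max() == 0.0
```

This also makes the symptom in the thread concrete: the node can only grow and blur what is already painted, so an empty preview bridge propagates an empty mask downstream.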

  • @keremoganvfx
    @keremoganvfx 22 days ago

    Hey, thanks for your great tutorial. I'm totally new to Stable Diffusion and ComfyUI. I'm a VFX compositor using node-based software called Nuke, which is why ComfyUI caught my attention. I'm at the stage of watching many videos these days, so thanks for all of your videos. A question: instead of JPGs or PNGs, can we work with EXR or DPX files in ComfyUI generally, for inpainting or relighting purposes? DPXs are usually 10-16 bit, and EXRs are 16-bit half float as well. Right now I send a frame from Nuke to Photoshop, do some generative fills, and export back to Nuke. I love generative fill, but control-wise it's not that great. I'm really impressed by ComfyUI/Stable Diffusion and I hope I can use it in my pipeline.
    Thanks

    • @risunobushi_ai
      @risunobushi_ai  22 days ago

      Hey there, thanks for the kind words! Unfortunately, AFAIK, while ComfyUI accepts 32-bit files (and EXR with some custom nodes) and can theoretically output 32-bit files, everything inside of it is processed at 8 bits, as the models are trained at that color depth. That's part of the reason why color matching is so hard: 8 bits just isn't enough for any meaningful post-processing.
      That being said, a viewer reached out, and they have a Nuke tutorial about extracting normal maps from ComfyUI using IC-Light and using them in Nuke. You can find it here: ua-cam.com/video/CwhQ4Dl7Fn8/v-deo.html

    • @keremoganvfx
      @keremoganvfx 22 days ago

      @@risunobushi_ai Thanks for your answer! You had even shared a video with Nuke, thanks :)) Yeah, actually I'd seen that video, but AOV passes especially must be 32-bit. If I can import 10-16 or 32-bit files into ComfyUI somehow, there must be solutions I can reach: I can just render the 10-16 bit files in sRGB colorspace before sending them to ComfyUI so there won't be any overexposed data, unless there are ultra-bright things. It would effectively work at 8 bits, though, and the AI-generated parts will be 8-bit quality, I guess. I'll do some tests; I'm still watching many videos before starting. Thanks again for your quick response and your great videos!!
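The 8-bit limitation described in this exchange is easy to demonstrate: quantizing image data to 8 bits collapses it to 256 levels per channel, regardless of the precision it started with. A minimal illustration (NumPy only, with higher bit depth simulated as float32):

```python
import numpy as np

# A smooth gradient with 65536 distinct float values stands in
# for 16-bit image data.
gradient = np.linspace(0.0, 1.0, 65536, dtype=np.float32)

# Simulate storing it at 8 bits: only 256 levels survive.
quantized = np.round(gradient * 255.0) / 255.0

print(np.unique(gradient).size)   # 65536
print(np.unique(quantized).size)  # 256
```

Stretching those 256 surviving levels in post (e.g. aggressive color matching or grading) leaves visible gaps between values, which is the banding problem the reply alludes to.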

  • @user-ck5sh2um3b
    @user-ck5sh2um3b 22 days ago

    I get this error: Error occurred when executing FrequencyCombination:
    operands could not be broadcast together with shapes (550,3,1000) (544,3,1000)

    • @risunobushi_ai
      @risunobushi_ai  22 days ago

      This one's on me being a bad coder (well, technically not a coder at all) and not having accounted for unusual WxH ratios when scripting the Frequency Separation nodes. I'm going to add an image resize node after the relit image so this gets solved, and update the workflow. Check back in 5 minutes and download it again.

    • @risunobushi_ai
      @risunobushi_ai  22 days ago +1

      Updated.

    • @user-ck5sh2um3b
      @user-ck5sh2um3b 22 days ago +1

      Wow thank you so much 🫶🏼
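The broadcast error in this thread comes from frequency separation trying to combine two image arrays whose heights no longer match (550 vs. 544 rows), e.g. after one of them was resized to an unusual ratio. A minimal NumPy reproduction, with a crop standing in for the resize-node fix (a sketch; the actual node internals may differ):

```python
import numpy as np

low = np.random.rand(550, 3, 1000)   # low-frequency layer
high = np.random.rand(544, 3, 1000)  # high-frequency layer after an odd WxH resize

try:
    combined = low + high            # frequency combination step
except ValueError as err:
    print(err)                       # operands could not be broadcast together ...

# Fix: bring both layers to a common shape first (the workflow
# update does this with an image resize node; cropping shown here).
h = min(low.shape[0], high.shape[0])
combined = low[:h] + high[:h]
print(combined.shape)                # (544, 3, 1000)
```

This is why inserting a resize after the relit image resolves the error: both frequency layers then arrive at the combination step with identical shapes.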

  • @xdevx9623
    @xdevx9623 23 days ago

    Hey man, can you make use of IDM-VTON? It's very good at putting your choice of clothes on AI images, but it does require some refining, and the refining part is what I can't figure out. Please, man, it would help me a lot!

    • @risunobushi_ai
      @risunobushi_ai  23 days ago

      I've seen new zero-shot research from researchers at Google that looks promising, but IDM and the like are not there yet; there's no amount of refining that can fix the missing precision from IDM and other zero-shot VTONs right now. In the future, yeah, but there's a reason why Google and Alibaba are spending big money to research this.

  • @mohammednasr7422
    @mohammednasr7422 19 days ago

    Hi Andrea, I hope you're doing well! I could really use your help with ComfyUI IC-Light. Would it be possible to set up a quick Discord call to discuss it? It won't take much of your time, and I would greatly appreciate it. Thank you so much

    • @risunobushi_ai
      @risunobushi_ai  19 days ago

      Hey there! Please send me an email at andrea@andreabaioni.com; this week and the coming weeks are packed with calls and deadlines, and I can't do many one-on-ones

    • @mohammednasr7422
      @mohammednasr7422 19 days ago

      @@risunobushi_ai Thank you so much for your quick response, Andrea! I understand you're very busy. I'll send you an email shortly. I really appreciate your willingness to help!