That FaceDetailer looks amazing. I like creating images with multiple people in them, so faces are the bane of my existence.
It's the most discouraging part of making AI art.
Try hands.
Insane! I already had all the nodes installed from other tutorials, but I never knew exactly what each one did. Thanks for sharing your workflow!
Thanks, having both a simple and an advanced face detailer is clever. Going to try it. You got a sub from me, keep going!
This looks great, thanks for sharing. How can this be altered for img2img?
Here's a modified workflow: comfyworkflows.com/workflows/cd47fbe6-68cc-4f40-8646-dfc62d32eeb4
Is there a place to plug an already generated image into this to fix it?
One question: What can I do if I have several people in my picture, e.g. in the background? Can I somehow influence FaceDetailer to only refine the main person in the middle?
Probably crop that section, run the fix, and composite it back in.
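In Pillow terms the idea is roughly this (a minimal sketch; the box coordinates and the fix_face helper are hypothetical stand-ins for your actual face region and whatever restore step you run):

from PIL import Image

def fix_face(region):
    # placeholder: run your face restore / detailer on this crop
    return region

img = Image.open("render.png")
box = (512, 128, 896, 512)   # left, top, right, bottom around the main face
crop = img.crop(box)
fixed = fix_face(crop).resize(crop.size)  # keep the crop size so it pastes cleanly
img.paste(fixed, box[:2])    # composite back at the crop's top-left corner
img.save("render_fixed.png")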
Thanks for the tutorial! I've set up the simple face restore as shown, but I only get a black image as output, while the unrestored image comes out fine. Any ideas?
//edit: it says "starting restore_face etc. ... with fidelity 0.5", then there's just "prompt executed" and that's it.
Hello, how can we do professional face changing like this?
Hi, does it work on already existing images?
Good tutorial, but a few times on the last step I end up getting the same face on everybody. Any idea what the problem is? Maybe it's because in the prompt I say "a woman/a man", but I have no idea how to fix it. Thanks!
Great workflow for the fix. With proper scenes where characters aren't actually looking at the camera, like a 3/4 view, looking at a phone, using a tablet or something, rather than creepily staring at the camera, I'm wondering if I'm the only one who gets bad results on that type of image. But I will definitely try this new fix. Thanks for the tip.
Interesting, but the 2nd method does not work for me. No matter what the resolution, I always get this error:
Error occurred when executing FaceDetailer:
The size of tensor a (0) must match the size of tensor b (256) at non-singleton dimension 1
File "C:\Users\Alex\ComfyUI_windows_portable\ComfyUI\execution.py", line 151, in recursive_execute
output_data, output_ui = get_output_data(obj, input_data_all)
This sometimes happens when mixing SD and SDXL assets in the workflow, since the two model families expect tensors of different shapes.
Would a Lightning model be a plug-and-play replacement for this? I'm asking just because of the different license.
I've tested the JuggernautXL Lightning model and it works great without any modification to the workflow. Some models may work better with different schedulers, CFG, etc., but in general they should work fine.
@HowDoTutorials I will try it, thanks
Useful video, thanks!
Great tutorial, thank you
How do you use FaceDetailer for vid2vid?
So that's how you correctly use turbo models. Until now I used 20 steps with turbo models and just one pass; it seems using two passes with 5 steps each is much, much better. What about using Deep Shrink alongside it?
I just played around with it a bit and it doesn’t seem to have much of an effect on this workflow, likely because of the minimal upscaling and lower denoise value, but thanks for bringing that node to my attention! I can definitely see a lot of other uses for it.
EDIT: I realized I was using it incorrectly by trying to inject it into the second pass. Once I figured out how to use it properly, I could definitely see the potential. It's hard to tell whether Kohya Deep Shrink by itself is better than the two-pass or not, but Kohya into a second pass is pretty great. I noticed that reducing CFG and steps for the second pass helps reduce the "overbaked" look.
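For reference, the plain two-pass idea (without Deep Shrink) looks roughly like this outside ComfyUI, as a rough diffusers sketch; the model ID, prompt, sizes, and strength here are assumptions, not the video's exact settings:

import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

prompt = "portrait photo of a woman in a cafe"  # hypothetical prompt

# First pass: a few turbo steps at a lower resolution
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16
).to("cuda")
first = pipe(prompt, num_inference_steps=5, guidance_scale=0.0,
             width=768, height=768).images[0]

# Second pass: upscale, then lightly re-denoise via img2img
# (a low strength keeps detail without the "overbaked" look)
img2img = AutoPipelineForImage2Image.from_pipe(pipe)
upscaled = first.resize((1024, 1024))
final = img2img(prompt, image=upscaled, num_inference_steps=5,
                strength=0.4, guidance_scale=0.0).images[0]
final.save("two_pass.png")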
What GPU spec are you using?
I'm using a 3090, which is probably something I should mention going forward so people can set their expectations properly. 😅
You have an interesting cadence to your speech. Is this a real voice or AI?
A bit of both. I record the narration with my real voice, edit out the spaces and ums (mostly), and then pass it through ElevenLabs speech to speech.
@HowDoTutorials That explains why I kept going back and forth with my opinion on this. Thank you 👍🏼
@HowDoTutorials That's very clever. It's a very soothing voice.