Sorry for my poor English. I am an old Frenchman (82 years old), and I sincerely thank you so much for such helpful and hard work. People like you are a gold mine when I try to make progress in Comfy. Thanks again.
It's an inspiration for us to see people of your age willing to get into all the latest stuff, motivates us a lot. Thanks 🌟🔥💯
I'm a learner and researcher. Thank you very much for sharing your enthusiasm.
Very nice tutorial, thanks. Greetings from France!
It's really amazing! I have one question: I don’t want to change the face but would like to enhance it. Could you please let me know how to retain the original face while doing so?
Agreed; it would be useful to retain the original model character in her/his entirety with some enhancements.
To try: IC-Light (SD 1.5), then you can upscale using your Flux upscaler. That would be the same as what you show, but you'll have to prompt the background. That would make it a background swapper.
I did do this and it does work.
Thanks for the suggestion, I'll try this out 💯🔥
Hi, I've been following your videos and utilizing your workflows. Your work is impressive, and I appreciate the value it brings.
However, I'm encountering some challenges and would appreciate your assistance in refining it to better suit my project's needs.
Here’s what I’m aiming for:
1. **Simple Image-to-Image Workflow:**
- Load a photo of a wedding couple with a transparent background.
- Input a prompt to generate a new background.
- Seamlessly integrate the couple into the new background, ensuring the final image looks natural and not like a cutout pasted onto the background.
2. **Alternative Option:**
- Upload two photos: one with a transparent background and another as the background image.
- Merge these images so that they blend smoothly.
I’d appreciate your guidance on creating a workflow that meets these requirements.
Thank You!
Surinder Singh
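(Editor's note: the "merge a transparent-background photo onto a background" step in option 2 boils down to per-pixel alpha-over compositing. A minimal sketch with plain lists, assuming straight, non-premultiplied alpha; this is illustrative arithmetic, not the internals of any ComfyUI node.)

```python
def alpha_over(fg, bg):
    """Composite an RGBA foreground over an RGB background per pixel.

    fg: rows of (r, g, b, a) tuples, a in 0..255
    bg: rows of (r, g, b) tuples, same dimensions
    Implements out = fg * a + bg * (1 - a) per channel.
    """
    out = []
    for fg_row, bg_row in zip(fg, bg):
        row = []
        for (fr, fg_g, fb, fa), (br, bg_g, bb) in zip(fg_row, bg_row):
            a = fa / 255.0
            row.append((
                round(fr * a + br * (1 - a)),
                round(fg_g * a + bg_g * (1 - a)),
                round(fb * a + bb * (1 - a)),
            ))
        out.append(row)
    return out

# An opaque red pixel keeps its colour; a fully transparent pixel shows the background.
fg = [[(255, 0, 0, 255), (0, 0, 0, 0)]]
bg = [[(0, 0, 255), (0, 255, 0)]]
print(alpha_over(fg, bg))  # [[(255, 0, 0), (0, 255, 0)]]
```

This blends the edges smoothly only if the cutout has soft (anti-aliased) alpha; a hard 0/255 mask is what produces the "pasted cutout" look.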
What you are looking for is currently not possible, as the IC-Light feature is for SD 1.5 and isn't available for Flux yet; at least, I couldn't get it to match the background's color grading exactly. Also, the subject's image will change slightly no matter what model you use. Your use case seems a bit difficult for now.
That is only possible with Photoshop's generative AI.
Thanks for this video! I love the idea of loading a subject image and a background image.
I was not able to get the GGUF loaders to work, so I just used the standard Flux loaders.
I have not been able to get the face detection working. Good thing you have an on/off button for it.
I got the face detection part working. I watched your other video and learned what I needed to do: I needed to add prompts for what to search for in the image.
Can't we maintain the consistency of the character or object?
Good video, but the links to download the workflow aren't working.
Can you please tell me how to extend this to fix fingers with a mask, as well as the face?
I'm uploading a video on this, probably tomorrow: a new workflow which automatically fixes both hands and then the face, without upscaling the whole image 👍
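(Editor's note: the usual way to fix a face or hand "without upscaling the whole image" is to crop a padded box around the detection, regenerate only that crop at higher resolution, and paste it back. A sketch of the crop-box arithmetic; the function name and padding value are illustrative, not taken from the workflow.)

```python
def padded_crop_box(bbox, pad, img_w, img_h):
    """Expand a detector bounding box (x1, y1, x2, y2) by `pad` pixels
    on each side, clamped to the image bounds, so the crop keeps some
    surrounding context for seamless paste-back."""
    x1, y1, x2, y2 = bbox
    return (
        max(0, x1 - pad),
        max(0, y1 - pad),
        min(img_w, x2 + pad),
        min(img_h, y2 + pad),
    )

# A face detected near the top-left corner of a 1024x1024 image:
print(padded_crop_box((10, 20, 110, 140), 32, 1024, 1024))  # (0, 0, 142, 172)
```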
@@xclbrxtra That's awesome! Very much looking forward to it, and I've liked it in advance :)
What if I want to upload a background but generate the subject on top of it?
Thanks a lot, your video is great!
If you're in marketing - this is great. If you are an artist - pass.
Is there any perspective matching, so that the image looks like it fits?
I'm trying this concept out, it's a bit complex but hopefully I can make it work.
You said it is not magical and cannot place the same subject in the image. But couldn't you have just inpainted the character's face in the final output image, or segmented and recreated it using the same character's LoRA and something like YOLO, etc.?
Yes, you can definitely inpaint. What I meant was that it won't match Photoshop's output, where you can take the exact subject (same outfit, etc.) and then just adjust the lighting. You can inpaint the face, but it will still be a differently generated body, and we can only use a LoRA for someone whose LoRA already exists. So these are some limitations.
How is it possible to input our own prompt? The workflow is perfect apart from that.
You can disconnect the Florence2 string input and connect a string node, then you can input your own prompt.
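(Editor's note: in ComfyUI's API-format workflow JSON, a linked input is a `[node_id, output_slot]` pair, and a literal input is just a value, so "disconnect Florence2 and connect a string" amounts to replacing the link with a string. A sketch on a hypothetical workflow fragment; the node IDs and the `Florence2Caption` class name are assumptions for illustration.)

```python
import json

# Hypothetical API-format fragment: node "7" is a CLIP Text Encode node whose
# "text" input is linked to the Florence2 caption node "5", output slot 0.
workflow = json.loads("""
{
  "5": {"class_type": "Florence2Caption", "inputs": {"image": ["3", 0]}},
  "7": {"class_type": "CLIPTextEncode",
        "inputs": {"text": ["5", 0], "clip": ["4", 1]}}
}
""")

def use_manual_prompt(wf, node_id, prompt):
    """Replace a linked string input with a literal prompt, mirroring
    'disconnect the Florence2 output and connect a string node' in the UI."""
    wf[node_id]["inputs"]["text"] = prompt
    return wf

use_manual_prompt(workflow, "7", "a bride and groom on a beach at sunset")
print(workflow["7"]["inputs"]["text"])  # a bride and groom on a beach at sunset
```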
@@xclbrxtra OK, I'll try that. It's my first day on ComfyUI; I have to test many things, haha.
Image Composite: I don't understand what values should be set for the image to be in the middle? I play with the values and I get an error. (x 512)
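(Editor's note: for a composite node that takes x/y offsets, centering is just half the size difference between background and foreground; a negative or out-of-bounds offset is a common cause of errors. A quick sketch of the arithmetic, independent of any particular node.)

```python
def center_offsets(bg_w, bg_h, fg_w, fg_h):
    """x, y offsets that place the foreground in the middle of the background."""
    return (bg_w - fg_w) // 2, (bg_h - fg_h) // 2

# A 512x768 subject centered on a 1024x1024 background:
print(center_offsets(1024, 1024, 512, 768))  # (256, 128)
```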
I posted a comment and it was deleted, but I really would like to know if Flux LoRAs work with this setup. Thank you.
Yes, it can definitely work. You just need to use the LoRA Stacker and Apply LoRA nodes (you can check my other videos which use them), and rather than taking the direct model input, take the model input from the Apply LoRA node. Although if you want to change the style completely (to art styles, etc.), then the denoising and max/base shift must be changed.
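(Editor's note: under the hood, applying a LoRA adds a scaled low-rank update to each targeted weight matrix, W' = W + (alpha / rank) * (B @ A), and stacking LoRAs just sums several such updates before sampling. A toy sketch with plain lists, illustrating the math rather than ComfyUI's implementation.)

```python
def matmul(A, B):
    """Plain-list matrix multiply."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def apply_lora(W, A, B, alpha, rank):
    """Return W' = W + (alpha / rank) * (B @ A), the low-rank update a LoRA adds.
    A is (rank x in), B is (out x rank), W is (out x in)."""
    delta = matmul(B, A)
    scale = alpha / rank
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# 2x2 base weight, rank-1 LoRA (B: 2x1, A: 1x2), alpha = 1:
W = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0], [2.0]]
A = [[0.5, 0.5]]
print(apply_lora(W, A, B, alpha=1.0, rank=1))  # [[1.5, 0.5], [1.0, 2.0]]
```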
And how do I change the prompt that was written automatically?
I figured it out :)
It's giving me errors for the workflow.
How much did your PC cost?
Bro, you can get an RTX 4060 laptop within 90k - 120k INR.
You'll need a laptop with more VRAM; I suggest going for 8 GB, or better, 12 GB and above.
To work with Flux properly, you really need a 4070 or higher (preferably a 4090). I have a 3060 12 GB and it's painful to use Flux: more than 20 minutes to generate one image with a new prompt, and around 5 minutes if you just regenerate with the same prompt. But if you change the prompt, it's 20 minutes again. And the first load... ugh... 2 hours to load and generate the first image.
@@FYBarbosa Why don't you use GGUF models?
@FYBarbosa Maybe you have another issue; I have a 3060 with 6 GB VRAM and it's not taking as much time as you describe. Maybe your processor is an older generation.