📢 Last chance to get 40% OFF my AI Digital Model for Beginners COURSE: aiconomist.gumroad.com/l/ai-model-course
This was excellent. Thank you for showing people there are other and better options than the Midjourney face swap method which of course limits your flexibility tremendously. I'm actually in the process of doing this exact thing and was building my ideal workflow, but what you did with the outfit and masking is really fantastic. Very helpful.
There's nothing comfy about ComfyUI
This 🤣🤣
@@xviovx then you should do it manually in A1111 to see why it's called comfy
Agree😂
The flexibility it gives us over other tools justifies it
😂😂😂
Great video. I find the prompt "staring directly into the camera" works well for portrait shots, or just using "portrait". I'm also glad you're using low-quality prompts instead of high-quality and cinematic photography. Most average phone users don't have access to high-quality cameras. 😁
Where do you get all the images for clothing and poses from?
Fantastic video. Simple, straightforward and well explained.
Much appreciated!
Perfect video, the only consistent and detailed video I was looking for. This is a gem. Thank you so much.
Load CLIP Vision model: what model do I load, and where do I find this model.safetensors?
This was absolutely AMAZING! I've watched countless videos and spent a good amount, but nothing as detailed and as great as this. Definitely subscribing to this channel.
It's actually really incredible, the psychology and systems that go into making that much money.
I wish I'd watched your video sooner. Well, if you'd posted it earlier 😂😂 Just today I came to a similar solution to achieve this. And now I still have a question: if you want to switch to a full-body figure, or want a profile or rear view of your model, do the masks and the images going into the IPAdapters remain unchanged? Or do you have to switch the IPAdapter's image and mask accordingly?
What's your CLIP Vision model?
Where did you get the IPAdapter, how did you configure and install it, which CLIP Vision did you use, and where did you get it? Where did you get the OpenPose? I tried to find it all by myself, but the end result is that my face swapping looks like dogshit. Can you help?
I was the same when I saw the video, frustrated because it doesn't explain many details. But I was able to make it work on my own: I took the time to look up everything he mentions separately, watched other videos, and got what was missing.
The IPAdapterApply node fails every time I try to use the workflow. I'm a noob, help please :(
I have a question: if we have a clothing or jewelry brand deal, how can we make the model wear that product?
Where did you install it? Which folder?
Hey! Thank you for the video. Can you advise on one problem? I used your workflow, but I get this error: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]). I don't understand where it comes from.
Could you solve it? I have the same issue and don't understand how to proceed
So for that I had to get the correct CLIP Vision model.
I don't know whether it will let me post a link in the comments, so here is how to find it:
1. Go to the "All Useful Links & Workflow Visit:" link in his description.
2. Go to the "IPAdapter plus Models HuggingFace Link".
3. Go to the main directory, then the "models" folder, then the "image_encoder" folder.
4. Download "model.safetensors".
From there, put the model you downloaded in your ComfyUI CLIP Vision model folder.
That same place also has alternative IPAdapter models.
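For anyone who prefers a script, here's a minimal sketch of that last step, assuming a default ComfyUI folder layout (models/clip_vision is where the Load CLIP Vision node looks by default; adjust comfy_root to your own install):

```python
# Hedged sketch: move the downloaded image encoder into ComfyUI's
# clip_vision folder so the "Load CLIP Vision" node can find it.
# Assumes a standard ComfyUI checkout under your home directory.
import pathlib
import shutil

comfy_root = pathlib.Path.home() / "ComfyUI"             # adjust to your install
clip_vision_dir = comfy_root / "models" / "clip_vision"
clip_vision_dir.mkdir(parents=True, exist_ok=True)

# the file downloaded from the HuggingFace "image_encoder" folder
shutil.move("model.safetensors", str(clip_vision_dir / "model.safetensors"))
```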
@@tiggy4591 Thank you very much, it worked like a charm.
@@sunnyandharia907 Awesome, I struggled with it a bit last night. I'm glad it helped.
@@tiggy4591 i love you! you are the GOAT
Thanks for this video! Amazing! No matter what I try, I don't get the option to add the Apply IPAdapter node. What am I missing? I'd be thankful for any help!
The video is great, but you did not provide the links to download the models.
I have downloaded your workflow and tried to install all the models the same as you have, and I always get an error.
I'm sure the problem is on my side, but you could take more time to explain the details.
Thanks for making this video, I just bought your workflow.
Good video, but you left a lot of info out, and it's making me mad because it won't work.
1. I keep getting errors; I think it's because I got a random CLIP Vision model.
2. You did not show how to create or get a pose, or how to even use the ControlNet. I don't have any images like yours; I have to input my own image and don't know how to make one.
It worked, thanks. Where do I find the OpenPose images to download?
What if I just wanted to change the pose alone, with no change of clothes or anything else? How do I go about that, please?
I was/am finding this extremely useful. However, am I the only person who can't seem to find any reference to this CLIP Vision file? Please help 🙏🏻
I was looking for two hours yesterday, no luck 😢
Manager > Install Models > CLIPVision model (IP-Adapter). If you followed this guide exactly, you want the SD1.5 version.
thank you bro!
Hey, thanks for the video, but where do you download all the ControlNets from, how do you install them into ComfyUI, and where did you get the CLIP Vision model from?
Hugging Face
Can you give us a link to this model?
I have also struggled; have you found it yet?
The IPAdapter Apply node is not found, and I can't figure out how to fix this. Any solutions?
I have the same problem; I tried using others but they don't work.
I have followed every step, but the result is not as good as yours… it turns out I am getting distorted faces. I have switched to other checkpoint models (SD1.5), still the same 😢😢
Cool, but what about a different perspective view of the face? It seems like the face always keeps the same perspective regardless of the OpenPose settings. Is there any way to fix this?
Great tutorial... very easy to follow. Thank you!
Hi friend, a couple of questions here:
1) How do I get the KSampler with the image preview? I just have the normal one with no image preview.
2) I searched for the "Ultimate SD Upscaler" but did not find it. Is it something I have to install? If so, where can I download it?
3) I did find the Image Upscale Loader node, but did not find the 4x_foolhardy.pth option. Is that something I have to download from somewhere else? Which folder do I drop it in so it appears on the node as an option next time?
Generally speaking, there are some tools and features you use in your video whose source we don't know. Another option is to buy your workflow, which is not a problem because you are doing an amazing job and we can support you that way. However, if I purchase the workflow, I will have the same problem: because of the missing tools, the workflow won't work as expected.
I will do some research on how to get those tools and then come back to this video. Thanks bro!
same
Still using a1111, I love the simplicity.
You can do more with comfy.
Excellent video, brother, quite interesting.
I'm always thankful. Rather than first creating a female model from a prompt, is it possible to import a photo of a female model instead?
Wow, it seems very difficult but great at the same time. I will try it tomorrow.
Great video, one question: why, if I change the initial checkpoint (whether keeping or removing the LoRA) and connect the model directly to the IPAdapter, does the whole process stop working because of a matrix size mismatch?
Hey everyone! 😊
I'm planning to create a comprehensive course on creating a virtual influencer from scratch and growing an Instagram account. It'll be a long and detailed course, so I'm thinking of making it a paid course, but at a reasonable price. What do you think? Would you be interested in something like that?
This is wild! Thanks.
Hey, nice tutorial, really well explained in detail. I am getting an error when running the KSampler, and I was wondering if you could help me with that:
"It seems that models and clips are mixed and interconnected between SDXL Base, SDXL Refiner, SD1.x, and SD2.x. Please verify."
One thing that differs between my workflow and yours is that in the CLIP Vision loader I am using "clip_vision_vit_h.safetensors" instead of "model.safetensors", because I couldn't find the file you are using on the web. Any chance you could post a link to that file or help me resolve this error?
Thanks in advance
I am going to ignore that you took MKBHD's voice and thank you for the tutorial :P
I am trying to find a course on creating a virtual influencer, so if you can tell me when you make it, that would be great!
Maybe you address this in another video, but to me, the faces were vastly different from one another, evolving on every pass and getting away from the original one (before adding the SMILE in the prompt).
Hey, thanks for the video! I cannot find the Ultimate SD Upscaler. Has it been removed? If so, is there something else you would suggest in its place?
@RichardMJr You need to install it from the Manager; you can do it manually or like he does at the start.
I just installed it using Pinokio, but how do you start the creation of the image?
Excellent video!
Glad you liked it!
OK, I have a serious problem with Stable Diffusion and ComfyUI. Using both epiCRealism and Realistic Vision as models, they generate images of naked women. Which I don't mind, but it's not what I intend to do... I have used the same prompt as in this video and other clean prompts... but it doesn't get better!!! 🤣 🤣 🤣
Lol, you can fix this issue by using keywords like (nude, nsfw...) in the negative prompt.
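For illustration only, a negative prompt along those lines might look like the snippet below; the exact terms are assumptions to tune per checkpoint, not taken from the video:

```python
# Hypothetical example terms; extend or trim to taste for your checkpoint.
negative_prompt = "nsfw, nude, naked, cleavage, revealing clothing"
```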
New to this, why not combine with faceswap?
Amazing content. Thanks for sharing.
Is it possible to place her in a specific environment by providing a reference image of the environment?
How can you increase the accuracy of retaining the face? The face, although subtly, was noticeably different in each generation, which a follower would notice, as I did just from this video. Thanks!
The workflow does not work anymore. Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found.
And don't say I did something wrong, I used the .json.
Hi, I'm having an issue with my generation: before upscaling, the face looks messed up. Any suggestions?
How do you resolve bad hands efficiently? Is there any neat trick besides negative embeddings?
I fix bad hands in pictures using Photoshop and inpainting.
Are they really consistent? I don't see Jennifer Lawrence in any of the final generated samples!
Lawrence's face is only a reference for the AI.
Amazing! Any recommendations on how to optimize it for SDXL? For a reason I can't explain, if I update all the models I get worse results with SDXL compared to 1.5… 🤔
How do I add custom clothing from my own brand?
Nothing in ComfyUI is beginner friendly. There is pain, there is frustration, there is lost sleep, and then more pain. I hope Miss Lawrence won't take this unkindly, but these arrows looked rather painful 😂.
Wow, this is nice. I wish there was an easier way to do this; it's too complicated for me :(
Good video, but some things were left unexplained, for example installing the models and which folder to install them in.
Cool video!
My KSampler blocks (stuck at 0% over 25 steps) after the IPAdapter. I don't know why, but maybe it's related to the AMD graphics card I am using?
Image generation and upscaling were working...
Very good, but the painted face doesn't look much like the reference, and the clothes don't carry over the same: they come out a little similar but still very different. What is a good way to fix this?
NGL, I already have an account with 6k followers, steadily growing. Can't share the secret sauce though, but making images is only 1/5 of the process ;)
Link the acct?
@@Rubberglass If he's pretending to not be an AI, any link to his account could potentially compromise the whole operation.
Where do I need to put the model for the IPAdapter? It is not detecting the downloaded model anywhere.
I get this error while using your workflow from Gumroad: ClipVision model not found. Help me please.
You have shared everything, thank you for that. If you could also share the OpenPose image and the character outfit image, it would be appreciated.
But the clothes aren't 100% the same, right?
You're correct, the clothes aren't always 100% the same. However, by adjusting the KSampler denoise strength and the IPAdapter weight value, I can usually achieve about 80 to 90% similarity in the clothing details. It also depends on the type of clothes.
God bless you! Tysm
Out of context, but what AI voice do you use? Can it be done locally?
Anyone know why my KSampler step would be extremely slow? I followed these directions to a T, twice, and both times the process stalled at the KSampler and took 10-15 minutes for just 2-4 images. Help!
My model is coming out with two heads all the time. Do you know how I can solve it? I've tried several negative prompts, but it doesn't help.
You provide links to the wrong models on pixellabs
This is great stuff.
Where should I put the IPAdapter Plus models? I put them in "custom_modules->ComfyUI_IPAdapter_plus\models" but it didn't detect the model?
When I use the same photo of clothes, almost identical characters appear, but when the image of the clothes changes, a completely different character appears. What should I fix?
After 3 installs I deleted it, as it kept saying the directory didn't exist, which was B.S.
And another question: what's the purpose of setting the denoise value to 0.75 in the KSampler at 7:45 with an empty latent?
There is none, because an empty latent image is just pure noise. He likely meant to convert an image into latent space or something.
Partially right... however, I think the latent isn't totally empty: there is OpenPose, and each generation after the first gets seeds and CLIPs from the last one. Less denoise = fewer changes...
@@JustFeral Indeed, this must be related to the IPAdapter in some way, or I can't see the point, as he's starting from an empty latent. What's the point of partially denoising pure noise?
I know this isn't the point of your video, but I'm confused about one basic thing in your workflow. You generate images with KSampler set to "Randomize" until you like one, then change it to fixed. How does this actually work for you? It shouldn't, and it doesn't for me. Randomize generates a new seed AFTER the image is generated. So if you see a good image, changing the KSampler from Randomize to Fixed won't give you the same image again because the seed has ALREADY changed. Yet in your video it doesn't?!
I think it does change in his video. The face is not consistent.
Set it to fixed from the start and use the arrow to increment the seed by one for each generation until you find a nice variation.
@@KachZz Yes, I know how to work around the issue, but in the video he has the node set to Randomize, yet it doesn't behave as expected.
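A tiny sketch of the pitfall being described, with generate() as a hypothetical stand-in for the KSampler rather than a real ComfyUI call:

```python
# The "randomize" control rolls a NEW seed after every run, so by the time
# you see an image you like, the widget no longer shows the seed that made it.
import random

def generate(seed: int) -> str:
    random.seed(seed)
    return f"image-{random.random():.6f}"  # stand-in for a rendered image

seed = random.randint(0, 2**32 - 1)        # seed used for this run
img = generate(seed)                       # the image you liked
seed = random.randint(0, 2**32 - 1)        # "randomize" already advanced it
# Switching to "fixed" now locks the NEW seed, so re-running won't reproduce
# img -- hence the advice above to fix the seed and step it manually.
```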
Where can I download the model for CLIP Vision?
Please make a video about posing, where to get the pose images, and how you make the outfits.
Error occurred when executing KSampler:
mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)
Can someone help me with that?
I think SDXL is used for the initial face generation, but later, when you use the IPAdapter and ControlNet, you should use a 1.5-based model instead. You probably get the error because you are mixing SDXL and 1.5 models.
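As a sanity check on that explanation, the shapes in the error match a model-family mix-up: SDXL's text conditioning is 2048-dimensional (its two text-encoder outputs concatenated), while an SD1.5 UNet's cross-attention expects 768. A small torch sketch, for illustration only, reproduces the same failure:

```python
# Hedged illustration: multiplying an SDXL-sized context by an
# SD1.5-sized attention projection fails exactly like the error above.
import torch

sdxl_context = torch.randn(154, 2048)   # 154 tokens x 2048 dims (SDXL conditioning)
sd15_proj = torch.randn(768, 320)       # SD1.5 cross-attn projection (768-dim input)
sdxl_context @ sd15_proj                # raises RuntimeError: mat1 and mat2 shapes
                                        # cannot be multiplied (154x2048 and 768x320)
```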
It was a problem with mixing SDXL with SD..., my fault. By the way, I still can't make the final image's clothes similar to the face one... Do you have any advice? @@stephan935
@@stephan935 fixed, thank you 🙏
@@stephan935 I have a bunch of models. How can I tell which are 1.5?
@@kentverge I honestly can't tell you. Sometimes it's in the name. I mostly know it from the information on Civitai; there you find the model it's based on (e.g. SDXL or 1.5).
InsightFace must be provided for FaceID models. Anyone getting this error?
So what is the minimum system requirement?
What happens with the identity of a profile used on Instagram, can you get banned? Best
How do you keep up to 3 or more characters consistent, changing just the style with a prompt?
It's so interesting and amazing how this is now. I don't know what many of the terms he says mean, though, like dpmpp_sde and that kind of stuff. I feel like a total idiot watching this. There is no way I can learn this now and keep up with it. It's really so cool, but it sure makes me feel stupid.
Can you help me? Why is the face sometimes ugly and sometimes perfect, depending on which pose I take?
I bet the creator of the program is an electrician 😂
or the flying spaghetti monster 😂
Had an error with the IPAdapter Apply. Anyone have an idea why it could be happening?
Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
Did you ever figure out the issue? I have the same thing happening to me.
@@RagesmithFTW Yeah, I think it was that there are two different CLIP Vision encoders for the IPAdapter (ViT-H and ViT-bigG, whose 1280- and 1664-dim outputs match the shapes in the error), and depending on the model you have to use one or the other.
@@pixelriegel Alright, thanks, I'll try to see if I can figure it out.
Any ideas how to fix it? @@RagesmithFTW
How do you get to see the KSampler and upscaler progress??
I'm searching for it and I can't find it.
PLEASEE
Awesome......
Can you make a proper video on how to set up the workspace before following the steps in this video? The video you're directing us to is missing a whole lot of parts, and nothing works.
So I tried it and everything works. But the next time I open ComfyUI, the character's face changes. How do I fix that? Can you make a video on this?
Hi, thanks for your content! Great stuff and effort.
I have a little request: would it be possible to make a video exploring AI models with MultiArea conditioning? It would be a great asset.
Thanks and regards!
How can I make a fashion model?! I mean the same person with different outfits that I choose. Thanks
I can't find the IPAdapter Apply nodes, what should I do?
I'm updating this workflow because a lot has changed since then. IPAdapter version 2 is even more advanced now. Be sure to check out my latest videos for all the updates.
@@Aiconomist Please update this. I tried using other nodes and adapting something similar to your .json file, but nothing works for me :( An error appears: "copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664])".
Please help us and update the .json file on the site, thank you very much.
Hey, I have a question: how can I do all of this if I already have a generated model in Stable Diffusion?
If I get a MacBook M3 Pro, can I easily install ComfyUI and work?
Thank you for your content. Please make a video on how to use Stable Diffusion to create designs for Merch by Amazon.
Is it free? Or do we have to pay something?
Thanks for the great tutorial and the sample JSON files! I was able to do almost the same thing using the sample JSON, but for some reason the face part is broken and doesn't show a beautiful face like in your tutorial. In Automatic1111, ADetailer can be used to beautifully redraw face images as well, but currently ADetailer is not available for ComfyUI. Is there something I should do?
There is a plugin called FaceDetailer, but of course it needs more nodes and connections, making the workflow more complicated.
Any idea what kind of graphics card you can pull this off with???
I have an RX 6700 XT and an old laptop with a GTX 1050, so it's clear I will need to upgrade, but what is the minimum card, so I don't have to spend a lot?
Can't get my character's eyes to look normal :( Any tips?