📢 Last chance to get 40% OFF my AI Digital Model for Beginners COURSE: aiconomist.gumroad.com/l/ai-model-course
This was excellent. Thank you for showing people there are other and better options than the Midjourney face swap method, which of course limits your flexibility tremendously. I'm actually in the process of doing this exact thing and have been building my ideal workflow, but what you did with the outfit and masking is really fantastic. Very helpful.
The IPAdapterApply node fails every time I try to use the workflow. I'm a noob, help please :(
This was absolutely AMAZING! I've watched countless videos and spent a good amount, but nothing as detailed and as great as this. Definitely subscribing to this channel.
Great video. I find the prompt "staring directly into the camera" works well for portrait shots, or simply using "portrait". I'm also glad you're using low-quality prompts instead of high-quality, cinematic photography. Most average phone users don't have access to high-quality cameras. 😁
where do you get all the images for clothing and poses from?
Load CLIP Vision model: which model do I load, and where do I find this model.safetensors?
Perfect video, the only consistent and detailed video I was looking for. This is a gem. Thank you so much.
Fantastic video. Simple, straightforward and well explained.
Much appreciated!
Where should I put the IPAdapter Plus models? I put them in "custom_modules->ComfyUI_IPAdapter_plus\models" but it didn't detect the model?
How can you increase the accuracy of retaining the face? Although the difference is quite subtle, the face was noticeably different in each generation, which is something a follower would notice, as I did just in this video. Thanks!
It's actually really incredible, the psychology and systems that go into making that much money.
There's nothing comfy about ComfyUI
This 🤣🤣
@xviovx then you should do it manually on A1111 to get an idea of why it's called comfy
Agree😂
The flexibility it gives us over other tools justifies it.
😂😂😂
Hey! Thank you for the video. Can you advise on one problem? I used your workflow, but I get this error: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1024]). I don't understand where it comes from.
Could you solve it? I have the same issue and don't understand how to proceed
So for that I had to get the correct clip_vision model.
I don't know that it will let me post a link in the comments so here is how to find it:
Go to his link to "All Useful Links & Workflow Visit:" in his description.
Go to "IPAdapter plus Models HuggingFace Link"
Go to the main directory, then the "models" folder. Go to the "image_encoder" folder. Download "model.safetensors".
From there you have to put the model you downloaded in your ComfyUI clip_vision model folder.
That same place also has alternative IPAdapter models.
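If you'd rather script that last step, here is a rough sketch that fetches the encoder and drops it into the right folder. The repo id, file path, and ComfyUI location below are assumptions, so double-check them against the "IPAdapter plus Models HuggingFace Link" in the description:

```python
# Rough sketch (not from the video): download the ViT-H image encoder that
# IPAdapter expects and copy it into ComfyUI's clip_vision folder.
# The repo id and file path are assumptions -- verify them against the
# HuggingFace link in the description.
import shutil
from pathlib import Path
from huggingface_hub import hf_hub_download

COMFYUI_DIR = Path("ComfyUI")  # adjust to wherever your ComfyUI install lives

# Download (or reuse from the local cache) models/image_encoder/model.safetensors
cached_file = hf_hub_download(
    repo_id="h94/IP-Adapter",
    filename="models/image_encoder/model.safetensors",
)

# Place it where the Load CLIP Vision node looks for models
dest = COMFYUI_DIR / "models" / "clip_vision" / "model.safetensors"
dest.parent.mkdir(parents=True, exist_ok=True)
shutil.copy(cached_file, dest)
print(f"CLIP Vision encoder copied to {dest}")
```

Restart ComfyUI or hit Refresh afterwards so the Load CLIP Vision node picks up the new file.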
@tiggy4591 thank you very much, it worked like a charm
@@sunnyandharia907 Awesome, I struggled with it a bit last night. I'm glad it helped.
@@tiggy4591 i love you! you are the GOAT
What's your CLIP Vision model?
What if I just wanted to change the pose alone, with no change of clothes or anything else? How do I go about that, please?
The IPAdapter Apply node is not found, and I can't figure out how to fix this. Any solutions?
I have the same problem; I tried using others but they don't work.
Anyone know why my KSampler step would be extremely slow? I followed these directions to a T, twice, and both times the process stalled at the KSampler and took 10-15 minutes for just 2-4 images. Help!
Hey everyone! 😊
I'm planning to create a comprehensive course on creating a virtual influencer from scratch and growing an Instagram account. It'll be a long and detailed course, so I'm thinking of making it a paid course, but at a reasonable price. What do you think? Would you be interested in something like that?
I have followed every step but the result is not as good as yours… it turns out I am getting distorted faces. I have switched to other checkpoint models (SD1.5), still the same 😢😢
great tutorial... very easy to follow. thank you!
I have a question: if we have a clothing or jewelry brand deal, how can we make the model wear that product?
Where did you install it? Which folder?
I just installed it using Pinokio, but how do you start the creation of the image?
Where did you get the IPAdapter, how did you configure and install it, which CLIP Vision did you use, and where did you get it? Where did you get the OpenPose model? I tried to find it all by myself, but the end result is that my face swapping looks like dogshit. Can you help?
I felt the same when I saw the video, frustrated because it doesn't explain many details, but I was able to make it work on my own. I took the time to look up everything he mentions separately and watch other videos, and I was able to get what was missing.
@ramondiaz5796 how?
I wish I had watched your video sooner. Well, if only you had posted it earlier 😂😂 Just today I came to a similar solution for achieving this. And now I still have a question: if you want to switch to a full-body figure, or want a profile or rear view of your model, do the masks and the images going into the IPAdapters remain unchanged? Or do you have to switch the IPAdapter's image and mask accordingly?
My model is coming out with two heads all the time; do you know how I can solve it? I've tried several negative prompts but it doesn't help.
I can't find the IPAdapter Apply nodes, what should I do?
I'm updating this workflow because a lot has changed since then. IPAdapter version 2 is even more advanced now. Be sure to check out my latest videos for all the updates.
@Aiconomist Please update this. I tried using other nodes and adapting something similar to your .json file, but nothing works for me :( An error appears: "copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664])"
Please help us and update the .json file on the web, thank you very much.
It worked, thanks. Where do I find the OpenPose images to download?
How do I keep up to 3 characters or more consistent, changing just the style with the prompt?
I get this error while using your workflow from Gumroad: "ClipVision model not found." Help me please.
Thanks for this video! Amazing! No matter what I try, I don't get the option to add the Apply IPAdapter node. What am I missing? Would be thankful for any help!
Hey, thanks for the video, but where do you download all the ControlNets from, how do you install them into ComfyUI, and where did you get the CLIP Vision model from?
Hugging Face
Can you give us a link to this model?
I'm also struggling, have you found it yet?
Can't get my character's eyes to look normal :( Any tips?
Hi, I'm having an issue: before upscaling, the face in my generation looks messed up. Any suggestions?
Where do I need to put the model for IPAdapter? It is not detecting the downloaded model anywhere.
Cool, but what about a different perspective view of the face? it seems like the face always keeps the same perspective regardless of Openpose settings. is there any way to fix this?
Had an error with the IPAdapter Apply node, anyone have an idea why it could be happening?
Error occurred when executing IPAdapterApply:
Error(s) in loading state_dict for Resampler:
size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664]).
did you ever figure out the issue? I have the same thing happening to me
@RagesmithFTW Yeah, I think it was that there are two different CLIP Vision models for the IPAdapter, and depending on the model you have to use one or the other.
@pixelriegel alright, thanks, I'll try to see if I can figure it out
@RagesmithFTW Any ideas how to fix it?
Hi friend, a couple of questions here:
1) How do I get the KSampler with the image preview? I just have the normal one with no image preview.
2) I searched for the "Ultimate SD Upscaler" but did not find it. Is it something I have to install? If so, where can I download it from?
3) I did find the Image Upscale Loader node, but did not find the 4x_foolhardy.pth option. Is that something I have to download from somewhere else? Which folder do I drop it in, so it appears on the node as an option next time?
Generally speaking, there are some tools and features you are using in your video that we don't know the source of. Another option is to buy your workflow, which is not a problem because you are doing an amazing job and we can support you that way; however, if I purchase the workflow, I will have the same problem because of the missing tools, and in the end the workflow won't work as expected.
I will do some research on how to get those tools and then come back to this video. Thanks bro!
same
A YouTube account called Future Frontier just copied your tutorial, only changing the voice. Good tutorial, by the way!
Thank you !!
The workflow does not work anymore. Error occurred when executing IPAdapterUnifiedLoader:
ClipVision model not found.
And don't say I did something wrong, I used the .json.
Someone help me please, I can't find the Load IPAdapter in my nodes and models.
And another question: what's the purpose of setting the denoise value to 0.75 in the ksampler at 7:45 with an empty latent?
There is none, because an empty latent image is just pure noise. He likely meant to convert an image into latent space or something.
Partially right... however, I think the latent isn't totally empty: there is OpenPose, and each generation after the first one gets seeds and CLIP conditioning from the last one. Less denoise = fewer changes...
@JustFeral Indeed, this must be related to the IPAdapter in some way, or I can't see the point, as he's starting from an empty latent. What's the point of partially denoising the noise?
I have an issue with Load CLIP Vision: it doesn't load the safetensors file.
anybody got a link to the workflow?
the "model.safetensors" file at 5:07 is too large for the file system to install, what can i do ?
i've been trying to find this
I'm also looking for an answer to this; I actually couldn't find the file on the internet. Do you have a link? Were you able to solve that problem?
Where can I download the model for clip vision?
thanks for making this video, I just bought your workflow
So what is the minimum system requirement?
How do you resolve bad hands efficiently? Is there any neat trick besides negative embeddings?
I fix bad hands in pictures using Photoshop and inpainting.
Off topic, but what AI voice do you use? Can it be done locally?
this is wild! Thanks.
Where do you get the ControlNet poses? @aiconomist
You can use openposes.com
Can you help me? Why is the face sometimes ugly and sometimes perfect, depending on which pose I take?
How do you get to see the KSampler and upscaler progress??
I'm searching for it and i can't find it
PLEASEE
Hey, nice tutorial, really well explained in detail. I am getting an error when running the KSampler and I was wondering if you could help me with that:
"It seems that models and clips are mixed and interconnected between SDXL Base, SDXL Refiner, SD1.x, and SD2.x. Please verify."
One thing that is different between my workflow and yours is that in the CLIP Vision loader I am using "clip_vision_vit_h.safetensors" instead of "model.safetensors", because I couldn't find the file you are using on the web. Any chance you could post a link to that file or help me resolve this error?
Thanks in advance
"InsightFace must be provided for FaceID models." Anyone getting this error?
Hey, thanks for the video! I cannot find the Ultimate SD Upscaler. Has it been removed? If so, is there something else you would suggest in its place?
@RichardMJr you need to install it from the Manager; you can do it manually or do it like he does at the start.
Excellent video!
Glad you liked it!
How do I add custom clothing from my own brand?
Great video, one question: why, if I change the initial checkpoint (whether keeping or removing the LoRA and connecting the model directly to the IPAdapter), does the whole process stop working because of a different matrix size?
Hey, I have a question: how can I do all of this if I have an already-generated model in Stable Diffusion?
Hm... I finally got all the needed models and nodes and shit, but it still does not work as shown... I get an error message: "Error occurred when executing IPAdapterApply: Error(s) in loading state_dict for Resampler: size mismatch for proj_in.weight: copying a param with shape torch.Size([768, 1280]) from checkpoint, the shape in current model is torch.Size([768, 1664])." And then it goes on with execution.py line 85, line 78, ... but I think the cause is something to do with the resolution of the images I put in? Is that somehow relevant? Do all input images have to be the same resolution?
Read the comment from @alexalex9511, someone posted an answer there.
When I use the same photo of clothes, almost identical characters appear, but when the image of the clothes changes, a completely different character appears. What should I fix?
Error occurred when executing KSampler:
mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)
Can someone help me with that?
I think SDXL is used for the initial face generation, but later, when you use the IPAdapter and ControlNet, you should use a 1.5-based model instead. You probably get the error because you are mixing SDXL and 1.5 models.
@stephan935 It was a problem with mixing SDXL with SD..., my fault. By the way, I still can't make the final image with clothes similar to the face one... Do you have any advice?
@@stephan935 fixed, thank you 🙏
@@stephan935 I have a bunch of models. How can I tell which are 1.5?
@kentverge I honestly can't tell you. Sometimes it's in the name. I mostly know it based on the information on Civitai. There you find the model it's based on (e.g. SDXL or 1.5).
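To make the mixing issue above concrete: SDXL text conditioning is 2048-dimensional, while an SD 1.5 UNet's cross-attention layers project from 768 dimensions, which is exactly the kind of mismatch the "mat1 and mat2" error reports. A tiny sketch (the dimensions are taken from the error message, not from the actual workflow):

```python
# Toy reproduction of the shape clash: an SD 1.5-style cross-attention projection
# expects 768-dim text embeddings, but SDXL's CLIP stack produces 2048-dim ones.
import torch

sdxl_conditioning = torch.randn(154, 2048)          # 154 tokens, SDXL embedding width
sd15_key_proj = torch.nn.Linear(768, 320, bias=False)  # SD 1.5-style key projection

try:
    sd15_key_proj(sdxl_conditioning)
except RuntimeError as err:
    # Prints a message of the form:
    # "mat1 and mat2 shapes cannot be multiplied (154x2048 and 768x320)"
    print(err)
```

The fix is simply to keep the checkpoint, ControlNet, and IPAdapter models all from the same family (all SD 1.5 or all SDXL).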
Any idea what kind of graphics card you can pull this off with???
I have an RX 6700 XT and an old laptop with a GTX 1050, so it's clear I will need to upgrade, but what is the minimal card, so I don't have to spend a lot?
I was/am finding this extremely useful. However, am I the only person who can't seem to find any reference to this CLIP Vision file? Pls help 🙏🏻
I was looking for 2 hours yesterday no chance 😢
Manager > Install Models > CLIP Vision model (IP-Adapter). If you followed this guide exactly, you want the SD 1.5 version.
New to this, why not combine with faceswap?
Excellent video, brother, quite interesting.
Cool video!
My KSampler gets stuck (0% over 25 steps) after the IPAdapter; I don't know why, but maybe it's related to the AMD graphics card I am using?
Image generation and upscaling were working...
If I get a MacBook M3 Pro, can I easily install ComfyUI and work with it?
Amazing! Any recommendations on how to optimize it for SDXL? For a reason I can't explain, if I update all the models I'm getting worse results with SDXL compared to 1.5… 🤔
Wow, it seems very difficult but great at the same time. I will try it tomorrow.
How can I use multiple prompts at the same time? Say, put 5 into the queue and then move on to the next one? The same thing that can be done in the default Stable Diffusion UI by using the text file or the textbox at the very bottom?
After 3 installs I deleted it, as it kept saying the directory didn't exist, which was B.S.
Where do I download the OpenPose model?
The video is great, but you did not provide the links to download the models.
I have downloaded your workflow and tried to install all the models the same as you have, and I always get an error.
I'm sure the problem is on my side, but you could take more time to explain the details.
So I tried it and everything works. But the next time I open ComfyUI, the character's face changes. How do I fix that? Can you make a video about this?
I'm always thankful. Rather than first creating a female model from a prompt, is it not possible to import a photo of a female model instead?
How can I make a fashion model?! I mean the same person with different outfits that I choose. Thanks.
Are they really consistent? I don't see Jennifer lawrence in any of the final generated samples!
Lawrence's face is only a reference for the AI.
I am trying to find a course on creating a virtual influencer, so if you can tell me when you make it, that will be great!
Did anyone happen to figure out what Clip Vision model was used?⁉
This is great stuff.
Amazing content. Thanks for sharing.
Is it possible to place her in a specific environment by providing a reference image of the environment?
You have shared everything, thank you for that. If you could also share the OpenPose image and the character dress one, it would be appreciated.
does that work on Macbook machines?
I still have no idea how you bring up the node search shortcut.
Just double click on the grid
thank you bro!
Wow, this is nice. I wish there was an easier way to do this; it's too complicated for me :(
Awesome......
Thanks for the great tutorial and the sample JSON files! I was able to do almost the same thing using the sample JSON, but for some reason the face part is broken and does not show a beautiful face like in your tutorial. For Automatic1111, ADetailer can be used to beautifully redraw face images as well, but currently ADetailer is not available for ComfyUI. Is there something I should do?
There is a plugin called Face Detailer, but of course it needs more nodes and connections, making the workflow more complicated.
Made it to 4:36... Where do I get the blank-face model clothes image from?
Did you find out?
@@nopixeltime nope, did you?
Why won't it show the preview image on my screen?
How do I download the CLIP Vision model?
I am going to ignore that you took MKBHD's voice and thank you for the tutorial :P
But the clothes are not 100% the same, right?
You're correct, the clothes aren't always 100% the same. However, by adjusting the KSampler denoise strength and the IPAdapter weight value, I can usually achieve about 80 to 90% similarity in the clothing details. It also depends on the type of clothes.
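If you want to explore those two settings systematically rather than by hand, here is a rough sketch that edits an API-format export of the workflow and queues one job per denoise/weight combination. The node class names ("KSampler", "IPAdapterApply"), the file name, and the server address are assumptions; adjust them to match your own export:

```python
# Rough sketch: sweep KSampler denoise and IPAdapter weight values by editing an
# API-format export of the workflow ("Save (API Format)") and queueing each variant.
import itertools
import json
import urllib.request

with open("workflow_api.json") as f:      # assumed file name of your API-format export
    workflow = json.load(f)

def set_input(graph, class_type, name, value):
    """Set an input on every node of the given class type."""
    for node in graph.values():
        if node.get("class_type") == class_type:
            node["inputs"][name] = value

for denoise, weight in itertools.product([0.55, 0.65, 0.75], [0.7, 0.85, 1.0]):
    set_input(workflow, "KSampler", "denoise", denoise)
    set_input(workflow, "IPAdapterApply", "weight", weight)
    payload = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(
        "http://127.0.0.1:8188/prompt",   # default local ComfyUI address
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)           # queue this variant
    print(f"queued denoise={denoise}, weight={weight}")
```

Then compare the saved outputs to see which combination keeps the clothing closest to the reference.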
Can you run this in chrome?😅
I can't make IPAdapter work; please make a video on downloading it.
Please do one for Automatic1111.
Very good, but the generated face is not very similar to the reference, and the clothes don't carry over exactly; they come out a little similar but still very different. What is a good way to handle this?