Wow - you put so much work and thought into this, Wei, and you are sharing it all for free!
Thank you so much!
hey can i get a download link of your final workflow?
It's in the video description.
Does this work with style transfer and anime style checkpoints?
I can't get PuLID to run at all. On the Advanced Sampler, every time I try to run it in any workflow I get this message: expected scalar type Half but found BFloat16
I think it's related to VRAM, but I'm wondering whether people with under 16GB of VRAM can run PuLID at all?
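For what it's worth, that error message points to a tensor dtype mismatch (float16 "Half" weights meeting bfloat16 activations somewhere in the pipeline), not a VRAM limit. A minimal PyTorch sketch, with made-up tensors standing in for model weights and activations, reproduces the error and shows the usual fix of casting everything to one dtype:

```python
import torch

w = torch.randn(4, 4, dtype=torch.float16)   # "Half" weights, as in a fp16 checkpoint
x = torch.randn(1, 4, dtype=torch.bfloat16)  # BFloat16 activations

try:
    torch.mm(x, w)  # mixing Half and BFloat16 in one op raises a RuntimeError
except RuntimeError as e:
    print(e)

# Fix: cast to one consistent dtype before the op
y = torch.mm(x, w.to(torch.bfloat16))
print(y.dtype)  # torch.bfloat16
```

In ComfyUI terms this usually means the model weights and the sampler/node are using different precisions; forcing one dtype (e.g. launching with `--force-fp16`, or loading the model in the matching weight dtype) typically resolves it.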
PuLID installation is a little tricky; a few of the people I know gave up on running it after trying to install all the nodes, but it works fine for me, and I run it on a 4060, which is just 8GB. Make sure you've downloaded all the required models; the Flux PuLID model also needs to be downloaded, along with a few others.
@@lucifer9814 Thanks for trying to help. Yeah, it seems like a lot of people have had problems getting PuLID to run. It's great to know that it can work under 16GB and even on 8GB. This definitely has to come down to a models problem. I downloaded all the ones AI Entrepreneur provided on Patreon, but I might still be missing some. I'll have to look for that Flux PuLID one.
@ragemax8852 No problem, mate. I suggest you look up some more videos on PuLID; it's somewhat complicated to install in general, but trust me, a person like me with zero coding knowledge managed to install it 😁
@@lucifer9814 Yeah, it's very complicated. I'm in the same boat as you when it comes to coding and stuff. LOL! 🤣 I'm confident that I can figure this out eventually; I just need to be steered in the right direction. Since there are so many videos, which one specifically helped you the most? I might be able to go from there and get it figured out.
newbie question:
So does ComfyUI only allow discrete masks, without feathering / antialiasing?
When I tried it yesterday, it seemed like I could choose an opacity, but it was the same opacity for the entire mask.
If I can't create feathered masks (or masks with varying opacity) within ComfyUI, can I create them elsewhere, import them into ComfyUI, and use them? Or can ComfyUI only deal with masks that have the same opacity everywhere?
I usually don't change the opacity in the mask editor; instead, I modify the mask using other nodes, like "Grow Mask with Blur".
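On the second part of the question: ComfyUI treats a mask as a grayscale image, so a mask created externally with soft, feathered edges can be brought in via a Load Image node and its gray values act as per-pixel mask strength. A minimal sketch with Pillow (filename and shape are just examples) that builds such a feathered mask:

```python
from PIL import Image, ImageDraw, ImageFilter

# Hard-edged grayscale mask: black = unmasked, white = masked
mask = Image.new("L", (512, 512), 0)
draw = ImageDraw.Draw(mask)
draw.ellipse((128, 128, 384, 384), fill=255)

# Gaussian blur turns the hard edge into a gradual opacity falloff
feathered = mask.filter(ImageFilter.GaussianBlur(radius=16))
feathered.save("feathered_mask.png")
```

Loading `feathered_mask.png` in ComfyUI and converting it to a mask gives varying per-pixel opacity, so masks are not limited to a single uniform opacity.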
How did you resolve the InsightFace assertion errors? They seem to plague a lot of people.
There are usually two cases:
One is that the folder `antelopev2` is not placed directly under `insightface`, but there is instead another `antelopev2` folder nested inside `antelopev2`.
The other is that the `glintr100.onnx` file is corrupt. You can re-download the file from HuggingFace: huggingface.co/DIAMONIK7777/antelopev2/blob/main/glintr100.onnx
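Both failure modes can be checked with a small script. This is a sketch, not part of any official tooling: the base path below is a typical ComfyUI layout that you should adjust to your own install, and the file list assumes the standard five antelopev2 model files.

```python
from pathlib import Path

# The .onnx files the antelopev2 pack is expected to contain
EXPECTED = ["1k3d68.onnx", "2d106det.onnx", "genderage.onnx",
            "glintr100.onnx", "scrfd_10g_bnkps.onnx"]

def check_antelopev2(base: Path) -> list[str]:
    """Return a list of problems found in an antelopev2 model folder."""
    problems = []
    # Case 1: the files ended up nested one level too deep
    if (base / "antelopev2").is_dir():
        problems.append("nested antelopev2/antelopev2 found - move the .onnx files up one level")
    # Case 2: a model file is missing or zero bytes (failed/corrupt download)
    for name in EXPECTED:
        f = base / name
        if not f.is_file() or f.stat().st_size == 0:
            problems.append(f"{name} missing or empty - re-download it")
    return problems

# Example path; point this at your own ComfyUI install
print(check_antelopev2(Path("ComfyUI/models/insightface/models/antelopev2")))
```

An empty list means the folder layout looks right; anything else tells you which of the two cases you've hit.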
There is an EcomID inpainting workflow as well. Have you tried that one too?
I haven't. Could you please provide a link?
Can you share the final workflow ?
It's in the video description.
@@my-ai-force I'm referring to the perfect face match part. I tried to follow your tutorial, but I have problems with the SDXL repaint: the SDXL model generates an image that doesn't reference the inpainted image created in the first part.
I love finding out about nifty technical stuff that isn't tied to a particular workflow, like the Anything Everywhere node, the bus node, or the Seed Everywhere node. It's most enjoyable for me when it comes up along the way, while listening to something I care about :)
Thanks for the free workflow, but using too many custom nodes creates too many errors, bugs, and missing nodes for new users. I'd avoid workflows like this and learn to use mostly the default built-in nodes. This kind of workflow breaks ComfyUI installations.
I'm sorry to hear that you're experiencing issues. I've prioritized using popular nodes, but sometimes less common ones are necessary for specific situations.
However, I'm optimistic about the ongoing improvements with ComfyUI Manager. It's continually getting better at maintaining a stable environment for most custom nodes, and we're seeing fewer errors as a result.
If you continue to encounter problems, please let me know, and I'll do my best to help.
What AI voice are you using?
My own voice😂
@@my-ai-force Great lolll
Question to better understand the context of the video:
When experimenting with roop / ReActor (a roop fork) a while ago, I found that they gave me next-to-perfect face similarity.
In my understanding, the reason Matteo created InstantID is that, unlike roop, it allows for more similarity in the style?
So my question would be:
How well do you think roop / ReActor would work with your specific example images, given that the style of the output images you produce is very similar to the input style, and very realistic?
Or, asked more generally:
What is the ultimate application that you have in mind? Is it what you show in the example images? Are roop / ReActor options for this, and if not, why?
Maybe this workflow can answer your questions: openart.ai/workflows/nomadoor/OzeGHAVL2oHdpf5Ziqpj
I think ReActor is good enough for face swapping 😊