Hi Kazi, I'm thrilled to hear that you're enjoying LivePortrait! I also did a quick test with Hedra retargeting and was very pleased with the results. It's amazing to see how well these tools work together. Keep having fun and exploring new possibilities! 😊
Thanks :) yeah hedra is awesome!
Cool tutorial, thanks! Not many give clear instructions, but you did, and it worked a treat!
Thank you 😊 I am glad it worked for you 🙌🏽🙌🏽
woooow
Try it out and share your results in the comments!
@@kaziahmed I will do it tomorrow.
Do the video and the image need to be the same size? I'm getting crop info errors. I tried using the image resizing node and the LivePortrait cropper.
Hi Kazi, I don't think it works anymore since the update; I get a crop info error. Can you please redo it with lip sync? Thanks
Oh, I will check and update, thanks!
Hi, do you know of any model for ComfyUI that can create an AI avatar speaker?
I'm looking for a ComfyUI workflow where I give it a photo (my avatar) and a voice file, and it gives me back an AI avatar speaker with lip + eye + face sync.
I know there are websites that do this, but I want to run it locally on my PC for free. I have a pretty good GPU.
Hi Kazi. Can you recommend what kind of PC we need for using ComfyUI? Which video card? Nvidia? Thanks
Hey, great question! For getting started with ComfyUI I'd recommend a decent PC with a dedicated GPU that has at least 12GB of VRAM, so an Nvidia 3080 or above would be great. For the rest of the PC parts, get what's appropriate for that GPU. Check the PC Part Picker website for more recommendations.
Could we use it on a live stream?
Not yet, it's not real time. But hopefully soon!
How do we output just the avatar instead of the combined result?
These are two different workflows. The first one is for just avatar and the second one is for live portrait. Just watch the demonstration in the video :)
My "Load InstantID Model" node isn't working. I have the same files as you, but it doesn't read what it needs from the .bin file. Could you attach new file links for me to download so I can try this?
The main model can be downloaded from HuggingFace and should be placed into the ComfyUI/models/instantid directory.
huggingface.co/InstantX/InstantID/resolve/main/ip-adapter.bin?download=true
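Not from the video, but here's a minimal Python sketch of those two steps, assuming ComfyUI lives at ~/ComfyUI (adjust the path to your own install); the URL is the one above:

```python
from pathlib import Path
from urllib.request import urlretrieve

# Assumed install location; change this to wherever your ComfyUI lives.
COMFYUI_DIR = Path.home() / "ComfyUI"
target_dir = COMFYUI_DIR / "models" / "instantid"
target_dir.mkdir(parents=True, exist_ok=True)  # create models/instantid if missing

url = ("https://huggingface.co/InstantX/InstantID/"
       "resolve/main/ip-adapter.bin?download=true")
dest = target_dir / "ip-adapter.bin"

# The file is large, so skip the download if it is already in place.
if not dest.exists():
    try:
        urlretrieve(url, dest)
    except OSError as err:
        print(f"Download failed ({err}); fetch ip-adapter.bin manually into {target_dir}")
```

Downloading in a browser and dropping the file into ComfyUI/models/instantid by hand works just as well.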
I cannot find the code while installing the Insightface ID on the Comfy UI terminal. Can you please provide the code here?
Run this from the ComfyUI portable folder (keep the placeholder as the path to the InsightFace wheel file you downloaded):
.\python_embeded\python.exe -m pip install insightface-downloaded-file-link onnxruntime
@@kaziahmed Thank you so much ❤❤
Great video thanks :). I am getting this though:
Error occurred when executing LivePortraitProcess:
PyTorch is not linked with support for mps devices
do you know how to fix it?
What GPU are you using? This might not work on anything other than nvidia gpus.
@@kaziahmed RTX 4090
@@amaru_zeas I will do some digging and reply to your query. Not sure why that happened. Do you have insightface set up properly? Does the Reactor node or instantID work on your comfyUI environment?
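Not from the video, but a quick way to check which device PyTorch actually sees — the "mps" in that error is Apple's Metal backend, which shouldn't be selected on an RTX 4090:

```python
import torch

# Check which accelerator PyTorch can see. On an RTX 4090 this should
# pick CUDA; an "mps" error usually means the node chose Apple's Metal
# backend, which only exists on Macs.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"PyTorch will use: {device}")
```

If this prints "cpu" on a 4090, the ComfyUI environment is likely running a CPU-only PyTorch build and needs the CUDA build reinstalled.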
Can you upgrade the load image node to also accept a video input?
For the first workflow (which is for avatar) or the second one (for live portrait)?
@@kaziahmed for the LivePortrait workflow :)
@@kaischaffer5188 The live portrait imports videos, that's how it works. Did you download my workflow? See the demonstration in the video please :)
@@kaziahmed What I referred to was the Load Image node (source image). How can I upload a source video that will be animated by another driving video? Hope you understand what I mean. Thanks :)
And great video
@@kaischaffer5188 oh… umm. Not sure if that'd work. Check the LivePortrait GitHub page for more details on this.
What's a GPU card?
GPU = graphics processing unit, basically your graphics card.