now i can add the smile to my pics later on
Yes!! 😂
Oy vey😂
🤣🤣🤣
so great. Thanks
I love your work, more tutorial videos please.
Thanks!! 🙏🏽
Awesome! thank you so much for sharing!
Oh hell yes I've got some dials and knobs to control!
🤣👍🏽
Awesome video, things like this open so many doors. Use with other AI tools and editing software and even more can be achieved 😀😀👋🏻👋🏻
Can the advanced vid2vid be done with latest version of Liveportrait GUI instead of Comfyui?
so cool and clean thank you
Really cool and clean tutorial, thank you. I’m subbing ❤
Head movement can be solved partially with two LivePortrait nodes back to back. Use rotation only for the first, then do the mouth/expression on the second. I have followed full side head movements and it still tracked fine.
I am trying to solve exactly this problem; yours sounds like good advice, but I don't understand exactly how to set up the nodes. By any chance could you help me by sharing your workflow? Thank you very much.
@@italodraperi4154 my YouTube profile has links to Civitai where I share my workflows. I don't think adding links in YouTube comments is allowed. My LivePortrait flow has the double LP node setup that allows this change.
@@italodraperi4154 my profile has my links.
@@italodraperi4154 apparently not... I have replied, but apparently the author of the video keeps deleting my comments... sorry. Sad as hell this happens on every damn AI video info page I comment on. People sell their workflows and don't let me show free ways... SAD
Do you know any model or tool where I can give it an image (my avatar) + an audio file and it gives me a video of that with lip, eye, and face sync? Local on PC, for example a ComfyUI model.
thanks for the content !
👍🏽😊
Bro, your content is a gem, keep going. How do you think LivePortrait could be complemented with shoulder movements and gestures? How would you do this?
I've been making some fun stuff with Live Portrait for a while, but using its own app within Pinokio. I am finding that the video goes out of sync with the audio sometimes, especially if I go longer than a 5-second video. Any thoughts on how to fix that within Live Portrait? I don't want to have to shift a bunch of audio track clips around to make things only "kind of" line up.
Hmm I wonder if it has to do with fps. If the video is 30 fps and you’re generating at 15 fps, that will misalign the audio. It could be that.
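If an fps mismatch is the cause, the amount of drift is easy to estimate. A minimal sketch using the numbers from the reply above (30 fps source, 15 fps generation; the variable names are just illustrative):

```python
# Estimate audio/video drift when a clip is generated at a lower fps
# than the source video but the audio keeps its original duration.
source_fps = 30       # fps of the source/driving video
generated_fps = 15    # fps the clip was actually generated at
clip_seconds = 5      # original clip length

frames = source_fps * clip_seconds            # 150 frames in the source clip
playback_seconds = frames / generated_fps     # those frames replayed at 15 fps
drift = playback_seconds - clip_seconds       # how far the audio ends up off

print(f"playback: {playback_seconds:.1f}s, drift: {drift:.1f}s")  # 10.0s, 5.0s
```

So a 2x fps mismatch doubles the video length, and the desync grows linearly with clip duration, which matches it getting worse past 5 seconds. Matching the generation fps to the source fps (or conforming the output with an editor/ffmpeg afterwards) should keep the audio aligned.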
lol this is cool. thanks for sharing
Hey man, thanks. I realized that using a video to animate a face causes flickering. It gives unrealistic movements of the face.
I have problems with the Load MediaPipeCropper node.
This ComfyUI error:
[ONNXRuntimeError] : 3 : NO_SUCHFILE : Load model from F:\SD\ConfyUI\ComfyUI_windows_portable\ComfyUI\models\liveportrait\landmark.onnx failed:Load model F:\SD\ConfyUI\ComfyUI_windows_portable\ComfyUI\models\liveportrait\landmark.onnx failed. File doesn't exist
:(
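A hedged sketch of a likely fix: the NO_SUCHFILE error above says `landmark.onnx` is missing from the `models/liveportrait` folder. Creating that folder under your ComfyUI install (the exact path is an assumption read off the error message) and placing the model file there should resolve it:

```shell
# The error reports: ComfyUI/models/liveportrait/landmark.onnx doesn't exist.
# Create the expected folder under your ComfyUI install directory.
mkdir -p ComfyUI/models/liveportrait

# Then download landmark.onnx from wherever the LivePortrait custom node
# distributes its models, drop it into this folder, and restart ComfyUI.
ls ComfyUI/models/liveportrait
```

Some custom nodes auto-download their models on first run, so updating the node pack and restarting ComfyUI is worth trying before fetching the file manually.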
I installed it via the Manager, updated everything, refreshed, turned it on and off. It still misses some nodes. HELP. Do I need to download something else? I just started playing with this app.
Really nice. Why can't we get this for full character movement inside a scene? 😢
Nice video, you have a new SUB. Have you got the workflow you are using? 👍
🙏🏽😁
I've been on the local version for a few months. They just released the video version not too long ago, it's amazing.
It’s so good!
Someone needs to build a keyframe/controller interface
Yes!!!