Can this replace Audio2Face from Omniverse? I use A2F and its REST API to create MetaHuman NPC AI. Can this UE Audio Driven feature replace that?
Yes, this can replace A2F and is a lot more accurate. I am sure improvements will come soon, adding more facial expressions to the audio-driven animations.
Bro, how can I make it real-time? Or update the media file and trigger processing in Blueprints?
Hit me up on Discord for this explanation.
Facial animation with no emotions...
Slight emotions are there, but you can tweak them in Sequencer as I stated. It will also show some wrinkles, but with bangs over her forehead they are harder to see.
He's demonstrating lip sync even though he said facial animation.
@kmanboard According to the Unreal documentation, this is called "Audio Driven Animation for MetaHumans," not lip sync. I showed this at the beginning of the video. I didn't name it, I'm just showing the feature...lol