This is great. I hope soon you will automate most of these steps, especially for characters coming from known 3D tools like MetaHuman, Character Creator, etc. I'm not a 3D character guru and it's too difficult to follow all these manual steps. I can of course spend the time, but for most people like me it should be more automated, like D-ID or similar tools on the market: create a 3D mesh from 1-5 photos, click a few buttons, and there you go. I'm kind of stating the obvious :).
Amazing. Thank you. 👍
Please release a tutorial for metahuman workflow. Thanks!
Is it possible to import custom meshes into the program to animate/lip-sync them?
The eyes, tongue, and jaw flip away when I apply the PROXY UI setting and don't follow the original mesh result... any solution to that?
I have a model that I imported into A2F (from Maya); in A2F I made the animation and exported the cache. In Maya I imported the just-created cache and got some weird exploded thing. Are there any requirements for models used in A2F? Which format is best for importing a mesh into A2F so it has the same hierarchy as shown in the video?
What does Prim mean? I'm struggling to find an explanation of this online or in your documentation. At first I thought it meant "Primitive"...?
I don't know why, but for me, setting up a character with the full face gives an error. After clicking, nothing happens, but at the bottom it just writes KeyError: ('tooltip'). Anyone know what the problem is? (It only happens when I want to use the full face with eyes, tongue, etc.) My version is 2022.1.0 RC3.
Looks good. Is there a way to transfer these animations from just the face to a whole character with a full body? (Preferably in Blender.)
Is there a way to use Audio2Gesture together with Audio2Face?
Idea: if the point ordering of the full body ran from the top of the head downwards... perhaps there is a vertex reorder tool to ensure this? Then, in Blender, chop off the head and bring it into Audio2Face; hopefully the point order is retained. To get the deformation back into Blender, perhaps use a custom connector from Omniverse to Blender, or export the face deformation animation as USD crate and bring it into Blender to drive the verts on the whole-body mesh. Just ideas.
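A minimal bpy sketch of that last idea, assuming the cached head and the body's head region share point order; the object names "HeadCache" and "Body" are hypothetical, not from the video:

```python
import bpy

# Copy animated vertex positions from the imported, cached head mesh onto
# the matching vertices of the full-body mesh, one keyframe per frame.
# Assumes the head's point order matches the first N vertices of the body.
head = bpy.data.objects["HeadCache"]  # hypothetical name of the imported cache
body = bpy.data.objects["Body"]       # hypothetical name of the full-body mesh
scene = bpy.context.scene

for frame in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(frame)
    deps = bpy.context.evaluated_depsgraph_get()
    head_eval = head.evaluated_get(deps).data  # mesh with the cache applied
    for i, v in enumerate(head_eval.vertices):
        body.data.vertices[i].co = v.co
        body.data.vertices[i].keyframe_insert("co", frame=frame)
```

Keyframing raw vertex coordinates like this gets heavy on dense meshes, but it keeps the sketch dependency-free; a Mesh Cache modifier would be the lighter route.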
I installed the program, but the head never appeared. I waited a long time, but there is no result. What should I do?
Do you have an RTX card? That was my problem. Have a look at the minimum hardware requirements...
I have a GTX 1070 card. Is that not enough?
@viktorshtam2104 I think you need at minimum an RTX 2060 or so.
This also works with the blendshape transfer, correct?
Is anyone else experiencing the Post Wrap CUDA error? Any solution? Thanks.
Any idea why my transferred character's head is all black? The original import had materials.
Probably there is some issue with the materials, like they aren't connected to the mesh in Omniverse, and/or the lighting. Check those things; maybe it will help. Good luck!
OK, how do you import the face animation into Unreal 5 MetaHumans?
Great tutorial, but a couple of things to add, perhaps, for improvement. Chapters would be an easy win; I had to watch this a couple of times, and they would have made it much easier to skip sections as needed. Also, when opening A2F it starts in an example scene; to follow this tutorial it is better to start a new file first, so your setup matches the tutorial itself.
Regarding A2F itself, the tracking seems accurate to the male example, but the male example has certain features that port to other faces, e.g. the face is asymmetrical in a very specific way (the mouth especially), and that perhaps carries over to other faces? Would a more symmetrical example lead to more "universal" tracking results?
Thanks for a very informative video. I tried this on a Daz3D Genesis 8.1 Male character in Blender 3.6. The Gen 8.1 eyes, tongue, and lower gum parts are not separated, but I was able to separate them manually. I applied the 52 ARKit blend shapes in Audio2Face. It works in Blender, except that the lower teeth and gums do not move with the jawOpen blend shape; I was able to fix that shape key in Blender.
How did you do that?
@MrTalkative_ To separate them manually: in Edit Mode, pick a face on the eye, then select by material, or press Ctrl+L (Select Linked), to select the whole eye. No need to separate the tongue and mouth.
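If you'd rather script the same separation, here's a rough bpy sketch; it assumes the eye faces use their own material slot, and the slot index is a placeholder:

```python
import bpy

# Select every face using the eye's material, then split it off.
obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='DESELECT')
obj.active_material_index = 2             # placeholder: index of the eye material slot
bpy.ops.object.material_slot_select()     # select all faces using that material
bpy.ops.mesh.separate(type='SELECTED')    # split the selection into a new object
bpy.ops.object.mode_set(mode='OBJECT')
```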
Thanks, but I know that
I mean, how were you able to make the tongue, eye, and jaw shape keys work perfectly in Audio2Face? The jaw, tongue, and eyes have been the major problems with this software.
@MrTalkative_ I switched to Wonder Dynamics studio. I will review my Audio2Face workflow. As far as I can remember, I only added the eyelashes and eyes; I did not configure the tongue and jaw in Audio2Face. I replaced the jawOpen blend shape: I made a new one using the Daz morph add-on, renamed it to match the name used by Audio2Face/ARKit, then deleted the old blend shape that wasn't working.
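In bpy, that rename-and-replace step looks roughly like this (a sketch; the object name and the replacement key's name are assumptions, while "jawOpen" is the standard ARKit blend shape name):

```python
import bpy

obj = bpy.data.objects["Genesis8_1Male"]   # assumed object name
keys = obj.data.shape_keys.key_blocks

obj.shape_key_remove(keys["jawOpen"])      # drop the broken blend shape
keys["jawOpen_new"].name = "jawOpen"       # assumed name of the Daz-morph replacement;
                                           # now Audio2Face/ARKit drives it by the expected name
```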
Exactly. I did what's in this tutorial several times in 2022.1.1 and couldn't get a correct result; beards, moustaches, eyebrows, and eyelashes don't work.
I believe grooms are not yet supported in A2F... same problem here.
@sinanrobillard2819 I have succeeded in implementing grooms, but not the eyes; it gives an error on the pivot point. I imported a static mesh; a skeletal mesh did not work.
@sahinerdem5496 Is the groom you imported a static mesh? I tried to import MetaHuman grooms but A2F couldn't render them. Any advice you could give me?
@sinanrobillard2819 Yes, I exported them as static meshes and converted the groom to a polygonal mesh.
Audio2Face doesn't recognize my audio files.
Not seeing the new release in the app.
It's there now. You might need to relaunch the Omniverse Launcher.
great tutorial but the lip smacking makes me want to scratch my ears off :)