It would be so cool if Nvidia would release real documentation for this software. What's the point of releasing specialized software like this if no one is able to use it? Is it just to impress investors with graphics-accelerated AI tech? These tutorials are fine, but without knowing how the software works it's impossible to know how to fix the dozens of things that will go wrong while following this video, things that could be discussed in the documentation. I am aware that there is some documentation, but it literally just tells you the names of the buttons and the names of the various areas of the interface. I understand that a button that says "Proxie UI" is the Proxie UI button; how about explaining what that even means, or what it does?
Please make the Blendshape Export quality the same as in Audio2Face :( The facial animation always looks better in Audio2Face, but when you export the JSON animation to Unreal Engine, Maya, Blender, etc., the quality is lost and it doesn't look the same :/
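One way to narrow this down is to inspect the exported JSON itself before blaming the target app: if the per-frame weights already look flat there, the loss happens at export; if they look right, it's the import/retarget step. A minimal Python sketch, assuming the export uses facsNames / weightMat / exportFps keys (those key names are an assumption here, check them against your own file):

    import json

    # Load the Audio2Face blendshape export (file name is a placeholder).
    with open("a2f_bs_export.json") as f:
        data = json.load(f)

    names = data["facsNames"]        # assumed key: list of blendshape names
    weights = data["weightMat"]      # assumed key: one row of weights per frame
    fps = data.get("exportFps", 30)  # assumed key: playback rate of the clip

    print(f"{len(weights)} frames at {fps} fps, {len(names)} blendshapes")

    # Print the strongest blendshape per frame to see whether the detail
    # visible in Audio2Face actually survived the export.
    for i, row in enumerate(weights):
        top = max(range(len(names)), key=lambda j: row[j])
        print(f"frame {i}: {names[top]} = {row[top]:.3f}")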
Hey, how do you export Omniverse facial blendshape animation with a custom body and head rig in one skeleton into Unreal Engine?
@NVIDIA Omniverse I would like to know if it's possible to use the Audio2Face app to rig more toony-looking characters with more exaggerated facial features, like cartoon animals. Can we get some examples?
When I click "SETUP CHARACTER" the claire is selected by default. How can I select mark?
Can you please fix the eyelids? Both Mark and Claire have creases in their eyelids when closed. Mark’s is quite severe. And it would be awesome if the eyelids were shut for placement of the landmarks, similar to the mouth being opened.
THIS IS AMAZING
For some reason, the dots showing the locations of my correspondence points aren't showing up properly. No matter where I click on the faces, the dots all appear in exactly the same spot, way below the faces. Any idea how to fix this?
Which cards work well with Omniverse? I have a 1070, but I get the feeling it needs to be an RTX card. Is that correct?
RTX is required. You can check the download page.
coooool
What is the best way to get full-body animation onto my MetaHuman?
I tried to do audio 2 body in Machinima and then combine it with the streaming from Audio2Face into Unreal Engine.
But this is complicated and doesn't work that well, since the body skeleton in Machinima is quite different from the ones in UE5.
Something like Audio2Face + body would be amazing.
What is my best bet to get this?
Why not use the same skeleton?
Do you have a solution or even a tutorial link for that?
@chen8078
How do I import the character in the first place? Just File > Import FBX, or...?
I can only answer for Character Creator characters, but for CC you export as USD, and there is an A2F option.
There is no explanation anywhere of how to export to Maya or Unreal Engine.
I hate that none of the videos on this channel explain the skin mesh fitting. You say to just put down some points, but you never explain which points are important.
It's why Audio2Face hasn't caught on for commercial use.
This program is incredibly difficult to use, and the quality doesn't match the complexity. The tutorials feel half-hearted, making it hard to understand anything. The program itself is so unstable that just running it is a challenge. I'm using a 4070 Ti graphics card. Nvidia, please focus on producing graphics cards rather than distributing software like this.
Good! Can you make a cat?
All this complicated BS, and the lip sync isn't even realistic.