I think this video needs a third part, because we still need to configure the eyes, teeth, and tongue, and even the eyebrows, hair, and eyelashes.
This is not where we ended up in the last video. We did export the Blender shapes, but what comes after that? How did you get back to Audio2Face with the neat ending like that?
How can we animate the lower teeth, eyes, and tongue in Blender? Those are not moving in the video.
Good question, please answer, NVIDIA!
Good question! I wanna know as well
When I placed the rain_blendshapes_usdSkel.usd, it wasn't visible. I was able to import it into Blender, like in the previous video, and had the blendshapes. But when I brought it into the scene to do the blendshape conversion, it was invisible. I tried adding a shader. I even started over four times and it still came back invisible.
Edit: it wasn't actually invisible, it was just really small. In the Property panel of the object I had to add a transformOp > translate, scale, and rotate.
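For anyone who'd rather script that than click through the Property panel, here's a minimal sketch using the USD Python API. The prim path is just a guess; adjust it to whatever your stage actually contains.

```python
# Minimal sketch: add translate/rotate/scale xformOps to the imported prim,
# mirroring what "Add > Transform" does in the Omniverse Property panel.
# The file name comes from the comment above; the prim path is a placeholder.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("rain_blendshapes_usdSkel.usd")
prim = stage.GetPrimAtPath("/World/character")  # hypothetical prim path
xform = UsdGeom.Xformable(prim)

xform.AddTranslateOp().Set((0.0, 0.0, 0.0))
xform.AddRotateXYZOp().Set((0.0, 0.0, 0.0))
xform.AddScaleOp().Set((100.0, 100.0, 100.0))  # scale the tiny mesh up

stage.GetRootLayer().Save()
```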
The jaw is not moving with the mouth. It does not look good.
Very important and not mentioned in the video. I get the same issue with my character. How do I solve it?
@@valleybrook I wanted to try the app but setting up the character is a very long process and the end result is terrible.
dude, you skip so many steps.
I used the Genesis 8.1 Male character. It worked all the way until the last part. The file to be used in the last part should be the one named ExportSkul, and it must be scaled to 100% and rotated -90 in X. If it isn't scaled to 100, you won't be able to see the mesh at all. Thanks for this video.
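If you'd rather set those values by script, something like this should work on a transform that already has the ops (file and prim path are placeholders; if the ops don't exist yet, add them as in the sketch earlier in the thread):

```python
# Sketch: set the scale/rotation described above on the ExportSkul prim.
# Assumes the translate/rotate/scale xformOps already exist on the prim.
from pxr import Usd, UsdGeom

stage = Usd.Stage.Open("ExportSkul.usd")
xform = UsdGeom.Xformable(stage.GetPrimAtPath("/World/ExportSkul"))

ops = {str(op.GetOpName()): op for op in xform.GetOrderedXformOps()}
ops["xformOp:rotateXYZ"].Set((-90.0, 0.0, 0.0))  # rotate -90 in X
ops["xformOp:scale"].Set((100.0, 100.0, 100.0))  # scale to 100%

stage.GetRootLayer().Save()
```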
You got a Daz character to work? How? Did you export it to Blender and then to A2F? I tried and it didn't work; I got an error on the skin.
@@rettbull9100 I used the Daz to Blender Bridge addon. Make sure to add all the ARKit and other necessary morphs in Daz's "Send to Blender". I separated the eyes using Audio2Face. There's no need to separate the tongue and jaw. In Blender, re-create the "jaw open" shape key using Daz morphs, then rename it and replace the one from Audio2Face.
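In case it helps, a rough Blender Python sketch of that rename-and-replace step. The object and morph names here are just examples; check your own shape-key list in the Object Data properties.

```python
# Sketch: drop the Audio2Face-generated "jawOpen" key and promote a
# Daz morph in its place. All names below are placeholders.
import bpy

obj = bpy.data.objects["Genesis8_1Male"]   # hypothetical object name
keys = obj.data.shape_keys.key_blocks

old = keys.get("jawOpen")                  # the A2F key to replace
if old is not None:
    obj.shape_key_remove(old)

daz = keys.get("facs_bs_JawOpen")          # hypothetical Daz morph name
if daz is not None:
    daz.name = "jawOpen"                   # take over the A2F name
```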
You did not end the last video showing how the audio plays with the lip sync. YOU SKIPPED IT! How do you get the meshes to lip-sync to the audio?!
It's not clear. The skel file is the same one you exported to Blender. Export the full A2F cache on the data conversion tab. You can import it into Blender as a *.usd, then use a 'Copy Rotation' constraint to get your model moving, because the cached assets have animated eyes and teeth/gums. Make sure you set 'Origin to Geometry' on your original model's eyes.
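A rough Blender sketch of that setup, with placeholder object names (the cached object is whatever your *.usd import created):

```python
# Sketch: drive your model's eye with the cached A2F eye mesh via a
# Copy Rotation constraint, after recentring the eye's origin.
import bpy

eye = bpy.data.objects["MyModel_Eye_L"]  # your model's eye (placeholder)
cached = bpy.data.objects["A2F_Eye_L"]   # cached mesh from the usd import

# "Origin to Geometry", as described above, so the eye pivots correctly.
bpy.ops.object.select_all(action='DESELECT')
eye.select_set(True)
bpy.context.view_layer.objects.active = eye
bpy.ops.object.origin_set(type='ORIGIN_GEOMETRY')

con = eye.constraints.new(type='COPY_ROTATION')
con.target = cached
```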
Did you get the jaw and tongue working?
@@josiahgil Yes. It's the same process. The cached USD models have working tongues and jaws. Copy Rotation and Copy Location constraints work, but you have to make your models use their own centres as origins. There's a video from NVIDIA on how to do it with a Reallusion character from within Audio2Face, but it's almost impossible to follow.
@@josiahgil I found a better way. Set up your shape keys using Faceit (a Blender addon) and make sure they are set to 'ARKit'. Then there's an example 'Mark' project in Audio2Face that spits out ARKit-ready JSON files. Faceit has an importer that loads everything, including jaw and eye movement. My object doesn't have a tongue, so I'm not sure about that part. Once the character is set up, you can append additional animations or just delete and replace the existing one.
@ian2593 Thanks for the info. Before I jump into it, though: would I need to change the names of the blendshapes made with Faceit to match the names used by A2F for the JSON file to work, or does it work automatically? Also, are the blendshapes Faceit makes as good as the A2F shapes? I'm just wondering before I go all in.
@josiahgil Hi. I tested it against the one I got working and it's just as good. You select ARKit shape keys via Faceit, and A2F has an ARKit example template ready. So you only need to use your preferred WAV and export the JSON.
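On the name-matching question: if the names line up on both sides, you can even drive the shape keys straight from the exported JSON with a small script. A hedged sketch; the JSON keys ("facsNames", "weightMat") and all paths are assumptions, so open your own export file and adjust to whatever it actually contains.

```python
# Sketch: apply an A2F blendshape-weight JSON export to ARKit shape keys
# in Blender. The JSON layout assumed here is a guess -- inspect your
# exported file and change the keys below if they differ.
import json
import bpy

obj = bpy.data.objects["FaceitHead"]     # hypothetical mesh with ARKit keys
keys = obj.data.shape_keys.key_blocks

with open("/tmp/a2f_export.json") as f:  # hypothetical export path
    data = json.load(f)

names = data["facsNames"]    # assumed: blendshape name per column
weights = data["weightMat"]  # assumed: one row of weights per frame

for frame, row in enumerate(weights, start=1):
    for name, value in zip(names, row):
        key = keys.get(name)
        if key is None:
            continue  # names should match if both sides use ARKit
        key.value = value
        key.keyframe_insert("value", frame=frame)
```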
Is there a way to use text-based lip sync, similar to Adobe Character Animator's lip-sync functionality? In case the lip sync doesn't work well, the user could get a transcript of the audio file and place visemes in the appropriate spots to refine it. Second question: is there a way to use a camera to mocap the eye blinks, eye looks, and eyebrows onto a 3D head?
It would be so much simpler if you could just use this program's default face to create the facial animation, then export it as an "action" to Blender, go to the Blender model you transferred the shape keys to in the previous tutorial, and put the action on it, without having to create all these links.
damn, does this work?
@@DoubleYolk no, it was a suggestion
So...what does it even look like when it's all done?!