Omniverse Audio2Face and Blender | Part 2: Loading AI-Generated Lip Sync Clips

  • Published 30 Oct 2024

COMMENTS • 29

  • @brehiner25 • 1 year ago • +14

    I think this video needs a third part, because we still need to configure the eyes, teeth, tongue, and even the eyebrows, hair, and eyelashes.

  • @NileshShahu-j8p • 1 year ago • +4

    This is not where we ended up in the last video. We did export the Blender shapes, but what comes after that? How did you get back to Audio2Face with the neat ending like that?

  • @jeanallien • 1 year ago • +7

    How can we animate the lower teeth, eyes, and tongue in Blender? Those are not moving in the video.

    • @valleybrook • 1 year ago • +1

      Good question, please answer, NVIDIA!

    • @mj4vr_jqcn • 8 months ago • +1

      Good question! I wanna know as well.

  • @rettbull9100 • 1 year ago • +1

    When I placed the rain_blendshapes_usdSkel.usd, it wasn't visible. I was able to import it into Blender, like in the previous video, and had the blendshapes. But when I brought it into the scene to do the blendshape conversion, it was invisible. I tried adding a shader and even started over 4 times, and it still came back invisible.
    Edit: so it wasn't invisible, it was just really small. In the properties of the object I had to add a transform op (translate, scale, and rotate).
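
    A minimal sketch of that transform fix on the USD side with the pxr Python API; the prim path is a hypothetical placeholder, and the rotate/scale values are the ones a later commenter reports for a Genesis 8.1 character (rotate -90 in X, scale 100):

      from pxr import Usd, UsdGeom, Gf

      stage = Usd.Stage.Open("rain_blendshapes_usdSkel.usd")
      prim = stage.GetPrimAtPath("/World/character")  # hypothetical prim path; use your own

      # Add the translate/rotate/scale ops the property-panel edit creates
      xform = UsdGeom.Xformable(prim)
      xform.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.0, 0.0))    # xformOp:translate
      xform.AddRotateXYZOp().Set(Gf.Vec3f(-90.0, 0.0, 0.0))  # xformOp:rotateXYZ
      xform.AddScaleOp().Set(Gf.Vec3f(100.0, 100.0, 100.0))  # xformOp:scale

      stage.GetRootLayer().Save()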

  • @Frostbitecgi • 1 year ago • +4

    The jaw is not moving with the mouth; it does not look good.

    • @valleybrook • 1 year ago

      Very important, and not mentioned in the video. I get the same issue with my character. How do I solve it?

    • @bigbrotherr • 3 months ago

      @valleybrook I wanted to try the app, but setting up the character is a very long process and the end result is terrible.

  • @theshizon • 1 year ago • +7

    Dude, you skip so many steps.

  • @AppleExpeditionProductions • 1 year ago

    I used a Genesis 8.1 Male character. It worked all the way until the last part. The file to use in the last part should be the one named ExportSkul, and it must be scaled 100%, rotated -90 in X. If it is not scaled to 100, you won't be able to see the mesh at all. Thanks for this video. (There is a short Blender-side sketch of this after the thread.)

    • @rettbull9100 • 1 year ago

      You got a Daz character to work? How? Exported to Blender, then to A2F? I tried and it didn't work; I got an error on the skin.

    • @AppleExpeditionProductions • 1 year ago

      @rettbull9100 I used the Daz to Blender Bridge addon. Make sure to add all the ARKit and other necessary morphs in Daz's "Send to Blender". I separated the eyes using Audio2Face; no need to separate the tongue and jaw. In Blender, re-create the "jaw open" shape key using Daz morphs, then rename it and replace the one from Audio2Face.
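
    A minimal Blender-side sketch of the scale/rotate fix above, assuming the imported mesh is the active object and that "scaled 100%" means a scale factor of 100:

      import bpy
      import math

      obj = bpy.context.active_object  # the imported usdSkel mesh

      # Scale to 100 and rotate -90 degrees around X, as described above
      obj.scale = (100.0, 100.0, 100.0)
      obj.rotation_euler[0] = math.radians(-90.0)

      # Bake the transform into the mesh so later steps see the real size
      bpy.ops.object.transform_apply(rotation=True, scale=True)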

  • @rettbull9100 • 4 months ago

    You did not end the last video by showing how the audio plays with the lip sync. YOU SKIPPED IT! How do you get the meshes to lip sync to the audio?

  • @ian2593 • 6 months ago • +1

    It's not clear. The skel file is the same one you exported to Blender. Export the full A2F cache on the data conversion tab. You can import it into Blender as a *.usd, then use a 'Copy Rotation' constraint to get your model moving, because the cached assets have animated eyes/teeth/gums. Make sure you set 'origin to geometry' on your original model's eyes (see the constraint sketch after this thread).

    • @josiahgil • 6 months ago

      Did you get the jaw and tongue working?

    • @ian2593 • 6 months ago

      @josiahgil Yes. It's the same process. The cached USD models have working tongues and jaws. Copy Rotation and Copy Location constraints work, but you have to make your models use their own centres as origins. There's a video from NVIDIA on how to do it with a Reallusion character from within Audio2Face, but it's almost impossible to follow.

    • @ian2593 • 6 months ago

      @josiahgil I found a better way. Set up your shape keys using Faceit (a Blender addon) and make sure they are set to 'ARKit'. There is an example 'Mark' project in Audio2Face that spits out ARKit-ready JSON files, and Faceit has an importer that loads everything, including jaw and eye movement. My object doesn't have a tongue, so I'm not sure about that part. Once the character is set up, you can append additional animations or just delete and replace the existing one.

    • @josiahgil • 6 months ago

      @ian2593 Thanks for the info. Before I jump into it, though: would I need to rename the blendshapes made with Faceit to match the names used by A2F for the JSON file to work, or does it work automatically? Also, are the blendshapes Faceit makes as good as the A2F shapes? I'm just wondering before I go all in.

    • @ian2593 • 6 months ago

      @josiahgil Hi. I tested it against the one I got working, and it's just as good. You select ARKit shape keys via Faceit, and A2F has an ARKit example template ready. So you only need to use your preferred WAV and export the JSON.
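
    A minimal sketch of the constraint setup described at the top of this thread, assuming the A2F cache USD has been imported alongside the character; the object names 'eye_L' and 'a2f_cache_eye_L' are hypothetical placeholders:

      import bpy

      eye = bpy.data.objects["eye_L"]               # your character's eye (hypothetical name)
      cached = bpy.data.objects["a2f_cache_eye_L"]  # animated eye from the imported A2F cache (hypothetical name)

      # The eye must pivot around its own centre for the copied rotation to look right
      bpy.ops.object.select_all(action='DESELECT')
      eye.select_set(True)
      bpy.context.view_layer.objects.active = eye
      bpy.ops.object.origin_set(type='ORIGIN_GEOMETRY', center='MEDIAN')

      # Drive the eye with the cached asset's rotation
      con = eye.constraints.new(type='COPY_ROTATION')
      con.target = cached

    The same pattern with a 'COPY_LOCATION' constraint covers the jaw and teeth, per the replies above.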

  • @hotsauce7124 • 1 year ago

    Is there a way to use text-to-lip-sync, similar to Adobe Character Animator's lip sync functionality? In case the lip sync does not work well, the user could get a transcript of the audio file and place visemes in the appropriate spots to refine the lip sync. Second question: is there a way to use a camera to mocap the eye blinks, eye looks, and eyebrows onto a 3D head?

  • @oinventario3926 • 1 year ago • +2

    It would be so much simpler if you could just use this program's default face to create the facial animation, export it as an "action" to Blender, go to the Blender model you transferred the shape keys to in the previous tutorial, and put the action on it, without having to create all these links. (A sketch of that idea is below.)
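
    A minimal sketch of that idea, assuming the A2F animation has already been imported into Blender as shape-key keyframes and that both meshes carry identically named shape keys; both object names are hypothetical placeholders:

      import bpy

      src = bpy.data.objects["a2f_default_face"]  # hypothetical: mesh holding the baked A2F animation
      dst = bpy.data.objects["my_character"]      # hypothetical: mesh given the shape keys in Part 1

      # Shape-key animation lives on the shape-key datablock, not on the object itself
      action = src.data.shape_keys.animation_data.action

      dst.data.shape_keys.animation_data_create()
      dst.data.shape_keys.animation_data.action = action

    This only works when the destination's shape-key names match the F-curve paths in the action, which is exactly why the shape keys were transferred in Part 1.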

  • @thejetshowlive • 5 months ago

    So... what does it even look like when it's all done?!