Tutorials: how to use the plugin

  • Published 4 Jun 2024
  • MetaHuman SDK is an automated AI solution for generating realistic animation for characters. This Unreal Engine plugin allows you to create and use lip sync animation generated by our cloud server.
    We have prepared a detailed tutorial describing how to use our plugin:
    - integrate TTS
    - add audio to lip sync
    - add audio to lip sync streaming
    - integrate a chat bot
    - combine everything into a single combo request
    The tutorial is shown in UE 5.1; the plugin also supports all previous versions of Unreal Engine.
    Try it yourself and share your impressions in the comments.
    Timecode:
    00:00 Intro
    00:30 Create new project
    01:06 Choosing an Avatar
    01:42 Text To Speech
    03:56 Audio to Lip Sync
    07:15 Audio to Lip Sync Streaming
    12:07 ChatBot Integration
    14:08 How to use combo request
    16:44 Custom Rig Integration
    Link to our discord: discord.com/invite/kubCAZh37D....
    Link to our website: metahumansdk.io/
    Get the plugin for free from Unreal Engine marketplace: www.unrealengine.com/marketpl...
    Official documentation: docs.metahumansdk.io/metahuma...
    #unrealengine #metahuman #MetaHumanSDK #digitalavatar

COMMENTS • 209

  • @arielshpitzer
    @arielshpitzer 10 months ago

    It's updated. I think I saw a different video that looked almost the same. Amazing work!

  • @AltVR_YouTube
    @AltVR_YouTube 1 year ago +12

    Thanks for this perfect tutorial! You should really consider making these videos publicly findable. Paid alternatives will show up in search results, but not this SDK. Also, it would be awesome if these could be uploaded in 1440p or 4K in the future for better blueprint text readability.

  • @user-qw3cq1bg9s
    @user-qw3cq1bg9s 1 year ago +4

    This is mind-blowing!!!!!!

  • @honglabcokr
    @honglabcokr 1 year ago +1

    Thank you so much!

  • @TheAIAndy
    @TheAIAndy 11 months ago +1

    LOVE this tutorial, thank you so much! I am wondering if you would consider making a tutorial on how you got them to sit as a presenter, including face & body animation + studio + camera angles? Also... I don't know if this is out of reach, but can you get the hands to gesture based on the loudness or audio waves? Love your plugin, trying to do a bunch of cool things with it. Thank you so much for these & the newest tutorials!

    • @metahumansdk
      @metahumansdk  11 months ago +2

      Hi!
      In this tutorial we used a regular Control Rig to add poses on the Sequencer timeline and made the body animation manually.

    • @TheAIAndy
      @TheAIAndy 10 months ago

      @@metahumansdk haha as a beginner I have no idea what that means 😂 I'll try to find a tutorial searching some of the words you said

    • @metahumansdk
      @metahumansdk  9 months ago +1

      When you add a MetaHuman to a level sequence you can see that it has a Control Rig, and you can set any pose for all parts of the MetaHuman's body.
      You can get more information about Control Rig here: docs.unrealengine.com/5.2/en-US/control-rig-in-unreal-engine/

  • @mn04147
    @mn04147 1 year ago +1

    Thanks for your great plugin!

  • @dome7415
    @dome7415 1 year ago +1

    awesome thx!

  • @user-dm1iy6nm8b
    @user-dm1iy6nm8b 1 year ago +1

    Hi, thank you for this detailed tutorial! I'm trying to create lip sync only from text input, without using the bot. I want to avoid the delay from the TTS function as much as possible. Is it possible to create a buffer that sends chunks of sound to the ATL while TTS is still working (like you did with the ATL stream)? (I'm kind of a beginner in this field.)

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! Currently our plugin just sends the full message to the TTS services, but you can split the text and send smaller parts manually.
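
      A minimal sketch of that manual splitting, in plain C++ (a hypothetical helper, not part of the MetahumanSDK API): break the text into sentence-sized chunks so each one can be sent to TTS while the previous chunk is already being animated.

      ```cpp
      #include <string>
      #include <vector>

      // Split text on sentence terminators; each chunk becomes one TTS request.
      std::vector<std::string> SplitIntoSentences(const std::string& Text)
      {
          std::vector<std::string> Chunks;
          std::string Current;
          for (char Ch : Text)
          {
              Current += Ch;
              if (Ch == '.' || Ch == '!' || Ch == '?')
              {
                  Chunks.push_back(Current);
                  Current.clear();
              }
          }
          if (!Current.empty())
              Chunks.push_back(Current); // trailing text without a terminator
          return Chunks;
      }
      ```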

  • @flytothetoon
    @flytothetoon 1 year ago +3

    Lip sync looks perfect! The description of your plugin says that it "supports different face emotions". Is it possible with MetaHuman SDK to generate emotions from audio speech, like with NVIDIA Omniverse? Can MetaHuman SDK even create facial animation with blinking eyes?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi Fly to the Toon!
      You can select eye blinking in the ATL; it also works for the ATL nodes.

  • @jumpieva
    @jumpieva 1 year ago +1

    The thing I have a problem with is that the facial animations are getting more realistic, but the stilted, non-human-sounding audio is not reconciling well. Is this an option that will be fine-tuned enough to make it usable for cinematics/close-up dialogue?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! You can choose different TTS options such as Google, Azure and others.

  • @ffabiang
    @ffabiang 1 year ago

    Hi, thank you so much for this video, it is really useful. Can you share some facial idle animations for our project to play while the TTS->Lipsync process is running? Or do you know where we can find some of those?

    • @metahumansdk
      @metahumansdk  1 year ago +4

      Hi ffabiang, you can use a wav file without sound to generate a facial animation from our SDK, then use it in your project as an idle 😉
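
      For reference, a silent wav like that can also be generated in code. A sketch in standalone C++ (the helper name is ours; it just writes a standard 16-bit PCM mono RIFF/WAVE file of zeroed samples, assuming a little-endian platform):

      ```cpp
      #include <cstdint>
      #include <fstream>
      #include <vector>

      // Write Seconds of digital silence as a 16-bit PCM mono WAV file.
      void WriteSilentWav(const char* Path, uint32_t SampleRate, uint32_t Seconds)
      {
          const uint16_t Channels = 1, BitsPerSample = 16, PcmFormat = 1;
          const uint16_t BlockAlign = Channels * (BitsPerSample / 8);
          const uint32_t ByteRate = SampleRate * BlockAlign;
          const uint32_t DataSize = ByteRate * Seconds;
          const uint32_t RiffSize = 36 + DataSize, FmtSize = 16;

          std::ofstream Out(Path, std::ios::binary);
          Out.write("RIFF", 4);
          Out.write(reinterpret_cast<const char*>(&RiffSize), 4);
          Out.write("WAVEfmt ", 8);
          Out.write(reinterpret_cast<const char*>(&FmtSize), 4);
          Out.write(reinterpret_cast<const char*>(&PcmFormat), 2);
          Out.write(reinterpret_cast<const char*>(&Channels), 2);
          Out.write(reinterpret_cast<const char*>(&SampleRate), 4);
          Out.write(reinterpret_cast<const char*>(&ByteRate), 4);
          Out.write(reinterpret_cast<const char*>(&BlockAlign), 2);
          Out.write(reinterpret_cast<const char*>(&BitsPerSample), 2);
          Out.write("data", 4);
          Out.write(reinterpret_cast<const char*>(&DataSize), 4);
          const std::vector<char> Silence(DataSize, 0); // all-zero samples
          Out.write(Silence.data(), static_cast<std::streamsize>(Silence.size()));
      }
      ```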

    • @ffabiang
      @ffabiang 1 year ago

      @@metahumansdk Hi, when I import an empty audio file (1 min long) and use the "Create Lipsync Animation" option, I get a facial animation that is almost perfect, but the metahuman's mouth opens continuously and moves as if he is about to say something. Is there a parameter that can fix that?

  • @user-zp6jb5dw1l
    @user-zp6jb5dw1l 1 year ago

    Excuse me, is the facial expression in your video generated by Metahuman SDK automatically while speaking? Or was it processed by other software? When using ChatGPT for real-time voice-driven input, can the model achieve the same level of facial expressions as yours? Thank you.

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! You can choose different emotions at the moment of lip sync generation from audio (the speech to animation stage).

  • @realskylgh
    @realskylgh 10 months ago

    Great, does the combo do the ATL Streaming things as well?

    • @metahumansdk
      @metahumansdk  10 months ago

      Hi!
      We are working on it. If all goes well, we will add it in one of the next releases for 5.2.

  • @uzaker6577
    @uzaker6577 10 months ago

    Nice tutorial, very interesting and useful. I'm wondering, is there any solution for ATL speed? Mine works slowly; it takes nearly 10 seconds to generate an animation.

    • @metahumansdk
      @metahumansdk  10 months ago

      Hi!
      The delay highly depends on the network connection and the length of the sound.
      Can you share more details in our discord community about the ATL/Combo nodes and the sound files that you are using in your project?
      We will try to help.

  • @damncpp5518
    @damncpp5518 4 days ago

    I'm on UE 5.3.2 and the Play Animation node is not found. I only get Play Animation with Finished Event and Play Animation Time Range with Finished Event... They are not compatible with the Get Face node and the MetaHuman SDK combo output animation.

  • @lukassarralde5439
    @lukassarralde5439 10 months ago

    Hi. This is a great video tutorial. Could you please share how to do this setup PLUS add a TRIGGER volume to the scene? Ideally, I would like to have a first-person or third-person character game where, when the character enters the VOLUME TRIGGER, the trigger starts the MetahumanSDK talking. Can you show us how to do that in the BP? Thank you!!

    • @metahumansdk
      @metahumansdk  10 months ago

      Well, I think you can start from the audio triggers described in the UE documentation: docs.unrealengine.com/4.26/en-US/Basics/Actors/Triggers/
      I'll ask the team about use cases for games; maybe we can create a tutorial about it.
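
      A rough C++ sketch of that trigger idea (the tutorial itself works in Blueprints; AMyNpcActor, TalkTrigger and StartTalking are hypothetical names, and only the overlap delegate is standard Unreal API):

      ```cpp
      #include "Engine/TriggerBox.h"
      #include "GameFramework/Actor.h"

      // Header declares: UPROPERTY(EditAnywhere) ATriggerBox* TalkTrigger;
      // and UFUNCTION() void OnTriggerBegin(AActor* OverlappedActor, AActor* OtherActor);

      void AMyNpcActor::BeginPlay()
      {
          Super::BeginPlay();
          if (TalkTrigger)
          {
              // Fires when any actor enters the trigger volume placed in the level.
              TalkTrigger->OnActorBeginOverlap.AddDynamic(this, &AMyNpcActor::OnTriggerBegin);
          }
      }

      void AMyNpcActor::OnTriggerBegin(AActor* OverlappedActor, AActor* OtherActor)
      {
          if (OtherActor && OtherActor->ActorHasTag(TEXT("Player")))
          {
              StartTalking(); // hypothetical: kick off the TTS/ATL request here
          }
      }
      ```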

  • @borrowedtruths6955
    @borrowedtruths6955 9 months ago

    I must be missing something, I have to delete the Face_ControlBoard_CtrlRig in the sequencer after adding the Lipsync Animation, or the Metahuman character will not animate. I have no control over the face rig. Is there a way to have both?

    • @metahumansdk
      @metahumansdk  9 months ago

      Hi! In the Sequencer, the Control Rig overrides animation, so you need to turn off the Control Rig or delete it if you want to use a prepared animation on the avatar's face or body.

  • @user-pf2se2df8v
    @user-pf2se2df8v 11 months ago

    Is it possible to display the finished digital human package, including its lip sync animation and perhaps GPT integration, on a mobile device? Would the rendering be client- or server-side?

    • @metahumansdk
      @metahumansdk  11 months ago

      Hi! It depends on your solution. You can make a stream and render on a server, or you can make an app that uses the client device's resources.

  • @corvetteee1
    @corvetteee1 7 months ago

    Quick question. How can I add an idle animation to the body? When I've tried it so far, the head comes off the model. Thanks for any help!

    • @metahumansdk
      @metahumansdk  7 months ago

      Hi!
      You need to add a Slot 'DefaultSlot' node between the ARKit input and the Blend Per Bone node, and blend through the Root bone. Here is one discussion about it on our discord server: discord.com/channels/1010548957258186792/1155594088020705410/1155844761056460800
      We also showed another, more difficult way with State Machines: ua-cam.com/video/oY__OZAa0I4/v-deo.html&lc=UgzNwmwaQIB3hOhKE7F4AaABAg

  • @AICineVerseStudios
    @AICineVerseStudios 8 months ago +1

    Hi there, the plugin is great and it really works well. However, after 10 to 15 generations of facial animations, I am getting an error message that I ran out of tokens. Also, from your website it's not clear if this is a paid service or not. For testing, how many tokens does one have? And if the tokens run out, what should one do? Can this plugin be used in a production-grade application? Although I am just doing a POC as of now, I want to be sure about your offering.

    • @metahumansdk
      @metahumansdk  8 months ago +1

      Hi!
      At the moment there are no limits. Probably your token was generated before we introduced personal accounts. We made a few announcements in our discord that tokens not linked to personal accounts at space.metahumansdk.io/ no longer work.
      Here is the video about attaching a token or generating a new one in the personal account: ua-cam.com/video/3wmmaE-8aoE/v-deo.html&lc=UgxrVCl4HvIS5P9loWR4AaABAg&ab
      If it doesn't help, please tell us and we will try to help with your issue.

  • @realskylgh
    @realskylgh 10 months ago

    I have a question. When using ATL Stream, the moment the sound wave comes in, the digital human pauses for 3 or 4 seconds; it seems to be preparing the animation. How can I avoid this strange pause?

    • @metahumansdk
      @metahumansdk  10 months ago

      Hi! We are working on the delays, but in the current version 3-4 seconds for the 1st chunk is normal.

  • @v-risetech1451
    @v-risetech1451 1 year ago

    Hi,
    when I try to do the same things from the last tutorial, I can't see mh_ds_mapping in my project. Do you know how to solve this?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi V-Risetech!
      Please select Show Engine Content in the Content Browser settings; it should help.
      We also sent a screenshot for the same request in our discord: discord.com/channels/1010548957258186792/1067744026469601280/1068066997675495504

  • @danD315D
    @danD315D 1 year ago +2

    Is it possible for audio to lip sync to work on other 3D character models, rather than MetaHuman ones?

    • @metahumansdk
      @metahumansdk  1 year ago +1

      Hi!
      Sure it is! In the plugin files you can find a face example that uses a custom mesh. Use an ARKit- or FACS-rigged model to use animations from the MetahumanSDK.

  • @ahmedismail772
    @ahmedismail772 1 year ago

    It's so useful and informative, thank you very much. I have a small question: can we add other languages to the list? I didn't find them in the (EChat language enum).

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! You can use most languages from Azure or Google TTS via their voice IDs. You can find an example using the demo scenes included in the MetahumanSDK plugin here (updated): ua-cam.com/video/cC2MrSULg6s/v-deo.html

    • @ahmedismail772
      @ahmedismail772 1 year ago

      @@metahumansdk the link guides me to a private video

    • @metahumansdk
      @metahumansdk  1 year ago

      @Ahmed Ismail my bad, replaced it with the correct link: ua-cam.com/video/cC2MrSULg6s/v-deo.html

  • @ragegohard9603
    @ragegohard9603 1 year ago

    👀 wow !

  • @kreamonz
    @kreamonz 15 days ago

    Hello! I generated a face animation and audio file (the time in the video is 5:08). When I open it, the file is only 125 frames, although the audio lasts much longer. In the sequencer I add the audio and the generated animation, and the animation is much shorter; when I stretch the track, the animation repeats from the beginning. Please tell me how to adjust the number of frames per second.

    • @kreamonz
      @kreamonz 15 days ago

      I mean, how to edit the number of sampled keys/frames

  • @guilloisvincent2286
    @guilloisvincent2286 1 year ago

    Would it be possible to put a TTS (like MaryTTS) or an LLM (like Llama) in the C++ code, to avoid network calls and keep it free?

    • @metahumansdk
      @metahumansdk  1 year ago

      You can find detailed usage instructions on the official websites of MaryTTS and Llama. It would be great if you could share your final project with us.
      As for avoiding the internet: currently our SDK works only with an internet connection, but you can generate a pool of facial animations for your project and then use those animations offline.

  • @devpatel8276
    @devpatel8276 1 year ago

    Thanks a lot for the tutorial! I have a problem: the combo request has a longer delay. How can we do the audio to lip sync streaming (the chunk-dividing mechanism) using a combo request?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! To use the generated audio in parts, first you need to call the Text To Speech function and then call the ATL stream function.
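
      The call order, as a conceptual C++ sketch (RequestTextToSpeech and RequestAtlStream are hypothetical stand-ins for the plugin's TTS and ATL Stream nodes, shown only to illustrate the sequencing):

      ```cpp
      // Generate the full audio first, then hand it to ATL Stream, which
      // returns animation in chunks so playback can start before the whole
      // clip has been processed.
      void AMyNpcActor::SpeakStreamed(const FString& Text)
      {
          RequestTextToSpeech(Text, [this](USoundWave* GeneratedAudio)
          {
              RequestAtlStream(GeneratedAudio); // animates chunk by chunk
          });
      }
      ```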

    • @devpatel8276
      @devpatel8276 1 year ago

      @@metahumansdk And that can't be done by combo right?

    • @metahumansdk
      @metahumansdk  11 months ago

      You can add the same pipeline but connect it to another head, so you can use several metahumans at the same time.

  • @SaadSohail-ug9fl
    @SaadSohail-ug9fl 1 month ago

    Really good tutorial! Can you also tell me how to achieve body and head motion with facial expressions while the metahuman is talking? Just like the talking metahumans in your video.

    • @metahumansdk
      @metahumansdk  28 days ago

      Hi!
      You can generate animation with emotions from our plugin, or use additive blending to add your own emotions directly to selected blend shapes.

  • @rajeshvaghela2772
    @rajeshvaghela2772 9 months ago

    Great tutorial. I got a perfect lip sync, but the only issue is that the animation doesn't stop after the sound completes. Can you help me out?

    • @metahumansdk
      @metahumansdk  9 months ago

      Hi!
      Please share your blueprints on our Discord server discord.gg/MJmAaqtdN8 or by mail to support@metahumansdk.io
      You can also check out the included demo scenes in the UE Content Browser: All > Engine > Plugins > MetahumanSDK Content > Demo

  • @bruninhohenrri
    @bruninhohenrri 1 month ago

    Hello, how can I use the ATLStream animation with an Animation Blueprint? Metahumans have a post-processing AnimBP, so if I run the raw animation it basically messes up the body animations.

    • @metahumansdk
      @metahumansdk  1 month ago

      Hi!
      Please try starting from the Talk Component. This is the easiest way to use the streaming options.
      Here is a tutorial about it: ua-cam.com/video/jrpAJDIhCFE/v-deo.html
      If you still have issues, please visit our discord: discord.gg/MJmAaqtdN8

  • @NeoxEntertainment
    @NeoxEntertainment 7 months ago +1

    Hey, great tutorial, but I can't find the mh_dhs_mapping in the PoseAsset of the Make ATL Mappings Info node at 8:41, and I guess that's why the lip sync doesn't work on my end.
    Does anyone know where I can find it?

    • @metahumansdk
      @metahumansdk  7 months ago +1

      Hi!
      Please open the Content Browser settings and enable Engine and Plugins content as in the screenshot:
      cdn.discordapp.com/attachments/1148305785080778854/1148984020798021772/image.png?ex=65425cc1&is=652fe7c1&hm=e75cc52cd3ece4f43e143a87745fd25fd2b78032fa09c3b2d931bf50e68a0b45&

  • @NiksCro96
    @NiksCro96 3 months ago

    Hi, is there a way to do audio input as well as text input? Also, is there a way for the answer to be written as text in a widget blueprint?

    • @metahumansdk
      @metahumansdk  3 months ago

      Hi!
      You can send a 16-bit PCM wave to the ATL/Combo nodes on the Lite, Standard and Pro tariffs; if you are using the Chatbot tariff plan, you can use the ATL Stream or Combo Stream nodes.
      I also recommend using the Talk Component, because it makes your work with the plugin much easier. We have a tutorial about the Talk Component here: ua-cam.com/video/jrpAJDIhCFE/v-deo.html

  • @hardikadoshi3568
    @hardikadoshi3568 3 months ago

    I wonder if there is anything similar for the Unity platform as well? It would be great if support were available, as the avatars look great.

    • @metahumansdk
      @metahumansdk  3 months ago

      Hi! At the moment we are only working with Unreal Engine. We may consider other platforms in the future, but there are no specifics about other platforms yet.

  • @enriquemontero74
    @enriquemontero74 1 year ago

    Hello, one question: is this compatible with the ElevenLabs API? Or voice notes? Thanks.

    • @metahumansdk
      @metahumansdk  1 year ago +1

      Hi!
      If they produce 16-bit wav files, you can easily use them with our MetahumanSDK plugin.

  • @CanCan-gy5hh
    @CanCan-gy5hh 1 year ago

    Hi, I want the metahuman to voice the text I entered in the field below, but only the sound is working, no face animation. Can you help me solve it?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi!
      You can try our demo scenes, which are included in the plugin content, and compare the level blueprints. You can also join our Discord community and share more details about your issue: discord.gg/MJmAaqtdN8

  • @luchobo7455
    @luchobo7455 11 months ago

    Hi, I really need your help: at 6:29 I drag and drop my BP_metahuman, but it is not showing up in the blueprint. I don't know why.

    • @metahumansdk
      @metahumansdk  11 months ago

      Hi!
      You need to use the metahuman from the Outliner of your scene, not directly from the Content Browser.

  • @charleneteets8227
    @charleneteets8227 10 months ago

    When I try to put in an idle animation, the head breaks off to respond and won't idle with the body! Not sure how to proceed. It would be great if you had a video on adding an idle animation next.

    • @metahumansdk
      @metahumansdk  10 months ago

      Hi!
      You can try this video to fix the head: ua-cam.com/video/oY__OZAa0I4/v-deo.html&lc=Ugz9BC

  • @asdfasdfsd
    @asdfasdfsd 11 months ago

    Why doesn't it show the 'plugins' and 'engine' folders like yours after I create a new blank project? If I need to add them manually, how and where do I get them?

    • @metahumansdk
      @metahumansdk  11 months ago

      You need to enable them in the settings of the Content Browser window.

  • @jaykunwar3312
    @jaykunwar3312 9 months ago

    Can we make a build (exe) using metahumansdk in which we can upload audio and the metahuman starts speaking with a body idle animation? Please help.

    • @metahumansdk
      @metahumansdk  9 months ago

      Hi!
      Sure, we released a demo project with all those functions yesterday and shared it in our discord: discord.com/channels/1010548957258186792/1068067265506967553/1143934803197034637

  • @juanmacode
    @juanmacode 1 year ago

    Hi, I have a project and I'm trying to do the lip sync in real time, but I get this error; does anyone know why? "Can't prepare ATL streaming request with provided sound wave!"

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! Could you please specify how you are generating the soundwave and provide logs if possible?

  • @unrealvizzee
    @unrealvizzee 1 year ago

    Hi, I have a non-Metahuman character with ARKit expressions (from Daz Studio). How can I use this plugin with my character?

    • @metahumansdk
      @metahumansdk  1 year ago

      You need to use your avatar's skeleton in the ATL node and the ARKit mapping mode.
      You can find examples of level blueprints in the plugin files included in every plugin version. In most of them we use a custom head.

  • @anveegsinha4120
    @anveegsinha4120 3 months ago +2

    2:12 Hi, I don't see the Create Speech from Text option. I have added the API key as well.

    • @metahumansdk
      @metahumansdk  3 months ago

      Hi!
      Did you try it on a wav file?

  • @sumitranjan7005
    @sumitranjan7005 1 year ago

    This is a great plugin with detailed functionality. Is it possible to integrate our own custom chatbot API? If yes, please share a video.

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! You can use any solution: just connect your node's text output to the TTS node and then use the regular pipeline with ATL.
      As an example, you can use this tutorial where we use the OpenAI plugin for the chatbot: ua-cam.com/video/kZ2fTTwu6BE/v-deo.html
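
      A sketch of the "connect your own chatbot" half in Unreal C++ (FHttpModule is standard engine API; the endpoint URL, payload shape and SendToTextToSpeech are assumptions for illustration):

      ```cpp
      #include "HttpModule.h"
      #include "Interfaces/IHttpRequest.h"
      #include "Interfaces/IHttpResponse.h"

      // Requires "Http" in the module's PublicDependencyModuleNames.
      void AMyNpcActor::AskCustomChatbot(const FString& UserText)
      {
          TSharedRef<IHttpRequest, ESPMode::ThreadSafe> Request = FHttpModule::Get().CreateRequest();
          Request->SetURL(TEXT("https://example.com/chatbot")); // hypothetical endpoint
          Request->SetVerb(TEXT("POST"));
          Request->SetHeader(TEXT("Content-Type"), TEXT("application/json"));
          Request->SetContentAsString(FString::Printf(TEXT("{\"message\":\"%s\"}"), *UserText));
          Request->OnProcessRequestComplete().BindLambda(
              [this](FHttpRequestPtr Req, FHttpResponsePtr Resp, bool bOk)
              {
                  if (bOk && Resp.IsValid())
                  {
                      // Hand the chatbot's text reply to the TTS -> ATL pipeline.
                      SendToTextToSpeech(Resp->GetContentAsString()); // hypothetical
                  }
              });
          Request->ProcessRequest();
      }
      ```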

  • @rafaeltavares6162
    @rafaeltavares6162 1 year ago

    Hello, I followed all the steps, but my Metahuman has a problem with the voice playback. In short, when I enter the game my character starts talking, and after a few seconds the audio starts again; it's as if there were two audio tracks on top of each other.
    I don't know if this has happened to anyone else.
    Can you give me some advice to solve this problem?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi!
      Is it possible to share the blueprint on our discord server?
      You can also try to use a state machine and synchronize the face animation with the audio file as shown in this video: ua-cam.com/video/oY__OZAa0I4/v-deo.html

  • @ai_and_chill
    @ai_and_chill 1 year ago

    How do we get our animations to look as good as the one in this video of the woman in front of the blue background? The generated animations are good, but not as expressive as hers. It looks like you're still using the lip sync animation code, but you're having her eyes stay focused on the viewer. How are you doing that?

    • @metahumansdk
      @metahumansdk  1 year ago +1

      We use a postprocess blueprint for the eye focus locations. You can find an example here: discord.com/channels/1010548957258186792/1089932778981818428/1089940889192898681
      For the animation we use the EPositive emotion, so it looks more expressive in our opinion.

  • @SKDyiyi
    @SKDyiyi 11 months ago

    Hello, your plugin is very useful. I am using a self-designed model with ARKit. However, I have encountered a problem. I can generate facial movements smoothly, but I lack neck movements. Is there a solution to this? My model does not split the head from the body.

    • @metahumansdk
      @metahumansdk  11 months ago

      Hi! If your avatar does not have a separated model, you can blend an animation for the body and neck with our facial animation.

    • @SKDyiyi
      @SKDyiyi 11 months ago

      @@metahumansdk Yes, I do that now. Meaning if I don't separate my head from my body, I won't be able to generate neck motion automatically through the plugin?

    • @metahumansdk
      @metahumansdk  11 months ago

      You can check Neck Movement in the ATL node to add it to the animation in the MetahumanSDK plugin.

  • @blommer26
    @blommer26 6 months ago

    Hi, great tutorial. At minute 05:07, when I tried to create a lipsync animation from my audio, UE 5.1.1 created the file (with the extension .uasset) but it did not show up in my assets. Any idea?

    • @metahumansdk
      @metahumansdk  6 months ago

      Hi!
      Can you please share more details? It would be great if you could attach the log file of your project (the path looks like ProjectName\Saved\Logs\ProjectName.log) and send it to us for analysis in our discord discord.gg/MJmAaqtdN8 or to support@metahumansdk.io

    • @Ali_k11
      @Ali_k11 4 months ago

      I have the same problem

    • @metahumansdk
      @metahumansdk  4 months ago

      Hi!
      @Ali_k11, can you give some details about your issue?

  • @phantomebo6537
    @phantomebo6537 6 months ago

    I generated the LipSync Animation just like at 19:00 and the animation preview seems fine, but when I drag and drop it onto the MetaHuman face, the animation doesn't work. Can someone tell me what I am missing here?

    • @metahumansdk
      @metahumansdk  6 months ago

      Hi!
      Please make sure that you set the animation mode to Animation Asset, and that your animation was generated for the Face Archetype skeleton with the metahuman mapping mode.
      You can find more details in our documentation: docs.metahumansdk.io/metahuman-sdk/reference/metahuman-sdk-unreal-engine-plugin/audio-to-lipsync
      You can also ask for help in our Discord: discord.gg/MJmAaqtdN8

  • @skyknightb
    @skyknightb 1 year ago

    Looks like the server is off or out of reach for some reason; the API URL shows different errors when trying to access it, be it generating the audio file or using an already generated one to create the lipsync animation. Or is the API URL wrong?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi Skyknight!
      Can you tell us a little more about the errors via our support at support@metahumansdk.io?

    • @skyknightb
      @skyknightb 1 year ago

      @@metahumansdk I'm already getting support on your discord, thanks :D

  • @Bruh-we9mv
    @Bruh-we9mv 3 months ago

    Nice tutorial! However, if I input a somewhat large text, it stops midway. What could be the issue? I've tested things, and it seems the "TTS Text to Speech" node has a time limit on sound. Can I somehow remove that?

    • @Bruh-we9mv
      @Bruh-we9mv 3 months ago

      @@domagojmajetic9820 Sadly no; if I find anything I will write it here.

    • @metahumansdk
      @metahumansdk  3 months ago

      At the moment the limit for the free tariff is 5 seconds per generated animation. You can use it for two days for free, but the limit is 5 seconds of generated animation.

    • @gavrielcohen7606
      @gavrielcohen7606 2 months ago

      @@metahumansdk Hi, great tutorial. I was wondering if there is a paid version where we can exceed the 5-second limit?

    • @metahumansdk
      @metahumansdk  2 months ago

      @gavrielcohen7606 Hi!
      Sure! At the moment registration on our website is temporarily unavailable, so please let us know if you need one at support@metahumansdk.io 😉

  • @boyce-wei
    @boyce-wei 1 year ago

    Hello, why is it that when I follow your steps at 12:03, the sound ends but the mouth keeps moving and doesn't stop?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! Could you please clarify if you are experiencing any performance issues?

  • @skeras1171
    @skeras1171 1 year ago

    Hi,
    When I try to choose mh_dhs_mapping_anim_poseasset in the ATLMappingsInfo struct, I can't see this pose asset. How can I create or find this asset? Can you help with that? Thanks in advance.
    Best regards.

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi @skeras!
      You need to enable Show Engine Content and Show Plugin Content in the Content Browser.

    • @skeras1171
      @skeras1171 1 year ago

      @@metahumansdk Done, thanks.

  • @funkyjeans8667
    @funkyjeans8667 3 months ago

    It only seems able to generate a 5-second lipsync animation. Am I doing something wrong, or is longer animation a paid option?

    • @metahumansdk
      @metahumansdk  3 months ago

      If you use the trial tariff plan, you can only generate 5 seconds of ATL per animation.

  • @theforcexyz
    @theforcexyz 8 months ago

    Hi, I'm having a problem at 2:32: when I generate my text to speech, it does not appear in my folders :/

    • @metahumansdk
      @metahumansdk  8 months ago

      Hi!
      Can you please check that your API token is correct in the project settings?
      If your API token is correct, please send us your log file on discord discord.gg/MJmAaqtdN8 or by mail to support@metahumansdk.io

  • @Relentless_Games
    @Relentless_Games 1 month ago +1

    Error: fill api token via project settings
    First time using this SDK; how can I fix this?

    • @metahumansdk
      @metahumansdk  1 month ago

      Please contact us by e-mail at support@metahumansdk.io and we will help you with the token.

  • @Ali_k11
    @Ali_k11 4 months ago

    When I try the SDK on UE 5.3 I get a "no tts permission" error. What's the matter?

    • @metahumansdk
      @metahumansdk  4 months ago

      Hi!
      TTS is available for the Chatbot tariff plan only.
      You can find more details about tariffs in your personal account at space.metahumansdk.io/#/workspace or in our discord in this message: discord.com/channels/1010548957258186792/1068067265506967553/1176956610422243458

  • @user-zp6jb5dw1l
    @user-zp6jb5dw1l 1 year ago

    How to synchronize facial expressions with mouth movements? Could you provide a tutorial on this? Thank you

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! You can select facial expressions when generating from audio to lip sync (the speech to animation stage), and they will be synchronized automatically.

    • @user-zp6jb5dw1l
      @user-zp6jb5dw1l 1 year ago

      Hi! Is the 'Explicit Emotion' option selected in the 'Create MetaHumanSDKATLInput' tab?

    • @user-zp6jb5dw1l
      @user-zp6jb5dw1l 1 year ago

      I selected 'Ehappy' and it works, but selecting 'Eangry' doesn't have any effect. Do you have any solutions or tutorials for this issue? Thank you!

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! Can you please clarify: is the avatar not displaying the desired emotion, or is the expression of the avatar not matching the chosen emotion?

  • @krishnakukade
    @krishnakukade 10 months ago

    I'm a beginner in Unreal Engine and don't know how to render the animation video. I tried multiple ways but nothing seems to work. Can anyone tell me how to do this, or point me to any resources, please?

    • @metahumansdk
      @metahumansdk  10 months ago

      Hi!
      You can use the official documentation from the UE developers: docs.unrealengine.com/5.2/en-US/rendering-out-cinematic-movies-in-unreal-engine/

  • @borrowedtruths6955
    @borrowedtruths6955 10 months ago +1

    When I add the voice animation to the face, the head detaches and the audio begins immediately. I have a walk cycle from Mixamo in the sequencer and would like to have it start at a certain time in the timeline.
    Can you help with these two issues? Thank you.

    • @metahumansdk
      @metahumansdk  10 months ago

      Hi!
      We recommend using this tutorial: ua-cam.com/video/oY__OZAa0I4/v-deo.html
      Please pay attention at the 3:28 timestamp, because many people skip this moment and the fix doesn't work for them 😉
      If you need more advice, please contact us on discord: discord.gg/MJmAaqtdN8

    • @borrowedtruths6955
      @borrowedtruths6955 10 months ago

      @@metahumansdk Thanks for the reply, I do have another question though. How do I add facial animations without a live link interface, i.e., a cell phone or head camera? Unless I'm mistaken, I have to delete the face widget to add the speaking animation to the sequencer. In either case, I appreciate the help.

    • @metahumansdk
      @metahumansdk  10 months ago

      @borrowedtruths6955, our plugin generates facial animation from the sound (16-bit PCM wav or ogg), so you don't need any mocap device; just generate the animation and add it to your character, or use blueprints to do it automatically.
      We also showed this in our documentation: docs.metahumansdk.io/metahuman-sdk/reference/metahuman-sdk-unreal-engine-plugin/v1.6.0#in-editor-usage-1

    • @borrowedtruths6955
      @borrowedtruths6955 10 months ago

      @@metahumansdk Thanks, I appreciate your time.

    • @ayrtonnasee3284
      @ayrtonnasee3284 4 months ago

      I have the same problem

  • @leion44
    @leion44 1 year ago

    When will it be available for UE 5.2?

    • @metahumansdk
      @metahumansdk  1 year ago +1

      We plan to release the MetahumanSDK plugin for Unreal Engine 5.2 this month.
      Our release candidate for UE 5.2 is available from this link: drive.google.com/uc?export=download&id=1dR30LXOwS1eEuUQ9LdQk9441zBTODzCL
      You can try it right now 😉

  • @AlejandroRamirez-ep3wo
    @AlejandroRamirez-ep3wo 1 year ago

    Hi, does this support Spanish or Italian?

    • @metahumansdk
      @metahumansdk  1 year ago +1

      Hi Alejandro Ramírez!
      You can use any language you want, because the animation is created from sound.

  • @qinjason1199
    @qinjason1199 1 year ago

    The wave plays in the editor, but I get an error after using the ATL input: -- LogMetahumanSDKAPIManager: Error: ATL request error: {"error":{"status":408,"source":"","title":"Audio processing failed","detail":"Audio processing failed"}} Where should I check?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi, Qin Jason!
      It looks like you are trying to use TTS and ATL in the same blueprint. This is a known issue and we are working on it.
      Currently you can try to use the combo node, or generate the animation manually in the project. Feel free to share more details on our discord server: discord.com/invite/MJmAaqtdN8

    • @qinjason1199
      @qinjason1199 1 year ago

      TTS is accessed from other cloud services, but it's really in the same blueprint. Would splitting it into multiple blueprints avoid this problem?

  • @user-or1ky6zh2p
    @user-or1ky6zh2p 1 year ago

    Hi, I want to add some other facial movements when talking, like blinking, etc. How can I do that?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! You can blend different facial animations in an animation blueprint. Also, at the Speech To Animation stage you can choose to generate eye and neck animations.

    • @user-or1ky6zh2p
      @user-or1ky6zh2p 1 year ago

      @@metahumansdk Hello, I want to read a WAV audio file from a certain path on the local computer while the game is running, and then use the plug-in to drive the MetaHuman to play the audio and synchronize the mouth shape. I found a blueprint API, Load Sound from File; can it read a file from a local path? Does the File Name in this API refer to the file name of the file to read? Where is the path read from? Can you set the path of the file you want to read?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! Yes, this function can read from a local file path. In this parameter you must specify the path to your audio file.
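
      For the path question, the engine's file utilities accept absolute paths. A small sketch (FFileHelper and FPaths are standard Unreal API; the wrapper function is ours):

      ```cpp
      #include "Misc/FileHelper.h"
      #include "Misc/Paths.h"

      // Read a local .wav into memory; the bytes can then be turned into a
      // sound asset by the plugin's loader or your own importer.
      bool LoadWavBytes(const FString& AbsolutePath, TArray<uint8>& OutBytes)
      {
          if (!FPaths::FileExists(AbsolutePath))
          {
              return false; // e.g. "C:/Audio/line01.wav" not found
          }
          return FFileHelper::LoadFileToArray(OutBytes, *AbsolutePath);
      }
      ```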

    • @user-or1ky6zh2p
      @user-or1ky6zh2p 1 year ago

      Hello, I would like to ask a question. The animation generated from text only has the mouth animation; how can I combine this generated mouth animation with my other facial animations to make the expression more vivid? I want to fuse them at run time, and what I don't understand is how to do this while the program is running.

    • @metahumansdk
      @metahumansdk  1 year ago

      You can try blending the animations that you want to combine.
      You can get more details about blend nodes in the official Unreal documentation: docs.unrealengine.com/5.2/en-US/animation-blueprint-blend-nodes-in-unreal-engine/

  • @boyce-wei
    @boyce-wei 1 year ago

    At 10:11 in the video, when I hover over it, it shows that the type of 'CurrentChunk' is not compatible with Index. I don't know what's wrong.

    • @boyce-wei
      @boyce-wei 1 year ago

      10:10

    • @boyce-wei
      @boyce-wei 1 year ago

      Hello, can you help me with this problem?

    • @ffabiang
      @ffabiang 1 year ago

      Hi, make sure CurrentChunk is of type integer, as well as Index.

    • @boyce-wei
      @boyce-wei 1 year ago

      @@ffabiang thank you

  • @Matagirl001
    @Matagirl001 1 year ago

    I can't find the Ceil node

  • @aihumans.official
    @aihumans.official 1 year ago

    Where can I connect my Dialogflow chatbot? API key??

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! At the moment our plugin uses GPT chat; you can try to connect any chatbot yourself using our integration as an example. It would be great if you shared the result with us.

  • @immortal3164
    @immortal3164 7 months ago

    I want the metahuman to start talking only when I'm close to him. How can I achieve that?

    • @metahumansdk
      @metahumansdk  7 months ago

      Hi!
      You can try to use trigger events that do something when the trigger is activated. You can find more information about it in the Unreal documentation: docs.unrealengine.com/4.26/en-US/Basics/Actors/Triggers/

  • @dyter07
    @dyter07 1 year ago +1

    Well, this "2000 years later" joke was good. I have been waiting just 3 hours now for the Metahuman to load, LOL

  • @umernaveed6936
    @umernaveed6936 1 year ago

    Hi guys, I have been trying to figure this out for a week now. The problem is: how can we attach dynamic facial expressions and body gestures to ChatGPT responses? E.g. if the text returned is happy, then the character should make a happy face, and if it is angry, then it should be an angry face. Can someone help me with this?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! Emotions are selected in a special drop-down menu when you create audio tracks from the text. Please try it.

    • @umernaveed6936
      @umernaveed6936 1 year ago

      @@metahumansdk Can you elaborate a little on this, as I am still stuck?

    • @umernaveed6936
      @umernaveed6936 11 months ago

      @@metahumansdk Hi man, can you guide me on how I can create the emotions? I am still stuck on the facial expression parts and the explicit emotions when setting up the metahuman character.

    • @metahumansdk
      @metahumansdk  10 months ago

      Hi!
      Sorry for the late answer.
      We shared a blueprint that can help focus the eyes on something here: discord.com/channels/1010548957258186792/1131528670247407626/1131993457133625354

  • @abhishekakodiya2206
    @abhishekakodiya2206 1 year ago +3

    Not working for me; the plugin doesn't generate any lipsync anim.

    • @metahumansdk
      @metahumansdk  1 year ago +1

      Please send us more details on our discord server or by mail to support@metahumansdk.io
      We will try to help with your issue.

    • @mistert2962
      @mistert2962 1 year ago +1

      Do not use audio files that are too long. 5 minutes of audio will make the SDK not work, but 3 minutes will work. So the solution is: split your audio into 3-minute parts.
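
      A sketch of that splitting for raw PCM data, in plain C++ (the helper is hypothetical; it just cuts a sample buffer into parts of at most 3 minutes):

      ```cpp
      #include <algorithm>
      #include <cstdint>
      #include <vector>

      // Cut a 16-bit mono sample buffer into chunks of at most MaxSeconds.
      std::vector<std::vector<int16_t>> SplitPcm(const std::vector<int16_t>& Samples,
                                                 uint32_t SampleRate, uint32_t MaxSeconds = 180)
      {
          std::vector<std::vector<int16_t>> Parts;
          const size_t MaxSamples = static_cast<size_t>(SampleRate) * MaxSeconds;
          for (size_t Start = 0; Start < Samples.size(); Start += MaxSamples)
          {
              const size_t End = std::min(Samples.size(), Start + MaxSamples);
              Parts.emplace_back(Samples.begin() + Start, Samples.begin() + End);
          }
          return Parts;
      }
      ```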

  • @kirkr
    @kirkr 1 year ago

    Is this still working? Says "unavailable" on the Unreal Marketplace

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! That was marketplace server maintenance; the plugin is now available to download.

  • @arianakis3784
    @arianakis3784 4 months ago

    I say go to the moon for a walk, and as soon as I spoke, I called to return, hahhahaaaa

  • @benshen9600
    @benshen9600 11 months ago

    When will the combo request support Chinese?

    • @metahumansdk
      @metahumansdk  11 months ago

      Hi!
      Currently we use Google Assistant only for answers in the combo requests, so it depends on Google's supported languages: developers.google.com/assistant/sdk/reference/rpc/languages
      I can't promise that we will add a new language soon, but we have plans to make our solution friendlier for all countries.

  • @dreamyprod591
    @dreamyprod591 1 month ago

    Is there any way to integrate this on a website?

    • @metahumansdk
      @metahumansdk  1 month ago

      Sure, you can try to make a Pixel Streaming project, for example.

  • @rachmadagungpambudi7820
    @rachmadagungpambudi7820 10 months ago +1

    how to give flashing mocap?

    • @metahumansdk
      @metahumansdk  10 months ago +1

      We didn't use mocap; our plugin generates the animation from the sound.

    • @rachmadagungpambudi7820
      @rachmadagungpambudi7820 10 months ago

      I like Your Plugin 🫡🫡🫡👍 thank you

  • @sumitranjan7005
    @sumitranjan7005 1 year ago

    Can we get a sample code git repo?

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi! You can find plugin files in the engine folder \Engine\Plugins\Marketplace\DigitalHumanAnimation

    • @sumitranjan7005
      @sumitranjan7005 1 year ago

      @@metahumansdk Sample code of a project, not the plugin, to get started.

    • @metahumansdk
      @metahumansdk  1 year ago

      We also have some demo level blueprints with use cases included in every plugin version, so you can use them as a project.
      You can find them in the demo folder of the plugin.

  • @BAYqg
    @BAYqg 1 year ago

    Unavailable to buy in Kyrgyzstan =(

    • @metahumansdk
      @metahumansdk  1 year ago

      Hi!
      Please check that:
      1. Other plugins are available
      2. If you are using our site, the EGS launcher is started
      3. The EGS launcher is updated

  • @anveegsinha4120
    @anveegsinha4120 3 months ago +2

    I am getting error 401 no ATL permission

    • @metahumansdk
      @metahumansdk  3 months ago

      Hi!
      It probably depends on the tariff plan. If you are using the trial version, you are limited to generating a maximum of 5 seconds per animation.
      If you are on the Chatbot tariff plan, you need to use ATL Stream, not regular ATL.
      Regular ATL is available on the Lite, Standard and Pro tariffs.

    • @BluethunderMUSIC
      @BluethunderMUSIC 2 months ago

      @@metahumansdk That's not really true, because I am getting the SAME error and I tried sounds ranging from 0.5 seconds to 8 seconds. How do we fix this? It's impossible to do anything now.

    • @metahumansdk
      @metahumansdk  2 months ago

      Can you please send us the logs on our discord discord.gg/MJmAaqtdN8 or to support@metahumansdk.io?
      We will try to help you with this issue, but we need more details about your case.

  • @mahdibazei7020
    @mahdibazei7020 22 days ago

    Can I use this on Android?

    • @metahumansdk
      @metahumansdk  21 days ago

      Hi!
      We don't support mobile platforms, but you can try to rebuild our plugin with kubazip for Android. It might work, but I can't guarantee it.

  • @mohdafiqtajulnizam9421
    @mohdafiqtajulnizam9421 8 months ago

    Please update this to 5.3 ....please!?

    • @metahumansdk
      @metahumansdk  8 months ago

      Hi!
      Work in progress 👨‍🔧

  • @user-nn7mg3bp4u
    @user-nn7mg3bp4u 1 year ago +2

    my head is detached now

    • @metahumansdk
      @metahumansdk  1 year ago +1

      Hi Популярно в България!
      You need to use the Blend Per Bone node in the Face AnimBP to glue the head to the body when both parts are animated.

    • @Enver7able
      @Enver7able 1 year ago

      @@metahumansdk How to do this?

    • @Fedexmaster91
      @Fedexmaster91 1 year ago

      @@metahumansdk Great plugin; everything works fine for me, but I'm also having this issue: when playing the generated face animation, the head detaches from the body.

    • @Fedexmaster91
      @Fedexmaster91 1 year ago

      @@Enver7able I found this video on their discord channel:
      ua-cam.com/video/oY__OZAa0I4/v-deo.html&ab_channel=MetaHumanSDK

    • @user-nn7mg3bp4u
      @user-nn7mg3bp4u 1 year ago

      @@metahumansdk thanks!

  • @commanderskullySHepherdson
    @commanderskullySHepherdson 9 months ago

    I was pulling my hair out wondering why I couldn't get the plugin to work, then realised I hadn't generated a token! 🙃

    • @metahumansdk
      @metahumansdk  9 months ago

      Hi!
      Thank you for the feedback! A new version of the MetahumanSDK plugin is in moderation now, and it has more useful messages about the token. We hope these changes will make the plugin's behavior more predictable.

  • @EnricoGolfettoMasella
    @EnricoGolfettoMasella 11 months ago

    The girls need some love dude. They look so sad and depressed :P:P...

  • @inteligenciafutura
    @inteligenciafutura 1 month ago

    You have to pay to use it; it doesn't work.

    • @metahumansdk
      @metahumansdk  28 days ago

      Hi!
      Can you please share more details about your issue?
      Perhaps this tutorial can help you: ua-cam.com/video/cC2MrSULg6s/v-deo.html

  • @inteligenciafutura
    @inteligenciafutura 1 month ago

    Spanish?

    • @metahumansdk
      @metahumansdk  28 days ago

      MetahumanSDK is language-independent. We generate the animation from the sound, not from visemes.