PLEASE, if you haven't already, I would LOVE a clean-up tutorial. I guess it wouldn't be the most entertaining vid, but it's something that I personally could learn so much from. Thank you!
Great workflow, well done. Thank you for sharing.
Great tips ... thanks for sharing!
Awesome, thanks for the tutorial!
How do you set up the localhost? It doesn't appear for me.
Hmm, I'm on 2022.2.1 now and 'source shot' with the presets isn't there. It's a bunch of individual emotion sliders, but I assumed the presets had a lot of these preconfigured. Any insight on this?
I can't see any WAV files when I try to load my own.
How can I implement lip-sync for a chatbot? I tried using MetaHumanSDK, but they have now made it a paid plugin. I need an alternative; is there any other option available?
Perfect
@Small Robot Studio Hello, how are you? Can you make an .exe file that does lip sync with MetaHumans? We are ready to buy this project.
So right now, do you have to use Maya to import into Unreal 5, or is that just used for cleanup? Unlike Unreal 4, where you could just make a folder in Content and import facial animation... it's not the same in UE5. Ugh, so complicated.
Not sure if you can go from Audio2Face right into Unreal currently. I wouldn't personally, as I prefer to do animation cleanup in Maya.
How do you add this type of animation in Flutter apps?
So there is no way to do this on a Mac? 🙁
Does anyone know if there is a comparable script/workflow for 3ds Max?
Not yet. I'm a 3ds Max user (since 2009), and I'm pretty sure they'll create a script for Blender first. Just our luck -__-
Is there a way to apply improved lip syncing to a MetaHuman that was already recorded inside Unreal Engine? I used the Take Recorder to record the MetaHuman and audio, but the lip syncing is poor.
Just within Unreal or are you doing the touch-ups in Maya?
@@SmallRobotStudio Just in Unreal; I think it would be too messy to do the touch-ups by exporting the model with animations and then re-importing them. Given the way I recorded, I don't think it saved an animation file for the mesh that I could tweak (I haven't looked yet; I was in a rush when capturing). When I want to play back the takes, I open the animation sequence, and it spawns in the MetaHuman and plays back the animation that way.
There must be a solution to drive the mouth movement with proximity chat.
Interesting. I wonder how you managed to apply MetaHuman grooming to your mesh.
This is just an "out of the box" metahuman so it's the standard groom that it comes with. I'm working on a tutorial for doing custom grooms and exporting them to Unreal for use with Metahumans so look out for that next week hopefully :)
@@SmallRobotStudio Thank you! I thought you'd managed to put your scanned and retopologized head back onto a MetaHuman in Unreal.
How do you import the Audio2Face file into Maya?
Check the tutorial I released before this one on the topic
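For anyone who'd rather script that import than click through it, here's a minimal Maya Python sketch. It assumes you exported blendshape animation from Audio2Face as JSON and that the file has "facsNames" and "weightMat" keys (that's what one export build produced; treat the key names, the file path, and the blendShape node name as assumptions for your own setup):

```python
# Minimal sketch: key Audio2Face blendshape weights onto a Maya blendShape node.
# Assumptions: the JSON has "facsNames" (list of target names) and "weightMat"
# (one row of per-target weights per frame), and a blendShape node with matching
# target names exists in the scene. Adjust names/paths for your own export.
import json
import maya.cmds as cmds

A2F_JSON = "D:/exports/a2f_bs_weights.json"   # hypothetical path
BLENDSHAPE_NODE = "mh_face_blendShapes"       # hypothetical node name
START_FRAME = 1

with open(A2F_JSON, "r") as f:
    data = json.load(f)

names = data["facsNames"]     # one entry per blendshape target
weights = data["weightMat"]   # frames x targets weight matrix

for frame_offset, row in enumerate(weights):
    frame = START_FRAME + frame_offset
    for name, value in zip(names, row):
        attr = "{}.{}".format(BLENDSHAPE_NODE, name)
        if cmds.objExists(attr):
            cmds.setKeyframe(attr, time=frame, value=value)
```

One thing to watch: make sure your Maya scene frame rate matches the FPS the weights were exported at, or the animation will play back at the wrong speed.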
Someone needs to build a plugin that matches the mouth shape settings in UE5 MetaHumans to English phonemes, so you can import an audio file or TTS output and automatically rig the character.
You could always create blendshapes on the controller to automatically get those visemes (rough sketch after this thread).
Working on one now, my friend... Should hopefully hit the Marketplace at some point.
@@KelechiApakama Cool, man... can't wait.
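Here's a rough sketch of that blendshape idea in Maya Python: map phonemes to viseme poses and key them along the audio timing. The phoneme timings would come from a forced aligner (e.g. Montreal Forced Aligner or Gentle), and the attribute names are hypothetical stand-ins for whatever viseme shapes your rig exposes:

```python
# Rough sketch of phoneme-to-viseme keying in Maya, assuming you already have
# phoneme timings from a forced aligner. Attribute names below are hypothetical
# placeholders for whatever viseme blendshapes your rig actually exposes.
import maya.cmds as cmds

FPS = 30.0

# Hypothetical mapping from phonemes to (viseme attribute, weight) pairs.
PHONEME_TO_VISEME = {
    "AA": ("face_bs.viseme_aa", 1.0),
    "EE": ("face_bs.viseme_ee", 1.0),
    "OO": ("face_bs.viseme_oo", 1.0),
    "M":  ("face_bs.viseme_mbp", 1.0),
    "F":  ("face_bs.viseme_fv", 1.0),
}

# (phoneme, start_seconds, end_seconds) triples from your aligner of choice.
timings = [("M", 0.00, 0.12), ("AA", 0.12, 0.30), ("M", 0.30, 0.42)]

for phoneme, start, end in timings:
    if phoneme not in PHONEME_TO_VISEME:
        continue
    attr, weight = PHONEME_TO_VISEME[phoneme]
    s, e = start * FPS, end * FPS
    # Ramp in, peak mid-phoneme, ramp out so adjacent visemes crossfade.
    cmds.setKeyframe(attr, time=s, value=0.0)
    cmds.setKeyframe(attr, time=(s + e) / 2.0, value=weight)
    cmds.setKeyframe(attr, time=e, value=0.0)
```

A real plugin would also handle coarticulation (blending neighboring visemes rather than zeroing out between them), but the skeleton is this simple.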
Has anyone tried to export it from Maya to Unreal (like, right after completing this tutorial)?
I have another tutorial on the channel for how to do this - look for the Maya and Unreal MetaHumans import/export video.
@@SmallRobotStudio Ah, that one! I thought it was a different process. If I can use that tutorial right after this one, it'll be easier than I thought. Thanks!
@@Amelia_PC Yep, just skip to the part where I export the facial controller rig.
@@SmallRobotStudio Thanks! (Yup, it wouldn't make sense to export the facial controller, because the face animation should be loaded in Sequencer :D)
Is there any automatic/programmatic way to control the face rig / lip sync on the fly when an audio file is provided?
In Audio2Face? Not to my knowledge - this is pretty much the breadth of it.
@@SmallRobotStudio Thanks for your reply, and did I say your video is a great help! :)
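For what it's worth, newer Audio2Face builds ship a headless mode with a local REST service, which gets close to "give it a WAV file, get animation out". A minimal sketch, assuming the service is running locally; the port and endpoint paths below are taken from one install and may differ in yours, so check your build's docs and treat them as assumptions:

```python
# Minimal sketch of driving Audio2Face headless over its local REST service.
# The port, endpoint paths, and payload fields are assumptions based on one
# install; verify against your build's own route listing before relying on it.
import requests

BASE = "http://localhost:8011"  # assumed default headless port

def post(route, payload):
    r = requests.post(BASE + route, json=payload)
    r.raise_for_status()
    return r.json()

# 1. Load a scene that already has the audio player and solver wired up.
post("/A2F/USD/Load", {"file_name": "D:/a2f/mark_solved.usd"})  # hypothetical path

# 2. Point the player at the audio file that should drive the face.
post("/A2F/Player/SetRootPath", {"a2f_player": "/World/audio2face/Player",
                                 "dir_path": "D:/audio"})
post("/A2F/Player/SetTrack", {"a2f_player": "/World/audio2face/Player",
                              "file_name": "line_01.wav"})

# 3. Export the resulting blendshape animation for Maya/Unreal cleanup.
post("/A2F/Exporter/ExportBlendshapes",
     {"solver_node": "/World/audio2face/BlendshapeSolve",
      "export_directory": "D:/exports",
      "file_name": "line_01_bsweights"})
```

Batched over a folder of WAVs, something like this would cover the "chatbot" use case asked about earlier in the thread, with the exported weights imported per the cleanup workflow above.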