Check out my new project Neocortex for the simplest way to integrate Smart NPCs into your games:
neocortex.link
Hi Sarge,
Big fan of your tutorials, they're clear and super helpful. I'm currently trying to get AWS Polly and the Oculus Viseme package working together in Unity. However, my lipsync isn't aligning with the AWS Polly speech, despite following your steps and testing on both Unity versions 2021.3.26f1 and 2022.2.19f1. I've also adjusted the blendshape values as you advised.
There's a video on the RPM Unity-dev forum that shows what I'm encountering. Your insights would be greatly appreciated! Thanks for the excellent content you provide.
Best,
Ned
Hi Ned, could you link the video? The Oculus lib uses the emitted audio to generate the face animation; there is nothing extra in there.
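For anyone debugging alignment, here is a minimal sketch of reading the viseme frame the context computes from the audio and applying it to the avatar every frame; the `viseme_*` blendshape names follow Ready Player Me's convention and are assumptions for other models:

```csharp
using UnityEngine;

// Sketch: read the viseme weights OVRLipSync derives from the playing
// audio and push them onto the avatar's viseme blendshapes each frame.
// Assumes an OVRLipSyncContext is assigned and RPM-style shape names.
public class VisemeFrameReader : MonoBehaviour
{
    public OVRLipSyncContextBase context;
    public SkinnedMeshRenderer faceRenderer;

    // Oculus outputs its viseme weights in this fixed order.
    static readonly string[] VisemeNames =
    {
        "viseme_sil", "viseme_PP", "viseme_FF", "viseme_TH", "viseme_DD",
        "viseme_kk", "viseme_CH", "viseme_SS", "viseme_nn", "viseme_RR",
        "viseme_aa", "viseme_E", "viseme_I", "viseme_O", "viseme_U"
    };

    void Update()
    {
        OVRLipSync.Frame frame = context.GetCurrentPhonemeFrame();
        if (frame == null) return;

        for (int i = 0; i < VisemeNames.Length; i++)
        {
            int index = faceRenderer.sharedMesh.GetBlendShapeIndex(VisemeNames[i]);
            if (index >= 0)
                faceRenderer.SetBlendShapeWeight(index, frame.Visemes[i] * 100f);
        }
    }
}
```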
@@sgt3v Hi Sarge, thank you for your response. I posted the link to the video on Discord twice, but I assume it is being removed automatically. Is there any other way I can share it with you?
@@NedzzoneXR Replied in the RPM Discord.
@@sgt3v thank you 🙏🏻☺️
Could you please make a video where you change the facial expression of your avatar, like happy, sad, and neutral, based on the sentiment of the text?
Please
Great tutorial, thanks! Can I use Oculus LipSync on non-VR devices like Android or iOS as well?
Thanks man, you are awesome!
Hello! Great video, and it works great on Windows! The problem is that Oculus LipSync doesn't work on Android for some reason. I want to use it for a Meta Quest 2 project. Is there any workaround or alternative? Thank you.
Hi, would you please tell me how to add sentiment analysis to my avatar?
If I can do it with the right blendshapes, where are they in the package? I can't find where the shapes are in my package or where to change them according to emotions.
Kindly help!
Hi Sarge,
First of all, great video and many thanks for this showcase :)
I also have a question regarding the Ready Player Me model: at the moment it seems the structure has changed and there is no 'Renderer_Avatar' available on the model anymore. Instead there are only the renderers for the individual body parts. Do you know if this is expected? Did I miss a preparation step? I couldn't find any related article in the documentation.
Make sure you have the correct Avatar Config with texture atlasing enabled.
@@sgt3v That was actually the issue. It's working now, also with the lipsync. Thank you so much! 🙂
I followed the tutorial exactly, but when I change the sliders of my viseme blendshapes, the face of my character stays the same. I don't know what to do…
Great video, thanks! I was wondering how to add viseme blendshapes to avatars not from Ready Player Me, such as from Character Creator 4?
I do not know if other systems support them.
@@sgt3v Thanks for the fast reply anyway; I was able to figure out how to get visemes on the CC4 avatars. :)
How many blendshapes are needed for this lipsync?
Great video! I was using the lipsync that comes with Ready Player Me, but it's pretty janky; this works a lot better. Thank you for sharing :)
The component that comes with RPM just maps audio amplitude to the mouth-open blendshape; it covers the bare-minimum case with no plugins :0)
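For anyone curious, a minimal sketch of what such an amplitude-driven approach looks like; the `mouthOpen` shape name, gain, and sample window are illustrative assumptions, not RPM's actual implementation:

```csharp
using UnityEngine;

// Sketch: drive a single "mouth open" blendshape from audio amplitude.
public class AmplitudeLipSync : MonoBehaviour
{
    public AudioSource audioSource;
    public SkinnedMeshRenderer faceRenderer;
    public string mouthOpenShape = "mouthOpen"; // assumed blendshape name
    public float gain = 400f;                   // maps RMS amplitude to a 0-100 weight

    int shapeIndex;
    readonly float[] samples = new float[256];

    void Start()
    {
        shapeIndex = faceRenderer.sharedMesh.GetBlendShapeIndex(mouthOpenShape);
    }

    void Update()
    {
        if (shapeIndex < 0) return;

        // Grab the currently playing samples and compute a rough RMS amplitude.
        audioSource.GetOutputData(samples, 0);
        float sum = 0f;
        foreach (float s in samples) sum += s * s;
        float rms = Mathf.Sqrt(sum / samples.Length);

        faceRenderer.SetBlendShapeWeight(shapeIndex, Mathf.Clamp(rms * gain, 0f, 100f));
    }
}
```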
@@sgt3v I always knew about lipsync in Oculus but just assumed it was a big system that only worked with Oculus's built-in avatar system. Nice to know that is not the case.
I know that Nvidia also has lipsync with Audio2Face. I don't think it works with Unity yet. Did you try it, and how does it compare to Oculus LipSync?
Haven't got my hands on A2F yet, but looking forward to trying it.
When I add my audio source to the OVR Lip Sync Context, it does the lipsync but does not play sound; when I remove the audio source from the context, it plays sound without lipsync.
Hi Shani, in the video I specifically mention those settings. Make sure to watch it all.
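For anyone hitting this, the usual culprit is the loopback setting on the context: when an AudioSource feeds OVRLipSyncContext, playback is muted unless loopback is enabled. A minimal sketch, assuming the field is still called `audioLoopback` in your version of the package:

```csharp
using UnityEngine;

// Sketch: keep the audio audible while OVRLipSyncContext analyzes it.
// The audioLoopback field name is an assumption; verify it in your package.
public class EnableLipSyncLoopback : MonoBehaviour
{
    void Start()
    {
        var context = GetComponent<OVRLipSyncContext>();
        if (context != null)
            context.audioLoopback = true; // pass the analyzed audio through to the output
    }
}
```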
Hello, a very informative video. I have a query: I followed the exact same steps, but the data (script) file is not present in my avatar's Inspector. Also, how can I bring Ready Player Me into my Package Manager? I tried various methods, but it doesn't work, and because of this the lipsyncing is not working. How do I sort out this issue?
You can check the full instructions here: ua-cam.com/video/Cg4k-XPBC2Q/v-deo.html
Bro, I have a question for you: how will it synchronize over the network? @sgt3v
You can use a networking system such as Netcode or Photon to transmit the float values, as in the sketch below.
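A minimal sketch of that idea with Netcode for GameObjects (the single shape index is a placeholder; in practice you would replicate all viseme floats, likely throttled rather than every frame):

```csharp
using Unity.Netcode;
using UnityEngine;

// Sketch: the owning client samples a blendshape weight set locally by the
// lipsync and replicates it; remote clients apply the received value.
public class NetworkedVisemeSync : NetworkBehaviour
{
    public SkinnedMeshRenderer faceRenderer;
    public int mouthOpenIndex = 0; // placeholder blendshape index

    readonly NetworkVariable<float> mouthOpen = new NetworkVariable<float>(
        0f, NetworkVariableReadPermission.Everyone, NetworkVariableWritePermission.Owner);

    void Update()
    {
        if (IsOwner)
            mouthOpen.Value = faceRenderer.GetBlendShapeWeight(mouthOpenIndex);
        else
            faceRenderer.SetBlendShapeWeight(mouthOpenIndex, mouthOpen.Value);
    }
}
```

FishNet's SyncVars should follow the same pattern.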
@@sgt3v I am currently using FishNet.
Hi Sarge,
Thank you for the great video. I am able to replicate what you showed using Unity 2022.3.2f1. Now I am trying to make the camera background transparent, and to do that I did the following steps:
1. Set Camera -> Clear Flags to Solid Color and the background to (0,0,0,0), with alpha set to 0
2. In the Player Settings, enabled Render Over Native UI
and built the Android app, but the app background is still black and not transparent.
Any idea or support would be really helpful.
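For reference, the same camera setup done from script is below; if these settings are in place and the build is still black, the render pipeline (e.g. the URP asset or post-processing) may be discarding the alpha channel, which is worth checking:

```csharp
using UnityEngine;

// Sketch of the transparent-background camera setup. For the alpha to
// survive an Android build, "Render Over Native UI"
// (PlayerSettings.preserveFramebufferAlpha) must also be enabled.
public class TransparentCameraBackground : MonoBehaviour
{
    void Start()
    {
        var cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = new Color(0f, 0f, 0f, 0f); // alpha 0 = transparent
    }
}
```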
Is your AWS Polly working in the Android build? Mine is not; I get an error whenever I provide a path like jar: files://
As I was going through AI NPC 2, I updated the Ready Player Me avatar for face anims and it disappeared. I tried reimporting, but it doesn't do anything, and even though I closed without saving, it saved, so my NPC isn't there. What do I do?
You should still have the avatar in the Ready Player Me/Avatars folder, but I'm afraid the changes you made to it are not saved there.
@@sgt3v This is the error I get: ModelImportError - Failed to import glb model from bytes. Object reference not set to an instance of an object
Can I add a Mixamo character into the Ready Player Me assets instead of their characters?
Any character with ARKit blendshapes should work.
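A quick way to verify a character is to list the blendshapes on its mesh and check for the ARKit/viseme names, for example:

```csharp
using UnityEngine;

// Sketch: print every blendshape on a SkinnedMeshRenderer so you can
// confirm the expected ARKit/viseme shapes exist before wiring up lipsync.
public class ListBlendShapes : MonoBehaviour
{
    void Start()
    {
        var smr = GetComponent<SkinnedMeshRenderer>();
        var mesh = smr.sharedMesh;
        for (int i = 0; i < mesh.blendShapeCount; i++)
            Debug.Log($"{i}: {mesh.GetBlendShapeName(i)}");
    }
}
```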
Can we add facial expressions using text sentiment analysis on our Ready Player Me avatar?
You can do it using the right blendshapes.
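A minimal sketch of that mapping, assuming ARKit-style blendshape names (verify the exact names on your avatar's mesh):

```csharp
using UnityEngine;

// Sketch: map a sentiment label from your text analysis to expression
// blendshape weights. The shape names below are assumptions.
public class SentimentExpression : MonoBehaviour
{
    public SkinnedMeshRenderer faceRenderer;

    public void ApplySentiment(string sentiment)
    {
        switch (sentiment)
        {
            case "positive":
                SetShape("mouthSmileLeft", 70f);
                SetShape("mouthSmileRight", 70f);
                break;
            case "negative":
                SetShape("mouthFrownLeft", 60f);
                SetShape("mouthFrownRight", 60f);
                SetShape("browDownLeft", 40f);
                SetShape("browDownRight", 40f);
                break;
            // neutral: leave the shapes at zero or reset them elsewhere
        }
    }

    void SetShape(string shapeName, float weight)
    {
        int index = faceRenderer.sharedMesh.GetBlendShapeIndex(shapeName);
        if (index >= 0) faceRenderer.SetBlendShapeWeight(index, weight);
    }
}
```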
@@sgt3v How can I drive those blendshapes from my text sentiment analysis? Any reference link would help.
When exporting to WebGL I am getting this error: *DllNotFoundException: Unable to load DLL 'OVRLipSync'. Tried to load the following dynamic libraries:* While playing in Unity it runs smoothly.
Oculus LipSync is not for WebGL builds. Only Android and Windows would work.
@@sgt3v Is there any other option to do lipsync for free in WebGL from Unity C#?
Thanks Sarge ❤
One question: Is it working on any platform, including WebGL?
Hi GiZmVs, unfortunately neither AWS Polly nor Oculus LipSync work in Unity WebGL builds at the moment.
@@sgt3v Thanks for your answer. For AWS Polly I created an endpoint on the same server where I host the WebGL project. If you are interested, I can share the details by email or any other way.
@@GiZmVs Hello, can you help me with that?
@@KaanErayAKAY Sure! I sent you an email. Let's talk through it. Cheers.
I am also looking for a WebGL-based solution. Can you help me, please?
I'm getting the following error in the Console: DllNotFoundException: OVRLipSync assembly: type: member:(null)
OVRLipSync.Initialize () (at Assets/Oculus/LipSync/Scripts/OVRLipSync.cs:267). I believe all the resources are in place. I'm trying to determine if it's this portion of code: AudioSettings.GetDSPBufferSize(out bufferSize, out numbuf); It can't get the bufferSize or the numbuf.
Maybe reloading the project helps; it seems like Unity does not see the imported package.
@@sgt3v Thank you for the quick reply. I have reloaded the package a few times and always restarted the project. I'm using a Mac Studio. Here's the path to the plugins folder: Assets/Oculus/LipSync/Plugins/MacOSX/OVRLipSync.bundle. Maybe the path is the issue, similar to the problem with writing the MP3 file. The AWS Polly section is working fine and I can see all the BlendShapes. Also, the eyes are moving and blinking. So close, and yet so far.
@@williamowens6049 Could be; I am not a Mac user and haven't tested these plugins.
@@sgt3v I went with a different LipSync solution and it works flawlessly.
@@williamowens6049 I also faced the same issue on my Mac. Which LipSync solution did you use instead?
When I add the avatar config, it shows the error *Avatar postprocess Failed*. Without that avatar config there is no error, but then I don't get the lip blendshape options. I have followed every step in the video but am still getting the error. Can anyone help with this?
I was able to fix this by unchecking 'Use Mesh Opt Compression' in the avatar config.
Can they lipsync an audio file from a GPT response?
Just covered that in a video yesterday. ua-cam.com/video/TnmbyP5_R90/v-deo.html
Great 👍👍👍👍
The Oculus LipSync SDK does not support iOS, does it?
I do not know whether it would work on iOS; I could not find any notes about that.
Thank you ^^