A month ago, I was dreaming of creating a project like this one to automate video creation, but I wasn't able to pull it off.
Now you've given me a great starting point. ❤️❤️❤️❤️
Wow, I'm so happy to help!
Don't hesitate if you have questions 🙌
This video has been on my list for a while and I truly appreciate the time you put into explaining these concepts. I still haven't been able to dive deep into the code yet, but do you reckon this could also be used to create a lip-synced animation experience for speech-to-speech interactions with LLMs? The tech for low-latency text-to-speech & speech-to-text is already there, but I wonder if you deem it possible to have a low-latency solution for outputting the required phoneme data within the browser too. Again, I haven't deep-dived yet, but I'm curious about your opinion on this.
Best tutorial I've seen in a long time. Wish you knew some alternatives to Ready Player Me, but that's not your fault. Amazing tutorial.
Thanks a lot! I'm considering paying artists on Fiverr for some new content
@@WawaSensei I have Headshot and CC4, I can create a character for you if you'll show how to lip-sync them
Really like your didactics and your calm when explaining.
🙏 Happy to read it, glad it resonates with you! I do my best, so such kind comments mean a lot 🤗
The transition between animations is not smooth, as it hits the T-pose first. Any solution?
Hey, it's great, but I'm trying to implement the same thing and it's not working. How can I do it in plain Three.js?
Wow, another unique idea! Really great video. Thanks.
😻 Thanks, hope you'll give lip sync a try!
Fantastic tutorial! very well done!
Thanks a lot! 🙌
Thanks for your consistency
Thanks for your support! 🙌
@WawaSensei Is there any alternative we can choose for avatars other than Ready Player Me? I need more realistic avatars
Free ones I don't know of, but you can purchase any you like on the various 3D marketplaces
Can I apply this to blender or daz studio avatars?
Thank you man, clear and precise tutorial, thank you for sharing
Thanks for the motivation 🫶
How did you create the shape keys (or "morph targets") in the avatar?
Thanks for making these videos, they help a lot
My pleasure! 🙌
Thank you for your feedback!
@@WawaSensei Please can you make a video on GSAP ScrollTrigger for full-page scroll? In your portfolio tutorials, the solution you use doesn't scale well for multiple sections
My avatar somehow lost its eyes; in Mixamo they're there, but in the UI they're not appearing. What could be the cause?
Wonderful !! Thanks man !! Great Job !!🎉🚀
Hey, I followed your tutorial but for some reason, the shoulders are all tucked in weirdly when I am using the mixamo animations. They look just fine on the website but not when rendering them after downloading the file from mixamo. Any idea?
Hey, you can check my second lip-sync tutorial, maybe I do it differently there. It might depend on the params used when exporting the FBX. Let me know if that helps 🙌
@@WawaSensei Is it this one?
ua-cam.com/video/pGMKIyALcK0/v-deo.htmlsi=MGrvPgVROT9tHLk8
Oh sorry, this one ua-cam.com/video/EzzcEL_1o9o/v-deo.html 🙌
@wawa-sensei Thanks a lot for sharing this rare stuff on YouTube. Learning a lot from this. Could you please also upload a video for MetaHuman avatars rather than Ready Player Me? Thanks!!
Thank you! Are you sure it's usable with Three.js and not only within Unreal?
@@WawaSensei Honestly I am not sure, but I know we can download the file in glb format, so technically this should be possible.
Love it, but a question:
In the elevenlabs-nodejs module, textToSpeech specifies responseType: "stream". I don't understand why it isn't "ArrayBuffer" there, with responseType: "stream" reserved for textToSpeechStream? Btw, there's a bit of duplicate code in lines 93-96 of the index.js.
Hello, thank you for this video... But I'm looking for a lip-sync library other than Rhubarb, is there any? Please help...
Another class act by our champion wawa
Owww 🥰
Thanks a lot for your huge support every time 🙌
Once again, great video ❤. Can you make a video on techniques for optimizing React Three Fiber projects?
Thank youuu! 🙏
I'm finishing writing my Getting Started with React Three Fiber course, which includes a lesson on optimizing projects 🙌
I might do a light version of it for free in the future too
(Basically it covers the aspects you have in the R3F documentation docs.pmnd.rs/react-three-fiber/advanced/scaling-performance and the performance pitfalls too)
Thanks for your work!
This is a really nice video. Thank you for sharing.
Glad you enjoyed it, thaaaaanks 🙌
Thank you very much for the video you shared, but I encountered a problem when extending the functionality. I tried to use eyeBlinkLeft in the model to control blinking, but on the left eye the animation did not take effect. Is there any good solution?
You're welcome! Please join us on Discord and share your project to be able to help you 🙌
Wow this project is amazing thank you
You are welcome 😊
Hope you'll give it a try!
Hey! Wawa Sensei!
I'm facing an error when running yarn dev
It is not working at all, there are only errors. Please help me!
I've been trying hard for the last 3 days
Hello Sensei, I followed your steps to export with Y as up and Z as forward for Mixamo. However, the final animated character has its head facing downwards. I'm not sure where I went wrong in the process and would appreciate your assistance.
Hey, be careful about your Three.js and R3F versions (compare with my package.json)
Also check the process in my virtual GF video, maybe I did it differently there
Can this be deployed in a vanilla JS or Angular app?
Great tutorial! Is there a way to smoothly transition between visemes? It looks instant now and feels a bit off. Thanks!
Thank you 🙌
You're right, I didn't try to see what it would look like, but it should be very easy to do!
I will try and update the code if it's better 😊
In the meantime if you want to try it on your own, instead of setting the value to 0 or 1, you can use THREE.MathUtils.lerp to smoothly transition from the current value of the viseme towards 1 or 0
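To sketch what that could look like (not the exact code from my repo — corresponding, activeViseme, and nodes.Wolf3D_Head are assumptions based on the tutorial's setup), inside the Avatar component's render loop:

```jsx
import * as THREE from "three";
import { useFrame } from "@react-three/fiber";

// Inside the Avatar component: ease each viseme towards its target
// instead of snapping it straight to 0 or 1.
useFrame(() => {
  Object.values(corresponding).forEach((viseme) => {
    const index = nodes.Wolf3D_Head.morphTargetDictionary[viseme];
    const current = nodes.Wolf3D_Head.morphTargetInfluences[index];
    const target = viseme === activeViseme ? 1 : 0;
    // Smoothing factor: lower = slower, smoother mouth movement
    nodes.Wolf3D_Head.morphTargetInfluences[index] =
      THREE.MathUtils.lerp(current, target, 0.2);
  });
});
```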
@@WawaSensei Thanks for the detailed answer. I would be glad if you updated the code 😊 I've tried something with lerp but I didn't get what I wanted.
@@WawaSensei I've raised a PR for it. I got some help from ChatGPT and the result looks better :)
@@hamitaksln github.com/wass08/r3f-lipsync-tutorial/blob/main/src/components/Avatar.jsx
Done! You were right the result is way better this way 🙌
@@WawaSensei Ah I just saw your commit. Thanks for your time and work
Arigatou gozaimashita, Sensei ❤❤
You’re welcome 🫶
Hello Sensei, thanks for the video.
But I'm having an issue after adding the animations.
The console says:
three.propertybinding: trying to update node for track: armature.quaternion but it wasn't found.
And the model is rotated 360° around the Z axis, with its feet as the pivot.
I tried rotating it and setting its position, but the result is not satisfactory
Hello, you're welcome!
About this error: three.propertybinding: trying to update node for track: armature.quaternion but it wasn't found.
I also get it with Mixamo + Ready Player Me; it doesn't cause an issue for me, but I don't know where it comes from yet.
About your model rotation, be sure to use the same Three.js / R3F / drei versions as me, and that you correctly exported the FBX with the right settings (shown in the video) to generate your Mixamo animations 🙏
@@WawaSensei Yeah, I figured it out. Thanks again for the video 😊
Can you please tell me how to use the additional blend shapes of the Oculus visemes? I want my avatar to close his eyes in a natural way
Hey, please check this video, it's what I did:
ua-cam.com/video/EzzcEL_1o9o/v-deo.html
In my case, it is not working
Hello Sensei, thanks for the video.
Your video is very helpful for me.
I have an idea: call the API to get the audio file, then automatically generate the JSON file with Rhubarb Lip Sync. But I can't find any documentation on automatically running Rhubarb Lip Sync in React. Can you help me?
I will be very grateful to you.
Hello, nice if you go this way!
I can probably make a V2 of this video to show it in the near future.
You shouldn't run Rhubarb client-side, as it's a binary your users won't have. It needs to be on your server, and then you return the JSON it generates. (So run the shell command using Node.js)
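For reference, a minimal sketch of wiring that up with Node's child_process — the binary location, audio folder, and output file name are assumptions about your setup, not code from the video:

```javascript
import { exec } from "child_process";
import { promises as fs } from "fs";

// Run the Rhubarb binary on an audio file and return the parsed mouth cues.
// -f json: output format, -o: output file, -r phonetic: faster recognizer
const lipSyncMessage = (audioFile) =>
  new Promise((resolve, reject) => {
    exec(
      `./bin/rhubarb -f json -o audios/output.json audios/${audioFile} -r phonetic`,
      async (error) => {
        if (error) return reject(error);
        resolve(JSON.parse(await fs.readFile("audios/output.json", "utf8")));
      }
    );
  });
```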
Hey, is it possible to use this character in an augmented reality project with Three.js?
Yes! I've helped on this project aivah.ai/ and that's what they are using!
Thanks for this amazing tutorial. I am unable to play the animation and the lip sync together; only one of them plays at a time. Do you know what could be the issue?
Did you download the model correctly with the query string parameters?
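(For context, the Ready Player Me download URL used in the video includes the morph-target query string — the avatar ID below is a placeholder:)

```
https://models.readyplayer.me/<your-avatar-id>.glb?morphTargets=ARKit,Oculus%20Visemes
```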
How do I apply voice recognition and voice synthesis?
Can I do this in React Native? Please let me know
I tried to install step by step following your tutorial, but I get the error: Avatar is not defined in the Experience.jsx file
Do you have a component named Avatar that you imported/exported correctly?
Does this lip sync work well for all languages?
Hi, in my case the avatar is rotated horizontally for some reason. And when I uploaded it into Mixamo, there was a half-sphere block over the bottom half of my avatar. Has anybody else faced this?
Use Y forward at the time of export from Blender
@WawaSensei Hello Sensei, thanks for the video, you are great. Can I use this avatar in a project for my clients?
Yes, of course! Look at Ready Player Me terms of service part 7 readyplayer.me/terms
Is this code compatible with avatar-creation tools other than Ready Player Me, as long as the file is in .glb format? I tried using a custom .glb avatar (very similar to the one in this video) but the page froze and eventually crashed.
Hey, it is, but you must have a Mixamo rig attached to your 3D model
(If it’s not you can learn how to do it in this video ua-cam.com/video/mdj7Z3PCxRg/v-deo.html)
@@WawaSensei Thank you!
Hello sir, how can I make it generate a video with the animations directly from JS, and produce an MP4? Sending only the video to the website, thanks
Hi,
You can use this library github.com/spite/ccapture.js/, not sure it would work fully backend (you can try), but you could pre-record the videos and use them in the front too 🙏
Thank you! I have tried that library and it only generates the video on the client side @@WawaSensei, do you know of another option?
Great tutorial, but the morph changes too quickly, would a lerp be helpful?
Hey thank you!
Yes, I changed the code and demo based on another comment about it 🙌
Let me know what you think!
Hi, this project was really helpful for me. I was thinking, is there any possibility of adding real-time interaction using text-to-speech services and the Rhubarb Oculus visemes?
@ShivamKumar-cu3lb I'm working on a similar project, would love to have a discussion
My friends, I think you will appreciate the next video, it will include ElevenLabs and ChatGPT 😊
Here's an early teaser of the facial expressions 👉 twitter.com/wawasensei/status/1711328416837029970
@@sharonthomas4010 Hi, sure let me know how we can connect?
I made a female avatar, but for some reason, no matter how much I try to position her, her head is stuck to the ceiling of the page. I followed your code to a T. Any advice 😕
Hey, feel free to share a CodeSandbox on the Discord
For lip sync: type the command as rhubarb instead of ./rhubarb (Windows users only)
Still not working. Hey, did you add the Rhubarb path to the env?
@@NanoGi-lt5fc No, please download the Rhubarb file, extract the zip, then put the extracted folder in the root of the project. Then go to the Rhubarb path in your project and, on that path, type the command for Rhubarb. But note that you need to type rhubarb instead of ./rhubarb
Thanks for helping 😍
You're the best ❤❤
No, you are! I'm only second 😍
Perfect!
You are perfect 😍
Hey,
Do you know why audio doesn't work on mobile?
Hey, is it because you didn't interact with the screen before playing the audio? Mobile browsers block autoplay, so maybe add a button to be pressed before playing the audio
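A minimal sketch of that idea — the audio path and component are assumptions, not code from the video:

```jsx
// Mobile browsers only allow audio after a user gesture,
// so start playback from a click handler.
function PlayButton() {
  const play = () => {
    const audio = new Audio("/audios/welcome.mp3");
    audio.play().catch((err) => console.warn("Playback blocked:", err));
  };
  return <button onClick={play}>Play</button>;
}
```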
Hi sir, I want to use this same thing in my Android app, but using Kotlin... Can I use it, and if yes, then how??
Hum... You can use it! The simplest solution would be to embed it in a WebView, or rewrite it in Kotlin following my logic
Hello, I'm really enjoying your videos! I am just wondering if you have any videos without a starter project to clone? So it's easier to understand everything from scratch! I would be really glad if you answered! Have a great day!
Hello, thanks a lot!
Sure, the first lessons of my course are free and include all the steps to create that starter project
lessons.wawasensei.dev/courses/react-three-fiber
Hit preview and you're ready to go 🙏
@@WawaSensei Thank you so much! I am really enjoying your videos, everything seems so easy after your explanation!
Any way to connect it to an OpenAI API endpoint, so that we can ask it a question and it can answer?
Of course yes!
It would take a bit to get the answer, but you'd need the following in your backend:
Call OpenAI (ChatGPT) with the question > generate audio with the ElevenLabs API > have Rhubarb Lip Sync on your server and run it on the audio > then play it in your frontend!
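A hedged sketch of that pipeline as an Express endpoint — the route, file paths, model name, and the textToSpeech / lipSyncMessage helpers are assumptions (lipSyncMessage is the child_process sketch shown earlier in this thread), not the video's exact code:

```javascript
import express from "express";
import OpenAI from "openai";

const app = express();
app.use(express.json());
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

app.post("/chat", async (req, res) => {
  // 1. Ask ChatGPT for the answer text
  const completion = await openai.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: req.body.question }],
  });
  const text = completion.choices[0].message.content;

  // 2. Generate audio with the ElevenLabs API (textToSpeech is a hypothetical helper)
  await textToSpeech(text, "audios/answer.mp3");

  // 3. Run Rhubarb on the generated audio (see the child_process sketch above)
  const lipsync = await lipSyncMessage("answer.mp3");

  // 4. Return everything for the frontend to play
  res.json({ text, audio: "/audios/answer.mp3", lipsync });
});

app.listen(3000);
```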
Outstanding! But your approach with Rhubarb lip sync just runs locally, how do you run it server-side, any clue? Thanks! 😄
Hey, thank you very much!
Well, a backend is exactly like your local machine: you can have Rhubarb there and execute the command. I show how to create the backend in this video ua-cam.com/video/EzzcEL_1o9o/v-deo.html
Amazing...
Thanks a lot 😊 !
thank you for such a video
Glad you liked it! 🙏
My avatar appears behind the background, how can I fix it?
Hey, did you try adjusting the positions?
@@WawaSensei Yeah I did, but I got another error lol, now the lip sync doesn't work lol
Is it possible to do text-to-speech with lip sync?
Hey, what do you mean?
When I enter preset="sunset", I am getting errors
Check your package.json with the one from my repository here github.com/wass08/r3f-vite-starter
Old drei versions have an issue with the url used for environment presets
❤️❤️❤️❤️❤️❤️❤️
😻
Super mega excellent
✌🥰
I want a video in the avatar's background, please help
You can with useVideoTexture -> lessons.wawasensei.dev/courses/react-three-fiber/lessons/image-and-video-textures
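A minimal sketch with drei's useVideoTexture — the video path, plane size, and position are assumptions to adapt to your scene:

```jsx
import { useVideoTexture } from "@react-three/drei";

function VideoBackground() {
  // useVideoTexture autoplays the video, looped and muted by default
  const texture = useVideoTexture("/videos/background.mp4");
  return (
    <mesh position={[0, 1, -5]}>
      <planeGeometry args={[16, 9]} />
      <meshBasicMaterial map={texture} toneMapped={false} />
    </mesh>
  );
}
```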
How do I make the eyes blink?
Hey, the logic is the same as for the visemes: there are morphTargets you can play with to open/close the eyes, and by animating them smoothly you can make the avatar blink.
I'm considering making a second more advanced version of the lipsync, would it interest you? What would you like to see inside?
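To sketch the idea (assumptions: nodes.Wolf3D_Head from the tutorial's useGLTF, and the ARKit-style eyeBlinkLeft/eyeBlinkRight morph targets Ready Player Me exposes when the avatar is downloaded with the ARKit morph targets):

```jsx
import { useEffect, useRef } from "react";
import * as THREE from "three";
import { useFrame } from "@react-three/fiber";

// Inside the Avatar component:
const blink = useRef(false);

useEffect(() => {
  // Trigger a ~200ms blink at a random interval
  let timeout;
  const next = () => {
    timeout = setTimeout(() => {
      blink.current = true;
      setTimeout(() => {
        blink.current = false;
        next();
      }, 200);
    }, THREE.MathUtils.randInt(1000, 5000));
  };
  next();
  return () => clearTimeout(timeout);
}, []);

useFrame(() => {
  ["eyeBlinkLeft", "eyeBlinkRight"].forEach((name) => {
    const index = nodes.Wolf3D_Head.morphTargetDictionary[name];
    nodes.Wolf3D_Head.morphTargetInfluences[index] = THREE.MathUtils.lerp(
      nodes.Wolf3D_Head.morphTargetInfluences[index],
      blink.current ? 1 : 0, // closed while blinking, open otherwise
      0.5
    );
  });
});
```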
@@WawaSensei A more advanced video would be great. I'm sure all your subscribers are eagerly waiting for that. I want to see how emotions and more advanced gestures (multiple animations during a single dialogue) can be controlled. 😇
hhhhhh 💯💯💯💯