echo performances
Canada
Joined Aug 13, 2020
echo performances is a service provider that specializes in delivering high-quality facial animation, whether fully keyframed or based on an existing performance capture. We are experienced with both cartoon and realistic facial rigs.
We also provide consultations to help you build your pipeline, to overcome production challenges, and to meet your deadlines.
In the future, we hope to expand our available services to include performance capture, rigging, and sculpting.
To follow our other social channels, check here:
linktr.ee/echoperformances
Guys’ Night | A MetaHuman Animator short film
This short film is an exploration of several new technologies, combining the new MetaHuman Animator, ChatGPT, and Respeecher.
▶ Here's a quick chronological breakdown of my process:
• The script was written with ChatGPT using my own prompts, multiple drafts, then manually edited by me. You can check the prompts and the script's evolution here: bit.ly/guys_night_chatgpt_script
• The head and facial rig were created in MetaHuman Creator by combining parts of 3 template heads.
• The performance was recorded on an iPhone 11 using the Live Link Face app.
• The voice-over track was generated using Respeecher (Ritter US -3.0).
• Facial animation was generated using the new MetaHuman Animator feature in UE 5.2.
• Animation polish was done entirely in UE5’s Sequencer (no reliance on any DCC).
• The final result had to be screen-captured using OBS: Movie Render Queue estimated a 6-hour render and I kept running into shader compilation issues (see the sketch after this list).
• The video was edited using DaVinci Resolve.
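For anyone who would rather push this through Movie Render Queue than screen-capture with OBS, here is a minimal, untested Unreal Editor Python sketch of queueing the sequence for a render. The asset paths are placeholders and the class/method names are from the UE 5.x scripting API as I know it, so verify them against your engine version; a render preset still has to be assigned to the job (for example via the MRQ UI) before it produces usable output.

# Hypothetical sketch: queue a Level Sequence in Movie Render Queue from Python.
import unreal

subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
queue = subsystem.get_queue()

# Create a job pointing at the sequence and map used for the short.
job = queue.allocate_new_job(unreal.MoviePipelineExecutorJob)
job.sequence = unreal.SoftObjectPath("/Game/GuysNight/SEQ_GuysNight")  # placeholder path
job.map = unreal.SoftObjectPath("/Game/GuysNight/LVL_Bar")             # placeholder path

# Render in-editor; long render times and shader compilation still apply.
subsystem.render_queue_with_executor(unreal.MoviePipelinePIEExecutor)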
[ CHAPTERS ]
0:00 Guys' Night
2:06 Side-by-side comparison
--------------------------
▶▶ RESOURCES
Big thanks to the following channels for their knowledge sharing!
▷ @UnrealEngine
ua-cam.com/video/WWLF-a68-CE/v-deo.html
▷ @CinecomCrew
ua-cam.com/video/knf7AH2iqHA/v-deo.html
▷ @Jsfilmz
ua-cam.com/video/TkYKYMllL-A/v-deo.html
ua-cam.com/video/em9J9lcFULY/v-deo.html
▷ @ibrews
ua-cam.com/video/P15h2Hb-wso/v-deo.html
ua-cam.com/video/4lVrs78CoC8/v-deo.html
ua-cam.com/video/Rym1QpD_Wr4/v-deo.html
▷ @UnrealApprentice
ua-cam.com/video/qLt6VIwqWOQ/v-deo.html
ua-cam.com/video/TOeA-c7JubY/v-deo.html
ua-cam.com/video/lERKslkPKYg/v-deo.html
--------------------------
▶▶ MUSIC
@BensoundMusic
Royalty Free Music: Bensound.com/royalty-free-music
License code: Z627KVAEZBQJKKJR
www.bensound.com/royalty-free-music/track/enigmatic
Views: 529
Videos
echo performances | facial animation reel (2023)
264 views • 1 year ago
This is a compilation of a few short films and experiments that we've worked on over the last 2 years. You'll find the full videos on our channel. Make sure to check them all out! Turning 40 - ua-cam.com/video/qbV8_huXuzs/v-deo.html Old Scores - ua-cam.com/video/W5rvR7lXf28/v-deo.html Old Scores (BTS) - ua-cam.com/video/2vO4tXjdtOY/v-deo.html MetaHuman Simon - ua-cam.com/video/c6PE9U0mUhA/v-deo...
Reviewing NVIDIA Broadcast's Eye Contact Effect
5K views • 2 years ago
For this short experiment, I decided to test NVIDIA Broadcast's new "Eye Contact" effect in version 1.4.0.29. Give it a try and let me know what you think! Were you satisfied with the results? Were you brave enough to use it for a live presentation or meeting? Leave a comment! To download NVIDIA Broadcast, visit: www.nvidia.com/en-us/geforce/broadcasting/broadcast-app/ #nvidiabroadcast #eyecont...
Gaming 101 | Faceware Portal x Mesh to MetaHuman
694 views • 2 years ago
This new experiment combines several new technologies.
Faceware Portal - Faceware's new Neural Net Facial Tracking
Mesh to MetaHuman - Using a scan to generate a MetaHuman
Unreal Engine 5 - Rendered in UE5's sequencer
#Faceware #MetaHuman #UnrealEngine5 #FacialAnimation #selfportrait
- - - - - - - - - - - - - - - - - -
The quote was taken from this episode of "Eidos Vox Pop": ua-cam.com/video/5...
My Background | Faceware Portal x Mesh to MetaHuman
471 views • 2 years ago
This new experiment combines several new technologies.
Faceware Portal - Faceware's new Neural Net Facial Tracking
Mesh to MetaHuman - Using a scan to generate a MetaHuman
Unreal Engine 5 - Rendered in UE5's sequencer
#Faceware #MetaHuman #UnrealEngine5 #FacialAnimation #SelfPortrait
- - - - - - - - - - - - - - - - - -
The quotes were taken from this episode of "Inside Unreal" (recorded live on...
Mesh To MetaHuman Preview | MetaHuman Simon
612 views • 2 years ago
Experimenting with the latest 'Mesh to MetaHuman' plugin for UE5. #metahuman #ue5 #unrealengine #metaverse
0:00 Idle Animation
0:38 Facial Poses
1:05 Body Poses
1:51 Expression Loops
3:30 Environment lighting
6:04 Facial ROM / Red Lantern
To set this up yourself, check out these tutorials: @UnrealEngine - ua-cam.com/video/xOVyme4TFZw/v-deo.html @FeedingWolves - ua-cam.com/video/q1KNTaFZH8Q/v-de...
Old Scores - Preview
214 views • 3 years ago
This is a 30s preview of our short film "Old Scores": ua-cam.com/video/W5rvR7lXf28/v-deo.html Follow the performers on Twitch, Instagram, and TikTok: Alex Weiner - linktr.ee/officiallyaweiner Jon McLaren - linktr.ee/JonMcLaren And to follow our own social channels: linktr.ee/echoperformances #Faceware #MetaHumans #UnrealEngine5
Old Scores - Behind The Scenes
167 views • 3 years ago
This is a '"Behind The Scenes" look at "Old Scores", a short film using Unreal Engine's MetaHuman Creator and Faceware Analyzer / Retargeter. The goal here is to showcase the performers side-by-side with their character counterparts, without camera cuts. In case you missed it, check out the short film here: ua-cam.com/video/W5rvR7lXf28/v-deo.html - - - - - - - - - - - - - - - - - - To follow th...
How to use Faceware with MetaHumans in Maya & Unreal Engine 5
14K views • 3 years ago
In this video, we break down the process used to create the following 2 recent short films: Turning 40 - ua-cam.com/video/qbV8_huXuzs/v-deo.html Old Scores - ua-cam.com/video/W5rvR7lXf28/v-deo.html If you're familiar with facial animation, or you have a general curiosity about working with Faceware and MetaHumans and then running it all in Unreal, this video could be for you. Feel free to use the...
Old Scores - Faceware x MetaHumans - Experiment n04
1.1K views • 3 years ago
This is a fourth experiment using Unreal's MetaHuman Creator. Similar to experiment #3, the goal was to continue "stress testing" the Facial Rig and processing it through Faceware Analyzer and Retargeter in Maya. The biggest differences here are the use of an original performance, retargeted onto 2 custom MetaHuman models. The turnaround from recording session to final product was 12 days (7 ac...
Turning 40 - Faceware x MetaHumans - Experiment n03
1.4K views • 3 years ago
This is a third experiment using Unreal's MetaHumans. While there is currently a "goldrush" to get results with MetaHumans and real-time solvers, I wanted to take the time to portray a more nuanced performance. The goal was to "stress test" the Facial Rig using a static cam performance and processing it through Faceware Analyzer and Retargeter (in Maya). This offline approach of Tracking and Re...
Bacon - Faceware Studio x MetaHumans - Experiment n02
1.2K views • 3 years ago
This is a second experiment using Faceware Studio, Glassbox's Live Client, and Unreal's MetaHumans to generate real-time animation. The character used in this example is a custom build using MetaHuman Creator. My 2 cents: While the results in Faceware Studio are pretty faithful to the original performance, a lot seems to get lost when brought into Unreal. The imperfections might be caused by la...
Rams - Faceware Studio x MetaHumans - Experiment n01
489 views • 3 years ago
This is a first attempt at using Faceware Studio, Glassbox's Live Client, and Unreal's MetaHumans to generate real-time animation. The MetaHuman in this example is called "Ettore". Original clip taken from here (6:06): ua-cam.com/video/Z4Dl6y9o04g/v-deo.html #Faceware #Unreal #MetaHumans Want to give it a try? To get started, please follow these instructions: Click here (facewaretech.odoo.com/s...
How do you make a character look like he has dimples?
good stuff
Do you need to have a GPU?
I believe NVIDIA Broadcast only works with specific graphics cards. You can find the System Requirements here: www.nvidia.com/en-us/geforce/broadcasting/broadcast-app/
Hello! I like your video about eye contact. Please tell me which model of graphics card you have. Thank you!
Free?
Yes! Nvidia Broadcast is free.
😂wtf crazy experiment 😂😂 thanks bro
Cool! And thanks for the shout out
What camera setup are you using? I tried the Eye Contact with a DSLR 10 feet away from me and it changes my gaze to look about a foot or 2 above the camera - even when I'm looking directly at it.
Interesting. In this video, I’m using the Razer Kiyo Pro webcam.
My 1050 Ti would need to be upgraded to a GTX 2060 for this to be a reasonable trade-off.
😋
Waiting for more tutorials
Hi Kingsley! Last year, I worked on a much more elaborate course for Faceware in collaboration with Unreal's Online Learning platform. It went live in November, but I presented it just today during a webinar. The VOD should go live within a week or 2, but in the meantime you can check out the course itself: www.unrealengine.com/en-US/blog/online-course-metahuman-facial-animation-with-faceware-analyzer-and-retargeter And in case you missed it, I also hosted a previous Faceware webinar to present the facial animation pipeline that I helped develop for "Marvel's Guardians of The Galaxy": ua-cam.com/video/_jZe3KgO61Q/v-deo.html
@@echoperformances wooo thanks
My pleasure! I hope it helps you with your own projects!
Hey! In my first reply I mentioned my most recent webinar. In case you wanted to check it out, here’s the recording: ua-cam.com/video/4x15o_ik0Qg/v-deo.html
Hi, your video is amazing, but I have a problem with Faceware Retargeter. When I open a performance and select the .fwr and .xml files, I don't get any poses like you do. Can you help me with this? Thank you
Hello, thanks for your message. The first time you introduce Retargeter to a new character rig, you can either create poses using the Character Setup's Expression Set (50 default/generic poses), or you can create custom poses specific to your actor's performance. You can start with 5-10 poses per face group, retarget to fill your timeline, evaluate your animation, then add corrective poses where needed, and repeat the process. Don't forget to hit the "Update" button to register your controller values in Retargeter before hitting the "Retarget" button. For your initial poses, I recommend adding them on the same frames where you created Training Frames in Analyzer. It's not required, but that's generally where you'll find your most extreme or most defined expressions.
@@echoperformances Thank you so much
My pleasure! Good luck and have fun!
This is pretty interesting
Hey, any more tips/tuts on Faceware? Deepfake? Enjoyed the last one thanks!
Hi William! Last year, I worked on a much more elaborate course for Faceware in collaboration with Unreal's Online Learning platform. It went live in November, but I'll be hosting a webinar on February 4th to talk about it (see links below). Make sure to tune in if you'd like to post your questions live during the Q&A portion. In case you missed it, I had hosted a previous Faceware webinar to present the facial animation pipeline I helped develop on "Marvel's Guardians of The Galaxy": ua-cam.com/video/_jZe3KgO61Q/v-deo.html Unreal Online Learning - Intro to Faceware: www.unrealengine.com/en-US/blog/online-course-metahuman-facial-animation-with-faceware-analyzer-and-retargeter Faceware Webinar announcement Tweet: twitter.com/FacewareTech/status/1613975237665619994
Hello again, thank you for your videos, and great work on Guardians of the Galaxy. I'd like to know which you think is better if you need quality animation: Portal + Retargeter or Analyzer + Retargeter? And maybe you can explain more about Retargeter, because I think the hard part is skipped in your videos, where you create some poses to improve the retargeting quality. I'm really interested in how it works and how I can repeat it myself)
It's a bit early to make a final statement on which combination is better. I believe using Portal will be beneficial for a large volume of videos to track, but that Analyzer will give you more accurate tracking whenever a particular performance justifies the manual adjustments. If you'd like to learn more about the Retargeter workflow, you can check out the following course that I published on the Unreal Online Learning platform: dev.epicgames.com/community/learning/courses/lw1/unreal-engine-faceware-retargeter
@@echoperformances wow, thank you, I didn't know about this course, it's exactly what I need)
It seems it went a bit under the radar without much promotion. It’s actually the second of a 2-part course. The first part covers Analyzer. If you find it beneficial, feel free to share it with others any way you can.
@@echoperformances You need to upload this to YouTube, because many people don't know about the content on the Unreal site, or can't find it. It's gold, the knowledge you create.
Unfortunately, the course was built with Unreal’s online learning team, so it doesn’t belong to me. While it’s great their courses are free, it’s up to them to promote them.
This is great. Is that faceware's neural net processing?
Correct! The video is tracked using the new Faceware Portal, then animated using Faceware Retargeter in Maya.
Great video. I was having a really hard time with facial animations. I am using Faceware Studio, but it seems to be more geared towards real time, and there is no way to export those face animations to Maya (that I know of), so now I am considering using Analyzer and Retargeter. The only comment I have is on body animation exports: you only need to select DHI_Body:root, export selected, and keep the same settings. The root_drv is a rig that connects the head and body rig and keeps them in one place.
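In case it helps anyone following that tip, here is a minimal, untested Maya Python sketch of the same export. The output path is a placeholder and the FBX settings are left at whatever your scene already uses.

# Hypothetical sketch: export only DHI_Body:root as FBX, per the comment above.
import maya.cmds as cmds

cmds.loadPlugin('fbxmaya', quiet=True)        # make sure the FBX plug-in is available
cmds.select('DHI_Body:root', replace=True)    # select only the body root

cmds.file('C:/exports/body_anim.fbx',         # placeholder output path
          force=True,
          exportSelected=True,
          type='FBX export')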
Also, what's the price tag for these tools from Faceware? As far as I understand, I only need Analyzer + Retargeter, correct?
To get prices for Analyzer and Retargeter, I recommend you browse the options here and contact a Sales Rep to obtain a quote: facewaretech.com/pricing/
@@echoperformances I will do that and see if it's affordable as a freelancer. Just wanted to make sure that Analyzer + Retargeter are all I need to follow this exact workflow!?
Correct! This tutorial covers Analyzer and Retargeter. Faceware has recently introduced their neural-net, cloud-based tracker called Portal. This removes the need to manually track your footage; however, it's still relevant to understand how Analyzer works. Other than that, you'll need a license of Autodesk Maya. You can check out their Indie pricing here: makeanything.autodesk.com/maya-indie
Fantastic stuff! Does the Maya Retargeter plugin also work if I don't have a joint-driven setup but only a character with 52 ARKit blendshapes, for example?
Hi David! I’ve always used a control rig that drives my blendshapes and/or joints, but in theory, Retargeter’s character setup can drive any animatable parameters (transforms or otherwise). In the character setup window, you can try selecting the mesh that has the blendshapes and hitting the “Update” button to display its blendshape parameters.
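To illustrate that reply, here is a small, untested Maya Python sketch (the blendShape node name is a placeholder) that lists a node's target weights, i.e. the animatable parameters a character setup would be driving.

# Hypothetical sketch: print the keyable target weights on a blendShape node.
import maya.cmds as cmds

bs_node = 'ARKit_blendShapes'    # placeholder blendShape node name
targets = cmds.listAttr(bs_node + '.weight', multi=True) or []

for name in targets:
    # Each entry (e.g. 'jawOpen', 'browInnerUp') is a keyable float attribute,
    # which is exactly what a retargeting setup ends up animating.
    print(bs_node + '.' + name, cmds.getAttr(bs_node + '.' + name))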
Amazing!! I love seeing some of the process and reference too!
Awesome, man!! Love it!!!
Very cool!! 😃
Amazing, thank you for the tutorial. Maybe I didn't understand something, but the important part is setting up a MetaHuman in Maya to transfer information from Analyzer. Could you tell me more about this setup? In the video, you just select a file with this setup.
Hi POKATILO, apologies for the late reply. The step you're describing is the Character Setup. This is the process in which we register our facial rig's controllers and assign them to their corresponding face groups. I've been working on a course that goes into a lot more detail about the entire Faceware workflow, including the Character Setup. I expect it should go live in a few weeks, but in the meantime, you can check out this video that should help you get started: ua-cam.com/video/_VrxWvIb0Rc/v-deo.html
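If it helps to see what "registering controllers" amounts to in practice, here is a small, untested Maya Python sketch; the group name is a placeholder, so point it at whatever group holds your rig's face controls.

# Hypothetical sketch: collect the control curves you would assign to face
# groups (brows, eyes, mouth, ...) inside the Character Setup.
import maya.cmds as cmds

face_group = 'FACE_GUI_grp'    # placeholder group name
shapes = cmds.listRelatives(face_group, allDescendents=True, type='nurbsCurve') or []
controls = sorted(set(cmds.listRelatives(s, parent=True)[0] for s in shapes))

for ctrl in controls:
    print(ctrl)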
@@echoperformances Thank you! Now that I have studied Faceware more, I can say that I asked the question incorrectly. I am more interested in how you set up the MetaHuman poses in Retargeter to better match the video reference. Do I understand correctly that you moved the controls manually on the MetaHuman in Maya, or did you use ready-made MetaHuman poses from Unreal?
Do a tutorial on using Faceware Live Link with UE5.
Btw, have you noticed that Maya 2022 screws up the color of MetaHuman skin when you bring them in? No biggie, cause they aren't used, but it's weird. :) Some other things I've found: Faceware Retargeter crashes in Maya 2022 a lot with MetaHumans, so I've gone back to Maya 2020. And I'm not using the very latest FW Retargeter, as that also crashes a lot. Faceware Retargeter conflicts with Facegood - they can't both be installed. Facegood does a better job than Faceware but is super buggy and primarily in Chinese and poorly translated. (Also very difficult to install due to Windows Defender and Norton hating it; and possibly contains viruses / malware - who knows)
Very useful insight. Thanks! To be honest, the reasons you've provided are why I've hesitated to try out FG thus far. I've been a long-time user of Retargeter and I suppose I've been fortunate to not encounter crashes that often, granted I'm still working in Maya 2020. For what it's worth, I generally work on facial animations in a standalone scene with only 1 character, no body animation, and no environments to keep things as light as possible. When adding, copying, or updating poses in Retargeter, best to avoid undoing the previous operation with CTRL-Z. I would say that's probably my most consistent crash. I hope that helps!
This is a great overview - I’ve been working on my own, with my own tips and tricks, but it takes so long and I keep scrapping it and starting again 😂 There are so many bugs in UE and Faceware and Maya, it’s a frustrating and CRASH-PRONE process for me. :)
Hi Toby! Thanks for the comment. It can definitely be frustrating to encounter bugs and crashes along the way. My best advice would be to save your scenes often and regularly hit the "Update" button in Retargeter to store your changes independently of your scene.
Your face animations are lovely! More and more I'm finding that Faceware is the king of facial mocap. For your body animation, wouldn't it have been easier to use iClone and get an ActorCore animation?
Those are great suggestions. Facial animation is really my expertise, but I’ve started exploring different options for body animations. I’m hoping to incorporate some of them in my next projects rather than keyframing.
Hello! Really great job. How can I connect with you? I'd like to buy some of your time.
Hello Daniel, if you visit our channel, under the "About" section, you’ll find an email address where you can send your inquiries.
Can you make more tutorials on Faceware and MetaHumans in Maya, and on how to animate the body using Mixamo?
Hi Belal, thanks for stopping by. I work primarily with creating facial animations, including the pipelines and tools that support it, so I would need to explore Mixamo before I could create a tutorial for it. Have you found any particularly interesting videos that you would recommend?
@@echoperformances I would be happy if you made a detailed video on using Faceware, because I'm having some problems with it. As for animating the body, you can watch these videos; some of them are related to the topic ua-cam.com/play/PLlJ0LuZkvf-XueBlSDGPhpMmX7pV3SczH.html
@@B.BAKDASH Thanks for the playlist! I'm currently working on a more detailed and more "official" Faceware course that should be available in March. Keep an eye on my Twitter page for more details as the date approaches.
This is awesome! Great job. So cool seeing Jon and Alex doing this. The animation is top notch. Hope to see more! It seems like you really enjoy this
I think it's lacking facial expressions. The lip sync appears okay, but the upper eye area and the rest of the face don't look believable compared to how a real human would express themselves in your MetaHuman mocap. That's why I stick with the Live Link app; it works out the facial expressions much better.
So cool and detailed! Thank you a lot! I would love to work on that too, but I am working on a Mac. Do you think there is a chance to do so? It seems that Faceware does not exist for Mac...
To be honest, I haven't met any Faceware users working on a mac. Perhaps their support team could recommend you an alternate solution to get their products running on a mac. You could enquire by contacting them through this page: facewaretech.com/contact-us/ Thank you for your comment! I hope that helps.
@@echoperformances Thanks a lot for your recommendation ! Gives me the next steps ! :)
Awesome, thanks for sharing. Is it possible to bake a first pass, then use Maya's animation layers to layer extra tweaks on top of the baked animation, then merge the layers before exporting the control rigs as FBX to UE?
Precisely. In fact, with Retargeter you can continue to add as many in-between poses as you need to improve your transitions, and iterate as many times as you want to fill in the blanks. After you feel your animation is satisfactory, you can edit the curves directly to iron out some jitters or fine-tune some subtle details. You could also add an animation layer on top for your changes; just keep in mind that Retargeter only evaluates keys on the base layer.
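For anyone wanting to try that layered approach, here is a rough, untested Maya Python sketch; the control names and frame range are placeholders, and it only covers the bake and the layer creation, with the final merge left to the Layer Editor.

# Hypothetical sketch: bake the retargeted first pass, then add a polish layer.
import maya.cmds as cmds

face_ctrls = cmds.ls('FACE_*_ctrl') or []    # placeholder control naming pattern
start, end = 1001, 1250                      # placeholder frame range

# Bake the solved/retargeted animation onto the controls (base layer),
# which is the only layer Retargeter evaluates.
cmds.bakeResults(face_ctrls, time=(start, end), simulation=True)

# Add a polish layer on top; keys set on this layer stay separate from the
# base-layer keys until you merge or bake the layers down for FBX export.
cmds.select(face_ctrls, replace=True)
cmds.animLayer('polish_layer', addSelectedObjects=True)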
@@echoperformances That's awesome. I am working on a pipeline/solution that allows me to get a base first-pass animation from our Facemotion3d iOS app onto the MetaHuman rig inside Maya. After the first pass, I want to apply more detail on visemes and other subtle nuances, plus tongue animation, via layers, then merge it before exporting. Your Faceware animation is one of the best I have seen. :) Would love to show you my progress and also integrate the Faceware pipeline into my workflow. Are you subscribed to the pro version of Analyzer and Retargeter? I really wouldn't want to go without the storable training frames function, even if I have to pay close to four times the amount compared to the base version!
@@facemotion3d971 Thanks for the praise! What you describe in your first lines is "spot on". Regardless of the solver (Faceware, ARKit, ...), that approach will give you a solid foundation, but if you're looking for high quality facial animations, any automated method won't get you 100% of the way there. A polish pass is always recommended, which includes the tongue if it's not part of your solve. To answer your question, I am using the pro-version which offers batch support for Analyzer and Retargeter. This includes the ability to export Tracking Models from Analyzer and Shared Poses for Retargeter. If you're looking to experiment with these features, I would recommend you look into Faceware's PLE license (Personal Learning Edition).
@@echoperformances The Faceware Studio PLE, which I already tested, is not the same as Analyzer and Retargeter, is it? I didn't like the quality of Studio. I think there is a trial version of Analyzer, so I'll try them. :)
@@facemotion3d971 You're correct. Faceware Studio is their real-time solution, whereas Analyzer and Retargeter offer a lot more customization. The batch feature I mentioned in my previous reply is mostly beneficial for large volumes or for repeat use of the same actor/character. If your project justifies it, you might want to consider the cost of the higher license with batch support versus the cost of an animator setting up each job "from scratch".
Looks incredible. No way ARKit gets even close to that. Training frames in Faceware can be reused in the pro version, no? I wonder if that can lower the number of training frames needed for additional video tracking.
Absolutely! What you're describing is called a "Tracking Model" in Analyzer. Storing your training frames for each of the face groups means you can benefit from your previous poses in future tracking jobs.
@@echoperformances Thanks for the response. Faceware announced a price decrease for Faceware Studio live for indies. However, the Studio version has lower overall quality compared to the offline version. What do you think?
Both options cater to different teams' needs, depending on the scope of the production and its animation pipeline. Faceware Studio comes at a lower cost for indie developers and can provide very quick results with no tracking or pose training required. Using Unreal Engine's Take Recorder, these animations can serve as a decent baseline for blocking and timing, then they can be edited/polished afterwards. On the other hand, the "offline" approach of Analyzer and Retargeter requires an upfront cost to set up your actors and characters, but offers a lot more fidelity in the performance. They can also be batch processed to handle large volumes of animations for bigger productions.
@@echoperformances Thanks for the quick response. The offline option costs ten times more than the Studio version, but it provides much better results judging from your examples. :) The Studio live version is lower quality than the iPhone ARKit tech, I think, but it does have asymmetrical brows, which ARKit cannot do. I am also looking into Facegood's Avatary and their infrared HMC, which, along with the software, provides very high quality that rivals or even surpasses FW offline results at a much lower cost. They are also coming out with an ML live version that is very high quality as well. Do you have any more recent FW work you've done? I really like your work.
@@facemotion3d971 Once you start digging into the multiple options available in this field, you'll find that a lot of them have similarities. It boils down to each user's personal comfort level, project scope, pipeline scalability, and output quality expectations. On my end, I'm working on a new short film, an online course, and a conference presentation. Juggling a lot, but I hope to post new content to the channel in the next few weeks.
AWESOME Tutorial! Well done! Thank you! I thought Tough Guy's performance was the most believable and I'm trying to figure out why. Does it have anything to do with camera height/angle and distance from the actor? I'm wondering if Nervous Guy's performance would look more believable if Jon did it. I'm also wondering what it would look like if you swapped MetaHumans. If you had to redo all of this, what would you have done differently or told the actors? Thanks!
In "Turning 40", I think putting a microphone in front of her face and giving her headphones like in the video would have made the performance more believable. In your video it looks like she's in a live interview setting and her eyes don't seem to be looking at the interviewer. As a result, the eye movements just seem random. What do you think?
@@pointandshootvideo Thanks for the feedback! I really appreciate your notes. It's funny how either character seems to appeal more or work better for different people. I can't put my finger on it either. Maybe it's a matter of relatability or familiarity of the model? I'm currently working on another short film with the same 2 actors. I'll probably give them new models to better fit the story. The biggest thing I learned from working on "Old Scores" is that animating the skeleton directly in FK was a bad idea. A proper character rig will go a long way. There are a few autorig scripts built for MetaHumans available for pretty cheap. I'll also try to get actual body animators to lend a hand this time around. As for the mic idea and the wandering eyes in "Turning 40", I completely agree. Trying to replicate the performer's original environment will definitely help to sell the final result.
try avatary bro :DD
Your team is definitely on my radar. It would be fun to experiment with it soon.
Yes very well done, every step thank you! Most every other video misses something here and there
Oh. But then I get stuck on the character setup file (which I found in the comments). Can you go over how to create the character setup file? Thanks!
Thanks for the feedback! I’m glad you found the tutorial helpful. And thanks for the suggestion to make a tutorial specific to the Character Setup. I’ll try to find some time to record one.
Really helpful tutorial video, thank you for making this!
That’s great to hear! Your feedback is really appreciated.
Awesome work, love the subtleties in the lip sync!
Wow, this is a very amazing tutorial, thanks for this. I want to repeat the same process on a MetaHuman, but I don't seem to understand the character setup. Could you help with the XML file you created for the MetaHuman, so I could just import it, test with it, and better understand how it works, since all MetaHumans use the same rig? I would really appreciate that. Hope to hear from you shortly.
Thanks for the comment! Feel free to share this video and the channel with others. Perhaps I'll create a more in-depth tutorial specific to creating a Character Setup file. For now, let me know if this one works for you: bit.ly/MH_FWR_CharSetup_01
@@echoperformances Wow, thanks so much for this, I really do appreciate it. I will try it out. If you don't mind, how may I reach out to you? I would love to connect and discuss some things with you. Thanks and God bless.
@@matthewisikhuemen8907 In the "About" section of our channel, you can find an e-mail address to contact us with your questions.
@@echoperformances Oh thanks so much, I will send an email. thanks once again, I really appreciate
Thanks for this tutorial. I have a problem with the character setup in Maya. Is it possible to share the Metahuman_CharSetup_01 file? Thank you very much.
Thanks for inquiring. Here you go: bit.ly/MH_FWR_CharSetup_01 Let me know if that works for you.
@@echoperformances Thank you very much . You've been really helpful.
Glad to help! Let me know if you encounter other blockers along the way.
Not sure if you ever use iClone, but the only issue I have with MetaHumans is that it only allows you to render (or publish) in Unreal, according to the Unreal disclaimer. I know a lot of people, including myself, would be curious about a tutorial using Retargeter with an iClone- or Daz-generated human.
I've never tried iClone or Daz, but I could add those to my list of future projects. Thanks for the suggestion! As for publishing your work with MetaHumans outside of Unreal, I believe Epic offers the option to license MetaHumans for use outside of Unreal. As long as it's used within Unreal, it comes included for free.
Ah, good to know! Thanks for the reply!
Great tutorial. Curious if MetaHumans export to 3ds Max yet? Last time I checked it was still not working in Bridge. Or if that option isn't working yet, is it possible to export a MetaHuman rig from Maya to 3ds Max via FBX with facial animation?
Hi Arvind. That’s an interesting proposition. I haven’t thought of experimenting since I work primarily in Maya. If you do get around to testing it, let me know.
Very well spoken, perfectly balanced between thorough tech and conceptual reasons behind the choices. Thanks for sharing!!
Thanks for the feedback! I really appreciate it.
Very informative tutorial. Thanks!
so cool! :o
Amazing. We can clearly see that you have fun doing your own personal projects. This clearly demonstrates your undeniable talent, using your connections, knowledge, and resourcefulness with new tools. 1000 thumbs up
THIS. IS. INCREDIBLE. I'm blown away at how quickly you put this together... you are one talented dude!!! Let's do it again sometime! THANK YOU 🙏🙏🙏
The mouth movements need to improve but this technology is very cool hey!
How did you make the head move in Unreal?