BlendArMocap - Realtime Hand, Face and Pose Detection in Blender using Mediapipe
- Published 5 Jul 2024
- Requires Blender 2.8+ (including 3.0+); run Blender with elevated privileges to install the dependencies.
Get BlendArMocap:
github.com/cgtinker/BlendArMocap
Report Bugs:
github.com/cgtinker/BlendArMo...
Want to support?
/ cgtinker
Timecodes
0:00 - Introduction
1:16 - Set up
1:52 - Preview
I love how people like you are making mocap available to everyone.
Thank you!
This runs so light on my PC, and it's not even a powerful one. I think you and Blender should work together and make this a built-in addon; both of you would benefit a lot!
agree... @cgtinker is a game changer
How do I download it in Blender? Please tell me as soon as possible.
I want to thank you so very much. I have just installed it, but I truly believe that you are going to save me a lot of time in mocap and animation. I love the face detection and all the rest too. Full body is unbelievable. Thanks again!!!
Somehow the YouTube algorithm put your video in my feed, and I cannot complain, what a good choice. Thank you for making your content and work available to everyone, keep it up!!
Fascinating dude, following this project very closely ❤️
CG thank you so much, this is a real gift to the community; if i can make it work in iclone I will absolutely donate, and hope others do, too
This is a real game changer for the Blender community. Thank you for this great app!
Thank you so much for your effort. The work you are doing really makes people's lives easier.
This is amazing. Blender continues to blow my mind. Thanks for this!!
Omg I just thought about something like that before I got to sleep yesterday.
I was like "man it would be awesome if I could just stand in front of a camera and make a walk animation inside of blender myself"
AND HERE IT IS
Totally going to test that out, thank you!
CG, Thank you very much! You are such a patient and kind guy!!!
My words can't describe how much respect I have for this man
agree, this man is so generous
Hey man, really like your video! Found out about MediaPipe yesterday and today about your work.
I've been developing a Blender addon (solely face mocap) for myself over the past few weeks too.
It uses some heavy maths with projections from 3D to 2D and vice versa, so I guess it's by far a lot
easier this way, both implementation- and maintenance-wise. I'm excited to compare it with your
work after I'm done switching to MediaPipe too!
Keep up the good work man, it's motivating to have others around with similar hobbies, hehe :)
thanks! keep it up ;p
using mediapipe is not too easy as all points are in global space, so it's still lots of work. therefore the facial animation part seems very similar to 2D - when you check it out you'll see.. :D
mind sharing your git?
@@cgtinker You're probably right, I shouldn't assume anything before actually facing it :P
As for my Repo, I didn't create one yet... But I think it's a good time to set up one. I'll share
it with you in the future and will be looking forward to feedback!
Thank you man, I tried AI mocap services but they are so expensive. Needed something like this very badly. I'm glad you're doing this.
Wow, this is amazing! Thank you for this addon, it's a huge timesaver for indie game development!
absolutely wonderful stuff youve made. thank you!
Keep it up man this is great!
I really admire you; your work has saved many enthusiasts like me, and I will support you with a cup of coffee. I hope you develop it further, like sound-driven mouth movement; I will wait and buy from you. Wishing you good health, I love it, thank you
thanks a lot!
I'll keep it on - have already some updates in mind :)
This is fantastic! I tried to do this a year ago, but just don't have the skills. Thank you so much!
Thanks! To actually use the data on a rig is quite tough! Took me way longer than initially expected.
Fantastic !
Is this rig usable as vrm for live tracking ?
This is great! Please keep going!
Amazing work man keep it up
Wow, this is amazing! Thanks for this work
update again! thank you! you're amazing!
This looks great!
Merci! Many thanks from France for your help and your work, you're amazing
Wow, totally awesome. Greetings from Berlin and thanks!
Thanks for this! I want to write motion-capture code myself in the near future (I am studying it now), but decided to try addons in the meantime.
Amazing work! You're fantastic
subscribed, it's an awesome sounding addon!
Thank you for making this mocap addon, and sorry for downloading it for free; I'm still in the learning phase and don't have money to support you. Thanks buddy for this, hope your channel grows
thanks, hope you have a good learning experience within the blender community :)
This is just what I need. Thinking of doing some free educational health videos
what a great job worth millions
Great effort, thanks for sharing 😊
Maaaan, thank you a lot for your content
[02:58] Making the metarig invisible and the rig visible confused me a bit ^^,
I wondered why it wasn't moving along like in the video.
Great stuff
good feedback, thanks :)
I'll move it aside in the future.
THANK YOU!!! you're AMAZING!
Thank you sir for this amazing add on
lots of love lots of respect
I just tested the addon. It's perfect. It's great..
great work brother
Very exciting !
you're a godsend and a good person, thanks for helping me simplify my workflow :3 a hug
Very good sir keep creating
this is awesome!! Great video!
Quick question: could we use hand gestures to drive animation? If I do an index-finger-pointing-up ("#1" pose), can I scale an object?
haha, funny idea. uhm, well obviously it isn't made for that, but I guess you kinda can. you'd have to set up drivers for that, but that shouldn't be too hard.
for this case, a simple sample: create a cube, add a driver to its scale, and use the x-rotation value of the "index mcp driver" to drive the scale.
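A minimal sketch of that driver setup, assuming objects named "Cube" and "cgt_index_finger_mcp" (the actual empty names depend on what BlendArMocap generates in your scene). The mapping helper is pure Python; the bpy part only runs inside Blender, so the import is guarded.

```python
# Drive a cube's scale from the x-rotation of a tracking empty, as
# suggested in the reply above. Object names are assumptions.
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None

def rotation_to_scale(rot_x, base=1.0, gain=2.0, floor=0.1):
    """Map a rotation value (radians) to a positive scale factor."""
    return max(floor, base + gain * rot_x)

if bpy is not None:
    cube = bpy.data.objects["Cube"]
    source = bpy.data.objects["cgt_index_finger_mcp"]  # assumed empty name

    for i in range(3):  # drive all three scale axes
        fcurve = cube.driver_add("scale", i)
        drv = fcurve.driver
        drv.type = 'SCRIPTED'
        var = drv.variables.new()
        var.name = "rot_x"
        var.type = 'TRANSFORMS'
        var.targets[0].id = source
        var.targets[0].transform_type = 'ROT_X'
        # same mapping as rotation_to_scale(), inlined as a driver expression
        drv.expression = "max(0.1, 1.0 + 2.0 * rot_x)"
```

The `floor` keeps the scale from collapsing to zero when the finger bends backwards.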
Really thanks a lot 😊.
Such a life saver 😋
Great App mate!
Downloaded everything, it works.
Thank you so much
Man, it's so fucking crazy! It's a really important addon for animation! Thank you!
Keep going bro😘😘
Since the empties are not parented, there is no hierarchy. Is it possible to parent the empties to each other? For example, the hand parented to the forearm, the forearm to the upper arm, the upper arm to the shoulder, and so on. How would I go about doing that?
Their positions are all in global space, so not really. The best approach would be to write a script.
@@cgtinker I found an addon on the Blender market called Empties to Bones, which can convert empties to bones that can then be used with Auto-Rig Pro's Remap tool to transfer bone animations. However, for the addon to function, the empties must be parented using the hierarchy described above. Because I don't know how to write a script, I'm guessing the addon won't work. I was hoping to find a way to retarget my animation onto Mixamo or other rigs besides Rigify.
@@recreatingcontents5795 you know what, I like the idea of an empty hierarchy and a ground-truth rig. I'll implement that.
@@cgtinker I'm excited to see how it works, as well as the Holistic (Experimental) and Leg animation transfer (Experimental) with foot locking. I appreciate how people like you are making mocap accessible to everyone. It is not an easy task, so keep up the good work. Thank you for your hard work in making this happen!
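The parenting script discussed in this thread could look roughly like this. The empty names in `HIERARCHY` are assumptions for illustration, not BlendArMocap's actual naming; the offset helper is pure Python, and the bpy part only runs inside Blender.

```python
# Parent tracking empties into a hierarchy while preserving their
# world positions, as requested above. Names below are hypothetical.
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None

def local_offset(child_world, parent_world):
    """World-space position of a child expressed relative to its parent."""
    return tuple(c - p for c, p in zip(child_world, parent_world))

HIERARCHY = [
    ("cgt_left_hand", "cgt_left_forearm"),      # hand -> forearm
    ("cgt_left_forearm", "cgt_left_upper_arm"), # forearm -> upper arm
    ("cgt_left_upper_arm", "cgt_left_shoulder"),# upper arm -> shoulder
]

if bpy is not None:
    for child_name, parent_name in HIERARCHY:
        child = bpy.data.objects[child_name]
        parent = bpy.data.objects[parent_name]
        child.parent = parent
        # keep the child where it is despite the new parent
        child.matrix_parent_inverse = parent.matrix_world.inverted()
```

Setting `matrix_parent_inverse` is the scripted equivalent of "Keep Transform" parenting in the UI.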
thank you so much for this.
A very big help for us. I did everything as in the lesson, although I am a complete beginner in Blender; I simply had a great desire to bring the model to life. I have only one question: is it possible to somehow make the model repeat the movements in real time instead of using Transfer? So as not to record the movements, but have the model repeat them immediately in the viewport. I'm really looking forward to this possibility. I will be ready to thank and support the project with money.
Yea but needs a fast machine!
If you just record for a second and directly transfer, the character gets linked to the tracking results. If you then start recording again, you will see realtime results on the character.
amazing man!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Respect!
amazing.
amazing app
Amazing
Amazing !
thankyou ❤
Wow ❤️🔥
yay!
Great tutorial, very well explained. Thank you. I'm trying to use the add-on on an animation made with the rokoko live plug-in and a mixamo rig. However, when I try to transfer the detection, it doesn't work. It places the animation to the side of my figure. Any help would be greatly appreciated
I guess I don't fully understand.
What's the side of your figure? Also, by default it just transfers motion to generated humanoid rigify rigs
@cg tinker ok I understand now, my rig is not a rigify rig it's a mixamo one
Thanks!
I guess it only works with Rigify. So, couldn't the scanned points be transferred as tracks, so we can connect those tracks to the bones in our own armature? Also, can't the full body (body + face + hands) be captured at once? Maybe that would be possible with a triple-camera system. But even now, being able to do this in real time is exciting. So thank you for offering such a plugin for free.
How incredible
I’m in the Blender rabbit hole for real now
Hi CG! First of all, fantastic add-on!
I'm a complete novice at this stuff, and I'm trying to map face mocap from BlendArMocap onto an already existing face model that I rigged using your "blendartrack - ar face tracking to rigify rig transfer" video. I can capture the motion data and transfer it to a metarig like you demonstrate in this tutorial, but I'm running into errors when I try to transfer the same data to my pre-existing rigged model.
Is it possible to do this, and if so, how?
Many thanks. :)
Thank you for this update. I noticed in the video that the legs did not move after you did the pose transfer; may I ask why? Thanks again, really appreciate it.
well uhm I didn't press the "experimental leg transfer button".. :P
@@cgtinker oh ok thanks.
Thank you so much. Is it possible to capture the movement of the face together with the movement of the hands, at the same time?
Yes, absolutely
youtube.com/watch?v=qtHf84YJvhk
Is there any way you could do a video on how to transfer the motion-captured data to a Daz3d character, such as exporting a BVH file? Or is it possible to export an FBX file that is "Mixamo compliant", so that it can be used on a Daz3d character? Thanks in advance
Super awesome. Would love it if it were possible to use video files as sources for the tracking. But otherwise very cool
Very nice add-on, thanks a lot! Is there any way to set the animation keyframes on the bones? Transfer applies the animation, but it stays dependent on the cgt_DRIVERS parent and can't be exported.
u can bake the animation: just go into Pose Mode and select all bones. To bake:
Pose > Animation > Bake Action
Make sure "Visual Keying" and "Clear Constraints" are selected; everything else is optional.
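The same menu path can be run as a script via `bpy.ops.nla.bake`, the operator behind Pose > Animation > Bake Action. The settings mirror the advice above; everything else is left at assumed defaults, and the bpy part only runs inside Blender with the rig in Pose Mode and its bones selected.

```python
# Bake the driver/constraint motion down to keyframes on the bones.
try:
    import bpy  # only available inside Blender
except ImportError:
    bpy = None

def bake_settings(start, end):
    """Operator settings matching the advice above."""
    return dict(
        frame_start=start,
        frame_end=end,
        only_selected=True,
        visual_keying=True,      # bake the constraint-driven motion
        clear_constraints=True,  # detach from the cgt_DRIVERS empties
        bake_types={'POSE'},
    )

if bpy is not None:
    scene = bpy.context.scene
    # run in Pose Mode with all bones selected
    bpy.ops.nla.bake(**bake_settings(scene.frame_start, scene.frame_end))
```

After baking, the action lives on the armature itself and exports normally (e.g. to FBX).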
That's awesome. May I ask though why it's only working with the old face rig? Are there benefits to the old rig or do you plan to support the new face rig as well? Thanks again!
I plan to support it in the future. Main reason is that I usually develop in blender v2.9 and port from there to other versions.
Thank you for your efforts and making it available for everyone. I'd like to know if we could transfer the face animation to unreal engine for metahumans, If you could make a tutorial about it that would be great. Thank you!
Amazing! Can the plugin also handle image sequences as input or just input from webcam? If we can link it to footage that would be a great tool for visual effects. Thanks in advance!
you can import a video ;)
It is recommended that multiple channels can be input at the same time and can be captured together. The face is not supposed to consider using the blend shape key.
what do you mean?
@@cgtinker translation problem.
1. If multiple cameras could record and analyze at the same time, performers wouldn't have to repeatedly record hand movements and body movements separately.
2. I have watched a lot of AI expression tracking; in fact, Ziva also drives Blender's shape keys by analyzing the facial shape, so unstable tracking can be compensated with shape keys.
@@cgtinker Of course, I prefer to be able to perform accurate tracking later, so the effect should be better than the effect of real-time tracking
@@dayspring4709 thanks for the feedback!
Thank you. Can you make more videos on full-body capture, specifically a person walking?
It would help a lot
Hello there, love the addon. I am studying this topic as well. I am not as good, but may I ask if there will be a way to use pre-recorded videos as the feed for mediapipe?
Not yet, but in the future yep.
@@cgtinker thank you for the reply.
cool!
The only limitation I see with this addon is the webcam. I have an iMac, and my room is not big enough to capture my full body, so it would be very difficult to position the 27-inch iMac properly. I wish you'd add a pre-recorded video option so we could shoot motions with our phones from anywhere 🙏💫
Great work.
I have a question:
I followed the steps you mentioned in the video. The mediapipe tracking looks fine. However, when transferring the animation, the hands are backwards, and when I bend my fingers they bend towards the back of the hand. Do you know what might be the reason for this problem?
Thanks
Most likely due to bone rolls. I've covered that in the 'rigging rigify metarig' tutorial.
Cool addon. How can you use Plask with your facetracking?
this is amazing, but do you mind doing a video where you connect the armature to a human mesh?
I'm modelling a character right now, will prolly use it for the video! :)
Thank you for making this! I can't even use rigfy yet, but when I can, I would definitely try to use this add-on. Thank you!!
Just curious: to be able to make such a useful app, what did you have to learn? The C language? Python? More? And how many years did you need to accumulate the knowledge to get here?
I'm working on a tutorial series and plan to include how to work with rigify :)
The detection is driven by mediapipe; the add-on is mainly about working with the results. The tough part is to find proper ways to transfer the data to rigs and make the process easy and reliable across different users, rigs and operating systems. (The tracking data are only points in global space without any rotations, so a lot of linear algebra and character animation knowledge is required in this one).
It's not about the language; when you've been developing for some time you can read most of them, as the syntax is usually similar. I'm mainly using C# and Python. The add-on here is 100% Python. While Python is very slow compared to C, you can use numpy in Python, which brings the computational power of C.
To be honest, I'm still learning and learned a lot making this add-on; it has taken me about 3-4 months so far. I'm self-taught and started to focus mainly on development about 3 years ago.
Along the way I was kinda surprised about one thing: you can make very useful and cool stuff in a very short time (like a weekend or a few hours). However, that's only when making something mainly for yourself; when making tools for lots of users, the time effort skyrockets..
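To illustrate the "points in global space, no rotations" problem the reply describes, here is a small stand-alone sketch of recovering an orientation angle from two tracked landmarks, e.g. aiming a bone from shoulder to elbow. The rest-pose direction is an assumption; this is pure vector math, no Blender required.

```python
import math

def direction(a, b):
    """Unit vector pointing from landmark a to landmark b."""
    v = [b[i] - a[i] for i in range(3)]
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

def angle_between(u, v):
    """Angle in radians between two unit vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(dot)

# two global-space landmarks, as mediapipe delivers them
shoulder = (0.0, 0.0, 0.0)
elbow = (0.0, 0.0, -1.0)
rest = [0.0, -1.0, 0.0]  # assumed rest-pose bone direction

aim = direction(shoulder, elbow)
print(angle_between(rest, aim))  # a quarter turn: pi/2
```

A real retargeting pipeline has to do this per bone, per frame, and also resolve the roll around the bone axis, which is where most of the linear algebra effort goes.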
@@cgtinker Thank you very much for your answer and planning to create a tutorial on rigify!
I am really a beginner for Blender, and studying and struggling everyday.
I am hoping to be able to use Blender freely in the near future.
@@daysmiscellaneous9569 yea, it's tough to get into 3D, but a lot of fun! :)
I wish u could use non-rigify bones for this to work.
It's really an awesome add-on. I was thinking: what if you made it import a video file from the PC and track from it with high precision? It would be good for pose tracks and the legs, because we could record in a different environment and use the animation with other footage. Anyway, thanks a lot 😊
Honestly, I've been trying to figure out how to do this. It almost seems like most of the code is there in this addon; you'd just have to input a video file instead of webcam footage. Theoretically, it shouldn't be that hard to implement. I'll look into it when I have time this week.
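The idea in that comment can be sketched with OpenCV, which MediaPipe setups typically already use: `cv2.VideoCapture` accepts either a device index or a file path. The file name is hypothetical, and later replies in this thread note that newer BlendArMocap versions expose a video/webcam dropdown for exactly this.

```python
# Read frames from a video file instead of a webcam. The cv2 import is
# guarded so the tiny helper runs anywhere.
try:
    import cv2  # opencv-python
except ImportError:
    cv2 = None

def capture_source(path=None):
    """Argument for cv2.VideoCapture: a file path, or webcam device 0."""
    return path if path is not None else 0

if cv2 is not None:
    cap = cv2.VideoCapture(capture_source("clip.mp4"))  # hypothetical file
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # ...feed `frame` to the mediapipe solution here...
    cap.release()
```

The per-frame loop is identical either way; only the capture source changes.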
This is so cool and easy. Thank you. Is there a way to bring in outside data for detection?
outside data?
Amazing tutorial. Do you know a way to do it with Auto Rig Pro?
I can't copy and paste flipped poses. For example, if I make a fist with one hand and want to copy it to the other because only one hand is in the shot, how would I go about doing that? Also, how do I delete motion so there is only one posed frame?
This is such a great tool and a wonderful guide as well. This question shows my ignorance of the MediaPipe API, but does your addon install all the functionality to do the machine learning locally, or is it making calls to MediaPipe's API? Could this addon work without an internet connection once installed?
Yep once the dependencies are installed no internet connection is required
@@cgtinker That is fantastic news. Excited to try this out.
Hi! Is it possible to get more fps than that? I mean, it's so jittery, and I don't know if it's from my webcam or what. Or must I use a better webcam? Thanks!
Great add-on. I have a question: how do I stop the capture in real time, so I can focus on the recorded movements? It keeps capturing and I can't close the video window. Thanks for your attention; I know you probably get a ton of requests like this.
No problem. Did you try to press 'Q'?
@@cgtinker It worked like a charm. Muchas gracias. :)
hi sir, how can I set it up on a Windows laptop? I already installed the addon, but there's no stream detection cam
How would I bake the motion from drivers to keyframes?
This is awesome. Is there any plan to support using video footage as a source instead of a webcam?
yep! there's a dropdown (video / webcam)
Which model are you using inside mediapipe for pose tracking? I find blaze pose the most stable one
Hey there, I’m having a problem sir: I can’t get mediapipe to show "true" in the dependencies.