I love how people like you are making mocap available for everyone.
Thank you!
My words can't describe how much respect I have for this man
Agree, this man is so generous.
This works so light on my pc ..and its not even powerful one, I think you and blender should work together and make this feature as a normal addon, both of you would benefit alot !
Agree... @cgtinker is a game changer.
How do I download it in Blender, bro? Please tell me as soon as possible.
Somehow the YouTube algorithm put your video in my feed and I can't complain, what a good choice. Thank you for making your content and work available for everyone, keep it up!!
This is a real game changer for the Blender community. Thank you for this great app!
Keep it up man this is great!
CG thank you so much, this is a real gift to the community; if i can make it work in iclone I will absolutely donate, and hope others do, too
This is just what I need. Thinking of doing some free educational health videos
Omg I just thought about something like that before I got to sleep yesterday.
I was like "man it would be awesome if I could just stand in front of a camera and make a walk animation inside of blender myself"
AND HERE IT IS
Totally going to test that out, thank you!
This is amazing. Blender continues to blow my mind. Thanks for this!!
I want to thank you so very much. I have just installed it, but I truly believe that you are going to save me a lot of time in mocap and animation. I love the face detection and all the rest too. Full body is unbelievable. Thanks again!!!
Fascinating dude, following this project very closely ❤️
[02:58] Making the metarig invisible and the rig visible confused me a bit ^^
I wondered why it wasn't moving along like in the video.
Great stuff!
Good feedback, thanks :)
I'll move it out of the way in the future.
Hey there, I'm having a problem, sir: I can't get mediapipe to show "true" in the dependencies.
This is fantastic! I tried to do this a year ago, but just don't have the skills. Thank you so much!
Thanks! Actually using the data on a rig is quite tough! It took me way longer than I initially expected.
Fantastic !
Is this rig usable as a VRM for live tracking?
Brother, is this legit? I'm mind-blown. You should be up in the sky, man; you clearly don't get the recognition you deserve.
Since the empties are not parented there is no hierarchy. Is it possible to parent the empties to each other, for example the hand, shoulder be parented to the forearm, forearm parented to arm, and arm parented to shoulder, and so on. How would I go about doing that?
Their positions are all in global space, so not really. The best option would be to write a script.
@@cgtinker I found an addon on the Blender Market called Empties to Bones, which can convert empties to bones that can then be used with Auto-Rig Pro's Remap tool to transfer bone animations. However, for the addon to work, the empties must be parented using the hierarchy described above. Because I don't know how to write a script, I'm guessing the addon won't work. I was hoping to find a way to retarget my animation onto Mixamo or other rigs besides Rigify.
@@recreatingcontents5795 you know what, I like the idea of an empty hierarchy and a ground-truth rig. I'll implement that.
@@cgtinker I'm excited to see how it works, as well as the Holistic (Experimental) and Leg animation transfer (Experimental) with foot locking. I appreciate how people like you are making mocap accessible to everyone. It is not an easy task, so keep up the good work. Thank you for your hard work in making this happen!
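For anyone who wants to script the hierarchy discussed in this thread before the feature lands: the core step is turning global empty positions into parent-relative offsets. Below is a minimal pure-Python sketch; the chain names and positions are invented for illustration. Inside Blender you would apply the same math to bpy objects (and when parenting with `child.parent = parent`, also set `child.matrix_parent_inverse` so children keep their world positions).

```python
def to_local(world_positions, chain):
    """Convert world positions to offsets relative to each parent in `chain`.

    chain: list of (child, parent) name pairs; parent None means root.
    """
    local = {}
    for child, parent in chain:
        cx, cy, cz = world_positions[child]
        if parent is None:
            local[child] = (cx, cy, cz)  # root keeps its world position
        else:
            px, py, pz = world_positions[parent]
            local[child] = (cx - px, cy - py, cz - pz)
    return local

# Hypothetical arm chain, positions in meters (made-up values).
world = {
    "shoulder":  (0.0,  0.0, 1.5),
    "upper_arm": (0.5,  0.0, 1.5),
    "forearm":   (0.75, 0.0, 1.25),
    "hand":      (1.0,  0.0, 1.0),
}
chain = [("shoulder", None), ("upper_arm", "shoulder"),
         ("forearm", "upper_arm"), ("hand", "forearm")]

print(to_local(world, chain)["hand"])  # offset of hand relative to forearm: (0.25, 0.0, -0.25)
```

The same dictionary of offsets could then feed an addon like Empties to Bones, which expects a parented hierarchy rather than free-floating empties.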
CG, Thank you very much! You are such a patient and kind guy!!!
Man, I just joined this channel and became your fan in 5 minutes. You are the best!
Hey man, really like your video! Found out about MediaPipe yesterday and today about your work.
I've been developing a Blender addon (solely face mocap) for myself over the past few weeks too.
It uses some heavy maths with projections from 3D to 2D and vice versa, so I guess it is by far a lot easier this way, both implementation- and maintenance-wise. I'm excited to compare it with your work after I'm done switching to MediaPipe too!
Keep up the good work man, it's motivating to have others around with similar hobbies, hehe :)
Thanks! Keep it up ;p
Using MediaPipe is not too easy, as all points are in global space, so it's still lots of work. That's why the facial animation part seems very similar to 2D; you'll see when you check it out.. :D
Mind sharing your git?
@@cgtinker You're probably right, I shouldn't assume anything before actually facing it :P
As for my Repo, I didn't create one yet... But I think it's a good time to set up one. I'll share
it with you in the future and will be looking forward to feedback!
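The point above about MediaPipe returning landmarks in global space is worth unpacking: to drive a rig, pairs of segments typically get converted into joint angles. Here is a minimal sketch with made-up landmark positions; this is a generic illustration of the math, not the addon's actual implementation.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (radians) formed by segments b->a and b->c.

    Points are (x, y, z) tuples in the same global space that
    MediaPipe reports its landmarks in.
    """
    v1 = tuple(ai - bi for ai, bi in zip(a, b))
    v2 = tuple(ci - bi for ci, bi in zip(c, b))
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

# Hypothetical shoulder/elbow/wrist landmarks forming a right angle.
shoulder, elbow, wrist = (0.0, 0.0, 1.0), (1.0, 0.0, 1.0), (1.0, 0.0, 0.0)
print(math.degrees(joint_angle(shoulder, elbow, wrist)))  # 90.0
```

The rotation you would actually key on a bone is this angle expressed in the parent bone's local space, which is exactly the "still lots of work" part mentioned above.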
This looks great!
Thank you so much for your effort. The work you are doing really makes people's lives easier.
Woow, This is Amazing! Thank you for this addon this is huge timesaver for indie-game development!
This is great! Please keep going!
Wow, totally awesome. Greetings from Berlin, and thanks!
Absolutely wonderful stuff you've made. Thank you!
Thank you man, I tried AI mocap services but they are so expensive. I needed something like this very badly. I'm glad you're doing this.
I really admire you; your work has saved many enthusiasts like me. I will support you with a cup of coffee. I hope you develop it further, like adding sound for the mouth joint. I will wait and buy from you. Wishing you good health. I love it, thank you!
thanks a lot!
I'll keep at it; I already have some updates in mind :)
Thank you for making this mocap.
And sorry for downloading it for free; I'm still learning, and I don't have money to support you.
Thanks buddy for this,
hope your channel grows!
Thanks, hope you have a good learning experience within the Blender community :)
what a great job worth millions
update again! thank you! you're amazing!
I wish you could use non-Rigify bones for this.
Amazing work! You're fantastic.
Very exciting !
Merci! Really, thanks from France for your help and your work, you're amazing.
subscribed, it's an awesome sounding addon!
great work brother
It is recommended that multiple channels can be input at the same time and can be captured together. The face is not supposed to consider using the blend shape key.
what do you mean?
@@cgtinker Translation problem.
1. If multiple cameras could record and analyze at the same time, it would be convenient for performers: they wouldn't have to record hand movements and body movements separately.
2. I have watched a lot of AI expression tracking; Ziva, for example, drives Blender's shape keys by analyzing the facial shape, so unstable tracking can partly be compensated with shape keys.
@@cgtinker Of course, I'd prefer to be able to do accurate tracking afterwards; the result should be better than real-time tracking.
@@dayspring4709 thanks for the feedback!
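One way to illustrate the "compensate unstable tracking" idea from this thread: smooth the per-frame values (shape-key weights, bone rotations) before keying them. Below is a generic exponential-moving-average filter as a sketch; this is a standard technique, not the addon's actual method, and the sample values are invented.

```python
def smooth(values, alpha=0.3):
    """Exponential moving average over a sequence of per-frame values.

    Higher alpha follows the raw signal more closely; lower alpha
    suppresses more jitter at the cost of added lag.
    """
    out = []
    prev = None
    for v in values:
        # First sample passes through; afterwards blend new vs. previous.
        prev = v if prev is None else alpha * v + (1 - alpha) * prev
        out.append(prev)
    return out

# A jittery shape-key value per frame (made-up data).
noisy = [0.0, 1.0, 0.0, 1.0, 0.0]
print(smooth(noisy, alpha=0.5))
```

With `alpha=0.5` the wild 0/1 flicker settles toward a stable mid value, which is the effect the commenter describes when shape keys backstop noisy tracking.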
I just tested the addon. It's perfect. It's great..
Thank you sir for this amazing add on
Such a life saver 😋
Maaaan, thank you a lot for your content
Downloaded everything, it works.
THANK YOU!!! you're AMAZING!
A very big help for us. I did everything as in the lesson, although I'm a complete beginner in Blender; I simply had a great desire to bring the model to life. I have only one question: is it possible to make the model repeat the movements in real time instead of using Transfer? So that instead of recording the movements, the model immediately repeats them in the render window. I'm really looking forward to this. I'd be ready to thank and support the project with money.
Yeah, but it needs a fast machine!
If you record for just a second and directly transfer, the character gets linked to the tracking results. If you then start recording again, you will see real-time results on the character.
Very good sir keep creating
Wow, this is amazing! Thanks for this work
Thanks for this! I want to write motion-capture code myself in the near future (I'm studying it now), but decided to try add-ons in the meantime.
Great effort, thanks for sharing 😊
Great App mate!
Man, it's so fucking crazy! It's a really important addon for animation! Thank you!
The only limitation I see with this addon is the webcam. I have an iMac and my room is not big enough to capture my full body, so it would be very difficult to position the 27-inch iMac properly. I wish you would add a pre-recorded video option so we could shoot motions with our phones from anywhere 🙏💫
lots of love lots of respect
thank you so much for this.
amazing app
I'm in the Blender rabbit hole for real now.
This is awesome!! Great video!
Quick question: could we use hand gestures to drive animation? If I do an index-finger-pointing-up pose, can I scale an object?
Haha, funny idea. Well, obviously it isn't made for that, but I guess you kind of can. You'd have to set up drivers, but that shouldn't be too hard.
For this case, a simple example: create a cube, add a driver to its scale, and use the x-rotation value of the "index mcp driver" to drive the scale.
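The driver setup described above can be wrapped in a small mapping function. This is only a sketch with made-up rotation and scale ranges; in Blender you could register it via `bpy.app.driver_namespace` and call it from the driver's scripted expression, feeding it the x-rotation of the "index mcp driver" object.

```python
def finger_to_scale(rot_x, lo=0.0, hi=1.2, min_s=0.5, max_s=3.0):
    """Map a finger joint's x-rotation (radians) to an object scale.

    The rotation range (lo..hi) and scale range (min_s..max_s) are
    invented values; tune them to your rig. Inside Blender, register
    this with:
        bpy.app.driver_namespace["finger_to_scale"] = finger_to_scale
    and use finger_to_scale(rot_x_var) as the driver expression.
    """
    t = (rot_x - lo) / (hi - lo)        # normalize rotation to 0..1
    t = max(0.0, min(1.0, t))           # clamp outside the range
    return min_s + t * (max_s - min_s)  # linear blend between scales

print(finger_to_scale(0.0))  # finger straight -> minimum scale, 0.5
print(finger_to_scale(1.2))  # finger fully curled -> maximum scale, 3.0
```

Clamping matters here: tracked rotations jitter past the expected range, and without the clamp the cube would shrink below `min_s` or grow past `max_s`.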
Really thanks a lot 😊.
Looks nice! May I ask, do you need an internet connection after all the dependencies have been downloaded? Or is it totally internet-dependent?
Once the dependencies are installed you can turn off the internet. It runs locally.
You're a godsend and a good person; thanks for helping me simplify the workflow :3 A hug.
Keep going bro😘😘
Great tutorial, very well explained, thank you. I'm trying to use the add-on on an animation made with the Rokoko live plug-in and a Mixamo rig. However, when I try to transfer the detection it doesn't work; it places the animation to the side of my figure. Any help would be greatly appreciated.
I guess I don't fully understand.
What's "the side of your figure"? Also, by default it only transfers motion to generated humanoid Rigify rigs.
@cgtinker ok, I understand now; my rig is not a Rigify rig, it's a Mixamo one.
Installed it, but every time I click 'start detection' it crashes Blender. The generated crash report is long, but this stuck out: "Termination Reason: Namespace TCC, Code 0x0". Any ideas?
Not sure, are you running as admin? Which operating system are you using?
Can't get it to work, unfortunately. I've tried on Windows and Mac, Blender 3.0 and 3.1, running with elevated privileges on both; it says "...returned non-zero exit status 1". Seems like an awesome add-on though; thank you for your hard work on it.
Bummer, what's your operating system? I think macOS 10.15+ is required. If you are using an Apple Silicon Mac, make sure to download the Intel version of Blender; the Apple Silicon build of Blender is not supported.
I guess it only works with Rigify. Could the scanned points be transferred as tracks instead, so we can connect those tracks to the bones in our own armature? Also, can the full body (body + face + hands) be captured? Maybe that's possible with a triple-camera system. But even now, being able to do this in real time is exciting. Thank you for offering such a plugin for free.
This is amazing, but would you mind doing a video where you connect the armature to a human mesh?
I'm modelling a character right now, I'll probably use it for the video! :)
Amazing !
I've been trying to install it, but I can't get mediapipe 0.8.10 installed on my M1 Mac. It says "ERROR: Could not find a version that satisfies the requirement mediapipe>=0.8.10 (from versions: none) ERROR: No matching distribution found for mediapipe>=0.8.10". I see some people on GitHub have this issue as well. I don't know where to go from here. Can you help? I'm new to all this, so I don't know what I'm doing wrong.
I've heard people had success installing it when using the Intel version of Blender for Mac. The Arm build of Blender isn't supported by mediapipe.
Thank you for this update. I noticed in the video the legs did not move after you did the pose transfer, may i ask why? thanks again, really appreciate it.
Well, uhm, I didn't press the "experimental leg transfer" button.. :P
@@cgtinker oh ok thanks.
amazing man!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
That's awesome. May I ask though why it's only working with the old face rig? Are there benefits to the old rig or do you plan to support the new face rig as well? Thanks again!
I plan to support it in the future. Main reason is that I usually develop in blender v2.9 and port from there to other versions.
Thank you so much. Is it possible to capture the movement of the face together with the movement of the hands, at the same time?
Yes, absolutely
ua-cam.com/video/qtHf84YJvhk/v-deo.html
Amazing
Thanks a lot for this awesome work!!! Can I insert keyframes for the pose bones as well?
Hi, when installing dependencies it shows an error: "Installation of mediapipe failed. Check system console output." Can anyone help me? Thanks.
Respect!
Thank you. Could you make more videos on full-body capture, specifically of a person walking?
Thank you so much
How would I bake the motion from drivers to keyframes?
On my Blender 3.3.1, whether it's launched with administrator rights or not, when I transfer the hand mocap to the 'rig' I can't see the bones move like in your tutorial; only the 'rig' moves. Can you tell me what I'm missing?
I saw at timestamp 3:12 that when you hide the 'metarig', another set of bones remains. I don't have those bones.
I've heard there are issues in Blender 3.3.
I'll look into that soon. In Blender 2.93, with the officially released version from GitHub, there shouldn't be any issues.
@@cgtinker No change with 2.93.11 LTS in portable zip mode. I'm looking into transferring your drivers to my armature manually...
Hello there, love the addon. I'm studying this topic as well. I'm not as good, but may I ask if there will be a way to use pre-recorded videos as the input feed for mediapipe?
Not yet, but in the future yep.
@@cgtinker thank you for the reply.
This is such a great tool and a wonderful guide as well. This question shows my ignorance of the MediaPipe API, but does your addon install all the functionality to do the machine learning locally, or is your addon making calls to MediaPipe's API? Could this addon work without an internet connection once installed?
Yep once the dependencies are installed no internet connection is required
@@cgtinker That is fantastic news. Excited to try this out.
Amazing! Can the plugin also handle image sequences as input, or just input from a webcam? If we could link it to footage it would be a great tool for visual effects. Thanks in advance!
you can import a video ;)
After successfully tracking the movement, how do I retarget it to any 3D model?
How can the face data be transferred to Human Generator characters?
Yeah, check out the new release ua-cam.com/video/qtHf84YJvhk/v-deo.html
OmegaWOW
RESPECT BRATAN
Great job! I want to know how I can stop the camera recording.
Press 'q' or the "stop recording" button (it appears after pressing "start recording").
@@cgtinker You, sir, are a legend. Thank you so much for this wonderful addon. By the way, it doesn't work with the upgraded face rig, right?
Great job, but may I ask... last time I tried blendartrack, I finished installing the add-on, but there is only a step for installing the add-on... any tips?
I don't really understand; blendartrack doesn't require external dependencies.
When setting up BlendArMocap, it's required to run Blender as admin and click the "install dependencies" button in the add-on's preferences.
@@cgtinker Thanks for your answer, and sorry for my late reply. Now I can use blendartrack without the issue. Your software is really helpful for the community. After my own project is finished and paid, I hope I can support you on Patreon :D Sorry for my bad English; greetings from Indonesia!
Very nice, but where are the installation links for mediapipe and the rest? :(
Wow ❤️🔥
This is awesome. Is there any plan to support video footage as a source instead of a webcam?
yep! there's a dropdown (video / webcam)
Great add-on. I have a question: how do I stop the capture in real time, so I can focus on the recorded movements? It keeps capturing and I can't close the video window. Thanks for your attention; I know you probably have a ton of requests like this.
No problem. Did you try to press 'Q'?
@@cgtinker It worked like a charm. Muchas gracias. :)
Which model are you using inside MediaPipe for pose tracking? I find BlazePose the most stable one.
Hello, I just bought the addon and downloaded version 152, but on Blender 3.3 the addon doesn't even install; there is an error message every time I click "install dependencies". I spent more than 5 hours, but I can't install it on Blender 3.3 at all. I need your help. Thanks.
When I try to start detection in Blender with a webcam or clip, I get an error and Python code. Why? I'm on version 4.0. Please help :(
You need an older version for this to work, I think, since the OP has discontinued maintenance of this add-on.
When I try to install dependencies it keeps telling me: no module named 'mediapipe'.
What operating system and python version are you using?
@@cgtinker I use Blender 3.0.1 on Windows 10; the Python console says 3.9.7. I don't know how to make it work; do I have to write something in the console?
@@mvberg5479 can you send me the full log either to my mail (hello@cgtinker.com) or post it at GitHub? github.com/cgtinker/BlendArMocap/issues
It is really an awesome add-on... I was thinking, what if you made it possible to import a video file from the PC and track from it with high precision? It would be good for pose tracking and the legs, because we could record in a different environment and use the animation in other footage. Anyway, thanks a lot 😊
Honestly, I've been trying to figure out how to do this. It seems like most of the code is already there in this addon; you'd just have to input a video file instead of webcam footage. Theoretically, it shouldn't be that hard to implement. I'll look into it when I have time this week.
amazing.
Hi CG Tinker, I would love to try out your add-on, but I cannot install the dependencies even though I opened Blender as admin (on macOS). I'd be grateful for a hint!
I can't really tell based on that information. However, I'm working on a standalone executable to get rid of the dependencies in Blender.. it will take a while though.
@@cgtinker Thank you! Looking forward to it!
Does this only work with live streaming video? Can I use a recorded video?
Both work.