OpenPose to Blender Facial Motion Capture Tutorial
- Published Apr 24, 2020
- In this tutorial you will see how an OpenPose artificial-intelligence facial motion capture can be run on any video and mapped to a Blender character. All tools and methods are freely available. The script can be downloaded here:
github.com/nkeeline/OpenPose-...
To get OpenPose go here:
github.com/CMU-Perceptual-Com...
I am an engineering generalist combining electrical, mechanical and software engineering to solve problems and make cool stuff!
Really cool stuff, can't wait to try it out. My plan is to use it with my cat's face as input to try to guess what kind of food he currently wants lol
Great info! I've been following OP for a while and now I have a reason to experiment with it.
Nice. Now let's do this for bones. A dream to rig and control through a generic camera.
Great work. Please do full body motion capture.
Thanks for your work, i'll check this out.
To have the body would be so awesome, a game changer
It's great! Thank you for this video!
Awesome! This is what I need!
Nice effort, man. I am gonna try it in my next anim. Keep it up. 👍
Wow! It's awesome! I'll try to import a UE4 MetaHuman head to Blender and run some tests since the head is already rigged. Mocap from pre-recorded video data is the BEST option. Thank you very much!
Just released the full addon on GitHub.
@@checkeredbug8015 Awesome! You're awesome! Thank you, I'll download it very soon ^^ (right now, I'm preparing some models to receive the facial mocap. I gave up MetaHuman for now).
Please make fullbody tutorial
So great thank youuuu!!!!
Great facial motion tutorial. Are you also going to include a full body motion capture tutorial at some point?
Maybe, comments like this help me decide.
@@checkeredbug8015 It will be really helpful if you can create a tool for full body motion capture.
@@checkeredbug8015 Yup, I'm working on getting it working now. Full body would be useful, but I still appreciate the work you've done!
Yaa please help me to do full body motion capture
@@checkeredbug8015 Would love to see that! Just subbed and thanks for your work :)
amazing....openpose working 100%.....👍
thank you for this :-)
Cool video Brother Real Information!
Thanks for the comment...yeah I am trying to put out some videos on how to do advanced stuff so it's accessible to everyone since it took me a Long time to figure a lot of this out.
Hi! Cool plugin for blender with facial capture. Do you have a similar one for the hands?
Interesting... could I use this to work with Modo?
I like your music themes that you probably created yourself.
Yes, made it myself with LMMS.
Awesome.. Thanks ..
You're welcome... had fun doing it :)
Nice one om gah
Thank you for the great videos! What bone is the lowerFaceRig and where is its location?
It was an extra bone on the rig that I had that offset the jaw bone; comment it out if you don't need it. I just posted another file on GitHub tweaked for Auto-Rig Pro that I believe has that one commented out.
Could you please give an OpenPose tutorial for exporting to a VMD file for MMD... thank you
Can we export this face recognition to Unity? If possible then please make a video on it...
Hi
Please help with the following queries:
1. I have a Windows 10 machine with 32GB RAM, an Intel i7 (10th Gen) with 4 cores and an Intel 16GB GPU. I downloaded the OpenPose 1.7 (Windows portable) CPU version. When I try to run OpenPoseDemo.exe it opens the camera, but the performance is too poor. Do I need to change some configuration, or is the hardware that I am using insufficient for OpenPose?
2. If everything works, can I use the output over the web, i.e. the camera opens in a browser and a 3D responsive model is rendered in the browser which mimics the facial expressions/movements?
Thanks
What are those bands on his arm for at 9:12? Do you need them to capture the body?
Thanks for your tutorial. I just downloaded the latest addon with the UI, and I don't know why it doesn't work after I load the Rigify mapping file and the sample JSON from GitHub (openface doesn't work on my side either; to narrow down the issue, I just tried the sample JSON and mapping file, still no luck :( )
Can it run in real time using a web camera? If yes, how can it be done to avoid dropping FPS?
Fun tutorial, but what was the mocap like setup you had in the end? A DIY setup?
I built an Arduino interface to Blender and used BNO055 IMUs to make my own mocap suit. It was DIY.
However, I wish OpenPose were available free for commercial use.
Great job, what mobile phone do I need..?
Could you provide your blend file?
I'm just trying stuff out
I use Macintosh only. I couldn't get it from the two download links. In Blender 3.3.0 Alpha it doesn't appear in the 3D Viewport > Tool panel either.
Great. Thanks in advance. Can we go on this path for any character we create?
Yes, find the file in github that creates an entire ui and play around.
Can I get a copy of the boy rig so I can see the bone orientation... Thanks
Thanks for this! Do you know if it's possible to import the .json files into Blender *not* as a face/character rig? More like "regular" face tracking data as if you've motion tracked a bunch of tracking markers on a face directly in Blender? See, what I'm trying to do is to use the tracking data on a series of images to de-age a human face from a live action video. Sort of like what you can do with Lockdown and Mocha Pro and others if you've heard of those, but on a budget, haha. I have absolutely zero knowledge when it comes to scripting so excuse the (possibly) stupid question! :^)
Yes, the data comes in as points that you can do anything you want with, but it would require scripting.
Hi there, firstly, awesome tutorial! 👍 I'd just like to ask about the placement of a certain joint/bone. I looked at the script and saw a joint named "lower face rig"; I couldn't make out which facial bone this is.
lower_face = DestArm.pose.bones["lowerFaceRig"]
Would love to try this, but figured I ask about this particular question first.
Hopefully I can answer this concisely. The problem with most facial rigs is that the jaw moves and parts the lips. The information from the JSON gives you the jaw as a point on the chin, and I use the distance between the nose and the chin to set the chin position. If the rig has a jaw bone that sets the position of the chin, I calculate the angle of the bone with an arctangent or such. If I then set the chin position and the lips part, I have to put them back together again, because I need to set the lip positions separately with respect to the nose. In the one script there is an internal jaw bone that moves the lips back together, which I believe is the bone you are asking about; I set it to the inverse angle of his jaw bone to move his lips back together. On the Auto-Rig Pro version, I just move the lips back together by a calculated offset distance caused by the jaw, moving the lips back up again. Anyway, hope that helps. Making the lips move separately from the jaw position is the most complex part of the script.
@@checkeredbug8015 Oh, that makes sense, I get it now. Thank you very much! 👍
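A rough sketch of the jaw math described above. All numbers, the pivot depth, and the bone names here are hypothetical illustrations, not values from the actual script:

```python
import math

def jaw_angle(rest_dist, cur_dist, pivot_depth):
    """Estimate the jaw-open angle from how much the nose-to-chin
    distance grew past its rest value, given an assumed distance
    from the chin to the jaw hinge (same pixel units)."""
    drop = cur_dist - rest_dist  # extra nose-to-chin distance this frame
    return math.atan2(drop, pivot_depth)

# Made-up pixel measurements: the chin dropped 30 px past its rest
# distance, and the chin-to-hinge depth is assumed to be 120 px.
angle = jaw_angle(100.0, 130.0, 120.0)

# Inside Blender you would then rotate the jaw bone by `angle` and a
# compensating "lowerFaceRig"-style bone by -angle to close the lips,
# roughly like (bone names hypothetical):
#   arm.pose.bones["jaw"].rotation_euler.x = angle
#   arm.pose.bones["lowerFaceRig"].rotation_euler.x = -angle
```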
Wow, nice, thanks.. Did you hear about the 52 ARKit blendshapes? If this program can integrate with blendshapes it's gonna have good quality mocap. Really appreciate your job.
Interesting, unfortunately I don't own any apple products...thanks for the tip
how to get the Facial Capture Dev.blend file?
I wanna create some videos, but I don't wanna show my real face. I wanna use something like Memoji, but I also want my videos to be high quality, so I can't use an iPhone because it's not that high quality or professional. I wanna record videos with a DSLR camera and then add something like Memoji on the video, so how can I do it?
How can I import a .JSON string generated by OpenPose from a single image into Blender? I am trying to copy a pose and apply it to my skeleton, but it apparently requires a PhD and some absurd amount of forbidden knowledge.
ooooooo nice
Does it work on Mac? Because I can't open a bash file...
I have downloaded the UI-based OpenPose-to-rig from the given link. Please make a video tutorial for facial as well as body motion capture to rig. I am a Maya user and it is very difficult for me to use Blender for the same. It will be very helpful for me if you can provide a video tutorial for the UI-based OpenPose-to-rig for body or face.
Thanking you in advance.
Is it possible with Kinects? There are facial-supporting SDKs provided by Microsoft. It would be great if it works with Kinects... so we get realtime and pure shape keys.
Kinect is REALLY inaccurate atm, but there are some huge possibilities with www.intelrealsense.com/stereo-depth/
Can this project be exported to UPBGE?
I don't have the Flir3dCPU file..?
Blender, Blender... Can OpenPose be used with 3ds Max???
Can you map facial morphs to this program? I am using Character Creator 3 and would like to do facial capture. However after looking at the CC3 rig it seems to me to only have a basic facial bone rig and instead uses morphs to change the face. I am very new this field so please be gentle in correcting me :)
If you mean map facial morphs as shape keys, yes you can, but the techniques I use modify rigged bone positions. Rigging a face is really easy in Blender: just add the bones, then use automatic weight mapping and off you go.
@@checkeredbug8015 Thanks, but my character comes with its own rig that does not have all of the face bones that I want to use.
When converting in cmd it tells me an error: VideoCapture could not be opened for path. Why?
The relative path is critical. Make the paths identical to the tutorial's and make sure you cd in cmd to the same path I used.
it's easier to build a rocket to the moon than to understand this tutorial
When I run it, an error message appears like this:
Starting OpenPose demo...
Configuring OpenPose...
Starting thread(s)...
Error:
VideoCapture (IP camera/video) could not be opened for path: 'output\CaptureTest\CaptureTest.mp4'. If it is a video path, is the path correct?
Coming from:
- C:\openpose_cpu\src\openpose\producer\videoCaptureReader.cpp:op::VideoCaptureReader::VideoCaptureReader():54
- C:\openpose_cpu\src\openpose\producer\videoCaptureReader.cpp:op::VideoCaptureReader::VideoCaptureReader():58
- C:\openpose_cpu\src\openpose\producer\producer.cpp:op::createProducer():475
- C:\openpose_cpu\include\openpose/wrapper/wrapperAuxiliary.hpp:op::configureThreadManager():1222
- C:\openpose_cpu\include\openpose/wrapper/wrapper.hpp:op::WrapperT::exec():424
D:\openpose\Flir3dCPU\bin>
Please help me fix it
Some open-source software (deepfake, face swap) makes it possible to turn image sequences from a video into face landmark outputs. Is there a possibility to import this into Blender using some Python/JSON code, so these face points convert to keypoints? Just an idea, I'm no programmer. My PC is really very slow at making these JSON files... Thanks for your reply and thanks again for these tutorials.
JSON is a data output; it is the primary data encoding between your web browser and the server. The JSON files are just filled with the points you see on the screen. I use the 2D points to calculate bone translation and rotation: for instance, the angle between both ears is the tilt of your head, and the translation of the nose from the start frame gives an angle by taking the arctangent of the nose translation and the estimated distance from your nose to the pivot of your head, etc. So any deepfake face swap would have to have a data output to use this method.
@@checkeredbug8015 Thanks for your reply. Hope to see more of your tutorials. Regards, Johan
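The head-angle idea described above can be sketched like this. The keypoint values and the nose-to-pivot distance are made-up illustrations, not values from the actual script:

```python
import math

def head_roll(left_ear, right_ear):
    """Roll (tilt) of the head: the angle of the line between the
    two ear keypoints, as described above."""
    dx = right_ear[0] - left_ear[0]
    dy = right_ear[1] - left_ear[1]
    return math.atan2(dy, dx)

def head_yaw(nose_x, rest_nose_x, nose_to_pivot):
    """Yaw estimated from how far the nose moved sideways from its
    start-frame position, using an assumed distance from the nose to
    the head's pivot in the same pixel units."""
    return math.atan2(nose_x - rest_nose_x, nose_to_pivot)

# Made-up pixel keypoints for one frame:
roll = head_roll((260.0, 210.0), (380.0, 190.0))
yaw = head_yaw(335.0, 320.0, 150.0)
```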
Did you manually rig your character to have all those facial bones?
No, it came from a rig that was already mapped as an export from DAZ3D. I just cleared their roll and mapped the facial rig it had to work with the script... the characters were purchased, as we're working on a large set of characters.
how can i get or make a character to use??!!
Do you have the tutorial for full body? :0 I always dreamed of using this tool.
I will work on full body later when I have time, several irons in the fire atm.
@@checkeredbug8015 please do it as fast as u can
I've one question: the video that you used in the tutorial...
The videos of me were from my cell phone; I transferred them to my HD and post-processed them, not real time. Although if you have a beefy card OpenPose can do real time to Unity, I didn't need that for making movie performances on characters.
@@checkeredbug8015 Thanks for everything. I started making a football game that can use real player movement, so I can do this now after watching your video. But I have a new question: if a video has two characters, how can I split them, each into a single file? A lot of thanks to you again ♥
OpenPose puts the multiple characters into the JSON as an array of people; my script just picks out the first one. You would need to pick out the second one if it's there: github.com/CMU-Perceptual-Computing-Lab/openpose/blob/master/doc/output.md
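A minimal sketch of picking a second person out of one frame's JSON, following the "people" array layout in the output docs linked above (the helper name is mine, not the script's):

```python
import json

def load_person(json_path, person_index=0):
    """Read one OpenPose frame file and return the face keypoints for
    a single detected person. OpenPose lists everyone under "people",
    so person_index=1 would grab the second character in the shot."""
    with open(json_path) as f:
        frame = json.load(f)
    people = frame["people"]
    if person_index >= len(people):
        return None  # that person wasn't detected in this frame
    # Keypoints are stored flat as [x0, y0, c0, x1, y1, c1, ...];
    # regroup them into (x, y, confidence) triples.
    flat = people[person_index]["face_keypoints_2d"]
    return [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]
```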
The GPU version doesn't work, not even with CUDA and cuDNN, on my computer with all of the files in place. I keep getting: "Auto-detecting all available GPUs... Detected 1 GPU(s), using 1 of them starting at GPU 0.
Check failed: error == cudaSuccess (2 vs. 0) out of memory
*** Check failure stack trace: ***
What GPU do you have?
Hey, great clear tutorial, dude. Not sure what I'm doing wrong, but in CMD it's saying openposedemo.exe does not exist. Any tips?
Change dir to the same folder as the exe.
@@checkeredbug8015 The file doesn't have an exe.
Does it require specific camera specifications, or is any HD camera good for it and the Blender program?
Better/higher resolution makes better/more accurate captures, but any camera works, cellphone footage is fine.
@@checkeredbug8015 thank you
There are no Flir3d files for CPU or GPU and there is no Output folder. What am I doing wrong?
Nvm, I found it. The location of your files isn't obvious from your link. I solved this by looking at your URL and searching the site for anything that said releases. Maybe include this in your video.
Would it be easy to transfer the facial performance onto an Auto-Rig-generated armature?
I'm not very good with coding, and it would be really nice if there was a script to install/run that handles that.
I am working on an auto rig pro version of the script.
I just posted a script that works with a rig created by auto rig pro on the github page
Oh wow!! Can't wait to experiment. I have a Kinect V2 and a Quadro p2000. I wonder if the GPU will work with my setup. This should be lots of fun!
Hi. My command runs but feeds back "Check failed: error == cudaSuccess (2 vs 0) out of memory". Do I need to install other software?
Install the CPU version to verify you're doing it right, and update your card's NVIDIA driver; if that doesn't work, your card probably isn't supported. GPU works for me only on newer cards; my GTX 960 was too old.
Checkered Bug, I tried installing CUDA last week; thank god, it worked. 😅
Thank you. I used the free 3D character from Blender (Vincent). Then I changed the line DestArmName = "Vincent". But I got KeyError: 'bpy_prop_collection[key]: key "Vincent" not found'. Can you help me, please?
Sorry if this doesn't help, but make sure the object name and armature are named Vincent (sorry, I forget which one); it's easy to get them mixed up and put the wrong one in the script.
Have you solved the problem? Did it work with the Vincent character?
@@jfchilipeppers no.
So your saying I may be able to not have to use iClone's stuff and "borrowing" grandma's phone (3d scanner needed, not just for surveillance) by making a video of the performance I want to capture and doing this?
try it out, it might work:)
@@checkeredbug8015, why not. Since I'm not looting or rioting the docs are saying I still have to follow social distancing rules and stay at home.
Lucky for us, we got cool stuff to do under house arrest.
Stay safe.
I don't know how to code; can I still do the project?
Maybe. I would use an Auto-Rig Pro rig and the script I put on GitHub that works with Auto-Rig Pro; then you don't have to code anything. If you have a custom rig, you will have to code to make it work.
I tried to follow your tutorial but:
Error occurred on a thread. OpenPose closed all its threads and then propagated the error to the main thread. Error description:
Caffe trained model file not found: ..\models\pose/body_25/pose_iter_584000.caffemodel.
Possible causes:
1. Not downloading the OpenPose trained models.
2. Not running OpenPose from the root directory (i.e., where the `model` folder is located, but do not move the `model` folder!). E.g.,
Right example for the Windows portable binary: `cd {OpenPose_root_path}; bin/openpose.exe`
Wrong example for the Windows portable binary: `cd {OpenPose_root_path}/bin; openpose.exe`
3. Using paths with spaces.
I got some kind of error: the video can open but there is nothing showing up. Do I need to install other stuff as well? I am so new and I have no background in coding. Thank you in advance.
Hey! Were you able to correct the errors? I'm currently facing similar issues.
Kindly reply!
Thanks.
I tried to download the models and I get this error:
Connecting to posefs1.perception.cs.cmu.edu (posefs1.perception.cs.cmu.edu)|128.2.176.37|:80... failed: Unknown error.
Retrying.
Does anybody have an idea?
The batch file must be run as admin; it may be a firewall issue as well. Try running on another PC on a different network, then copy it from there.
@@checkeredbug8015 Thank you for the response. I just tried from another laptop and I could download the models.
Where is the link to these downloads? I can't find it.
I don't see the same option to download based on the link in your description, just the GitHub.
Slightly confused by the question... This doesn't work?
github.com/nkeeline/OpenPose-to-Blender-Facial-Capture-Transfer
@@checkeredbug8015 Setting up OpenPose is like pulling your teeth out with hopes it unlocks a door. It's literally the worst thing ever: every time you think you're making strides it needs a new dependency that needs a new dependency that needs a new dependency, which breaks. In particular the caffe.dll thing just does not work; honestly it's unusable. I've been trying to get it to work for 3 days; ChatGPT-4 can't even help, rofl. And I work in Python on various projects and am a software developer by trade, and I am saying this, lol. My PC is a 3080 Ti with a 5950X CPU and vs code 2019, etc... Wish you covered a step-by-step guide to set things up without skipping any steps... thanks for the URL, I will check it out. :)
Can I use it for Unity3D gaming?
I haven't tried the Unity version of OpenPose; you should try it.
Is it possible to apply the OpenPose facial JSON script to the body
using this script? Or if it is not applicable to the body, then please share another script that helps apply it to the body, or share some tricks for it.
I used the 2D points to create 3D. There is a LOT of information in the 2D data that can be mapped. It wouldn't be hard to take the 2D armature data and apply it to a 3D one. I can calculate all of the bone angles by projecting the 2D onto a flat plane and calculating the quaternion necessary to create the 2D data. Of course each calculation would produce two results, one going out of the page and one angled in, but some simple rule checking would fix most of it. For instance, your forearm can't bend backward, etc. I was considering starting on this, but I'm taking a break for the moment. Thanks for your interest...
@@checkeredbug8015 I am glad that you replied to my question, and I am so excited that you'll post a video on OpenPose for the body, as per my wish, soon.
@@checkeredbug8015 Hey, great video. I'd really appreciate it if you could do this too!
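The projection idea described above can be sketched as follows. This is a rough illustration, not the author's script; the rest lengths would have to come from a calibration frame:

```python
import math

def out_of_plane_angle(rest_len, projected_len):
    """A bone of known rest length appears shortened in 2D as it
    tilts out of the image plane; arccos of the length ratio gives
    the magnitude of that tilt. As noted above, the sign is ambiguous
    (toward or away from the camera) and would need rule checks,
    e.g. a forearm can't bend backward."""
    ratio = max(-1.0, min(1.0, projected_len / rest_len))
    return math.acos(ratio)

# Made-up example: a forearm with a 100 px rest length that projects
# to only 50 px must be tilted 60 degrees out of the image plane.
tilt = out_of_plane_angle(100.0, 50.0)
```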
FULL BODY CAPTURE PLZ
4:27 Where did you open this command line?
That is the Windows command prompt; it's been around since before Windows, in the good ol' DOS days.
@@checkeredbug8015 Ooo, so you open the command prompt and just write the command?
top
Do you know how to make it work on a Mac? Can you do a tutorial on it? 😀
Sorry, I don't own a Mac.
Error:
Video to write frames could not be opened as `output\CaptureTest\OUTPUTFILENAME.AVI`. Please, check that:
1. The path ends in `.avi`.
2. The parent folder exists.
3. OpenCV is properly compiled with the FFmpeg codecs in order to save video.
4. You are not saving in a protected folder. If you desire to save a video in a protected folder, use sudo (Ubuntu) or execute the binary file as administrator (Windows).
I ran cmd as Admin.
?
I typed:
bin\OpenPoseDemo.exe --video output\CaptureTest\VID_20200722_224549.mp4 --write_json output\CaptureTest --face --hand --write_video output\CaptureTest\OUTPUTFILENAME.AVI
Oh, and in versions 1.5.1 and 1.6.0 (I tried both) I had to make an Output folder; there wasn't one after the unzip.
FIX*** I moved OpenPose from where I wanted it (in the same folder as my Blender folder) to C:\. Fixed.
Yeah, the path is the biggest place to go wrong. Putting it into a root folder with simple names and no spaces is key.
Now to figure out how to get more than .07 frames per second.
That's 7 min per 1 sec of video.
I noticed you made an import .py file for Auto-Rig Pro; will you be making one for Rigify or BlenRig 5?
I can take a swipe at Rigify if you like.
Where can we download your Blender file for testing? ...with the boy's face
Sorry, the kid was a purchased model; it can't be posted legally. Also really torn about making a video and better workflow, since YouTube isn't worth posting on or supporting any longer.
dear god.
Great video. I managed to get to where you paste the script "FacialJSON.py" into Blender, but Blender crashes every time. I'm using a free character I downloaded. Has anyone come across this?
Use the UI version of the script, browse to the file, then map one bone at a time.
@@nickkeeline4840 Hi Nick, thanks for your reply. Where can I find the UI version of the script? I'm new to Blender and I really want to make a career of this. Is there a step-by-step guide I can read to map one bone at a time? Any help will be greatly appreciated.
Are you using this?
github.com/nkeeline/OpenPose-to-Blender-Facial-Capture-Transfer
Please read the instructions.
It's not working...
Can you give a tutorial on how to install OpenPose on Mac?
Sorry, I don't own a Mac.
Does this work for every character?
I wrote a script that makes it work for my character, and I regret not making it more universal for all of you; my goal was to encourage people to modify the code for their character and learn Python to do it. The answer is yes, it will work for every character, but you have to tweak the script to work for yours.
@@checkeredbug8015 First I have to learn Python???
I want to encourage you to try... it's worth it.
@@checkeredbug8015 When I try to type openpose in cmd it says not defined. How do I fix this?
Not sure; try using EXACTLY the same paths I used.
"like THIS!!!... WHOA!!!!!" *and funny music plays* XD MAN THAT WAS SUCH A FUN TUTORIAL
lmao, I just realized he was using the CPU version. I was having so many headaches with the GPU and it's still not working :P
Those codes are a billion times scarier than Insidious.
I didn't understand anything.
Windows, though.
OK, I'll do a few in Linux... just for you.
It's open-source, so why doesn't it allow commercial use?
Sorry, I'm not part of OpenPose, so I don't know. I know if you want to use it commercially they suggest you contact them.
@@checkeredbug8015 thank you for the reply
I think in their license they say that it is MIT-licensed,
and they say it can be used for commercial use.
I am confused.
It's a shame their commercial licence is $25,000 USD annual royalty.
@xOr Ok, maybe they are totally shameless to ask such a high price :D
@xOr Imagine an indie dev who made an app as a side project and wants to earn a bit from it, if possible. It is impossible to predict income for small niche projects. Paying 25k upfront would be a very risky investment.
Why only for non-commercial? :(((( The commercial licence is soooo unfriendly...
Open source but only for non commercial use?
That's awful