Blender 2.8 facial mocap using OpenCV and webcam
- Published 1 Aug 2024
- Real-time facial motion capture in Blender 2.8 using OpenCV and a webcam.
This uses Python scripting directly in Blender.
Installation commands (substitute python / python3 / python3.7m as appropriate for your install):
python3 -m ensurepip
python3 -m pip install --upgrade pip --user
python3 -m pip install opencv-python opencv-contrib-python imutils numpy dlib --user
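If the install commands above succeed but Blender still can't see the packages, a quick sanity check can be pasted into Blender's Python console. This is a sketch using only the standard library; the package names are the ones from the commands above:

```python
import importlib.util

def module_available(name):
    """True if the named module is importable by this interpreter."""
    return importlib.util.find_spec(name) is not None

# Run inside Blender's Python console to confirm the packages landed in the
# interpreter Blender actually uses (not a system or Anaconda Python):
for pkg in ("cv2", "imutils", "numpy", "dlib"):
    print(pkg, "OK" if module_available(pkg) else "MISSING")
```

If anything prints MISSING, pip most likely installed into a different Python than the one Blender bundles.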
Blender Cloud:
cloud.blender.org/p/characters
Python scripts:
github.com/jkirsons/FacialMot...
Facial landmarks database:
dlib.net/files/shape_predictor...
Additional documentation on ibug facial annotations:
ibug.doc.ic.ac.uk/resources/f...
Citations:
C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. 300 Faces In-the-Wild Challenge: Database and results. Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation "In-The-Wild". 2016.
C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. 300 Faces in-the-Wild Challenge: The first facial landmark localization Challenge. Proceedings of IEEE Int’l Conf. on Computer Vision (ICCV-W), 300 Faces in-the-Wild Challenge (300-W). Sydney, Australia, December 2013.
C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. A semi-automatic methodology for facial landmark annotation. Proceedings of IEEE Int'l Conf. Computer Vision and Pattern Recognition (CVPR-W), 5th Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2013). Oregon, USA, June 2013.
I've been meaning to figure out how to do this for months; this will be immensely helpful for getting it working for other characters.
Does this work with shape keys?
@@pranshu512 You could probably combine shape keys and drivers to get better results.
Wow, that's a great tip. Please do more of these.
This is incredible!
You got one more subscriber and one more thumbs up. It's so cool; I was looking for something like this.
Took me a while, but I got it working! Thank you!
Does this work with other characters, i.e. your own created characters?
It took some doing, but I got it working with my webcam, Windows 10 and the current Vincent rig. Thank you for making the simplified solution for Windows 10, which worked for me right out of the box. I took the time to get this version to see if dlib in conjunction with the shape_predictor_68_face_landmarks dataset produced better tracking results, which it did. Much smoother animation IMO. Can't wait to see more from you.
I just wonder how you installed OpenCV for Blender on Windows? I installed OpenCV in the Anaconda prompt and can import cv2 directly there, but not in Blender, even though I added the cv2 location with sys.path.append. Really need some help here...
Oh, P.S.: I use the Blender 2.8 version; hope that's not too different?
@@pillowurs3909 I installed cv2 per the instructions in the video: verified which version of Python my Blender installation was pointing at, then from that python\bin directory ran the command "python -m pip install opencv-python opencv-contrib-python imutils numpy --user" in a CMD (as Admin) window. Had to restart Blender after that to verify the installation as described in the video.
Hope that helps.
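For reference, the Windows sequence described above can be sketched as follows. The Blender install path is an example; substitute your own, and check which Python version sits in that directory first:

```shell
:: Run from an elevated CMD window. Path below is an assumed default install.
cd "C:\Program Files\Blender Foundation\Blender\2.80\python\bin"
python.exe -m pip install opencv-python opencv-contrib-python imutils numpy --user
:: Restart Blender afterwards, then verify with `import cv2` in its Python console.
```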
This is brilliant
Thank you soo much!! Great stuff!!
And here I thought I'd finished all the scripting for my pipeline. Guess I'll be knee-deep in Python again when I switch to my new machine!
This is awesome!!!
Thanks! Your input is pulling me out of the frustration I've been stuck in these days. Do you know if simultaneous multiple cameras are an option? Like for dialogue between several characters?
Very handy that the tutorial is on Manjaro.
Thanks a lot my friend!! New sub here!!
Super! Thanks
That's so amazing!!!!!! Please tell me that it'll become a complete addon!!!! Would be awesome!!!!
Excellent video! But I have a question: does this add-on work with every character's face?
I think this is super awesome and I've been looking for a way to do real-time motion capture in blender for a long time. I would really love to get a better explanation of how you linked the realtime tracking to the character, so I could figure everything out and use this for my own characters. I don't know how hard this is, but I thought I'd ask anyway. And once again, I love this video.
I agree. This is copy/paste hard-coded for bone naming and targeting. Is there a possibility, @Gadget Workbench, that a mapping solution like RigPro (or Mixamo's click-on-points retargeting) could be added to map our own custom facial rig setup to the corresponding bone names? Thanks again for this wonderful possibility. You've skipped a lot of steps (like USB camera or WiFi camera, port listening and a lot of other things relating to Blender and the camera), but I dig it: this is a tech demo. Great video nevertheless.
Thanks, I'll try and make it more dynamic if I can get the time. It's just a simple in-model script at the moment, demonstrating how it can be done. In both Manjaro and MacOS there were no additional steps to use a USB webcam within Blender. I'll try again on a fresh vm just to double check. Thanks for the feedback!
This is great. Can you update it for 2.83 and show us how to use it with other characters?
OK, I got it working! So are there any tips to get the tracking a bit smoother? Thanks again.
Hi, I connected it to my Blender and it works perfectly. The only thing I'm still looking for is how to export the animation from it; I can't find the keyframes that it makes. Thank you.
Awesome...
U are a beast :)
Is there some way to adapt the script to new skeletons like Rigify? It would be very helpful.
Is it possible that we use it on our own character model?
This is amazing! Can I use this to control shape keys for 2D models?
This is great, but does this also work with Rigify or auto-rigs in Blender, or just the BlenRig biped?
Finally, it works on my Windows 10 after manually installing dlib and changing the code.
Now my problem is that the pop-up window doesn't close when I click the X; it opens again.
Thank you so much for this amazing guide. There are some problems installing dlib into Blender's bundled Python on Windows 10, but somehow I managed to install it, and finally I got the same result as you have shown.
how did you resolve the dlib problem?
@@kmadisha I'm working on something similar in 2D, but this is much better. I'm having problems installing dlib.
@ChameleonIVCR Thanks this worked for me! :D
Got this working! Really cool stuff, man. I found the same issue with Apple's ARKit TrueDepth sensor: it slips just enough, and is fuzzy enough around the lips, that it's not really useful for much beyond what Snap is doing with the tech. Weta Digital uses dots on faces, which stops the slippage with a good enough camera and enough light, so I'm going to have to try this idea with dots on faces and old-fashioned point tracking. But really solid work here. I'll be poking through your code tomorrow for sure.
I have not managed to figure out how you keyed everything while recording; the auto-keying button doesn't do it. Hmm.
What about the FPS of the cam? I remember the PS3 Eye worked really well with iPiSoft because it recorded at 60 Hz; the iPhone cam records 1080p at 120 fps. Instead of a real-time performance, a recorded performance might work better. Improving the lighting might help as well.
Lawrence Whiteside, hey, that's a great idea. I currently do a lot of facial animation via the iPhone X depth cam, but it lacks detail. If I could add dot-based tracking on top of the iPhone X depth cam tracking, it would be really great. Would that be possible? If you are a programmer, I would love to work with you to make such an app.
Does this work with Kinect SDK 1.8 or 2 for keypoints to Blender in real time? It would be great if there were an addon.
import cv2 doesn't work for me; I'm on a Mac with Python 3.7.4.
It says (when I type import cv2):
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'cv2'
But I have installed it and everything is in place. Can anyone help?
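A common cause of this is that the shell's Python and Blender's bundled Python are different interpreters, so pip installed cv2 somewhere Blender never looks. A quick diagnostic using only the standard library, safe to paste into Blender's Python console:

```python
import site
import sys

# Which interpreter is actually running, and where pip's --user installs go.
# If sys.executable here differs from the python you ran pip with, cv2 ended
# up in the wrong site-packages directory.
print(sys.executable)
print(sys.version)
print(site.getusersitepackages())
```

Comparing this output inside and outside Blender usually pinpoints the mismatch.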
How do you get it to accurately lip-sync? I use a very similar method for facial mocap via the Kinect 2 for Windows, through a Blender addon I found. My issue isn't getting my models to animate; it's after I record the keyframes using Blender's auto-keyframing option. My lip sync moves way too fast after rendering and creating an actual video file for playback. Plus, my PC isn't that powerful GPU-wise, so it's kind of tough to judge whether it's a GPU issue that causes everything to either slow to a crawl or speed up after rendering.
Like, do you ever need to clean up your Blender timeline regarding keyframes you've captured/recorded?
I have a Kinect that I'm trying to get to work in Blender to do something similar. What Blender addon did you use? Thanks.
I was not able to make OpenCV work even after it was installed; the installation is very complex to understand. Does anyone know if there is an addon to install OpenCV?
Wondering how OpenCV/dlib is working so well for you. In my own experiments I find it can get very confused with open-mouth animations. What's weird is that mouth animation works better in low-light conditions. In bright light, as soon as I start talking, the mouth landmarks sometimes suddenly jump up and down.
I just wonder how to install OpenCV for Blender on Windows? I installed OpenCV in the Anaconda prompt and can import cv2 directly there, but not in Blender, even though I added the cv2 location with sys.path.append. Really need some help here...
This is a very informative video. I am using Blender 2.82 with Python 3 on a MacBook Pro, but when I tried to register and run OpenCVAnimOperator.py, I received error messages. Can you please explain the actual reason for the errors and a probable solution? Thanks.
Can you use this addon on a custom 3D face?
But the mouth only moves up and down; is there a way to have the mouth move properly?
If I want to use my own 3D model, what preparations do I have to make for it to work? I'm not very advanced; sorry if the question is dumb.
Hi everybody, is there a tutorial showing how to link this facial mocap system to another character? How do I fit it to my own character's face?
Wow, this is amazing!! Could you show how this method can work with the Autorig addon? I think that would be great!!
I'm also looking forward to this feature
I am having trouble with the dlib module. It doesn't seem to work with Python 3.7? Do I have to downgrade, or is there an easier way?
I wonder if we could have the plugin already installed in Blender?
This would be so amazing IF I could get it to work. Got stuck at "SyntaxError: invalid syntax" when I type python -m ensurepip.
That's because, it seems, in newer versions of Blender, pip is enabled by default and python -m ensurepip has been removed.
Anyone know how to do this with another character you have built? I.e., from Fuse?
Hey man, would you help me install this? I can't get it to work on my Mac.
dlib is not working for windows 10
@gadgetWorkbench I was able to get it set up with all the downloads, but as soon as the capture starts on Mac, the camera light goes on, then off, then Blender crashes. Any recommended hardware specs for this?
can you copy the keyframes to another model?
Does it apply to Blender 2.79?
Sir, can you please say which algorithm you used here with OpenCV?
Thank you, but I have a problem with the lbfmodel.yaml file. If I use the raw version of your file, I get the message "Python script failed, check the message console" after I run OpenCVAnimOperator.py.
It is a SyntaxError: "(unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape".
I tried adding r, open(), and double slashes, but nothing worked, so I can't get capture working under OpenCV Animation.
How can I fix this?
Also, in Object Mode I see the "OpenCV Animation" panel but not the "Capture" button.
I followed every step.
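For what it's worth, that "unicodeescape" SyntaxError is almost always a Windows path written in a normal string: the `\U` in `C:\Users\...` starts a Unicode escape. Any of these spellings avoids it (the path below is a made-up example, not from the script):

```python
# Three equivalent ways to write a Windows path to the model file without
# triggering the "unicodeescape" SyntaxError (the path itself is hypothetical):
p_raw = r"C:\Users\me\lbfmodel.yaml"    # raw string: backslashes left alone
p_esc = "C:\\Users\\me\\lbfmodel.yaml"  # escaped backslashes
p_fwd = "C:/Users/me/lbfmodel.yaml"     # forward slashes also work on Windows

assert p_raw == p_esc  # identical strings at runtime
```

One caveat: a raw string cannot end with a single backslash, so prefer forward slashes for directory paths.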
I have a problem. When I run the script from the OpenCVAnimOperator Python file, it says it could not find the dlib module. After searching, I discovered that dlib supports Python 3.6, while Blender 2.8 has Python 3.7. I cannot find any way to make dlib compatible with Blender 2.8's Python version. What should I do?
Thanks for the video. With the script, do I have to enter the code, or just install it as an addon in User Preferences?
You need to enter it and save it with the blender file. It is not an addon.
Can anyone tell me what's wrong? Python script failed, check the message in the system console.
Why can't I find the terminal on my Windows machine? Do I have to install a terminal and Python 3.7 to match your video?
This is cool, but do we have something drag-and-drop? I'm a coder who has written Python for years, but I have no knowledge of OpenCV.
On Windows I get "Python script failed"; how do I fix it?
I successfully compiled it for Windows 10, and it works fine, though slowly.
When following the video I come to the "python -m pip install dlib" step and encounter an installation error. Then I tried to use cmake-gui with Blender's Python (3.10.2) to compile dlib directly, and it still reports the same error... but the original Python (3.10.6) is OK. I don't know why... 😢
And how does it work with a custom character?
How did you move the workspace to show Scripting at 2:52?
How do I use it on other models?
Hello, after running it, it displays:
ValueError: WorkSpaceTool.setup(): error with keyword argument "options" - : 'REGISTER' not found in ('KEYMAP_FALLBACK')
May I know how to fix it?
How can I apply this to hands, and use videos instead of a cam?
It would be more beneficial and precise if you could just set a video path to process, so the script would go frame by frame and do a better job (and you could fix it on a per-frame basis, too).
Dear teacher, I use the latest version, Blender 3.0, and follow the operations in your video, but there are always errors. I would like to ask you about them.
Will you please give the link to the paper/documentation?
thanks for the video
If you're having troubles with this setup, please see my other video for a simplified process: ua-cam.com/video/9FBMoUo6vhY/v-deo.html
Or a Windows 10 step-by-step guide for this: ua-cam.com/video/RY_eErKlilw/v-deo.html
Sir, I am not getting the Capture button on my Mac. Any help?
Can I use this method as a webcam on Twitch?
Could anyone help? I'm trying this on a Mac; I downloaded and installed pip and the scripts. I see the Capture button, but Blender (2.9) keeps crashing, also when I run it from Terminal. Should I install Python 3 instead of the basic 2.7? I thought it would run from within Blender.
Damn, I don't see the Capture button (also on a Mac). Any idea what could be missing?
Amazing. How do I find this terminal in Blender? Is the Python terminal separate?
Does this work with full body?
I'm trying this on macOS 10.13.6 High Sierra and running into problems. First off, while Blender 2.8 has a python3.7 folder, my Terminal says its version is 2.7.13. Later, when I tried the "python -m ensurepip" command, I got the message "Could not import runpy module". Where do I go from here?
Also, do you have to use a webcam built into your computer? My eventual goal is to take video from a helmet-mounted IP webcam as part of a mocap suit.
I know it's an old video, but would this OpenCV work with ManyCam?
I'm getting a Traceback error when I try to capture. I briefly got an output window that showed a few frames of video, but there were no points on it.
Python: Traceback (most recent call last):
File "D:\OneDrive\Documents\vincent.blend\OpenCVAnimOperator.py", line 196, in modal
cv2.error: OpenCV(4.5.5) :-1: error: (-5:Bad argument) in function 'circle'
> Overload resolution failed:
> - Can't parse 'center'. Sequence item with index 0 has a wrong type
> - Can't parse 'center'. Sequence item with index 0 has a wrong type
location: :-1
Any idea what this means or how to resolve it? Looks like a data type error?
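That traceback does point to a data type error: some OpenCV builds reject numpy floats for `cv2.circle`'s center argument ("Can't parse 'center'"). Casting the landmark coordinates to plain Python ints before the draw call is the usual fix; the loop below is illustrative and its names are assumptions, not taken from the script:

```python
def to_int_point(pt):
    """Convert an (x, y) pair of floats or numpy scalars into the plain
    int tuple that cv2.circle() and similar draw calls expect."""
    return (int(round(pt[0])), int(round(pt[1])))

# In the landmark-drawing loop, something like:
# for (x, y) in landmarks:
#     cv2.circle(frame, to_int_point((x, y)), 2, (0, 255, 0), -1)
```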
I have a lot of problems with OpenCV on Ubuntu 18.
Hello! I have been facing an issue for an ETERNITY; the Python that Blender makes available is frustrating, to say the least. I saw that at 3:19 you had your own custom Python interpreter. Is there any possible way for me to do that? I have looked at so many forums and videos online, and nothing mentions how you change Python interpreters in Blender. Much appreciated if you can help!
Great video! I'm installing OpenCV, apparently correctly, in the spot you said to, but when I try to import it in Blender, it is not working. Any tips?
Same, man. Do you have a Mac?
"Python script failed, check the message in the system console." How do I fix this, sir?
Will this work on any character or only Vincent?
I ran into a problem where I got an error during dlib installation.
Apparently it requires cmake to be installed.
I installed the cmake library before dlib using the following command:
python -m pip install cmake
I know I explained it in a complicated way; just trying to help, since more people probably ran into this problem.
Update: adding cmake to the 3rd install step (just before dlib) does not help. First install cmake, then run the 3rd step.
Arigato
May I ask for your help with exactly this in iClone 7? iClone 7 also has Python inside, but I'm a complete newbie in Python; this is dark magic for me. Having a thing like this can save thousands of hours creating facial animation. I'm going to learn Python soon; I installed PyCharm, Python 3.8, and also cmake (in PyCharm), but I'm not able to install dlib. People say dlib only works with Python 3.6 or so. Can you help with this in iClone? Thanks for the great Blender video. This is a free face tracker, and people pay 2-3K $$ for paid things like this...
The facial landmarks database link doesn't work. Any way to get the database?
In the video you don't install dlib; you only run: python3 -m pip install opencv-python opencv-contrib-python imutils numpy --user
It just crashes for me when I start the script...
import cv2 in Blender is giving an error; I followed the whole video as you described.
I get this in the script window:
Traceback (most recent call last):
File "", line 1, in
File "E:\SOFTWARES NEW\blender software\blender\2.80\python\lib\site-packages\cv2\__init__.py", line 5, in
from .cv2 import *
ModuleNotFoundError: No module named 'cv2.cv2'
Please help.
It installed fine, but whenever my camera launches in Blender, it just crashes after I start moving...
Traceback (most recent call last):
File "/home/bob/Desktop/Desktop/vincent.blend/OpenCVAnimOperator.py", line 116, in modal
KeyError: 'bpy_prop_collection[key]: key "vincent_blenrig" not found'
Any ideas? Thanks for the video
If the armature in the project is not called "vincent_blenrig", just modify this line in the script to use the right name.
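A slightly more forgiving version of that lookup is sketched below: it tries the hard-coded name first and otherwise falls back to the first armature in the scene. The default name `vincent_blenrig` comes from the script; everything else here is an assumption, not the author's code:

```python
def find_armature(objects, preferred="vincent_blenrig"):
    """Return objects[preferred] if present, else the first object whose
    .type is 'ARMATURE'. `objects` is expected to behave like
    bpy.data.objects (supporting .get() and .values())."""
    obj = objects.get(preferred)
    if obj is not None:
        return obj
    for candidate in objects.values():
        if getattr(candidate, "type", None) == 'ARMATURE':
            return candidate
    raise KeyError("no armature found; rename the rig or pass preferred=")

# Inside Blender, the script could then do:
# rig = find_armature(bpy.data.objects)
```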
FaceRig in Blender !
Ok, so is there a version of this using a lib that *is* for commercial use? Or do we basically need to build our own from scratch?
Not that I know of. It's the dataset that the trained models were created on that is not for commercial use. I suppose you could use video of your own face with annotations provided by a commercial product to train a new model. If the commercial license permitted this...
Running on macOS, using Blender 2.83: the scripts run successfully, but once I click the UI button to start the motion capture, Blender crashes. I see the webcam turn on (the green light) and then off, and an error message appears saying Blender quit unexpectedly.
OK, I had to run Blender from the terminal on macOS. I think maybe the camera privacy permissions were messing something up.
Thank you for the amazingly useful tutorial!
In Blender 2.83 I faced the error: "vincent_blenrig" not found.
It can be fixed by replacing it with "RIG-Vincent".
Hope this helps somebody.
Thank you... I am done.
Hi, great work, and thanks for the hard work.
But for me it did not work; I'm on a Mac (late 2018) with Catalina 10.15.4 and Blender 2.8.2, 3 and 9. I get an error about cmake that I can't decipher ;(
But I love the idea.
You'll need to follow these steps to brew install python and then brew install dlib, and then the pip install stuff in the video, and then restart blender: stackoverflow.com/questions/54719496/installing-dlib-in-python-on-mac
Can it work without a webcam, and what about full body???
Theoretically it could work with a recorded video - just check out some OpenCV tutorials. There are full body pose estimation models that work in OpenCV, but I haven't got them working reliably yet.
Is there any way to just get facial capture data without having it run as a Blender app, so that a VA can record their performance on camera and send me the dataset and audio?
You could have them record a video, then on this line: github.com/jkirsons/FacialMotionCapture_v2/blob/master/OpenCVAnimOperator.py#L210 pass the filename instead of using a camera, syntax: docs.opencv.org/3.4/d8/dfe/classcv_1_1VideoCapture.html#a949d90b766ba42a6a93fe23a67785951.
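As a sketch of that swap: `cv2.VideoCapture` accepts a filename in place of a device index. The filename below is a placeholder, and cv2 is imported inside the function so the snippet loads even where OpenCV isn't installed:

```python
def frames_from(source):
    """Yield frames from a cv2.VideoCapture source: pass a filename string
    for a recorded clip, or an integer device index (e.g. 0) for a webcam."""
    import cv2  # local import: OpenCV is only needed when actually capturing
    cap = cv2.VideoCapture(source)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:  # end of file, or the device went away
                break
            yield frame
    finally:
        cap.release()

# In OpenCVAnimOperator.py, the webcam loop could then become:
# for frame in frames_from("performance.mp4"):
#     ...run the landmark tracking on `frame` as before...
```

Reading from a file also means the script can process every frame rather than dropping frames to keep up with real time.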