Blender & OpenCV - Facial Motion Capture v2 (no dlib)

  • Published 2 Aug 2024
  • This is a follow-up to the last facial motion capture video.
    Github repo:
    github.com/jkirsons/FacialMot...
    Vincent Model:
    cloud.blender.org/p/character...
    Trained Model: (note - this is not for commercial use)
    github.com/kurnianggoro/GSOC2...
    Citations:
    C. Sagonas, E. Antonakos, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. 300 Faces In-the-Wild Challenge: Database and results. Image and Vision Computing (IMAVIS), Special Issue on Facial Landmark Localisation "In-The-Wild". 2016.
    C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. 300 Faces in-the-Wild Challenge: The first facial landmark localization Challenge. Proceedings of IEEE Int’l Conf. on Computer Vision (ICCV-W), 300 Faces in-the-Wild Challenge (300-W). Sydney, Australia, December 2013.
    C. Sagonas, G. Tzimiropoulos, S. Zafeiriou, M. Pantic. A semi-automatic methodology for facial landmark annotation. Proceedings of IEEE Int’l Conf. Computer Vision and Pattern Recognition (CVPR-W), 5th Workshop on Analysis and Modeling of Faces and Gestures (AMFG 2013). Oregon, USA, June 2013.
  • Science & Technology

COMMENTS • 54

  • @minorinxx • 4 years ago +2

    Thank you, and I'm really looking forward to your next project!!

  • @NerdyRodent • 4 years ago

    Very nice indeed! Looks like something I’d like to play with.

  • @choppacast • 3 years ago

    Amazing work

  • @yajuvendra15 • 4 years ago

    Finally nailed it. An error popped up in the console about lip_low_4_bbone_easeout_R, but everything else worked perfectly.... thanks a lot

  • @samuelgetachew5547 • 4 years ago +1

    Great job! It works for me!

    • @whatif6354 • 4 years ago

      I'm getting this error, please help:
      Traceback (most recent call last):
      File "C:\Users\Asus\Downloads\Vincent.blend\OpenCVAnimOperator.py", line 30, in
      File "C:\Users\Asus\Downloads\Vincent.blend\OpenCVAnimOperator.py", line 42, in OpenCVAnimOperator
      AttributeError: module 'cv2.cv2' has no attribute 'face'
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_low_4_bbone_easeout_R"].rotation_quaternion[1]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_up_4_bbone_easeout_R"].rotation_quaternion[1]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_low_4_bbone_easeout_R"].rotation_quaternion[2]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_up_4_bbone_easeout_R"].rotation_quaternion[2]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_up_4_bbone_easeout_R"].rotation_quaternion[3]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_low_4_bbone_easeout_R"].rotation_quaternion[3]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_low_4_bbone_easeout_L"].rotation_quaternion[1]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_low_4_bbone_easeout_L"].rotation_quaternion[2]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_up_4_bbone_easeout_L"].rotation_quaternion[1]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_up_4_bbone_easeout_L"].rotation_quaternion[2]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_up_4_bbone_easeout_L"].rotation_quaternion[3]
      ERROR (bke.anim_sys): c:\b\win64_cmake_vs2017\win64_cmake_vs2017\blender.git\source\blender\blenkernel\intern\anim_sys.c:4153 BKE_animsys_eval_driver: invalid driver - pose.bones["lip_low_4_bbone_easeout_L"].rotation_quaternion[3]
      Traceback (most recent call last):
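      The AttributeError above usually means the base opencv-python wheel is installed rather than the contrib build: the `cv2.face` module (which provides the Facemark landmark API the script relies on) only ships with opencv-contrib-python. A minimal check you could run in Blender's Python console (a sketch, not from the video):

```python
# "module 'cv2' has no attribute 'face'" typically means the plain
# opencv-python wheel is installed; cv2.face is part of the contrib build
# (opencv-contrib-python). This helper just reports which one you have.
def has_face_module():
    """Return True if OpenCV's contrib 'face' module is available."""
    try:
        import cv2
        return hasattr(cv2, "face")
    except ImportError:
        return False

# If this returns False, reinstall with Blender's bundled pip, roughly:
#   ./python -m pip uninstall opencv-python
#   ./python -m pip install opencv-contrib-python
```

      The `bke.anim_sys` driver errors are a separate, mostly harmless issue with the rig's drivers and do not come from the Python script.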

  • @fdtutorialindo • 4 years ago

    Thank you

  • @iamjameswong • 4 years ago

    Thanks for sharing the awesome work. I'm wondering though how I can render these video frames outside of the Blender UI as a standalone script. Any ideas?

  • @Pablete099 • 4 years ago +1

    Great! Although I was really looking to animate a character in Unity :D. Any tips on how to reuse the captured animation in Unity?

  • @user-hz4ci7vm6o • 4 years ago

    Thx

  • @kmadisha • 4 years ago

    I struggled with the dlib setup, but this one works perfectly. Perhaps a step-by-step guide for the dlib setup would have done the trick. Thanks a lot; I now want to see if I can apply the whole lot to a different character.

  • @EliSpizzichino • 2 years ago

    I've tested both solutions in Blender 3; the dlib version is way more accurate, but it's slower and misses frames. Both seem to detect eyes opening/closing in the markers, but the result is not replicated in the Vincent rig. Subtle, and less subtle, mouth movements are not detected at all, and both solutions are quite jittery when no movement is present.
    But hey, good work, it may be cool to use for live performance!

  • @SergeiMaschenko • 3 years ago +2

    Hi! Thanks for the video! It works! Is it necessary to create a new script for whole-body mocap, or can the current two do it with some adjustments in Blender?

  • @nomorecookiesuser2223 • 4 years ago

    Very awesome! Do you know of any available facial landmark databases that can be used commercially?

  • @mat-rh8zj • 4 years ago +4

    Love your work. Just for some of you having issues with running the Operator scripts in Blender (macOS user):
    1. You don't need OpenCVAnim.py, just OpenCVAnimOperator.py.
    2. After loading the OpenCVAnimOperator.py script in Blender's text editor and running it, open the Blender Python console and type bpy.ops.wm.opencv_operator()
    3. If Blender crashes (it did in my case), start the Blender application via the terminal:
    cd /Applications/Blender.app/Contents/MacOS
    ./Blender
    4. Then open the Vincent file (or any other file with the previous script) and do step 2 again.
    Worked for me, hope it helps.
    I'm really looking forward to your next projects. I think you could try a very similar approach for body motion capture too, using the OpenPose library, although it would need to be just a planar, very simple motion capture, but still. Keep it up.

    • @GadgetWorkbench • 4 years ago

      Thanks for the feedback on macOS.
      I had a look at OpenPose, but the USD $25,000/year commercial license kind of turned me away.

    • @mutanazublond4391 • 4 years ago

      How did you set up your Mac...?

    • @cgart5511 • 4 years ago

      Thank you

  • @memomind7415 • 4 years ago +5

    Create it as an add-on and make it work with any model

  • @hilmiyafia • 4 years ago

    It took me a second to process the image at 4:56.

    When I realized what I was seeing, I was like "OH MY GOD!"
    You're planning to capture positions in 3D space and send them over WiFi!!!!! That is so cool!! 😂😂
    Edit: Wait, that was a WiFi module, right? Or am I wrong?

    • @GadgetWorkbench • 4 years ago

      You are right... It's taking a while to work out the software side of things

    • @hilmiyafia • 4 years ago +1

      @@GadgetWorkbench I got college flashbacks when I saw that! I remember using a NodeMCU to control an LED from my phone over UDP. That was a simple project compared to the projects you have done on your channel; you're amazing! Good luck with your motion capture project! 🤗

  • @SamyakAgarkar • 4 years ago

    Wow... ok, so is there anything of the same kind for the Rigify add-on?
    I believe Rigify always generates the same face shape keys, so is there a way to use this OpenCV tracking with the Rigify version?

  • @thephotoman6906 • 3 years ago

    Could anyone help? I'm trying this on a Mac; I downloaded and installed pip and the scripts. I see the capture button, but Blender (2.9) keeps crashing, also when I run it from Terminal. Should I install Python 3 instead of the basic 2.7? I thought it would run from within Blender.

  • @Sen3D • 4 years ago +1

    I wouldn't invest too much energy into inertia-based mocap systems. Using the HTC Vive Tracker makes way more sense.
    I'm going to build gloves for finger tracking though.
    And too bad about the eye tracking. I'll have to find another solution then.

  • @jitone1 • 4 years ago

    Hi, can this work for Blender v 2.83.2?

  • @asechannel6646 • 3 years ago

    I tried to use it in Blender 2.93 with Python 3.9, but it doesn't work

  • @mohamedhasib5037 • 3 years ago

    Can I use this for hand motion capture?

  • @CrossPadCastle1 • 3 years ago

    Can this work with VRoid?

  • @MotuDaaduBhai • 1 year ago

    If I use the FACEIT plugin to rig my character, will this method work? I can drive the animation using the Hallway app and offline LiveLink face recording, but your setup cuts out the middleman.

  • @MegaCyberpirate • 4 years ago

    I installed OpenCV, but when I import it inside Blender it says no module found. If I use the same Python as Blender in the terminal, it imports without any issue. Any idea how to fix it? Thanks in advance. By the way, I am using Linux.

  • @jumpman23nith • 3 years ago

    What does this error mean?
    \OpenCVAnim.py:15
    rna_uiItemO: operator missing srna 'wm.opencv_operator'

  • @jumpman23nith • 3 years ago

    I am receiving an error. It says "ERROR: Failed building wheel for dlib"
    This happens whenever I try to install OpenCV in the terminal.

  • @AxonMediaSeattle • 4 years ago

    I don't know if you're aware, but this open-source Blender mocap suit is currently in beta: chordata.cc/ I have also not been able to get your demonstrated facial mocap to work (soooo many lines of errors), and a coder I am not, so I have a question: how much in Patreon donations would it take for you to develop this into a fully working installable add-on for Blender 2.8?

    • @GadgetWorkbench • 4 years ago +2

      Yes, Chordata is the inspiration, but I want to do it with minimal soldering skills and with smaller sensors. I'll have a look into Blender plugins...

  • @shockerson • 3 years ago

    Hi. Are you the author of this idea? Do you think it's possible to do this with the Rigify face rig, or would it be hard? I'm a newbie in Blender.

  • @zuckerpapa1234 • 4 years ago

    I have now tried every possible version of Blender, and after some fiddling it no longer gives me a console error at startup, but if I press Capture, Blender quits with the following error. I would really like to get this working and, as a next step, use other characters to drive it. Any help would be appreciated.
    qt.qpa.plugin: Could not find the Qt platform plugin "cocoa" in ""
    This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
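      One common workaround for this Qt plugin error is to point Qt at the plugin directory bundled inside the cv2 package before any OpenCV window is opened. A sketch, assuming your OpenCV wheel ships Qt plugins under `cv2/qt/plugins` (paths vary by install and Blender version):

```python
# Qt cannot find its platform plugin ("cocoa" on macOS). Many OpenCV wheels
# bundle the plugins inside the cv2 package; exporting
# QT_QPA_PLATFORM_PLUGIN_PATH before calling cv2.imshow() lets Qt find them.
import os

def qt_plugin_path(cv2_pkg_dir):
    """Build the path to the Qt plugins shipped inside the cv2 package."""
    return os.path.join(cv2_pkg_dir, "qt", "plugins")

# In Blender's Python console, before the capture operator runs:
#   import cv2, os
#   os.environ["QT_QPA_PLATFORM_PLUGIN_PATH"] = qt_plugin_path(
#       os.path.dirname(cv2.__file__))
```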

    • @GadgetWorkbench • 4 years ago

      Try this:
      stackoverflow.com/questions/60032540/opencv-cv2-imshow-is-not-working-because-of-the-qt
      but with blender's python folder...

    • @aleenahaider7313 • 4 years ago

      @@GadgetWorkbench could you give a heads-up for a trained hand model? Also, could we do this on video instead of in real time? I want to work on a sign language project. Thanks!

  • @acess091 • 4 years ago +7

    Can we do this with other models?

  • @Kam_maK • 4 years ago

    Hello, I tried both of your tutorials and I always get this message:
    " File "D:\Téléchargement\Vincent (1).blend\OpenCVAnimOperator.py", line 1
    importer bpy
    ^
    SyntaxError: invalid syntax
    location: :-1
    "
    I don't understand what it means ^^"
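    The traceback points at line 1 of the script: it reads `importer bpy` instead of `import bpy` (likely a copy/paste or autocorrect artifact, since "importer" is the French word), and "importer" is not a Python keyword. The difference can be demonstrated outside Blender (a sketch; `compile()` checks syntax without actually importing anything):

```python
# Line 1 of the pasted script reads "importer bpy"; Python's keyword is
# "import", so the parser raises SyntaxError before anything runs.
def is_valid_python(src):
    """Return True if src compiles as Python source."""
    try:
        compile(src, "<pasted>", "exec")
        return True
    except SyntaxError:
        return False

print(is_valid_python("import bpy"))    # True  (compiles; nothing is imported)
print(is_valid_python("importer bpy"))  # False (not a Python statement)
```

    Fixing the first line of OpenCVAnimOperator.py back to `import bpy` should resolve the error.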

  • @user-vy4oi4rg4l • 2 years ago

    It doesn't seem to work in Blender 3.1 :(

  • @thebads • 3 years ago

    File "/home/devv/Documents/blendre/facerig/Vincent.blend/OpenCVAnimOperator.py", line 30, in
    File "/home/devv/Documents/blendre/facerig/Vincent.blend/OpenCVAnimOperator.py", line 36, in OpenCVAnimOperator
    AttributeError: module 'cv2' has no attribute 'data'
    Error: Python script failed, check the message in the system console
    Well, that's something.