I hope you get a Megagrant, its a great project. Congratulations for your work!
I was just watching random ass blender videos and this got recommended . . . goddamn incredible
😊 Thank you Rob. It's quite exciting. I still wonder how are able to do all that magic in your spare time. Inspired. Cheers. 👍🏾👍🏾👍🏾
Liked and subscribed! Also I want to say I appreciate you working for a grant rather than monetizing on Patreon and subscriptions and such. I find many subscription channels end up creating content to feed the people who are paying and turn into more "UA-cam channel" media companies rather than remain focused on gamechanging projects. Of course, they are good for the wealthy of educational info, but I think niche stuff like this takes knowledge forward.
Man from all my Heart - Thank You 🙏 You are an amazing person! Wish you all the best 💗
Definitely deserves a dev grant.
this is awesome
This is outstanding work! I'm enjoying watching it develop. Good luck with the grant :-)
Niceee I've been checking this for the last few months, glad to see you're still having fun with it and getting this out to the community
Rob this is great! I'll be trying this tonight and/or tomorrow night! Also...buying you a cup of coffee!
Thanks so much! That's amazing. As it happens I was just animating something and have come across a behaviour of the 'lip funnel' shape under certain conditions that kinda wrecked the capture so just a heads up if you are using it today. I'll try and get a fix in for tomorrow. :)
WOW! This is extremely generous of you, Rob. I wish you luck in getting the grant. Your work certainly deserves it. I do most of my work in DAZ Studio, but I'm fairly certain it's possible to export animations from Blender to DAZ. I'll be giving it a try in the very near future. Thanks again for the fantastic work!
Thanks! Version 0.8 is out now and I'll be putting together a walkthrough of the new features tomorrow hopefully. Going to be filling out the grant application after that!
I am honestly impressed with your work and I must congratulate you for it. I'm very curious to see how it evolves! Awesome and exciting stuff, for sure! Keep up the great work!
Pressure applied:) thanks for doing this.
Great progress! Fantastic work as usual, it's really great to see everything coming together nicely.
Thanks a bunch!
Thank you Rob, this tool worked on my custom character.
Amazing! Really going to follow along, as it would help tremendously with future projects.
I appreciate your hard work on this! I might try to use this alongside the Blender "Faceit" addon.
Yeah! I need to give that a look. Real time saver!
@@Squarecoin Absolutely. I bought it. The weight painting demonstration in the video is pretty legit. I have some complex faces from Daz3D characters, and that's gonna save me loads of time.
Funnily enough, it released after I finished my own technical workflow documentation. Now I've gotta make a change because of that addon!
I also have faceit. That's what I'm going to try with some Make Human models
I'd be interested in knowing how it works for you Scott! email4emo at gmail dot com
This is so cool Rob! Can't wait to see more!
I loved this solution. For me it's better than the others I've tested, because we can tweak it before sending it to Blender.
Today I spent more than 4 hours testing it. I had some crashes at the beginning, but now it is working great.
I plan to make a decent animation to share on Twitter to show more people your solution.
Do you use Twitter? I would like to tag you there when I finish the decent animation (hopefully tomorrow).
And great work you are doing!!!
I have also planned to work on that on my channel. I will use hand ✋ tracking and pose estimation, using a module called MediaPipe.
great work!
I haven't tested it yet but your tracker seems to be very good.
Locking the export to one predefined type of rig seems to me to be a bad idea for professional use.
There is an option that seems to be missing from your program, which would make it useful in my work:
the ability to export tracking point animations as FBX.
That would allow the use of your tracker with our own rigs.
Subscribed and liked... pressure applied!!!
Yey! Can't wait to try it
Hi, I have a little problem. When I click on export, the program crashes. :( I was trying to cut the 3-minute video into smaller sequences, but the software crashed when I wanted to export my face tracking.
Is this suitable for realtime animation through an Unreal bridge? I really don't wanna buy an iPhone X.
Awesome.
Do we need to set up blendshapes or morph targets on our character to use it? Does it require pre-set-up blendshapes, I mean, or is it based on realtime deformation and skinning?
And I am curious whether we can record the animation to use in Maya.
Hi Rob, I can't set key poses and the panel is disabled in v0.8! What am I doing wrong?
Love you bro. Keep this up!
Thanks! Will do! There's a new update out on the github page now with a lot of new features. Tutorial tomorrow hopefully.
It is really really wonderful. But i am not able to re-create the animation in Unreal. Can you share the level blueprint of the project? 'Strongtrack08UnrealExample' doesn't have the same blueprint as in this video.
Thanks
In the Blender example file the tracked text file address has forward slashes, but the address from my Windows has backslashes... does this have anything to do with why I can't import a track?
What I mean is, in the example file the text file reads;
C:/Users/Robert/Desktop/anim_export.txt
but my windows address reads;
C:\Users\Bio\Desktop\Bio_02A.txt
I have "\" instead of "/"
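For what it's worth, the slash direction is usually harmless: Python (which Blender runs on) accepts forward slashes in Windows paths and can normalize between the two spellings. A quick sketch using `ntpath`, the Windows flavour of `os.path` that is importable on any OS:

```python
import ntpath  # Windows path rules, importable on any OS

# Both spellings normalize to the same Windows path:
p1 = ntpath.normpath("C:/Users/Robert/Desktop/anim_export.txt")
p2 = ntpath.normpath(r"C:\Users\Robert\Desktop\anim_export.txt")
print(p1 == p2)  # True
```

So a failed import is more likely the file itself (or its contents) than the separators.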
When I'm trying to export the txt file, the window just closes. Anyone having the same issue? Please let me know if there's any solution.
Me too, same problem.
To export without the error, you need to make ALL extraction poses, not just the few the program suggests...
Hello, and great job you have here. I was wondering how many, or which, shape keys my model should have to be compatible with StrongTrack? Thank you!
Does this export blendshape info like most of the ARKit apps? I'm looking hard for a replacement for Faceshift, and ARKit isn't up to snuff!
This thing... I'll be back when I try it out
Hi, wow, great job! I am trying to train a model (tomorrow I will do some animation if I succeed). But for now, the iris point disappears... Is it a problem you know about?
Very nice work, thank you! But how can we apply that facial animation to a specific character model in Blender? Not sure how to do it.
Amazing stuff! Can it work with pre-recorded video as well?
Is it possible to transfer this to other 3D software like blender?
I hope you will get the grant!
When I click export, set a name for the txt file, and click save, the program crashes with the error:
IndexError: index 1 is out of bounds for axis 0 with size 1
EDIT: to export without the error, you need to make ALL extraction poses, not just the few the program suggests...
Great work! May I ask which method you used for decomposing the coefficients? i.e. given a facial pose, how does your code decompose it into the base poses like smile, open jaw, etc.? I'm very interested in the technicality :)
It's actually the easiest bit of the whole thing in a way; I'm just using SparseCoder from scikit-learn, which takes care of all the maths. If you look in decomp_functions.py you'll see my very sloppy implementation in there.
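To make that concrete, here is a rough sketch of the idea, not the actual decomp_functions.py code: each key pose becomes a row of a dictionary of landmark offsets, and `SparseCoder` solves for non-negative weights that reconstruct the current frame. The dictionary values and shapes below are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

# Toy "key poses": each row is a flattened set of landmark offsets
# from the neutral pose (2 poses x 6 values here; real data is bigger).
dictionary = np.array([
    [1.0, 0.0, 0.0, 0.0, 0.0, 0.0],   # e.g. "smile" offsets
    [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],   # e.g. "jaw open" offsets
])

# Current frame's offsets: 0.5 * smile + 0.7 * jaw open.
frame = np.array([[0.5, 0.7, 0.0, 0.0, 0.0, 0.0]])

coder = SparseCoder(dictionary=dictionary,
                    transform_algorithm="lasso_lars",
                    transform_alpha=1e-6,   # tiny penalty: near least-squares
                    positive_code=True)     # coefficients can't go negative
coeffs = coder.transform(frame)
print(coeffs.round(2))  # roughly [[0.5 0.7]]
```

The `positive_code=True` flag matters for this use case, since a negative amount of "smile" has no meaning as a morph target weight.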
@@Squarecoin Awesome thanks! I'll take a deeper look!
Awesome!
you are great
So this is for Windows only, no Apple?
Any chance for this to be compatible with Unity?
I'm having trouble importing it into Blender.
You are the best❤
Is it possible to use your own model?
Sir, I need help.
I can get approximate facial landmark location data in realtime. Now I want to transfer it to any rig in Blender. How should I move forward?
Should I use shape keys or rig bones?
And how do I transfer the landmarks to any rig?
I need this for my college project!
Thanks in advance
Shape keys are the ultimate point you want to end up at. You can 1) use a rig to then in turn generate shape keys (you can see me do this in my second-most-recent video), 2) resculpt the example mesh to resemble your target mesh, or 3) use a solution like FaceIt for Blender.
Rob. I just don't understand one thing. I see StrongTrack capturing points of movement in a video, and then you just export a file, that's awesome - but it is not shown in the tutorial how the 3D model was set up to connect with StrongTrack. Could you please make a video explaining this process so we can make it work for our custom models? It is not clear to me how you prepared the model to receive this information.
Thank you for giving this for free. Good work!
The file just contains 51 values that range from 0-1 and align with 51 different shapes that a mesh can have (52 with neutral/no expression). If you look at the example blender mesh (or in unreal) you should be able to see the different shape keys/morph targets. So the tricky part is making a mesh with those 51 shapes. You can take the mesh I've provided and sculpt that to fit your own mesh (which I show in an earlier video with Captain Rex from Star Wars), create your own from scratch with a rig as seen in my previous video (pt 2 and 3 of face modelling) or you could also use something like FaceIt for Blender which is a product that came out recently that makes the same standard 51 shapes. Does that make sense?
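As an illustration of how simple that data is, a reader for the exported file might look like the sketch below. This is hypothetical: it assumes one frame per line with 51 whitespace-separated coefficients, which may not match the layout StrongTrack actually writes.

```python
# Hypothetical reader for the exported track file, assuming one frame
# per line and 51 whitespace-separated coefficients (0-1) per frame.
def read_track(lines):
    frames = []
    for line in lines:
        values = [float(v) for v in line.split()]
        if len(values) != 51:
            raise ValueError(f"expected 51 values per frame, got {len(values)}")
        frames.append(values)
    return frames

# Usage: with open("anim_export.txt") as f: frames = read_track(f)
```

Each of the 51 numbers in a frame is then just the weight for one shape key on the target mesh.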
@@Squarecoin Kinda. It makes sense to me if the point movements exported from StrongTrack then recognize the bones that are in the model, which I see is important to have, since you mentioned we have to build our models with the same rig as you did (plus the shapekeys). So, do the points locate the appropriate bones automatically? No need for matching their names or anything? Or does it just identify them by the position of the bones relative to the points?
Sorry if my questions are stupid! I am a modeler and not an animator :D. But I am very interested in this because I plan to work on my own game in Unreal.
The bones don't come into play with this implementation; it's all shapekey based. You can think of the shapekeys as basically just 51 different statues in a museum showing the same person pulling different expressions that we've condensed into one instance that can blend between them all. That's all it is really. Bones are an alternative way to animate a face, if you want to 100% accurately show a jaw rotating for example - but for what we're doing here, not necessary. Bones can also come into play as a way to generate the different statues but only as an intermediate step.
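In equation form, that blending between "statues" is just a weighted sum of offsets from the neutral shape. A toy numpy version, with a made-up 4-vertex mesh and made-up weights:

```python
import numpy as np

neutral = np.zeros((4, 3))            # 4 vertices at rest (toy mesh)
smile   = neutral + [0.0, 0.1, 0.0]   # "statue" 1: vertices shifted up
jaw     = neutral + [0.0, -0.2, 0.0]  # "statue" 2: vertices shifted down

def blend(weights, shapes, neutral):
    # final = neutral + sum_i w_i * (shape_i - neutral)
    out = neutral.copy()
    for w, s in zip(weights, shapes):
        out += w * (s - neutral)
    return out

mesh = blend([0.5, 0.7], [smile, jaw], neutral)  # 0.5 smile + 0.7 jaw open
```

This is the standard blendshape/shape-key formula; every vertex moves as the weighted mix of its positions in the different "statues".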
@@Squarecoin Yes, I know what shapekeys and bones are. At first I just didn't understand what the requirements were for the model to have such a seamless match with the file you generated from StrongTrack. Now it all makes more sense. Thank you for the clarification.
However, in certain situations people might be interested in the output animation being driven by bones, in games for example. I read somewhere about game studios that, after the facial motion capture is done, run a program to convert the vertex animation to bone-skinned animation. If I remember well, they did that with ''Hellblade: Senua's Sacrifice''. Studios have such programs in-house as their own; interestingly, I didn't find any free or even paid application that would do this. There was one, though, made by Hans Godard that he used at Naughty Dog, but it was kinda expensive and only for Maya. Article here - lesterbanks.com/2015/04/skinning-converter-for-maya/
Just saying in case you might be interested.
But hey! What you already did is already fantastic, and for free!! Thank you again for this!
Very interesting papers! I'll have to look into the subject more, though I've always personally been a bit unsure about the advantage of using bones as opposed to morph targets in facial rigging (outside of the eyes and perhaps the jaw of course). Is it considered more performant?
Hi Rob, I'm an intermediate Python coder with a passion for VFX, and I'm starting to invest time in learning Blender. Can I join you in developing this project? I was planning to create the same type of project, so after seeing yours I'm so excited. Please let me know, thanks.
Well there's always the github repo if you want to look through that and see what you make of it. That's the beauty of open source :)
Nice :D
Great work. Can you make it work for LightWave 3D??
I don't know how many have tried, but when I tried it was very difficult to follow the tutorial. First, I was not able to install from the exe file, and there was no log file to see what the exception was, so I gave up trying to run from the exe. However, I was able to run it from Anaconda. The major issue I ran into was that I didn't know where exactly to put the 51 key points. If you could provide the accurate positions of all 51 keypoints in an image with descriptions (e.g. point 23 should be in the middle of the nose, and so on), that would be really helpful, because when I was tracking the positions on the inner lips I somehow had 4 points on the upper lip and 2 points on the lower lip. Moving track points is also not very intuitive; it took me a while to figure out that a right mouse click moves the whole group.
It was really troublesome to record my own video and hit the extreme poses. First of all, how many extreme poses are required, and what counts as an extreme pose? It would be really helpful if there were a sample video to try out, so that after getting comfortable a user could record their own video.
One thing to clear up is when to press F and T and when to press W and N. I don't think anyone without prior knowledge of AI will understand that they have to train models after the landmarks are set.
Also, will it improve significantly if we have 51 (or at least eyebrow/nose/mouth) trackers on our face while recording?
After a long, painful process the tracking was really good, but then I was not able to save the file. The exception is as below:
File "strongtrack.py", line 854, in export
mouth_coeffs, brow_coeffs, _, _ = decomp.findCoeffsAll(points,self.keyposes, self.keydrops)
File "strongtrack-0.7\0.7\decomp_functions.py", line 224, in findCoeffsAll
shiftedPosesMouth = shiftKeyPoses(width_points, mouth_centre, keyposes_mouth, 'mouth')
File "strongtrack-0.7\0.7\decomp_functions.py", line 258, in shiftKeyPoses
width_keypose = (keyposes[0][16][0]-keyposes[0][0][0])
IndexError: index 0 is out of bounds for axis 0 with size 0
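The traceback above suggests the export ran with an empty keyposes array. A defensive guard along these lines (hypothetical, not the actual strongtrack.py code, and the required count is invented) would fail with a readable message instead of an IndexError deep inside the maths:

```python
def check_keyposes(keyposes, required=4):
    """Hypothetical pre-export guard: refuse to export until enough
    key poses have been set, instead of crashing with an IndexError
    when keyposes[0] is indexed on an empty array."""
    if len(keyposes) < required:
        raise ValueError(
            f"Only {len(keyposes)} of {required} key poses set; "
            "set all extraction poses before exporting.")
```

Called at the top of the export routine, this turns the cryptic crash into an instruction the user can act on.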
But it seems a really promising application that you are building, sir. Thank you very much.
NVM... I didn't set keyposes. It fits really well on the provided sample Blender project. CHEERS!!!! But there are flickers on the mouth movements.
Thanks for your feedback! Being a pre-release of something put together by an amateur it is very buggy....as I hope I'm always trying to make clear. It's possible the flickering mouth movements are being caused by something that I'm working on fixing as we speak. And yep, there's a lot of quality of life improvements to be made with UI such as - as you say - indicating best layout. As the weeks/months roll on things will be improving.
Also: sorry to hear it was a long and painful process. Overcoming the fiddly-ness of the landmark placements is one of the reasons I'm looking to assemble a copyright-zero dataset and, based on your comment, I'll probably pivot toward spending more time on that.
Does anyone know how to get this into Maya?
Can I use it on 32-bit Windows 7????
Can this be used for a grease pencil rig?
damn coollllllll!
Has anyone else had an error loading Dlib at line 17 (file strongtrack.py) when running the Windows 10 64-bit executable? Dlib is installed in my Python directory.
Huh. That's weird. I've been testing it on friends' machines and virtual machines with just blank installations of Windows 10 and no sign of issues. Suppose I might need to rethink how to package it up. Sorry about that!
@@Squarecoin Rob, you've got better things to get on with. It's probably my setup. I was just hoping some others following your work would have a suggestion. Don't do anything unless others have the same issue :-)
@@Squarecoin I've found out that ElementTree didn't install properly, so I'll work on finding out why. Don't put any time into helping on this, Rob. I'm sure it will be something I haven't installed properly.
Hi Rob, this is great! I have wanted to implement this kind of face mocap in a Raspberry Pi motion capture suit that I am working on. I tried a lot, and then, boom, I came across your video...
Please can you tell me the use of:
1) base.npy
2) base_face.npy
in the /data folder of the source code? Thanks in advance
Hi. base_face.npy is just the default position of the face dots if you haven't begun positioning them yet, so just an array of 2D co-ordinates. It could probably be rolled into the main scripts, tbh, but I haven't got around to it yet. base.npy is an array that's used when taking an arbitrary number of coefficients for different key poses and converting them to an array of the 50 or so morph targets. Super simple stuff, but the words escape me at this time. Say you have an expression of 0.5 smile, 0.7 jaw open. The final data that's saved can't just be those two numbers, because you're streaming to a model that has 50 different shapes. So with base.npy you end up with a full list of all 50 or so shapes, with smile and jaw being 0.5 and 0.7 respectively within that, while the other 48 are 0.0. It's a bit over-engineered for now, but it will be useful when version 0.9 or 1.0 is released, where greater control over all 50 of the shapes can be tied to the coefficients more flexibly.
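A numpy sketch of that expansion step (the slot indices and shape count here are invented for illustration; the real mapping lives in base.npy):

```python
import numpy as np

NUM_SHAPES = 51
SMILE, JAW_OPEN = 3, 17   # hypothetical slot indices for two shapes

def expand(coeffs):
    """Expand a few named coefficients into the full 51-value frame,
    with every other shape left at 0.0."""
    full = np.zeros(NUM_SHAPES)
    for index, value in coeffs.items():
        full[index] = value
    return full

# 0.5 smile + 0.7 jaw open becomes a full frame of 51 weights:
frame = expand({SMILE: 0.5, JAW_OPEN: 0.7})
```

The streamed/exported data is then always a fixed-length frame, regardless of how few key poses were actually active.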
#MEGAGRANTFORROBINMOTION
Can I use this with any model/rig?
In theory yeah....it has to have the corresponding shape keys (or morph targets). The example files contain such a mesh which you could adapt to a sculpt or you could look into a tool such as FaceIt for Blender which helps quickly create the right array of keys. It's the same process as shown in the previous video I just put out where you rig a face and then bake keys.
Really nice! That's what I'm looking for! I'll try it soon with MetaHuman. I tried one before, but I gave up because there were 3 pieces of software to sync and then retarget onto the character in Maya (why Maya?? So stupid, and why retarget with separate software?). Your way is so much better.
Hey mate, did it work with MetaHumans?
Does Android work with your app? 😊
It would in the sense that it just uses regular video, so anything that can record video works. It doesn't work in real time, but currently this program is about taking recorded video.
@@Squarecoin that's perfect! Recording works for me! Thanks for this amazing app Rob!
i poop