The potential of what you can create from this is sky-high.
This is awesome work! Congratulations to the MSR I3D / Holoportation team!
The future of cinema, theatre, TV, video games... you name it.
This is gonna revolutionize motion capture.
Especially for independent filmmakers/game designers.
This must require one hell of a setup to work in real time with minimal lag.
This captures depth and uses it to morph a volume that is then converted into polygons. As far as it's concerned, each frame is just a static mesh; there is no information that can be applied to a rig.
This is good for capturing big events for VR, so that you can let people move around and maintain true perspective. Imagine hundreds of these set up at a football stadium, with the result rendered as light fields for VR; that is what this kind of tech is for. It doesn't really benefit motion capture at all.
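To make that concrete, here's a rough sketch (my own illustration, not the authors' code) of the "fuse depth into a volume, then polygonize" idea: each depth frame updates a truncated signed distance field (TSDF), and marching cubes turns the zero crossing into a fresh, unrigged triangle mesh. The single fixed camera, the intrinsics matrix K, the volume placement, and all the sizes are assumptions for illustration; the real system fuses multiple cameras and warps the volume non-rigidly.

```python
# Rough sketch of per-frame volumetric fusion + polygonization.
# NOT the authors' code; single camera, no non-rigid warp, sizes made up.
import numpy as np
from skimage import measure  # marching cubes lives here

def fuse_depth_frame(tsdf, weights, depth, K, dim=128, size=2.0, trunc=0.05):
    """Integrate one depth frame (metres) into a dim^3 TSDF of extent `size` m."""
    # World coordinates of every voxel centre; the volume is assumed to sit
    # `size` metres in front of a pinhole camera at the origin.
    idx = np.indices((dim, dim, dim)).reshape(3, -1).T
    pts = (idx + 0.5) * (size / dim) - size / 2.0
    z = pts[:, 2] + size
    # Project voxel centres into the depth image with intrinsics K.
    u = (K[0, 0] * pts[:, 0] / z + K[0, 2]).astype(int)
    v = (K[1, 1] * pts[:, 1] / z + K[1, 2]).astype(int)
    h, w = depth.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.full(z.shape, np.nan)
    d[ok] = depth[v[ok], u[ok]]
    # Truncated signed distance along the ray; skip voxels far behind the surface.
    sdf = np.clip(d - z, -trunc, trunc) / trunc
    valid = ok & ~np.isnan(sdf) & (d - z > -trunc)
    # Standard weighted running-average update, in place via flat views.
    f, wgt = tsdf.reshape(-1), weights.reshape(-1)
    f[valid] = (f[valid] * wgt[valid] + sdf[valid]) / (wgt[valid] + 1.0)
    wgt[valid] += 1.0
    return tsdf, weights

def extract_mesh(tsdf):
    """Polygonize the zero crossing: each call yields an independent static
    mesh with no joints, rig, or skinning -- exactly the point made above."""
    verts, faces, normals, _ = measure.marching_cubes(tsdf, level=0.0)
    return verts, faces, normals

# Usage per frame: tsdf = np.ones((128,) * 3); weights = np.zeros((128,) * 3)
# fuse_depth_frame(tsdf, weights, depth_image, K); mesh = extract_mesh(tsdf)
```

Note there is nowhere in that output for skeleton or joint data to live, which is why frame-by-frame meshes don't retarget the way mocap data does.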
You guys are doing astonishing work!
Absolutely brilliant tech! Not so much for film or games, because the money is not in the capture but in the retargeting. This, however, could get you a VR seat at a show on Broadway or the Super Bowl. Amazing.
Very cool! Please just add this code to Kinect so we don't have to buy four more cameras :)
Excellent performance!
If this revolutionizes motion capture, does this mean less work has to go into making a 3D model? Such as designing the head, facial movements, mounting the 3D-modeled head onto a motion-captured body, etc.?
Awesome - hope this will be made publicly available to use/license/implement. Do you include approaches to deal with reflective/refractive/transparent surfaces or absorption? How about reconstructing from RGB (without D) cameras?
4:25 oh uh goodbye then
How is the "Qualitative Comparison" carried out? The source code of "DynamicFusion" doesn't seem to be publicly available even now; did you privately acquire the source code, or implement it on your own? Thanks for the reply! :-)
When can we test it? I've got some Kinects and an Oculus waiting to test it!
Amazing stuff.
I want this + live sports + VR headset, please... first quarter 2017?
Nope, but VR can show you da wae.
How about more cams, all around?
great
Wow
And this still isn't a thing? Just a prototype and that's all?
Why don't they make a game with this?
How do you get the ground truth?
I think the ground truth here is basically a green screen, so it will always be more accurate than a 3D estimation.
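If so, a simple way to picture that evaluation (my own guess, not anything the paper states) is a silhouette comparison: chroma-key the green screen to get a "ground truth" mask, then score the reconstruction's rendered silhouette against it, e.g. with intersection-over-union. The g_margin threshold and the mask convention below are made up for the example.

```python
# Hypothetical silhouette check against a green-screen mask; not the paper's
# actual evaluation protocol, just an illustration of the idea above.
import numpy as np

def greenscreen_mask(rgb, g_margin=40):
    """Foreground = pixels not dominantly green; g_margin is a made-up threshold."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return ~((g > r + g_margin) & (g > b + g_margin))

def silhouette_iou(gt_mask, rendered_mask):
    """Overlap between ground-truth and reconstructed silhouettes, in [0, 1]."""
    inter = np.logical_and(gt_mask, rendered_mask).sum()
    union = np.logical_or(gt_mask, rendered_mask).sum()
    return inter / union if union else 1.0
```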