I don’t really care about the sponsored parts; I sub to this channel just to see what these AI people have thought of next. It’s like my news.
Even though it was sponsored, I think you showed their strengths and weaknesses fairly. Great vid!
Can you add multiple camera sources to increase accuracy in DeepMotion 3D? I missed it if you said so in the video.
Nope.
How do I change the file to PMD format for MMD?
What about the DeepMotion copyrights? On the free version, what are we allowed to do with those 30-second free animations? Or what share do we get from those?
Definitely a ways to go before they render mocap suits obsolete. Still, amazing progress in so little time!
There are still instances where mocap suits will be needed, so I don't think they will be going obsolete soon.
Thanks for the feedback! We have been hard at work releasing quality updates and new features every few weeks. We don't expect to be able to replace tracking suits immediately; our current goal is to fill a gap with more cost-effective solutions that don't require a big investment.
They have a ways to go before it is even really usable. I've found you'll use up your credits in any tier just trying to get an OK base animation to work on.
@@LloydSummers Same here. I mean, even with the suit I have, I have to do a few takes to get it right. 25 minutes tops a month and 60% unusable just isn't feasible yet.
I love DeepMotion and may use it if I can't get my Rokoko suit to an actor.
Can you compare DeepMotion to Rokoko?
Pleased to see Master Ma.
What version of OpenPose is this?
DeepMotion is not reliable. I found we needed to redo it half a dozen times, but by then you've used up your credits. In practice it is unmanageably expensive, as you need to upgrade to deal with their unreliable framework. It also has some really bad jitter, and doesn't do facial or hands. This has convinced me to just use OpenPose, contribute back to an open project, and maybe tie in some smoothing-specific ML.
Of note: we also use Rokoko. Much smoother, but still needs cleaning. I'll see if I can't post a video on how our mocap compares. Edit: Oh wow, looks like OpenPose even has a Unity plugin.
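(For anyone wondering what even basic smoothing of jittery tracking data could look like before reaching for "smoothing-specific ML": here is a minimal sketch of an exponential moving average over per-frame keypoints. The (frames, joints, 2) array layout and the alpha value are illustrative assumptions, not OpenPose's native JSON format.)

```python
import numpy as np

def smooth_keypoints(keypoints: np.ndarray, alpha: float = 0.3) -> np.ndarray:
    """Exponentially smooth a (num_frames, num_joints, 2) keypoint array.

    Lower alpha = smoother but laggier; higher alpha = keeps more jitter.
    """
    smoothed = np.empty_like(keypoints)
    smoothed[0] = keypoints[0]
    for t in range(1, len(keypoints)):
        smoothed[t] = alpha * keypoints[t] + (1.0 - alpha) * smoothed[t - 1]
    return smoothed

# Example: 120 frames of 25 noisy BODY_25-style joints
noisy = np.random.rand(120, 25, 2).astype(np.float32)
clean = smooth_keypoints(noisy, alpha=0.25)
```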
Dunno if you're still using mocap; any change in your opinion of DeepMotion 7 months later?
@@tabby842 Yes, it's still quite unusable. I ended up cloning OpenPose (open source) and using that for a bit.
We actually have Rokoko suits, depth sensors, and VR equipment, so we have a lot more options than most folx. I think the pricing is unreasonable given that the number one reason for needing to reprocess captures is poor coding on the DeepMotion team.
There are DeepMotion alternatives now, though, that are cheaper and better at processing; you should check them out. Just Google for markerless motion capture solutions :).
@@LloydSummers I've been trying to get OpenPose to work in Unity for a while now; it's not working, Unity just keeps crashing.
Is there any AI motion capture software which can capture my finger movements? DeepMotion doesn't capture finger motion.
I think if you used this tool together with Cascadeur or Mixamo, the result would be better.
I think higher video FPS and quality might give a better result. However, that's just for a finer result; some of the positioning will still be estimated with a 0.7%-1.2% error currently, even on high-end imagery.
Creative processes are just going to be made so much easier with software like this. This is the time of indie games with small teams.
Can I export it to VMD format?
Given the jankiness of the results, that pricing is unreasonable
2:11 Is that the Chika dance?
Excellent comparison. I am assuming that once a 3D animation file is generated, it is possible to modify it (i.e. make it do different things)? Or are you stuck with the animation that was captured initially?
Yeah, you can always clean it up and modify keyframes.
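(As a rough illustration of that cleanup step, here is a minimal sketch using Blender's bundled Python API, bpy, to thin out the dense keys a mocap import typically produces. The 2:1 decimation is an arbitrary example for illustration, not a recommended recipe.)

```python
# Run inside Blender (bpy is Blender's bundled Python API).
import bpy

obj = bpy.context.active_object            # e.g. an armature imported from BVH/FBX
action = obj.animation_data.action         # assumes the object has baked keyframes

for fcurve in action.fcurves:              # one F-curve per animated channel
    # Copy the list first: removing points while iterating invalidates the iterator.
    doomed = list(fcurve.keyframe_points)[1::2]
    for kf in reversed(doomed):
        fcurve.keyframe_points.remove(kf)  # drop every second key, then hand-edit the rest
```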
On the other hand, as long as the estimation error is in just one dimension, you should expect at worst 1/3 of the dimensions to carry the error, and at best 2/3 of the dimensions to be solved (it does better than that in real life most of the time). The denominator is the number of dimensions there are to account for.
At worst, Z depth isn't solved and you just have sparse data. At best, X, Y, and Z are all figured out, but there's the handicap of having just one perspective point (the camera). There's no way to always cross-check.
The problem with AI-driven software for mocap is the lack of depth recognition, especially if recording with only one camera. Great for 2D animation, but very bad for 3D, as it won't recognize bones moving along the Y axis (front and back).
But that's the thing about AI: it can estimate depth from a single image by inferring it from, for example, the way objects shrink with distance or the parallax effect. If you close one of your eyes, you still have pretty decent depth perception despite it being just one image.
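(Single-image depth estimation is indeed a usable technique today. A minimal sketch using the publicly released MiDaS model via torch.hub; the model and transform names follow the intel-isl/MiDaS repo, and "room.jpg" is a placeholder path.)

```python
import cv2
import torch

# Load the small MiDaS model and its matching input transform from torch.hub
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("room.jpg"), cv2.COLOR_BGR2RGB)  # placeholder path
batch = transforms.small_transform(img)    # resize + normalize for MiDaS_small

with torch.no_grad():
    depth = midas(batch)                   # relative (inverse) depth map, shape (1, H, W)
print(depth.shape)
```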
@@GS-tk1hk I don't quite understand it. Before, mocap suits were used, and now AI is being used D:
Will it be possible to make animations with AI, without a mocap suit?
@@canalactivado It already is, but it's not as good as mocap suits. I think a company called DeepMotion is working on it, among others. Basically, the AI recognizes the human and how he/she moves in space, and translates that into virtual bones for animation.
I'm not sure if it has support or not, but you could probably use the Xbox Kinect V2 with the depth-sensing tech it has.
Yeah it can, lol, wtf are you on about? The whole purpose of AI is to make extremely good educated guesses. If it was trained on data that moves on the Y axis (and it was), then it can estimate that. As a matter of fact, I'm pretty sure you would be able to tell if every point of every bone was in the same plane. This comment is dumb.
Dude. This info is fucking gold. I'm sold on DeepMotion. Thanks for the video.
How do you like it so far?
Any multicam options?
'Ad' should be in the title - clearly visible from homepage and search results.
Is there something that could capture motion when the camera isn't fixed? Say, a close-up of walking with the camera moving 360° around it.
What about Brekel?
Thanks for this
DeepMotion: let's sponsor this guy so he can give us only a good review
bycloud: na'ahhh
Anyone compare this to iPi Soft?
Thank you for the amazing video!
Do we have a tutorial which shows how to set up OpenPose, for someone with no coding experience?
Amazing, thank you!
Can any of these free monocular solutions like OpenPose and VIBE actually get root motion? As much as I like them, without root motion, anything that has root motion is inherently going to be better.
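(For context on why root motion is tricky here: monocular estimators typically output root-relative joint positions, so any global trajectory has to be recovered separately. A minimal sketch of the split, assuming a hypothetical (frames, joints, 3) pose array with the pelvis at index 0; the SMPL-style 24-joint layout is an illustrative assumption.)

```python
import numpy as np

def extract_root_motion(poses: np.ndarray, root_joint: int = 0):
    """Split (frames, joints, 3) poses into a root trajectory and root-relative poses."""
    root = poses[:, root_joint:root_joint + 1, :]  # (frames, 1, 3) global trajectory
    local = poses - root                           # every joint relative to the root
    return root.squeeze(axis=1), local

poses = np.random.rand(240, 24, 3)                  # hypothetical SMPL-style joint data
root_translation, root_relative = extract_root_motion(poses)
print(root_translation.shape, root_relative.shape)  # (240, 3) (240, 24, 3)
```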
This would be every small-time VTuber's dream.
But don't the VTubers already do it? D:
I see some of them move their heads and some others move their fingers. *Don't they already do it with AI? Or with DeepMotion?* D:
@@canalactivado Probably FaceRig.
A surprise appearance of the Lightning Five Whip Combo!
Why are there just two extremes: a super expensive special suit, and then attempts to track motion completely in any clothes? Why not use something in the middle, still at an extremely low price, like an ordinary yoga suit with some markers on it, which would make tracking much more accurate?
I think you've just saved me a few thousand pounds here.
very helpful
I don't think you can compare a 2D solution to a 3D one. I don't know why you are taking this approach when you already stated there are open-source 3D pose estimation models. This comparison also doesn't make much sense given that these advantages depend on your use cases.
Seems like I wasn't clear enough in my video.
They might have slightly better or worse models to estimate 3D positions, but at the end of the day they rely on monocular 2D estimation first, then convert it into 3D. The same goes for the commercial options too. Essentially, they both rely on 2D estimations; it's just that DeepMotion's visualization is able to be presented in 3D. That's all.
A comparison video is literally about testing each of them in different use cases. This video is just to demonstrate to people the differences and similarities so that everyone knows what they will get out of these technologies when they use them.
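(As a concrete illustration of that 2D-first pipeline, here is a minimal sketch using MediaPipe Pose, which exposes both the 2D image-space landmarks and the 3D world landmarks lifted from them. MediaPipe is one example of such a model, not the one used in the video; "frame.jpg" is a placeholder path.)

```python
import cv2
import mediapipe as mp

img = cv2.cvtColor(cv2.imread("frame.jpg"), cv2.COLOR_BGR2RGB)  # placeholder path

with mp.solutions.pose.Pose(static_image_mode=True) as pose:
    results = pose.process(img)

if results.pose_landmarks:
    # 2D step: landmarks in normalized image coordinates (the actual detection)
    nose_2d = results.pose_landmarks.landmark[0]
    # 3D step: world landmarks in meters, lifted from the 2D estimate by the model
    nose_3d = results.pose_world_landmarks.landmark[0]
    print((nose_2d.x, nose_2d.y), (nose_3d.x, nose_3d.y, nose_3d.z))
```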
Awesome 🔥
Looks like they lowered the number of seconds you can upload on free accounts. It's now only 10 seconds!
You should know that this was the year of giant steps in motion capture...
Very cool
A lot of bugs in the motions.
Their pricing rates are weird. How about they just straight up charge per minute? No "bundles".
Like a 1-min vid = $2, straightforward.
How many coding languages do you know?
*The Lightning Five Whip Combo, holy crap*
Update: they now give 60 free credits and 30-second video lengths for free.
Best mocap is the Xbox 360 Kinect.
The Lightning Five Whip Combo 🤣
Hi
The Lightning Five Whip Combo and Cai Xukun, I can't hold it together 🤣🤣🤣
Great study. It looks like both solutions suck, unfortunately.
Anyone here who's tried DeepMotion?
DeepMotion is too expensive and charges you even for bad captures.
We just added a new feature that automatically fails videos that do not meet a certain animation threshold. We are also adding another new feature in the coming weeks that allows you to rerun a video that passed with new settings.
@@DeepMotionInc That is good to know. But with the price and quality, I can buy a Rokoko. I think the pay options are limiting and should include more minutes. But that is just me. I'm doing a training course using DeepMotion. It's pretty cool so far.
Don't worry. Within a year someone will come up with an easy free solution. 😀 Then you will scratch your head for having paid people for what's now free. Never pay anyone unless you're earning tons using it. Rather, use open source.
@@noedits5543 There are open source options out there right now and we welcome any and all comparisons to our service! We have a team that is continuously hard at work optimizing the animation quality and adding new features that provide added value, releasing a stream of frequent updates every few weeks.
No way; with a lot of drift, pose issues, and wrong rotations, we will not see these solutions in movies or games.
yeah
In the future, will it be possible to make animations with AI, without a mocap suit? D:
Somewhere a Blender dev is watching this and...
I think a sponsored video should be more clearly disclosed. I watched the entire video, and yet it wasn't clearly confirmed to be sponsored by you directly until reading the description; even then, I had to click the 'show more' button, which legally, within my location, wouldn't qualify under digital advertising regulations. You can also tag a video as an advert, which this one very much felt like to me. I like your work, but I don't feel you were very ethically transparent at all about this being an advert; the only transparency came in the way of knowing the social cues of when somebody is trying to sell you something, but beyond that it'd be very easy to miss, and I feel you really dropped the ball there. Even a 'thank you for sponsoring this video' at the start and end would have felt more transparent when mentioning DeepMotion, as by sponsoring they are by definition being given preferential treatment, which needs to be clear.
He says it verbally at the beginning of the video, and there is also an overlay on the actual video saying it contains a paid promotion. Sooo... maybe you just missed those?
@@crescentsmile3463 The overlay wasn't there when I made this comment, he added that afterwards. Equally, the description was different too and wasn't as clear as it currently is. I find it odd that you didn't consider this as a possibility. As for the disclaimer in the video, I listened to the opening several times before making the comment, and I didn't find it clear enough. I never said it wasn't disclosed, just I felt it wasn't disclosed in a way that's transparent. However, I'd retract that now that he's made the two main and most important changes with the overlay and the description change. I feel these changes make it ethically transparent, and I'm glad he took such a perspective constructively rather than dismissing it or assuming bad faith like you did.
@@DrMattPhillips He literally says 'DeepMotion is today's sponsor' in the video... so...?
@@crescentsmile3463 There's something known as digital advertising standards; mentioning it one minute in isn't considered transparent, nor does it contradict what I stated. I have no idea why you are still pushing back against my fair constructive criticism when he clearly read it and made the correct changes I suggested, thus resolving my initial point and making the content better for it. There's literally nothing negative here beyond your inability to respect a fair comment, made in good faith, not even directed at you specifically. It's bizarre.
@@DrMattPhillips "I watched the entire video and yet it wasn't clearly confirmed to be sponsored by you directly until reading the description" - actually mentioning it in the video does contradict what you said in your original comment. You're trying to throw shade at a dude and I'm just giving the facts.
AHHHHH NVIDIA card t,t
Hahahaha, the Lightning Five Whip Combo, I can't take it
Ma Baoguo
No martial ethics!
Can you convert a 2D image to a 3D model for me for free?
Ha ha
I can convert a real human being into an animal for free.
All of these AI things are just bullshit...
ChatGPT: wait a sec... let me explain: Motion capture (sometimes referred to as mo-cap or mocap, for short) is the process of recording the movement of objects or people. It is used in military, entertainment, sports, medical applications, bla bla bla...
Bodysuits suck.
Sure... pose the face. Go ahead, make one shot of a project with that horrible rig.
Blender users are done for...
Label this as an ad in the title. Worthless.
I mean, did he not say it was sponsored in the video? When you watch a movie or a TV show, they don't have their sponsors in the title or upfront. That just doesn't make sense. And worthless? You just weren't paying attention.
I mean how dare he talk about a service with a free offering and explain he's sponsored by them and then spend half the video talking about a free alternative to them.