Yet Another Comparison of $100 Markerless MoCap and $25k Optical Mocap
- Published Aug 23, 2024
- Viewer's Left: Dollars Markerless MoCap, capturing from the video and streaming to Unity in real time
- Viewer's Right: 12 × OptiTrack Prime 13 cameras, Manus Prime II Gloves
Character on the Viewer's Left:
Unity-Chan! KAGURA Style(URP)
unity-chan.com...
Original video,
www.bilibili.c...
www.dollarsmoc...
#mediapipe #unity #unity3d #motioncapture #mocap #miku #mikumikudance #mmd
Once the one in the middle has the clothes fully rendered, he'll be the 2nd most realistic of the three.
Also the middle model looks a tiny wiener bit gay the way he dances.
But aside from that the right model looks so fucking human.
The biggest difference is that the one on the left looks like a dead puppet being pulled along, while the one on the right looks like it has a soul and actually belongs in the scene, with the depth to really sell the vibe.
It's hard to reach a similar level of quality without true 3D reference data.
Mostly it has to do with a lack of important details on the left model: no shadows, the ankles never move, static facial expressions, the fingers barely move, and the model isn't planted on the ground. Makes me wonder if the problem is with the model and not the mocap itself. They should've used the same model for a fairer comparison.
Besides, a good animator could clean up these issues (as they do with mocap anyway) and it would look very convincing to an untrained eye. Not good enough for titanic movie/game productions, but those would buy the expensive system anyway; for a smaller studio or an individual, this is a great price for the quality.
I wonder how they go about hiring these people and how these guys even become mocap actors
probably look at people with dance and performance backgrounds, (kinda like how wrestling games use trainers and trainees for motion capture performances for video games)
Step 1: record yourself dancing 5,000 times
Step 2: watch recordings
Step 3: ?????
Step 4: profit!
well step 1: you have to be born. . .
yeah that's as far as I know
I love how at 3:00 one of the OptiTrack markers just falls on the floor and he hides it by kicking it away.
I love these videos so much, they're so fun to watch! Please do more.
The left is me in dance class
aww the one on the right side is so short 😂❤
Seeing these comparison videos got me thinking: at what point is it too much to pay for one of these 3D models? When will there be barely any performance difference between the models? Like a $1000 IEM vs a $1500 IEM or something, with people chasing the last 10 percent of improvement.
I don't think you're paying for the models. You're paying for how well it can sync your motion within the suit to the model you have. My understanding is that the $100 option is using the video to try and sync the motion to the model on the left. So it's sort of watching the video like us viewers, and basing the movements on that. But because of that, it doesn't have as much data to more accurately make the model move like the dancer. The model on the right though, is using the actual points on the suit, from the front and back. So the model is more likely to move like the dancer.
As someone mentioned, the $100 option only has access to 2d space. So it's always going to look off, it has no frame of reference for 3d objects.
I wonder if there's a way to sort of make that less of an issue though. To have the program still use video to sync up the animation, but somehow have it respect 3d constraints.
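The constraint the comments above describe comes down to perspective projection discarding depth. A minimal sketch (plain NumPy, with a made-up focal length and joint positions, not tied to any particular mocap product) of why a single camera cannot distinguish two joints that lie on the same viewing ray:

```python
import numpy as np

# Pinhole projection: a 2D image point cannot recover depth.
# Two hypothetical 3D joint positions along the same camera ray
# project to the identical pixel, which is why single-camera
# markerless mocap has to guess depth from learned priors.
f = 1000.0  # assumed focal length, in pixels

def project(p):
    """Perspective projection of a 3D point (x, y, z) to 2D pixel coords."""
    x, y, z = p
    return np.array([f * x / z, f * y / z])

near = np.array([0.2, 0.1, 2.0])  # a joint 2 m from the camera
far = near * 2.5                  # a different joint, 5 m away, same ray

print(project(near))                             # [100.  50.]
print(np.allclose(project(near), project(far)))  # True: depth is lost
```

Any technique that "respects 3D constraints" from a single view, as the comment suggests, has to break this tie with priors (limb lengths, plausible poses) rather than measurement.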
It specifically states markerless. You can do proper mocap for free with the right software. The uploader picked markerless because it's the cheapest and most janky, whereas optical is more expressive and less conventional, which is why it costs more. Neither is the best mocap technique; it's just for views.
@tiacool7978 Rokoko Studio has a $20/month streaming mocap package that supports two webcams, meaning you can gain extra accuracy by adding another perspective angle.
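A second calibrated camera is what makes depth recoverable: with two known viewpoints, each joint can be triangulated. A minimal sketch of linear (DLT) triangulation, assuming idealized pinhole cameras with identity intrinsics and a hypothetical 1 m baseline (illustrative only, not any vendor's actual pipeline):

```python
import numpy as np

# Two calibrated cameras: one at the origin, one shifted 1 m along x.
# Projection matrices P = [R | t], assuming intrinsics K = I for simplicity.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def triangulate(P1, P2, u1, u2):
    """Linear (DLT) triangulation of one 3D point from two 2D observations."""
    A = np.vstack([
        u1[0] * P1[2] - P1[0],
        u1[1] * P1[2] - P1[1],
        u2[0] * P2[2] - P2[0],
        u2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]          # null vector of A, up to scale
    return X[:3] / X[3]  # dehomogenize

# Synthetic joint, projected into both cameras, then recovered.
X_true = np.array([0.3, 0.2, 4.0])
u1 = P1 @ np.append(X_true, 1.0); u1 = u1[:2] / u1[2]
u2 = P2 @ np.append(X_true, 1.0); u2 = u2[:2] / u2[2]

print(triangulate(P1, P2, u1, u2))  # recovers [0.3, 0.2, 4.0]
```

This is the basic reason an extra webcam angle improves accuracy: disagreement between the two rays pins down the depth that a single view has to guess.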
Poor Unity-chan.
I think the one on the right has motion blur! Look at her hands around 1:45 or so...
if it's a proper comparison, then use the same model!
10 months ago I started working on AI. 4 months ago I discovered how AI helps 2d models appear 3d. Then I discovered vtubers. Then found out how there's some serious mocap artists among them. Suddenly I've got mocap videos in my feed. This is how the algorithm should work. Subd n liked.
Wow!!! This software is amazing and a game changer!!! ❤️👏🏿👏🏿👏🏿👏🏿👏🏿🙌🏿
🎉 thanks iclone 7 sir ❤
I just realised that's Twin Turbo from Uma Musume lol
Optitrack
If I see anyone faint and fall for VTubers, I don't blame them, they're too real!!
Dang the dancer on the left needs a LOT more practice on their dancing and… facial expressions, it’s like, they’re staring straight into my soul 😰
I didn't know which one to look at
Amazing!
The middle one looks really realistic
this isnt fair
does the $25k one get facial expression capture as well?
No. What you would want to do is a facial capture with a different rig. You could add tracking to it, but you'd be fixing it from here to next week because of all the movement artifacts it would cause.
This isn't a valid comparison when you have one locked to screen space and one unlocked to world space.
One having the capability to track and represent real time location is certainly part of the comparison and does not invalidate it.
It's fair enough when you factor in the original Optitrack one was rendered by the original dancer/producer.
The markerless one was effectively added on top, probably only using the original video as reference.
Not even sure they asked for permission from the original owners.
average vtuber