Yet Another Comparison of $100 Markerless MoCap and $25k Optical Mocap

  • Published Aug 23, 2024
  • Viewer's Left: Dollars Markerless MoCap, capturing from the video and streaming to Unity in real-time
    Viewer's Right: OptiTrack Prime 13 × 12, Manus Prime II Gloves
    Character on the Viewer's Left:
    Unity-Chan! KAGURA Style(URP)
    unity-chan.com...
    Original video:
    www.bilibili.c...
    www.dollarsmoc...
    #mediapipe #unity #unity3d #motioncapture #mocap #miku #mikumikudance #mmd

COMMENTS • 36

  • @bl8388
    @bl8388 10 months ago +167

    Once the one in the middle has the clothes fully rendered, he'll be the 2nd most realistic of the three.

    • @-camaro
      @-camaro 9 months ago

      Also the middle model looks a tiny wiener bit gay the way he dances.
      But aside from that the right model looks so fucking human.

  • @SPLICEKNIGHT
    @SPLICEKNIGHT 10 months ago +138

    The biggest difference is that the one on the left looks like a dead puppet being pulled, while the one on the right looks like it has a soul and actually belongs in the scene, with the depth to really sell the vibe.
    It's hard to get a similar level of quality without 3D reference data.

    • @raven75257
      @raven75257 10 months ago +13

      It mostly has to do with a lack of important details on the left model: no shadows, ankles never move, static facial expressions, fingers barely move, and the model is not stuck to the ground. Makes me wonder if the problem is with the model and not the mocap itself. They should've used the same model for a better comparison.
      Besides, a good animator could clean up the issues (like they do anyway with mocap) and it would look very convincing to an untrained eye. Not good enough for titanic movie/game productions, but they would buy the expensive one anyway; for a smaller studio or individual this is a perfect price for the quality.
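The "not stuck to the ground" artifact mentioned above is typically removed with a foot-contact cleanup pass: pin the foot to the floor whenever it is low and nearly stationary. A minimal sketch of that idea in Python (the thresholds and function name are illustrative, not from any specific mocap tool):

```python
# Hypothetical foot-grounding pass: snap a foot joint onto the floor
# whenever it is close to it and nearly still, which removes the
# "floating puppet" look markerless capture often has.

FLOOR_Y = 0.0
CONTACT_HEIGHT = 0.03   # meters: foot counts as "in contact" below this
CONTACT_SPEED = 0.15    # m/s: ... and only while it is nearly stationary

def ground_foot(y, vertical_speed):
    """Return a corrected foot height for one animation frame."""
    if y - FLOOR_Y < CONTACT_HEIGHT and abs(vertical_speed) < CONTACT_SPEED:
        return FLOOR_Y  # planted: clamp to the floor
    return y            # mid-swing: leave the capture untouched

print(ground_foot(0.01, 0.05))  # near floor, slow -> 0.0 (planted)
print(ground_foot(0.25, 1.20))  # mid-kick -> 0.25 (unchanged)
```

A real cleanup pass would also lock horizontal sliding during contact and blend the clamp in and out over a few frames, but the contact test itself is this simple.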

  • @yyidhraa
    @yyidhraa 1 year ago +139

    I wonder how they go about hiring these people and how these guys even become mocap actors

    • @Ravenholm337
      @Ravenholm337 10 months ago +10

      They probably look for people with dance and performance backgrounds (kind of like how wrestling games use trainers and trainees for motion capture performances).

    • @Benzinilinguine
      @Benzinilinguine 10 months ago +10

      Step 1: record yourself dancing 5,000 times
      Step 2: watch recordings
      Step 3: ?????
      Step 4: profit!

    • @mojofier1909
      @mojofier1909 10 months ago +5

      well step 1: you have to be born. . .
      yeah that's as far as I know

  • @maththenight
    @maththenight 1 year ago +63

    I love how at 3:00 the optitracker just falls on the floor and he hid it by kicking it

  • @meli0t0
    @meli0t0 1 year ago +43

    I love these videos so much, they're so fun to watch! Please do more!

  • @qaqbqc1005
    @qaqbqc1005 10 months ago +8

    The left is me in dance class

  • @Moemoepot
    @Moemoepot 1 year ago +8

    aww the one on the right side is so short 😂❤

  • @Reubenhater
    @Reubenhater 1 year ago +34

    Seeing these comparison videos got me thinking: at what point is it too much to pay for one of these 3D models? When will there be barely any performance difference between the models? Like a $1000 IEM vs. a $1500 IEM or something, with people chasing the last 10 percent of improvement.

    • @tiacool7978
      @tiacool7978 1 year ago +40

      I don't think you're paying for the models. You're paying for how well it can sync your motion within the suit to the model you have. My understanding is that the $100 option is using the video to try and sync the motion to the model on the left. So it's sort of watching the video like us viewers, and basing the movements on that. But because of that, it doesn't have as much data to more accurately make the model move like the dancer. The model on the right though, is using the actual points on the suit, from the front and back. So the model is more likely to move like the dancer.
      As someone mentioned, the $100 option only has access to 2d space. So it's always going to look off, it has no frame of reference for 3d objects.
      I wonder if there's a way to sort of make that less of an issue though. To have the program still use video to sync up the animation, but somehow have it respect 3d constraints.
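The "3D constraints" idea in the comment above has a classic minimal form: if a bone's real length is known, its tilt toward or away from the camera can be recovered from its 2D projection, up to a front/back sign ambiguity. A toy sketch of that single-bone case (names and numbers are illustrative, not any tool's actual API):

```python
import math

# Hypothetical single-bone 2D-to-3D lift: given a bone's known length L
# and the 2D positions of its endpoints, the depth offset dz follows from
# L^2 = dx^2 + dy^2 + dz^2. The sign of dz (toward vs. away from the
# camera) is ambiguous from one view, which is one reason single-camera
# markerless capture looks "off".

def lift_bone(parent_2d, child_2d, bone_length):
    """Return the child's depth offset relative to its parent (magnitude only)."""
    dx = child_2d[0] - parent_2d[0]
    dy = child_2d[1] - parent_2d[1]
    planar_sq = dx * dx + dy * dy
    # A noisy detection can project longer than the bone itself; clamp to 0.
    dz_sq = max(bone_length * bone_length - planar_sq, 0.0)
    return math.sqrt(dz_sq)

# A 0.26 m forearm whose projection spans only 0.10 m on the image plane
# must be strongly foreshortened; the recovered depth component is:
print(round(lift_bone((0.0, 0.0), (0.06, 0.08), 0.26), 3))  # 0.24
```

Real markerless systems chain this kind of constraint over the whole skeleton and resolve the sign ambiguities with learned pose priors, but the per-bone geometry is exactly this.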

    • @laylafishborne6169
      @laylafishborne6169 10 months ago +1

      It specifically states markerless. You can do proper mocap for free with the right software. The uploader picked markerless because it's the cheapest and most janky, whereas optical is more expressive and unconventional, meaning it costs more; but neither is the best mocap technique. It's just for views.

    • @zemptai
      @zemptai 9 months ago

      @tiacool7978 Rokoko Studio has a $20-a-month streaming mocap package that supports 2 webcams, meaning you can gain extra accuracy by adding another perspective angle.
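Why a second webcam helps can be seen in the simplest two-view case: with a known distance between the cameras, a joint's depth falls out of the disparity between the two images instead of being guessed. A toy sketch under idealized assumptions (rectified side-by-side views, made-up numbers; real software calibrates all of this automatically):

```python
# Hypothetical two-webcam depth recovery for one tracked point.
# Assumes an idealized rectified stereo pair: focal length f in pixels,
# baseline B in meters, and the point visible in both images.

def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth from horizontal disparity between the two camera images."""
    disparity = x_left - x_right  # pixels; larger disparity = closer point
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_m / disparity

# A wrist seen at x=650 px in the left image and x=570 px in the right,
# with f=800 px and the webcams 0.2 m apart:
print(stereo_depth(650, 570, 800, 0.2))  # 2.0 meters from the cameras
```

This is exactly the information a single-camera markerless setup lacks, which is why adding one more perspective angle noticeably tightens the result.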

  • @imfromheII88
    @imfromheII88 1 year ago +5

    Poor Unity-chan.

  • @MedorraBlue
    @MedorraBlue 7 months ago +1

    I think the one on the right has motion blur! Look at her hands around 1:45 or so...

  • @erickoh5481
    @erickoh5481 1 year ago +26

    If it's a proper comparison, then use the same model!

  • @MrLargonaut
    @MrLargonaut 10 months ago +10

    10 months ago I started working on AI. 4 months ago I discovered how AI helps 2D models appear 3D. Then I discovered VTubers. Then found out there are some serious mocap artists among them. Suddenly I've got mocap videos in my feed. This is how the algorithm should work. Subbed and liked.

  • @bit-studios1
    @bit-studios1 1 year ago

    Wow!!! This software is amazing and a game changer!!! ❤️👏🏿👏🏿👏🏿👏🏿👏🏿🙌🏿

  • @kunjekal
    @kunjekal 1 year ago +5

    🎉 thanks iclone 7 sir ❤

  • @theepicgamer84
    @theepicgamer84 10 months ago

    I just realised that's Twin Turbo from Uma Musume lol

  • @hatsunemikuchannel2023
    @hatsunemikuchannel2023 1 year ago +1

    Optitrack

  • @wolfeloma
    @wolfeloma 8 months ago

    If I see anyone faint and fall for VTubers, I don't blame them; they are too real!!

  • @Awkward-
    @Awkward- 7 months ago

    Dang the dancer on the left needs a LOT more practice on their dancing and… facial expressions, it’s like, they’re staring straight into my soul 😰

  • @eych7977
    @eych7977 10 months ago

    I didn't know who to look at

  • @jaredsquad601
    @jaredsquad601 11 months ago

    Amazing!

  • @rachelkarengreen99
    @rachelkarengreen99 9 months ago +1

    The middle one looks really realistic

  • @alfredocha3886
    @alfredocha3886 9 months ago

    This isn't fair

  • @redhongkong
    @redhongkong 10 months ago

    Does the $25k one get facial expression capture as well?

    • @StompGojiStomp
      @StompGojiStomp 7 months ago

      No. What you would want to do is a facial cap with a different rig. You could add tracking to it but it would have to be fixed from here to next week because of all the movement artifacts it would cause.

  • @MrBorderdown
    @MrBorderdown 1 year ago +21

    This isn't a valid comparison when you have one locked to screen space and one unlocked to world space.

    • @indigo1296
      @indigo1296 10 months ago +24

      One having the capability to track and represent real time location is certainly part of the comparison and does not invalidate it.

    • @pawala7
      @pawala7 10 months ago

      It's fair enough when you factor in the original Optitrack one was rendered by the original dancer/producer.
      The markerless one was effectively added on top, probably only using the original video as reference.
      Not even sure they asked for permission from the original owners.

  • @KiyoshiKenji881
    @KiyoshiKenji881 9 months ago

    average vtuber