Inference accuracy exceeded: an AI motion capture solution trained on over 900 million data samples

  • Published 10 Jun 2024
  • Unlike open-source algorithms, Cyanpuppets offers very high stability and capture quality. Supported by spatial computation, it provides very stable legs and data in the world coordinate system, so you can get high-quality real-time data from an ordinary webcam.

COMMENTS • 9

  • @Fingle  1 month ago  +1

    My jaw is on the floor. This is unbelievable. WHAT WIZARDRY?!?!

  • @gabaly92  1 month ago  +1

    Epic 🔥🔥🔥🔥🔥🔥

  • @virtualfilmer  1 month ago  +2

    Looks great :) can you please add what version you’re showing to the videos? It gets confusing. :) is this 1.56? We’ve been stuck on 1.54 since March so I can’t wait for 1.56 to be released. :)
    Also is this single cam or two?

    • @cyanpuppets  1 month ago  +1

      We're sorry, but we have actually been down for a while: we don't have enough compute to run our new mathematical model. Although we are a member of the NVIDIA Inception program, the cost of the compute we purchase for the Chinese market has gone up significantly in the past few months, and there have also been some issues in developing the mathematical model. We still expect to release version 1.56 this month; this video shows the dual-camera result of 1.54.

  • @billalorra890  1 month ago  +2

    Much better than previous versions, but there are still twitches. It seems that one camera is not enough for this. Move AI level is far away.

    • @cyanpuppets  1 month ago  +3

      Our algorithmic model and training direction are different. Our team wants to provide a high-quality real-time capture solution with a low barrier to entry; multiple cameras bring a huge rise in compute plus a calibration process, which is also why Move AI did not implement a real-time version. Our solution requires no calibration or auxiliary tools, only a graphics card such as a GTX 1060 to run smoothly, and it supports most webcam models.

    • @billalorra890  1 month ago  +2

      @@cyanpuppets Then I will wait until the data is fully processed so that there is no shaking. Or I'll wait until several cameras are added, without real time.

    • @Fingle  1 month ago  +1

      What are you talking about? This is REAL TIME! This definitely rivals Move One as well, which is a PAID service. With a little clean-up (which, let's be honest, we are gonna do anyway) it'll look great.

    • @cyanpuppets  1 month ago  +2

      @@Fingle Real-time processing places very high demands on the lightweight design of the architecture, on distributed calls, and on the quality of the final training dataset, because there is no time for post-processing and adjustments: results are output in real time. That is why we do not use a heavy post-processing scheme. A multi-view depth-camera matrix setup can provide a lot of redundancy for post-processing the data, but our team did not choose that direction; we specialize in real-time processing.