Videogrammetry Demo Real by FAKE

  • Published 12 Jan 2025

COMMENTS • 76

  • @luciox2919 · 1 year ago · +5

    Thank you, Blender Bob, for sharing with us the professionalism of Real by FAKE

  • @JorisPlacette-e5c · 4 months ago · +1

    Awesome video!
    @BlenderBob you may have figured it out already, but the 64k max-vertices cap can be disabled, allowing improved mesh resolution, which is critical when capturing 3+ people at the same time. The cap is there by default because of an encoding/decoding optimization in Unity and Unreal for real-time playback, but it is irrelevant in your use case.
    I hope you are having fun with your capture studio!
    (PS: I may be one of the guys who came to your office to install your volumetric capture studio ;) )

    • @BlenderBob · 4 months ago · +1

      Really? Cool! Have you ever set up a system in Montreal?
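The 64k cap discussed above matches the addressing limit of 16-bit index buffers, the usual reason real-time engines keep meshes under 65,536 vertices. A minimal sketch of that trade-off, assuming that is the encoding the comment refers to (the function and all the numbers are illustrative, not from the capture software):

```python
# Real-time engines often store triangle indices as 16-bit unsigned
# integers, which can only address 2**16 = 65,536 vertices. Lifting the
# cap means switching to 32-bit indices, doubling index storage.

def index_buffer_bytes(num_vertices: int, num_triangles: int) -> int:
    """Pick the smallest index type that can address every vertex."""
    bytes_per_index = 2 if num_vertices <= 2**16 else 4  # uint16 vs uint32
    return num_triangles * 3 * bytes_per_index

# A mesh just at the cap fits in compact 16-bit indices...
small = index_buffer_bytes(num_vertices=65_536, num_triangles=120_000)
# ...while exceeding the cap doubles the cost of every index.
large = index_buffer_bytes(num_vertices=200_000, num_triangles=400_000)

print(small)  # 720000  (120k tris * 3 indices * 2 bytes)
print(large)  # 4800000 (400k tris * 3 indices * 4 bytes)
```

For offline rendering in Blender, as in this video, that decode-speed optimization does not matter, which is why the cap can safely be disabled.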

  • @vinnypassmore5657 · 1 year ago · +2

    Looks fantastic, nice job. Thanks for sharing.

  • @PhotiniByDesign · 1 year ago · +13

    Just speculation, but I am guessing you combat the motion blur either by using a really high shutter speed, or by having the lights strobe at a really high rate synced to the camera shutter. This is awesome, Robert; it's great to see your videogrammetry pipeline.

    • @BlenderBob · 1 year ago · +6

      High shutter speed. :-)

    • @jamess.7811 · 1 year ago · +1

      Why would a strobe be necessary? Why wouldn't you just have the lights on constantly?

    • @PhotiniByDesign · 1 year ago

      @jamess.7811 It all depends on the camera, the lights, and the final output. For example, continuous lights aren't always suitable due to limitations in output and flickering, especially if they are not specifically designed for cinematography. I used synchronized strobes to shoot bats flying overhead a few years back, to capture several images of a bat in one photo: with a long exposure of 1.3 seconds, the strobes were programmed to flash 5 times, so I shot the same bat in mid-flight 5 times in one frame with no motion blur. Some sonar devices use the same principle to freeze frames.

    • @AliasA1 · 1 year ago

      @jamess.7811 The idea is to have the camera shutter open for longer and let the strobing light be the thing that limits motion blur. It's not "necessary"; it's just another way to do it that you might pick depending on what equipment you have on hand. Studio photography is often done this way, controlling the effective shutter duration with the flash duration instead of the camera setting.
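Both techniques in this thread bound the same quantity: the distance the subject travels while light from it accumulates on the sensor. A fast shutter limits that time directly; a strobe in a dark studio limits it with the flash duration, even under a long shutter. A back-of-the-envelope sketch (the speeds, durations, and pixel scale are made-up illustrative values):

```python
def blur_extent_px(speed_m_s: float, light_duration_s: float,
                   px_per_m: float) -> float:
    """Length of the motion-blur streak in pixels: the distance the
    subject moves while light is reaching the sensor."""
    return speed_m_s * light_duration_s * px_per_m

# Continuous light: blur is governed by the shutter (e.g. 1/50 s).
shutter = blur_extent_px(speed_m_s=2.0, light_duration_s=1 / 50, px_per_m=500)
# Strobe in a dark studio: blur is governed by the flash (e.g. 1/10000 s),
# even if the shutter stays open far longer.
strobe = blur_extent_px(speed_m_s=2.0, light_duration_s=1 / 10000, px_per_m=500)

print(round(shutter, 2))  # 20.0 px streak, visibly smeared
print(round(strobe, 2))   # 0.1 px, effectively frozen
```

Either route gives the sharp per-frame captures that photogrammetry solvers need; which one you pick depends on your lights and cameras, as @AliasA1 says.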

  • @Ruan3D · 1 year ago · +1

    That's pretty AMAZING Robert!! Thanks for sharing.

  • @zachhoy · 1 year ago · +6

    Bob, this is QUALITY! I can't wait to start getting into video production in the near future. I'm sure the 60k poly upper limit will eventually increase to 1M.

  • @MellowMelodiesHub612 · 1 year ago · +2

    Looking forward to hearing more from you, Bob.

  • @scottesplin4426 · 1 year ago · +2

    Amazing, Mr. Bob! Busy pushing the boundaries as always... while your cat lives the high life. 😹

  • @PrinceWesterburg · 1 year ago · +3

    Wow. I remember seeing CSO (Colour Separation Overlay) done on the BBC in the early 70s as a child; now, 50 years later, that era is home-movie tech and you've moved on to the next generation. With AI this will become easier and easier: look at the one-image-to-3D-model tech that exists now. This is going to grow and grow. Amazing to see!

    • @BlenderBob · 1 year ago · +2

      Yep. As director of innovation and technology, it's my job to check out all the new stuff.

  • @SquirrelTheorist · 1 year ago · +1

    This is absolutely brilliant! I wonder if this will eventually handle reflective surfaces, as with Instant-NGP NeRFs, by using radiance instead of meshes. Still, it is insane that something like this exists, and you guys handle it really well. Thank you for sharing these developments. Although I probably couldn't afford it, I would love to test the limits of this system, like tossing objects and watching them appear and disappear from the 3D output. Could make for some nice 3D magic tricks!

  • @MediaWayUKLtd · 1 year ago · +3

    Really impressive Blender Bob! I hope this is really successful for you!

  • @Nicollaos · 1 year ago · +2

    Amazing technology!

  • @GaryParris · 1 year ago · +1

    Well done, hope it's a success for you.

  • @kidfl4sh295 · 1 year ago · +2

    I see a lot of possibilities for game assets and for some VFX sequences, with simulation applied to the body and whatnot. But for background characters, how usable is this? On a set, wouldn't it be less trouble to have extras on set?

  • @llbsidezll · 1 year ago · +4

    I'd be interested in seeing how this could be implemented in VR. Current 3D video breaks immersion as soon as you try to move and look around.

    • @BlenderBob · 1 year ago · +2

      Most videogrammetry systems have been developed for VR, so you can find lots of information on the web.

  • @blacklightretro · 1 year ago · +1

    Imagine the ability to doctor other people's videos with this technology, rofl.
    This tech gives a whole new meaning to the term "trick photography".

    • @BlenderBob · 1 year ago

      Isn’t that the definition of VFX?

  • @EdLrandom · 1 year ago · +2

    This is sick. If you need close-ups, you might be able to make these characters with actual CG hair particle systems, if only you could find a way to mount a tiny camera close to the face of the actor, paint or key it out, and project that sequence back onto the character's face.

    • @BlenderBob · 1 year ago · +2

      That would actually be possible, but the geometry wouldn't be hi-res enough anyway.

  • @superkaboose1066 · 1 year ago · +1

    Very cool! Crowd demo looked insane

  • @amazinggraphicsstudios · 1 year ago · +2

    You are always super, thank you. But please, what software do you use for the videogrammetry?

    • @FireAngelOfLondon · 1 year ago · +2

      It's their own custom software; that's the whole point of this video. They are promoting their services for 3D capture. It isn't for sale and probably won't be.

    • @amazinggraphicsstudios · 1 year ago

      @FireAngelOfLondon Ok, thank you.

  • @starwars9191 · 1 year ago · +2

    If you extend the scenes, do you have to reshoot the videogrammetry, or are they looped in some magical way?

    • @BlenderBob · 1 year ago · +2

      We can morph two animations together, to a certain limit. You'd need to be more precise about what you mean by "extend".

  • @blacklightretro · 1 year ago · +1

    Amazing work, guys.

  • @Vassay · 1 year ago · +2

    Looks pretty nice! How many cameras are you using, and how much data does one second of a character's performance produce?

    • @BlenderBob · 1 year ago · +1

      32 cams. The files are huge: 8 GB for the guy juggling.

    • @Vassay · 1 year ago · +2

      @BlenderBob The big size is to be expected =) Quite good quality for only 32 cams, great job!

  • @keysignphenomenon · 1 year ago · +1

    Thanks, Bob 👏

  • @davebulow2 · 1 year ago · +2

    Very impressive, Bob! I have to ask: how on earth did you do the motion blur? Surely the mesh is a different mesh from frame to frame, and the vertices have no reference point in the previous frame?

    • @BlenderBob · 1 year ago · +2

      Secret recipe ;-)

    • @Vassay · 1 year ago · +2

      I would do it AFTER rendering the 3D person: calculate motion vectors from the rendered 2D image and use those to drive the motion blur. Easy, and it should be more than enough for mid- to far-distance characters.

    • @spitfirekryloff744 · 1 year ago · +1

      First thing that comes to mind would be to turn all the individual captures into a single animated mesh with 100+ shape keys (1 shape key per capture) and thus get the motion blur when rendering inside Blender. But that seems like a very tedious method, unless there were a way to automate the process.

    • @Vassay · 1 year ago · +1

      @spitfirekryloff744 That would work if the topology were consistent between frames, and it's not; it literally cannot be, because each frame is a totally different mesh =)

    • @BlenderBob · 1 year ago · +3

      I'll give you a hint: water simulation. The geometry changes at every frame, yet it's still possible to get motion blur. The vectors are not computed in Blender; it's done in the proprietary software.
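The water-simulation hint points at the standard fluid-mesh trick: since vertices can't be tracked across changing topology, each frame's mesh carries a per-vertex velocity attribute and the renderer smears along it. One crude way to estimate such vectors is nearest-point correspondence against the next frame's mesh; the sketch below is only illustrative, not Real by FAKE's proprietary method, and a production solver would be far smarter about matching:

```python
# Estimate a per-vertex velocity attribute for frame A by matching each
# of its vertices to the nearest vertex of frame B, even though the two
# frames have completely different topology and vertex counts.
import math

def nearest(point, cloud):
    """Closest vertex in the other frame's mesh (brute force)."""
    return min(cloud, key=lambda q: math.dist(point, q))

def velocity_attribute(frame_a, frame_b, fps=24.0):
    """Per-vertex velocity (units/second) for every vertex of frame_a."""
    dt = 1.0 / fps
    return [tuple((b - a) / dt for a, b in zip(v, nearest(v, frame_b)))
            for v in frame_a]

# Two 'frames' with different vertex counts (totally different meshes):
frame_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
frame_b = [(0.1, 0.0, 0.0), (0.5, 0.0, 0.0), (1.1, 0.0, 0.0)]

vels = velocity_attribute(frame_a, frame_b)
print([[round(c, 3) for c in v] for v in vels])
# [[2.4, 0.0, 0.0], [2.4, 0.0, 0.0]]  (each vertex moved 0.1 units at 24 fps)
```

This matches @Vassay's objection and Bob's answer: no consistent topology is needed, because the blur vectors live on whatever vertices each frame happens to have, the same way renderers blur fluid-simulation meshes.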

  • @willowproduction · 1 year ago · +1

    Man, what the actual frack. BRAVO

  • @ThomasMJergel · 1 year ago · +1

    Why are you using a green screen?
    In my experience with photogrammetry, you wouldn't necessarily need a green screen to key out a person from a background, as that is already being done when capturing the person with multiple cameras.
    What is your reason for using a green screen when I've already seen others do videogrammetry effectively without it and get the same results?

    • @BlenderBob · 1 year ago · +1

      It's the most efficient way to extract the character from the BG. Check out the BCON 2023 clips on the Blender channel on YouTube; I cover the approach in more detail there. But I know that the goal is to eliminate it.
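At its core, the green-screen extraction Bob mentions is a per-pixel test of how much green dominates the other channels. A deliberately minimal sketch (real keyers also handle spill suppression, soft edges, and despill; the threshold here is an arbitrary assumption):

```python
def green_matte(pixel, threshold=0.15):
    """Return 1.0 to keep the pixel (foreground) or 0.0 to key it out.
    `pixel` is (r, g, b) with channels in 0..1; a pixel counts as green
    screen when green exceeds both red and blue by more than `threshold`."""
    r, g, b = pixel
    greenness = g - max(r, b)
    return 0.0 if greenness > threshold else 1.0

def extract_foreground(image):
    """Apply the matte to a row-major list of (r, g, b) pixels."""
    return [green_matte(p) for p in image]

# A 4-pixel 'image': skin tone, green screen, dark hair, neutral shadow.
image = [(0.8, 0.6, 0.5), (0.1, 0.9, 0.2), (0.2, 0.15, 0.1), (0.3, 0.35, 0.3)]
print(extract_foreground(image))  # [1.0, 0.0, 1.0, 1.0]
```

Running this kind of key on all 32 camera views before reconstruction cheaply removes background geometry, which is why it beats relying on the multi-view solve alone.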

  • @themightyflog · 1 year ago · +1

    I want more information! Wow!

  • @xalener · 1 year ago · +1

    How the hell did you get motion blur working here?

  • @uttula · 1 year ago

    I guess the next step for even higher fidelity and further options would be to implement Gaussian splatting principles, just like the recent evolution from simple photogrammetry => NeRFs => Gaussian splats :)

    • @BlenderBob · 1 year ago

      You can't shade splats.

    • @uttula · 1 year ago

      The Blender plugins I've seen are admittedly still quite limited, but based on what I've already seen done in other engines, I'm feeling positive that eventually we should get to a point where they become highly useful for all sorts of things. We might not be there yet, but Rome wasn't built in a day; it could well be worth at least keeping an eye open. The road from research papers and proofs of concept to this day has been staggeringly fast, and people are still making things better all the time. Of course, I could simply be hopelessly optimistic :D

  • @thenout · 1 year ago · +1

    Bam! Does the Head of Innovation need an intern by any chance?

    • @BlenderBob · 1 year ago

      Do you live in Quebec?

    • @thenout · 1 year ago

      @BlenderBob Narp, Berlin. But hey, ready when you are. I'd even make coffee (in Blender, that is).

  • @ZeroBudgetDevelopments · 6 months ago

    Hi Blender Bob, how do I get in touch with you? I would like to speak with you please :)

    • @BlenderBob · 5 months ago

      Tiki.movie.bb at gmail dot com

  • @tgavel4691 · 1 year ago · +1

    Wow - very cool!

  • @S9universe · 1 year ago · +1

    I'm curious about the tool :)

    • @BlenderBob · 1 year ago

      What do you want to know?

    • @S9universe · 1 year ago

      Pricing, conditions, and in what format does the app come, please?

    • @BlenderBob · 1 year ago · +1

      The price depends on the project: how many characters, how long the sequences are. We generate Alembic files, or FBX if you need a skeleton. If you have a project that could use this tech, please contact us at Real by FAKE. :-)

    • @S9universe · 1 year ago

      Thank you.

  • @johntnguyen1976 · 1 year ago · +1

    So next level!

  • @AyushBakshi · 1 year ago · +1

    Interesting!

  • @joefamme127 · 2 months ago · +1

    Cool!

  • @keithtam8859 · 1 year ago · +1

    Clever.

  • @vassilidario8029 · 1 year ago · +1

    Hey that's pretty neat

  • @electronicmusicartcollective · 1 year ago · +1

    WOW

  • @bomosley9226 · 1 year ago · +1

    Whoa

  • @rekad8181 · 1 year ago

    The future is definitely Gaussian splats, and even prompt generation. If I were you, I would spend a week doing thousands of shots and feeding this data into an AI, to be able to then generate the action you want on any skeleton based on a prompt. ChatGPT could probably guide you through this process 🎉

    • @BlenderBob · 1 year ago · +1

      Try to rig, key, and shade a Gaussian splat and then we'll talk. ;-)