Videogrammetry Demo Real by FAKE


COMMENTS • 70

  • @luciox2919 • 7 months ago • +5

    Thank you, Blender Bob, for sharing the professionalism of Real by FAKE with us.

  • @PhotiniByDesign • 7 months ago • +13

    Just speculation, but I'm guessing you combat the motion blur either by using a really high shutter speed, or by strobing the lights at a really high rate synced to the camera shutter. This is awesome, Robert; it's great to see your videogrammetry pipeline.

    • @BlenderBob • 7 months ago • +6

      High shutter speed. :-)

    • @jamess.7811 • 6 months ago • +1

      Why would a strobe be necessary? Why wouldn't you just have the lights on constantly?

    • @PhotiniByDesign • 6 months ago

      @jamess.7811 It all depends on the camera, the lights, and the final output. Continuous lights aren't always suitable due to limitations in output and flickering, especially if they're not specifically designed for cinematography. A few years back I used synchronized strobes to shoot bats flying overhead, capturing several images of the bat in one photo: a long exposure of 1.3 seconds during which the strobes were programmed to flash 5 times. So I shot the same bat in mid-flight 5 times in one frame with no motion blur. Some sonar devices use the same principle to freeze frames.

    • @AliasA1 • 6 months ago

      @jamess.7811 The idea is to keep the camera shutter open longer and let the strobing light be what limits motion blur. It's not "necessary"; it's just another way to do it that you might pick depending on the equipment you have on hand. Studio photography is often done this way, controlling the effective shutter duration with the flash duration instead of the camera setting.
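
The strobe-versus-shutter tradeoff in this sub-thread comes down to one number: how far the subject moves across the frame during the effective exposure window, meaning the mechanical shutter time under continuous light, or the flash duration when a strobe dominates. A rough back-of-the-envelope sketch, with made-up speed and framing values:

```python
# Hypothetical numbers -- illustrating the tradeoff discussed above,
# not measured values from the Real by FAKE stage.

def blur_pixels(speed_m_s: float, exposure_s: float, pixels_per_meter: float) -> float:
    """Approximate length of a motion streak on the sensor, in pixels.

    exposure_s is the *effective* exposure window: the mechanical shutter
    time with continuous light, or the flash duration with a strobe.
    """
    return speed_m_s * exposure_s * pixels_per_meter

# A juggling hand moving ~5 m/s, framed at ~500 px per meter:
print(blur_pixels(5.0, 1 / 1000, 500))   # 1/1000 s shutter -> ~2.5 px of blur
print(blur_pixels(5.0, 1 / 10000, 500))  # 1/10000 s flash  -> ~0.25 px of blur
```

Either way of shrinking the window freezes the subject; the strobe route just lets the flash, rather than the shutter, define it.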

  • @zachhoy • 7 months ago • +6

    Bob, this is QUALITY! I can't wait to start getting into video production in the near future. I'm sure the 60k-poly upper limit will eventually increase to 1M.

  • @Ruan3D • 6 months ago • +1

    That's pretty AMAZING, Robert!! Thanks for sharing.

  • @PrinceWesterburg • 7 months ago • +3

    Wow. I remember seeing CSO (Colour Separation Overlay) done on the BBC in the early '70s as a child; now, 50 years later, that era is home-movie tech and you've moved on to the next generation. With AI this will become easier and easier; look at the single-image-to-3D-model tech that exists now. This is going to grow and grow. Amazing to see!

    • @BlenderBob • 7 months ago • +2

      Yep. As director of innovation and technology, it's my job to check out all the new stuff.

  • @vinnypassmore5657 • 7 months ago • +2

    Looks fantastic, nice job. Thanks for sharing.

  • @scottesplin4426 • 7 months ago • +2

    Amazing, Mr. Bob! Busy pushing the boundaries as always, while your cat lives the high life. 😹

  • @MellowMelodiesHub612 • 7 months ago • +2

    Looking forward to hearing more from you, Bob.

  • @MediaWayUKLtd • 7 months ago • +3

    Really impressive Blender Bob! I hope this is really successful for you!

  • @Nicollaos • 7 months ago • +2

    Amazing technology!

  • @SquirrelTheorist • 7 months ago • +1

    This is absolutely brilliant! I wonder if this will eventually handle reflective surfaces, as Instant-NGP NeRFs do by using radiance instead of meshes. Still, it's insane that something like this exists, and you guys handle it really well. Thank you for sharing these developments. Although I probably couldn't afford it, I would love to test the limits of this system, like tossing objects and watching them appear and disappear from the 3D output. Could make for some nice 3D magic tricks!

  • @superkaboose1066 • 7 months ago • +1

    Very cool! Crowd demo looked insane

  • @GaryParris • 7 months ago • +1

    Well done, hope it's a success for you!

  • @EdLrandom • 7 months ago • +2

    This is sick. If you need close-ups, you might be able to give these characters actual CG hair particle systems, if only you could find a way to mount a tiny camera close to the actor's face, paint or key it out, and project that sequence back onto the character's face.

    • @BlenderBob • 7 months ago • +2

      That would actually be possible, but the geometry wouldn't be high-res enough anyway.

  • @unrealengine1enhanced • 7 months ago • +1

    Imagine the ability to doctor other people's videos with this technology, rofl. This tech gives a whole new meaning to the term "trick photography".

    • @BlenderBob • 7 months ago

      Isn’t that the definition of VFX?

  • @keysignphenomenon • 7 months ago • +1

    Thanks, Bob 👏

  • @llbsidezll • 7 months ago • +4

    I'd be interested in seeing how this could be implemented in VR. Current 3d video breaks immersion as soon as you try to move and look around.

    • @BlenderBob • 7 months ago • +2

      Most of the videogrammetry systems have been developed for VR, so you can find lots of information on the web.

  • @unrealengine1enhanced • 7 months ago • +1

    Amazing work, guys.

  • @willowproduction • 7 months ago • +1

    Man, what the actual frack. BRAVO

  • @tgavel4691 • 7 months ago • +1

    Wow - very cool!

  • @kidfl4sh295 • 7 months ago • +2

    I see a lot of possibilities for game stuff and for some VFX sequences, with simulation applied to the body and whatnot. But for background characters, how usable is this on a set? Wouldn't it be less trouble to have extras on set?

  • @johntnguyen1976 • 7 months ago • +1

    So next level!

  • @Vassay • 7 months ago • +2

    Looks pretty nice! How many cameras are you using, and how big is the resulting bandwidth per 1 second of a character's performance?

    • @BlenderBob • 7 months ago • +1

      32 cams. The files are huge: 8 GB for the guy juggling.

    • @Vassay • 7 months ago • +2

      @BlenderBob The big size is to be expected =) Quite good quality for only 32 cams, great job!

  • @AyushBakshi • 7 months ago • +1

    Interesting!

  • @themightyflog • 7 months ago • +1

    I want more information! Wow!

  • @davebulow2 • 7 months ago • +2

    Very impressive, Bob! I have to ask: how on earth did you do the motion blur? Surely the mesh is a different mesh from frame to frame, and the vertices don't have a reference point in the previous frame?

    • @BlenderBob • 7 months ago • +2

      Secret recipe ;-)

    • @Vassay • 7 months ago • +2

      I would do it AFTER rendering the 3D person: calculate motion vectors from the rendered 2D image and use those to drive the motion blur. Easy, and it should be more than enough for mid-to-far characters.

    • @spitfirekryloff744 • 7 months ago • +1

      The first thing that comes to mind would be to turn all the individual captures into a single animated mesh with 100+ shape keys (one shape key per capture) and thus get the motion blur when rendering inside Blender. But that seems like a very tedious method, unless there were a way to automate the process.

    • @Vassay • 7 months ago • +1

      @spitfirekryloff744 That would work if the topology were consistent between frames, and it's not; it literally can't be, because each frame is a totally different mesh =)

    • @BlenderBob • 7 months ago • +3

      I'll give you a hint: water simulation. The geometry changes at every frame, yet it's still possible to get motion blur. The vectors are not computed in Blender; it's done in the proprietary software.
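
The water-simulation hint maps onto a standard trick for topology-changing caches: each frame's mesh carries a per-vertex velocity attribute, so no frame-to-frame vertex correspondence is needed, and the renderer extrapolates positions across the shutter interval from that single frame. A minimal sketch of the idea, with made-up data (not Real by FAKE's actual pipeline):

```python
import numpy as np

def shutter_positions(verts: np.ndarray, velocities: np.ndarray,
                      shutter_open: float, shutter_close: float):
    """Extrapolate vertex positions to the shutter open/close times.

    verts: (N, 3) positions at the frame sample; velocities: (N, 3) in
    units per frame. Because only this frame's own data is used, the
    vertex count and topology are free to change at every frame.
    """
    return (verts + velocities * shutter_open,
            verts + velocities * shutter_close)

# Two vertices; the first moves at 2 units/frame along +Y, the second is static.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
vel = np.array([[0.0, 2.0, 0.0], [0.0, 0.0, 0.0]])
p_open, p_close = shutter_positions(verts, vel, -0.25, 0.25)  # half-frame shutter
print(p_open[0], p_close[0])  # moving vertex smears from y=-0.5 to y=+0.5
```

This is how fluid caches get deformation blur despite a different mesh every frame; the velocities just have to be written by whatever software generated the geometry.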

  • @uttula • 7 months ago

    I guess the next step for even higher fidelity and further options would be to implement Gaussian splatting principles, just like the recent evolution from simple photogrammetry => NeRFs => Gaussian splats :)

    • @BlenderBob • 6 months ago

      You can't shade splats.

    • @uttula • 6 months ago

      The Blender plugins I've seen are admittedly still quite limited, but based on what I've already seen done in other engines, I'm feeling positive that we should eventually get to a point where they become highly useful for all sorts of things. We might not be there yet, but Rome wasn't built in a day; it could well be worth at least keeping an eye open. The road from research papers and proofs of concept to this day has been staggeringly fast, and people are continuing to make things better all the time. Of course, I could simply be hopelessly optimistic :D

  • @amazinggraphicsstudios • 7 months ago • +2

    You are always super, thank you. But please, what software do you use for the videogrammetry?

    • @FireAngelOfLondon • 7 months ago • +2

      It's their own custom software; that's the whole point of this video. They are promoting their services for 3D capture. It isn't for sale and probably won't be.

    • @amazinggraphicsstudios • 7 months ago

      @FireAngelOfLondon OK, thank you.

  • @starwars9191 • 7 months ago • +2

    If you extend the scenes, do you have to reshoot the videogrammetry, or are they looped in some magical way?

    • @BlenderBob • 7 months ago • +2

      We can morph two animations together, up to a certain limit. You'd need to be more precise about what you mean by "extend".

  • @vassilidario8029 • 7 months ago • +1

    Hey that's pretty neat

  • @keithtam8859 • 7 months ago • +1

    Clever!

  • @electronicmusicartcollective • 6 months ago • +1

    WOW

  • @bomosley9226 • 7 months ago • +1

    Whoa

  • @xalener • 7 months ago • +1

    How the hell did you get motion blur working here?
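
One answer suggested earlier in the thread, computing motion vectors from the rendered frames and blurring in 2D, can be sketched crudely: given a per-pixel vector field (e.g. from optical flow between frames), average samples along each vector. A toy version of what compositing vector-blur tools do, not Real by FAKE's actual method:

```python
import numpy as np

def vector_blur(image: np.ndarray, vectors: np.ndarray, samples: int = 8) -> np.ndarray:
    """Smear each pixel along its motion vector.

    image: (H, W) grayscale frame; vectors: (H, W, 2) per-pixel (dy, dx)
    motion in pixels. Averages `samples` taps spread over -0.5..+0.5 of
    the vector, a crude stand-in for a compositor's vector-blur node.
    """
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(samples):
        t = i / max(samples - 1, 1) - 0.5
        sy = np.clip((ys + vectors[..., 0] * t).round().astype(int), 0, h - 1)
        sx = np.clip((xs + vectors[..., 1] * t).round().astype(int), 0, w - 1)
        out += image[sy, sx]
    return out / samples

# A single bright pixel moving 4 px horizontally becomes a short streak:
img = np.zeros((11, 11))
img[5, 5] = 1.0
vec = np.zeros((11, 11, 2))
vec[..., 1] = 4.0
streak = vector_blur(img, vec)
```

The brightness spreads across x = 3..7 on row 5 while the total energy stays at 1.0; a production tool would sample subpixel positions and weight by coverage rather than using nearest-neighbor rounding.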

  • @thenout • 7 months ago • +1

    Bam! Does the Head of Innovation need an intern by any chance?

    • @BlenderBob • 7 months ago

      Do you live in Quebec?

    • @thenout • 7 months ago

      @BlenderBob Narp, Berlin. But hey, ready when you are. I'd even make coffee (in Blender, that is).

  • @Voicetaco • 7 months ago • +1

    Why are you using a green screen?
    In my experience with photogrammetry, you wouldn't necessarily need a green screen to key a person out from the background, as that is already being done when capturing the person with multiple cameras.
    What is your reason for using a green screen when I've already seen others do videogrammetry effectively without it and get the same results?

    • @BlenderBob • 7 months ago • +1

      It's the most efficient way to extract the character from the BG. Check the BCON 2023 clips on the Blender channel on YouTube; I go into it in more detail there. But I know that the goal is to eliminate it.

  • @S9universe • 7 months ago • +1

    I'm curious about the tool :)

    • @BlenderBob • 7 months ago

      What do you want to know?

    • @S9universe • 7 months ago

      Pricing, conditions, and what format the app comes in, please?

    • @BlenderBob • 7 months ago • +1

      The price depends on the project: how many characters, how long the sequences are. We generate Alembic files, or FBX if you need a skeleton. If you have a project that could use this tech, please contact us at Real by FAKE. :-)

    • @S9universe • 7 months ago

      Thank you.

  • @rekad8181 • 7 months ago

    The future is definitely Gaussian splats, and even prompt generation. If I were you, I would spend a week doing thousands of shots and feeding the data into an AI, so you could then generate the action you want on any skeleton from a prompt. ChatGPT could probably guide you through this process 🎉

    • @BlenderBob • 7 months ago • +1

      Try to rig, key, and shade Gaussian splats, and then we'll talk. ;-)