Thank you, Blender Bob, for sharing with us the professionalism of Real by FAKE.
Awesome video!
@blenderBob You may have figured it out already, but the 64k max vertex cap can be disabled, allowing improved mesh resolution, which is critical when capturing 3+ people at the same time. The cap is there by default because of an encoding/decoding optimization in Unity and Unreal for real-time playback, but it's irrelevant in your use case.
I hope you are having fun with your capture studio!
(PS, I may be one of the guys who came to your office to install your volumetric capture studio ;) )
Really? Cool! Have you ever set up a system in Montreal?
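As an aside on that 64k figure: the usual explanation is that real-time engines default to 16-bit vertex index buffers, which top out around 65,535 vertices per mesh. Here is a minimal sketch of that arithmetic; the 16-bit-index reading is my assumption, not something confirmed in the thread.

```python
# Minimal sketch of where a ~64k cap comes from, assuming 16-bit vertex
# index buffers (the real-time default in engines like Unity). This is an
# assumption, not something confirmed in the thread.
MAX_U16_INDEX = 2**16 - 1  # 65,535: highest vertex a 16-bit index can address

def index_format_needed(vertex_count: int) -> str:
    """Pick the index-buffer width a mesh of this size would require."""
    if vertex_count <= MAX_U16_INDEX:
        return "16-bit indices (real-time friendly default)"
    return "32-bit indices (needs the cap disabled)"

# One performer usually fits under the cap; three performers at full detail may not.
for verts in (45_000, 65_535, 180_000):
    print(f"{verts:>7} vertices -> {index_format_needed(verts)}")
```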
Looks fantastic, nice job. Thanks for sharing.
Just speculation, but I am guessing you combat the motion blur either by using a really high shutter speed or by having the lights strobe at a really high rate synced to the camera shutter. This is awesome, Robert; it's great to see your videogrammetry pipeline.
High shutter speed. :-)
Why would a strobe be necessary? Why wouldn't you just have the lights on constantly?
@@jamess.7811 It all depends on the camera, the lights and the final outputs. For example, continuous lights aren't always suitable due to limitations in output and flickering, especially if they are not specifically designed for cinematography. I used synchronized strobes to shoot bats flying overhead a few years back, as a way to capture several images of the bat in one photo. I used a long exposure of 1.3 seconds, and in that 1.3 seconds the strobe lights were programmed to flash 5 times. So I shot the same bat in mid-flight 5 times in one frame, with no motion blur. Some sonar devices use the same principle to freeze frames.
@@jamess.7811 The idea is to have the camera shutter open for longer and let the strobing light be the thing that limits motion blur. It's not "necessary"; it's just another way to do it that you might pick depending on what equipment you have on hand. Studio photography is often done this way, controlling the effective shutter duration with the flash duration instead of the camera setting.
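To make the flash-vs-shutter point concrete, here is a tiny back-of-the-envelope calculation. The subject speed, pixel density and flash duration are all made-up illustrative numbers, not figures from the thread.

```python
# Back-of-the-envelope sketch with made-up numbers: how far a moving subject
# travels across the frame during the shutter vs. during a single strobe flash.
subject_speed_m_s = 5.0        # e.g. a fast-moving arm, or a bat in flight (assumed)
pixels_per_metre  = 1000.0     # how densely the subject is sampled on the sensor (assumed)

shutter_s = 1.3                # the long exposure from the bat example above
flash_s   = 1 / 10_000         # a typical strobe flash duration (assumed)

blur_shutter_px = subject_speed_m_s * shutter_s * pixels_per_metre
blur_flash_px   = subject_speed_m_s * flash_s   * pixels_per_metre

print(f"Blur if ambient light exposed the whole shutter: {blur_shutter_px:,.0f} px")
print(f"Blur during one flash (what actually freezes motion): {blur_flash_px:.1f} px")
```

With numbers like these, the shutter would smear the subject across thousands of pixels, while each flash freezes it to well under a pixel of motion, which is why the flash duration becomes the effective shutter.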
That's pretty AMAZING Robert!! Thanks for sharing.
Bob, this is QUALITY! I can't wait to start getting into video production in the near future. I'm sure the 60k poly upper limit will eventually increase to 1M
Looking forward to hearing more from you, Bob.
Amazing Mr. Bob! Busy pushing the boundaries as always,... while your cat lives the high life. 😹
Wow - I remember seeing CSO (Colour Separation Overlay) done on the BBC in the early '70s as a child; now, 50 years later, that era is home-movie tech and you've moved on to the next generation. With AI this will become easier and easier - look at the single-image-to-3D-model tech that exists now; this is going to grow and grow. Amazing to see!
Yep. As director of innovation and technology it’s my job to check out all the new stuff
This is absolutely brilliant! I wonder if this will eventually handle reflective surfaces, as with instant-ngp NeRFs that use radiance instead of meshes. Still, it is insane that something like this exists, and you guys handle it really well. Thank you for sharing these developments. Although I probably couldn't afford it, I would love to test the limits of this system, like tossing objects and watching them appear and disappear from the 3D output. Could make for some nice 3D magic tricks!
Really impressive Blender Bob! I hope this is really successful for you!
Amazing technology!
Well done, hope it's a success for you.
I see a lot of possibilities for game stuff and for some VFX sequences, simulation applied to the body and whatnot. For background characters, how usable is this? On a set, wouldn't it be less trouble to have extras on set?
I'd be interested in seeing how this could be implemented in VR. Current 3d video breaks immersion as soon as you try to move and look around.
Most of the videogrammetry systems have been developed for VR so you can find lots of information on the web
Imagine the ability to doctor other people's videos with this technology, rofl.
this tech gives a whole new meaning to the term: "trick photography"
Isn’t that the definition of VFX?
This is sick. If you need close-ups, you might be able to give these characters actual CG hair particle systems, if only you could find a way to mount a tiny camera close to the actor's face, paint or key it out, and project that sequence back onto the character's face.
That would actually be possible, but the geometry wouldn't be hi-res enough anyway.
Very cool! Crowd demo looked insane
You are always super, thank you. But please, what software do you use for the videogrammetry?
It's their own custom software; that's the whole point of this video: they are promoting their services for 3D capture. It isn't for sale and probably won't be.
@@FireAngelOfLondon OK, thank you
If you extend the scenes, do you have to reshoot the videogrammetry, or are they looped in some magical way?
We can morph two animations together, up to a certain limit. You'd need to be more precise about what you mean by "extend".
amazing work guys.
Looks pretty nice! How many cameras are you using, and how big is the resulting bandwidth per 1 second of a character's performance?
32 cams. The files are huge. 8GB for the guy juggling
@@BlenderBob the big size is to be expected =) Quite good quality for only 32 cams, great job!
Thanks Bob 👏
Very impressive, Bob! I have to ask: how on earth did you do the motion blur? Surely the mesh is a different mesh from frame to frame, and the vertices don't have a reference point from the previous frame?
Secret recipe ;-)
I would do it AFTER rendering the 3D person: calculate motion vectors from the rendered 2D image and use those to drive the motion blur. Easy, and it should be more than enough for mid-to-far characters. (A rough sketch of this idea follows the thread below.)
First thing that comes to mind would be to turn all the individual captures into a single animated mesh with 100+ shape keys (1 shape key per capture) and thus get the motion blur when rendering inside Blender. But that seems like a very tedious method, unless there was a way to automate the process
@@spitfirekryloff744 that would work, if the topology was consistent between frames - and it's not, it literally cannot be, because each frame is a totally different mesh =)
I'll give you a hint. Water simulation. The geometry changes at every frame yet it's still possible to get motion blur. The vectors are not computed in Blender. It's done in the proprietary software.
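For anyone curious what the "motion vectors from the rendered 2D image" idea suggested above might look like in practice, here is a minimal sketch using OpenCV's dense optical flow as a stand-in for real motion vectors. The frame file names are hypothetical, and this is not the pipeline Bob describes (his vectors come from the proprietary software); it's only an illustration of the 2D fallback.

```python
import cv2
import numpy as np

# Hypothetical file names; any two consecutive rendered frames of the character would do.
prev = cv2.imread("char_0001.png")
curr = cv2.imread("char_0002.png")

prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
curr_gray = cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY)

# Dense optical flow between the two rendered frames stands in for true motion vectors.
flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

h, w = curr_gray.shape
grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))

# Accumulate samples along each pixel's motion vector (a simple vector blur).
samples = 8
acc = np.zeros_like(curr, dtype=np.float32)
for i in range(samples):
    t = i / (samples - 1) - 0.5          # sample from -0.5 to +0.5 of the vector
    map_x = grid_x + flow[..., 0] * t
    map_y = grid_y + flow[..., 1] * t
    acc += cv2.remap(curr, map_x, map_y, cv2.INTER_LINEAR).astype(np.float32)

blurred = (acc / samples).astype(np.uint8)
cv2.imwrite("char_0002_mblur.png", blurred)
```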
Man, what the actual frack. BRAVO
Why are you using green screen?
In my experience with photogrammetry, you wouldn't necessarily need a green screen to key out a person from a background, as that is already being done when capturing the person with multiple cameras.
What is your reason for using green screen when I've already seen others do videogrammetry effectively without it and getting the same results?
It's the most efficient way to extract the character from the BG. Check the BCON 2023 clips on the Blender channel on YT; I go over it in more detail there. But I know that the goal is to eliminate it.
I want more information! Wow!
Ask away
how the hell did you get motion blur working here?
Secret recipe
I guess the next step for even higher fidelity and further options would be to implement Gaussian splatting principles … just like the recent evolution from simple photogrammetry => NeRFs => Gaussian splats :)
You can't shade splats.
The Blender plugins I've seen are admittedly still quite limited, but based on what I've already seen done in other engines, I'm feeling positive that eventually we should get to a point where they become highly useful for all sorts of things. We might not be there yet, but Rome wasn't built in a day; it could well be worth at least keeping an eye open. The road from research papers and proofs of concept to today has been staggeringly fast, and people are still making things better all the time. Of course, I could simply be hopelessly optimistic :D
Bam! Does the Head of Innovation need an intern by any chance?
Do you live in Quebec?
Narp, Berlin. But hey, ready when you are. I'd even make coffee (in Blender, that is). @@BlenderBob
Hi Blender Bob, how do I get in touch with you? I would like to speak with you please :)
Tiki.movie.bb at gmail dot com
Wow - very cool!
I'm curious about the tool :)
What do you want to know?
Pricing, conditions, and what format the app comes in, please?
The price depends on the project: how many characters, how long the sequences. We generate Alembic files, or FBX if you need a skeleton. If you have a project that could use that tech, please contact us at Real by FAKE. :-)
thank you
So next level!
Interesting!
Cool!
clever
Hey that's pretty neat
WOW
Whoa
The future is definitely Gaussian splats, and even prompt generation. If I were you, I would spend a week doing thousands of shots and feeding this data into AI, to then be able to generate the action you want on any skeleton based on a prompt. ChatGPT could probably guide you through this process 🎉
Try to rig, key and shade a Gaussian splat and then we'll talk. ;-)