would recording from 2 Cameras make it much easier to create 3D models from the footage?
how long did it take to create the 3d model?
I want to try that. Where can I learn more about EveryPoint and video photogrammetry?
Great video! Is it possible to do scans with the HoloLens 2?
We have not tried the HoloLens 2. It depends on whether you can save video or images from its onboard cameras.
Is the wall you just scanned something that can easily be imported into Horizon Worlds or Workrooms? Are there any demos of that being done?
We are not experts in VR development. However, this wall would need to be converted into a low-poly mesh. We have plans to create a tutorial video on that, so keep an eye out for new tutorial videos we will be releasing in the next couple of weeks that address EveryPoint data and game engines.
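For anyone who wants to experiment before that tutorial lands, one simple way to get a lower-poly mesh is vertex clustering: snap vertices to a coarse grid and merge everything that lands in the same cell. This is a minimal sketch in NumPy (the function name, grid-size parameter, and clustering approach are our own illustrative assumptions, not EveryPoint's actual pipeline):

```python
import numpy as np

def decimate_by_clustering(vertices, faces, cell_size):
    """Reduce mesh complexity by snapping vertices to a voxel grid
    and merging all vertices that fall into the same cell."""
    # Assign each vertex to a grid cell
    cells = np.floor(vertices / cell_size).astype(np.int64)
    # `inverse` maps each old vertex index to its merged cluster id
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    n_clusters = inverse.max() + 1
    # New vertex position = mean of the old vertices in its cluster
    counts = np.bincount(inverse, minlength=n_clusters).astype(float)
    new_vertices = np.zeros((n_clusters, 3))
    for dim in range(3):
        new_vertices[:, dim] = np.bincount(
            inverse, weights=vertices[:, dim], minlength=n_clusters) / counts
    # Remap faces to cluster ids, drop faces that collapsed
    new_faces = inverse[faces]
    keep = ((new_faces[:, 0] != new_faces[:, 1]) &
            (new_faces[:, 1] != new_faces[:, 2]) &
            (new_faces[:, 0] != new_faces[:, 2]))
    return new_vertices, new_faces[keep]
```

A larger `cell_size` merges more aggressively and yields fewer triangles; for VR use you would tune it until the mesh fits the platform's polygon budget.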
Really cool man!
Thanks!
I thought that with drones they use pictures instead of video
Hang on a moment! Are you saying you got 3D scan data purely from processing video, with no LiDAR? I use EveryPoint on a LiDAR-equipped Apple device, but the scan in this video looks better than what I get on my device. The question is, where do we start with the video processing application?
A purely image-based 3D reconstruction option will be available to the general public in early 2022. You will be able to upload videos and photos from most commercially available cameras.
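The pictures-versus-video distinction is smaller than it seems: a video is just a dense stream of photos, and reconstruction pipelines typically sample it down to frames with enough camera movement (baseline) between them. A rough sketch of that sampling step (the interval value is an illustrative assumption, not what EveryPoint uses):

```python
def select_frames(total_frames, fps, interval_s=0.5):
    """Pick evenly spaced frame indices from a video so that
    consecutive picks have some camera movement between them."""
    # Convert the desired time gap into a frame step, at least 1
    step = max(1, int(round(fps * interval_s)))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps sampled every half second gives 20 frames
indices = select_frames(300, 30)
```

In practice you would also skip motion-blurred frames, but even this naive sampling turns a video into a photo set a photogrammetry tool can consume.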
Can you do 3d video? Do these have two cameras that can record at the same time?
The glasses only capture imagery from the right camera. We suspect the second camera is used for stabilization and 3D effects post-capture. The stereo depth data is not available from the glasses.
@EveryPoint interesting. Thanks for the info.
hahaha, that's unexpected, imagine if you calibrated those cameras!
We noticed that the video resolution changes between videos. It always has a 1:1 aspect ratio, but the resolution varies by +/- 100 pixels in length and width between scans. We suspect this is cropping from image stabilization post-processing.
Interesting - once they stick a lidar in them, you're going to be able to do that in realtime!
Absolutely! Also, improved video resolution would allow the person to be further away from objects and still capture decent detail.