Awesome video.
15:12 @anyone trying this: Be sure not to have whitespace in your file path, as it will not work (as of now)
Yea, never white space in your paths. EVER!
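For anyone scripting this, here is a minimal sketch (not from the video) of a guard that refuses to run when the dataset path contains whitespace; the check_dataset_path name and the command-line usage are just illustrative:

# Minimal sketch: bail out early if the dataset path contains whitespace,
# since the COLMAP/GLOMAP steps can fail on such paths.
import sys
from pathlib import Path

def check_dataset_path(dataset_dir: str) -> Path:
    path = Path(dataset_dir).resolve()
    if any(ch.isspace() for ch in str(path)):
        sys.exit(f"Aborting: path contains whitespace -> {path}")
    return path

if __name__ == "__main__":
    check_dataset_path(sys.argv[1])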
Thank you so much for sharing this stuff. Very powerful! I wish you would make a video describing how to get great quality Gaussian splatting for a room. Tips like what kind of video or pictures to take. "How to capture good quality source material for great splats" or something like that. Thanks!
Sorry for the month-long delay in responding! I think this is a great topic. I’ll get around to making a video soon.
To me, the way GLOMAP works is really similar to how 3D scanners align scanned frames, or rather correct them after scanning with a global registration ICP algorithm. As shown in the video, this will easily work with objects with distinct features, but if you tried it with geometric shapes or featureless/smooth objects it most likely wouldn't. Still very interesting! Didn't think someone was insane enough to try matching "stereo frames" for photogrammetry, but for GS it makes sense, I guess, as you don't need the mm accuracy.
Or perhaps I'm completely wrong, gotta read that paper 😂
You are correct, it takes a global camera alignment approach! Also, it still relies on matching features. If you have blank or reflective surfaces you will have a hard time.
you dropped this, 👑
Yup! It was a fun evening project. I wish GLOMAP was more robust. Not all datasets are successful.
Great tutorial Jonathan! GLOMAP seems to be very fast. Great work with the Python script to make this even faster. Thanks for that. I have to check how this could be implemented with Postshot.
I don’t use Postshot often. If you can use COLMAP data with Postshot, this will work too. It’s the same output
@@thenerfguru Yeah! I managed to make it work with Postshot. But it seems to work best with smaller datasets of 120 to 250 images. With something bigger, where there are nearly 1000 images from different takes, it generates only a huge sparse cube, which naturally does not lead to any reasonable outcome. It seems the GLOMAP method works best for now with easier material where the images come from a single continuous shot. That partly explains why the speed of the process has been optimized to be so fast. But more challenging scans shot with, for example, different cameras and FOVs are something this early development phase of GLOMAP cannot solve yet.
@@OlliHuttunen78 thanks to you both for your nice content! 🤘
You already helped me so much. I'm a researcher at HSWT, and we try to model complex structures like the crowns of trees. We use FPV and camera drones as well as a camera pole, similar to yours, Olli. Do you have any tips on how to get sharper Gaussian splats?
My script is definitely geared towards using one camera model; however, if you run the steps manually and have a solid grasp of COLMAP's functions and modifier flags, you can run huge image sets from multiple cameras. For example, you should have a folder for each camera and then put them all in a common image folder. The top-level image folder is your --image_path folder, and then pass this modifier to the feature extractor: --ImageReader.single_camera_per_folder 1.
They call it out in the project too: github.com/colmap/glomap/blob/main/docs/getting_started.md
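For reference, here is a minimal sketch of those manual steps as a Python wrapper (assuming colmap and glomap are on your PATH; the images/, database.db, and sparse/ names are placeholders, and exhaustive matching is just one option):

# Sketch of the manual COLMAP + GLOMAP steps for a multi-camera image set.
# images/ is the top-level folder with one subfolder per camera.
import subprocess

def run(cmd):
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Feature extraction: one camera model per subfolder of images/
run(["colmap", "feature_extractor",
     "--database_path", "database.db",
     "--image_path", "images",
     "--ImageReader.single_camera_per_folder", "1"])

# Feature matching (sequential_matcher is another option for video frames)
run(["colmap", "exhaustive_matcher",
     "--database_path", "database.db"])

# Global mapping with GLOMAP instead of COLMAP's incremental mapper
run(["glomap", "mapper",
     "--database_path", "database.db",
     "--image_path", "images",
     "--output_path", "sparse"])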
@freddiemercury5424 have you tried getting proper camera calibrations for each camera and undistorting all of the images first? Then you should be able to use the simple_pinhole camera model for all of the various cameras.
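If it helps, here is a rough sketch of that undistortion step with OpenCV (assuming you already have intrinsics and distortion coefficients from your own calibration; the .npy file names and folder paths are placeholders):

# Rough sketch: undistort a folder of images with a known calibration so a
# simple_pinhole camera model can be used downstream.
import cv2
import numpy as np
from pathlib import Path

K = np.load("camera_matrix.npy")    # 3x3 intrinsics from calibration
dist = np.load("dist_coeffs.npy")   # distortion coefficients from calibration

out_dir = Path("undistorted")
out_dir.mkdir(exist_ok=True)

for img_path in sorted(Path("images_raw").glob("*.jpg")):
    img = cv2.imread(str(img_path))
    undistorted = cv2.undistort(img, K, dist)
    cv2.imwrite(str(out_dir / img_path.name), undistorted)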
Special thanks from Japan!!!!!
Arigato
You Rock Jonathan 💪🏽
Thank you!
This is really interesting. I'm excited to explore this! By the way, great job, @thenerfguru. I hope you create videos on dynamic scenes as well.😅
Like 4D GS?
@@thenerfguru Yes, especially their multi-view scenes implementation would be nice.
I'm grabbing 🍿 for this one... Thanks !!
Haha!
"Hi there, great video! I'm currently using a reality capture pipeline to generate point clouds. I'm curious to know more about how this method compares to traditional reality capture. Could you elaborate on the differences in terms of processing speed and quality of the resulting point cloud?
It should be on par with the speed and accuracy of Reality Capture. However, this is open source and can be built right into your workflows. It just depends on how you want to use the data. I will do a comparison video!
@@thenerfguru Thank you. Your videos have been incredibly helpful, and I really appreciate all the great content you share.
Thanks!
@@thenerfguru The GLOMAP method is way faster than the alignment with RC, isn't it? So how can you say that it is on par with RC? I am curious! :-)
@@deniaq1843 I have found when testing many datasets that the speed seems about the same. I have not tried really large datasets. I think that’s where GLOMAP may pull ahead.
Can you make a tutorial about making dynamic Gaussian splatting?
Thank you! Well, could I use it to get a "transform_json" file? For example, for instant-ngp training.
Nice. Can anything that gets produced via the "dark magic" you've shown be used with Postshot?
Do you also sometimes get the pinhole bug with GLOMAP when trying to train Gaussian splats?
Pinhole bug? Can you elaborate?
@@thenerfguru Finally I fixed it; it was the COLMAP part when creating the database for points.
Can this be used with different feature extractors?
Not sure. Haven’t tried.