There is so much depth information with the parallax effect and lighting/shadows.
Perfect! I've been messing with this thanks to your splatting tutorial, excited to try it on some 360 footage I captured too!
Awesome! Follow me on social. If you share anything you come up with, tag me and I’ll repost it.
Thanks a bunch! This is what I needed. I have an Insta360 but didn't know where to start.
Great! Let me know if you run into any roadblocks.
Looks so straightforward. I wonder what will this technology look like in 5 years.
Can’t help but think that gaming and sims are going to change dramatically 🤔
Like is this the future of memory? 🤷🏽♂️
@@c0nsumption there are already Unreal Engine implementations of this technology. I think this is the future of games
@@marcomoscoso7402 dang, been switching over to Unreal since about a year ago because I had iffy feelings after being a Unity dev for 10 years. Thank God I did. I gotta try these out. Researching over the weekend
I find Meshroom's image outputs from 360 video to be very limiting: it only samples along the middle of the frame, so you miss out on up-close things and only capture what's on the horizon. My solution was to put the video on an inverted sphere in Blender with 12 cameras facing outward from the center at varying angles, then create a bunch of camera markers (Ctrl+B) that switch between all the cameras every frame. I got way better results doing this, especially because I have a lower-end 360 camera that's only 5K res. Hope this helps someone
You want to avoid a high FOV to minimize distorted edges, which tend to be useless in photogrammetry
@@Thats_Cool_Jack interesting. I have almost zero experience with Blender. What is your experience with your method being trained into a NeRF or used for photogrammetry output?
@@thenerfguru it works really well. The images are the same quality as they would be with the Meshroom method, but you can choose the angles the cameras are looking in. When I record the 360 video I sway the camera on the end of a camera stick back and forth while walking to create as much parallax as possible, which gets the best depth information, but it can be somewhat blurry in low-light situations. I've done both NeRF and photogrammetry. I made a VRChat world of a graffiti alleyway using this method.
Thanks for your awesome work. Could you share more details about how to import the 360 video into Blender and output multi-view perspective images?
@@Thats_Cool_Jack would it be possible to get your Blender file? Thank you :)
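For anyone who wants to experiment while waiting, here is a rough bpy sketch of the camera-rig half of this idea (not Thats_Cool_Jack's actual file, and the angle choices are just placeholders). It assumes you have already built the inverted sphere with the 360 video mapped onto it, centered at the world origin.

```python
import math
import bpy

scene = bpy.context.scene

# 12 outward-looking directions: 8 around the horizon plus 4 tilted up/down.
angles = [(yaw, 0) for yaw in range(0, 360, 45)]
angles += [(0, 35), (90, 35), (180, -35), (270, -35)]

cams = []
for i, (yaw, pitch) in enumerate(angles):
    cam_data = bpy.data.cameras.new(f"Cam{i:02d}")
    cam_data.angle = math.radians(90)  # 90 degree field of view per view
    cam_obj = bpy.data.objects.new(f"Cam{i:02d}", cam_data)
    # A default camera looks straight down: tilt it up to the horizon (90),
    # add the pitch, then spin it around Z for the yaw.
    cam_obj.rotation_euler = (math.radians(90 + pitch), 0.0, math.radians(yaw))
    scene.collection.objects.link(cam_obj)
    cams.append(cam_obj)

# Bind a timeline marker to a different camera on every frame (what Ctrl+B
# does by hand), so each rendered frame comes from the next viewing direction.
for frame in range(scene.frame_start, scene.frame_end + 1):
    marker = scene.timeline_markers.new(f"M{frame}", frame=frame)
    marker.camera = cams[frame % len(cams)]
```

Render the animation out as an image sequence afterwards and the frames cycle through the 12 views automatically.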
Such a great video, thank you. I have been doing historic site and relic capture for a while now using photogrammetry and different NeRF solutions like Luma AI. I am excited to get started with Gaussian Splatting because: 1. it should render a lot faster for my clients, 2. it may look better, and 3. it honestly seems easier to set up than many of the cutting-edge NeRF frameworks I've been experimenting with that require Linux. Much of my workflow involves Windows because I also do a lot of Insta360 captures, Omniverse, etc. This is great stuff!
This is such an important video you can't even imagine!
Very cool. I'm headed to a cabin on top of a mountain and I'm going to do some loops with a drone in an attempt to turn it into some sort of radiance field. Thank you for this tutorial.
Loops are amazing for this technology!
Buying one of these today just for GS generation! I am super excited to try this out!!!
Thank you, much-needed video
I was rushing this one out for you! I could use tips on how to get better 360 video footage. I have the two cameras shown at the start of the video: an Insta360 Pro II and a One RS 1-Inch.
Awesome, def gonna try and play with it. But how do you get the sky/ceiling rendered, since you said you didn't include the top? Also wondering how you can remove yourself if you use a 360 camera. I wonder if this would work with a fisheye and 4K video. Then you are always out of the image and can get very high-res frames, or just stills on my Canon R5 with a fisheye. Any idea what command you would need then?
Really great! If you mounted three cameras to one post at different heights, could you combine the three videos to make a better result? Or does the source have to come from a single device, moving that one device to different heights? Thanks
That could work. However, I would want all 3 cameras to be the same camera model.
Can't wait to see how you make a Gaussian Splatting scene from Insta360 Pro footage.
Wow, this is epic! What I did not quite understand: for this training data, did you record the alleyway only once, or did you record it multiple times walking different paths as you told us to do?
This was a single walk-through. You can see that I didn't have the best movement freedom in the end. Unless I stick to the single trajectory, the result falls apart fast.
Thanks for your time and effort. I want to try it out myself soon. Was the whole process in real time? I especially mean the creation of the 3D Gaussian file; I just wonder how fast this can be. Thanks so far and best wishes :)
When it comes to NeRFs and GS, can you foresee any advantage to shooting with that larger 360 camera when in 3D mode? I have the Canon dual fisheye 3D 180 8K video camera and hoping to take advantage of it in new, unintended ways, but seems like stereoscopic wouldn't help for this purpose as you could just take more pictures with a single lens, no?
It can help, but what helps the most is constantly moving the camera. I have my camera at the end of a stick and sway it back and forth as I walk to create the most parallax.
Thanks for your detailed and professional video. We followed your steps and can indeed get Gaussian Splatting results, but we found that when the 6K panoramic video (200 Mbps, H.265) shot with the Insta360 One RS is converted into 360 images by ffmpeg and then into perspective images with AliceVision, the images are not very clear. Could you please give us some guidance on how to improve the clarity of the pictures?
Having the same problem. But it seems to be the insta360 RS One that simply does not deliver good image quality.
Insane idea. I was thinking of using iPhone LiDAR to capture point clouds, but that has a limited field of view and hence more waving the camera around.
Capturing in 360 could be much more efficient.
Amazing tutorial, thanks! While I don't have a 360 camera, I do have a full-frame camera and a fisheye lens. How would this workflow compare to taking 4K video with the fisheye and walking back and forth multiple times at different heights?
Walking back and forth works. Just make sure you don't make any sharp rotations with the camera.
Just found you on YouTube after following you on LinkedIn for a while now! Great stuff! One question: do the scans have correct real-world measurements? For example, could I measure a scanned kitchen counter and have it be accurate?
Awesome, exactly what I was looking for! I do want my reconstruction to have the views looking up and down though, not just 360 horizontally. Is there a way to extract that data from a spherical 360 video?
Regarding the conversion of the equirectangular images to cubemaps - I'm afraid I don't understand the need for this.
My experience with COLMAP is intermediate, but I typically experienced fewer camera pose misalignment issues when I didn't perform any operations on the input images. Not to mention the extreme slowdown on bundle adjustment & block matching when you start having tons of image tiles.
Does Insta360 Studio not allow you to export the raw video from each camera independently? Or are you performing this workflow for some other reason?
Additionally I'd love to hear why you're using meshroom for the cubemaps instead of something like 'ffmpeg -i input_equirectangular.mp4 -vf "v360=e:ih_fov=90:iv_fov=90:output_layout=cubemap_16" cubemap_output.mp4'
Great questions:
1. I cannot export raw images from each lens on this camera. I do use that workflow with my Insta360 Pro II, but I still drop a lot of the extremely warped sections of the images.
2. As for FFmpeg, that shows how far behind I am on updates to that software! After a few comments, I have written a Python script to extract 8 images and added some additional controls for optimization.
3. For getting 8 cubemapped images, I'm going off what I have tested in the past and what works best. Using just the front, back, left, right, up, and down images does not yield a great result.
@@thenerfguru Thank you very much for the clarification.
I have an Insta360 Pro II also and would like to try your workflow out! Other than dealing with a bowling ball on a pole overhead, does the workflow for a Pro II differ from this video? @@thenerfguru
@@thenerfguru if you don't mind, can you share your Python script to extract the 8 images please?
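While waiting for the author's script, here is a minimal illustrative sketch of that kind of extraction (not the actual script from the video). It assumes the third-party packages opencv-python and py360convert are installed, and the file name, angles, and frame-skip value are placeholders to adjust.

```python
import os
import cv2                  # pip install opencv-python
import py360convert         # pip install py360convert

VIDEO = "walkthrough_360.mp4"    # hypothetical equirectangular export
OUT_DIR = "input"                # folder the gaussian-splatting convert.py expects
YAWS = list(range(-180, 180, 45))  # 8 headings around the horizon
PITCH = 0                        # raise above 0 to capture more of the sky
FOV = 90                         # degrees per extracted view
SIZE = (1200, 1200)              # output height/width per view
EVERY_N = 15                     # keep ~2 fps from 30 fps footage

os.makedirs(OUT_DIR, exist_ok=True)
cap = cv2.VideoCapture(VIDEO)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % EVERY_N == 0:
        for k, yaw in enumerate(YAWS):
            # Reproject the equirectangular frame to a pinhole view at (yaw, pitch).
            persp = py360convert.e2p(frame, fov_deg=FOV, u_deg=yaw,
                                     v_deg=PITCH, out_hw=SIZE)
            name = f"frame{frame_idx:05d}_view{k}.jpg"
            cv2.imwrite(os.path.join(OUT_DIR, name), persp.astype("uint8"))
    frame_idx += 1
cap.release()
```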
Hi, thank you for your awesome work. Have you ever tried using 360 video of an indoor environment for Gaussian Splatting, and is the output quality okay?
Dear Jonathan, I have a question. When you cut the pano images into cube maps with Meshroom, they come out at 1200x1200 with your line of code. Is there a formula for calculating the maximum sensible size for a given input pano? For example, I will soon be able to use an 8K 360 camera and wonder what the ideal cube map size would be for that material. Do you have any idea how to calculate this, or is it simply trial and error? Thanks :-)
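Not an official formula, but a rule of thumb you can sanity-check: along the equator an equirectangular pano spends its full width on 360 degrees, so a 90-degree perspective crop can only carry roughly a quarter of that width in native pixels; going larger mostly just upsamples. A tiny sketch:

```python
def max_face_size(pano_width_px: int, fov_deg: float = 90.0) -> int:
    """Largest crop size (px) that doesn't exceed the pano's native detail."""
    return round(pano_width_px * fov_deg / 360.0)

print(max_face_size(5760))   # 5.7K pano -> ~1440 px faces
print(max_face_size(7680))   # 8K pano   -> ~1920 px faces
```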
Hello Jonathan, I already have a NeRF and a Gaussian Splatting of the same scene, and I would like to make a video comparison to show how much better the GS is. Any recommendations on how to do it?
Thanks
You bet! You can either manually resize all of your photos ahead of time, or when you prep the images it should make a half, quarter, and 8th scale version.
Hi, thanks for these tutorials. Is it possible to export the point clouds or a 3D model from these results? Thanks
This is freaking amazing!
Agreed!
Thanks for posting! Would it have been better to shoot stills every 5 ft or 2 m with the 360 1-inch? As per your suggestion, would a higher-elevation pass walking one way and then a lower-elevation pass going back the other way be ideal?
Great video Jonathan. Thank you. Have you tried any footage with the Insta360 Pro to compare the results with the One RS 1-inch?
I would like that too! Do you mean the x3? If only Insta360 could send me a loaner :)
Hi Jonathan,
When I follow your workflow, the quality of the generated Gaussian Splatting looks good only if you follow exactly the same path as the original (recording) camera.
In your video you show the 6-camera Insta360 Pro model. Have you tried to create a Gaussian Splatting using that camera? I'd expect that the higher resolution would produce better results(?).
Keep up your excellent work.
Is it possible for you to share the 360 video for practice? I haven't been able to find good 360 videos to try Gaussian Splatting on. I have tried it successfully on a lot of 2D videos but just can't seem to find a good 360 one. Thanks for the beginner guide, it was really helpful.
Sorry, I am using machine-translated English. I hope it is understandable.
Thank you very much for your video, I have learned a lot. A small question: I created a PLY model from a video I filmed and found that the ground was missing, so I filmed a video of the ground and created another PLY model. How can I merge these two PLY models into one complete model? If I can merge them, then I can shoot more videos in segments and make the scene complete with no dead corners.
I'm still having issues just getting my computer to run Python and such so I can start making NeRFs. But I have a drone with 360 camera attachments that I would love to start using for this.
What’s happening with Python? Is it not added to your path?
@@thenerfguru I am not sure why it wasn't working with my C: drive, as that is where my OS is, but I put it on an old OS drive and now Python is working just fine. Technology, it's weird sometimes 😆
Hello, do you think it would be possible to create a complete race track and then map it using Blender etc.?
How can I export the 3D environment to .glb or another file format?
Not possible with this current project. However, this workflow will get you okay results with software like Reality Capture or Object Capture.
Do you know a way to take a point cloud, e.g. some Leica scans, and use that point cloud for 3D Gaussian Splatting?
You need source images. I am not the most well versed in Leica solutions. Do you get both a point cloud and images from a scan station?
I know that for the Leica BLK2GO, it captures both the LiDAR scans and the 360 panorama stills as you go. In the Leica SW, you can export the images every x feet that you want. The devices use both the laser and the RGB sensor to do SLAM as you move.
Why do you split the images with Meshroom? Can't COLMAP deal with fisheye lenses?
That's a good question. Give it a shot. I bet you'll have a fun time with COLMAP 🙃. Also, I'm not sure how to export native fisheye images from this camera. I can do it with my Insta360 Pro II, but I still prefer using my own dewarp calibration.
Very interesting!
Thanks!
Asked on a previous video, but wondering if you'd know how to view these in VR?
My next video will be how to view these in Unity. I'm not a Unity expert, but I think you can do it in there.
@@thenerfguru Sounds great, look forward to it.
I'm gonna give it a try
Comment if you get stuck! I was literally losing my voice while making the video. 😅
@@thenerfguru well I ran it last night and it was seg faulting. I think it's my CUDA toolkit version, I hope. Thanks for sharing, I'll reference your videos for help.
This also works for NeRFs and photogrammetry!
What would be the result if you didn't move back or turn around while capturing the video? I tried to create a NeRF after capturing a video inside a room, moving from one end to the other, but it didn't work out. Why is that happening?
Was it a 360 camera? Rooms can be tough if the walls are bare. You end up with cubemapped images without unique features.
I have worked with the Insta360 One RS 1-inch, and it is not worth the price tag! The bigger sensor is great for low light and higher dynamic range, but this model has a few drawbacks: 1) the high price, 2) the flare seen in this video. I suggest buying a QooCam 3 instead, at a much lower price and with better specs. It just released but will be on shelves soon.
That sun flare issue is terrible!
I have been using the Insta360 One RS 1-Inch, the Insta360 X3, and the iPhone 13 Pro. All three have their place in captures. The higher resolution and larger sensor on the 1-Inch are great, but I really find the in-camera, realtime HDR video of the X3 helpful in outdoor scenes. If you can keep your subject in front of you as you orbit, even an older iPhone XR is worlds better than the Insta360s. If you need to get in somewhere tight like a smaller building, out come the 360s. The 13 Pro has much better low-light and high-pixel-density captures than either, if you can orbit your subject. This is especially true now that they added shooting in RAW as an option on the Pro phones. Keep capturing!!
@@thenerfguru Indeed. You already know this, but for others on here: try to walk in the shadows like a thief in Skyrim. You can often pull up a map of your target area, evaluate when you will be there to do the capture, and try to stay in the shade as the sun moves during the day. This is a little easier in towns and cities since you can use the buildings' shadows. Sometimes you just need to sidestep a foot to the right or left and it makes all the difference. Not always an option, but it can help. You can also tape a piece of paper to the camera on the side with the sun (just wide enough) so it keeps the sun off the lens. You will lose some degrees of capture on the side with the sun, but what you do capture will be glare-free. Might be a fair trade.
Is there an option within the aliceVision command to also include the view looking upward?
outstanding - see you in the next one
Thanks!
You're great!
Thanks!
Can I measure relative widths in the Gaussian Splatting result? Which software do you suggest? Thank you!
It seems like the algorithm has a hard time with more data. Normally you go around something and have a small area to look at with a NeRF or Gaussian Splatting. But how do you combine captures for a larger scene? You go around something, then expand by getting more footage and try to combine all the data so you have more to look at, or just create a larger scene. It seems to have problems with that. Any thoughts?
You mentioned around the 15-minute mark that you could have gone back over the scene again. Would that significantly increase processing time but also significantly improve image quality (remove floaters, blur, etc.)?
It probably wouldn't make training time too much longer. However, it would reduce floaters and bad views. You basically would have a greater degree of freedom.
How can you remove your head or body from all this?
When recording the video, always have your body at the end of the camera stick, and turn off horizon stabilization.
Can I use this with three.js?
Can I ask what GPU spec you used to build the Gaussian Splatting model? Thanks
I am using an RTX 3090ti
how long did this take to convert and train for you?
It really depends. Convert takes usually around 5-20 minutes depending on the scene. Could take longer for a lot of images. Train takes 30-45 minutes.
Hm, that's weird. Small videos I've done have taken hours and hours just to convert. Maybe I missed this in your tutorial video, but do I need to capture at a lower res? @@thenerfguru
Perhaps. Maybe fewer total images in the end. Set the fps to something like 1 or 0.5.
Ohh OK, I've always done 30 fps @@thenerfguru
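If it helps, here is one way to thin a capture down before COLMAP ever sees it. This assumes ffmpeg is on your PATH and uses placeholder file names; the fps filter value is the knob the reply above is talking about.

```python
import os
import subprocess

os.makedirs("input", exist_ok=True)
subprocess.run([
    "ffmpeg", "-i", "walkthrough.mp4",   # hypothetical source clip
    "-vf", "fps=1",                      # keep one frame per second (use 0.5 for fewer)
    "-qscale:v", "2",                    # high-quality JPEG output
    "input/frame_%05d.jpg",
], check=True)
```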
Looks well made 💪, but a bit unnecessary ;)
I usually use a long screw, 40 mm. Screw it 20 mm into the corner and stick the magnet to it. Completely hidden by the sensor 🤙
it would be nice if tools like this could eventually take 360 photos as input natively
You could batch it and not have to deal with the different steps.
apparently LUMA AI allows you to do that via their cloud service
Hi!
What are the exact convert.py parameters you run on the 360 video?
I tried with mine. I shoot with an Insta360 X3: good, slow recording, 4K equirects. I do exactly what you show and COLMAP only finds 3-6 images... :S
Do you have plenty of parallax in the scene? If all of the objects are far away, there isn't enough parallax and this can happen.
thank you very much~!!
You’re welcome!
Is anyone getting a "this app can't run on your PC, check the software publisher" error, even though this has worked before?
@thenerfguru I wonder if, using this method, you can create a stereoscopic 3D Gaussian Splatting with a VR180 camera? I have footage I can provide for testing purposes.
Interesting. My next video will be how to display this all in Unity. I bet it can be accomplished in there.
@@thenerfguru rad! ill be on the lookout for that video - keep crushing it man
Hi, I went through the convert.py process but 'Mapper failed with code' showed up after hours of processing. 😢
It would be awesome if it could just process 360 pictures directly to get it all.
This call could be batch scripted so you don’t have to go through all of the steps one by one.
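As a sketch of what that batch script might look like (my own rough outline, not something from the video): the split-tool flag names below are assumptions from memory, so check `aliceVision_utils_split360Images --help` in your Meshroom install, and the folder names are placeholders. convert.py and train.py are the standard entry points of the gaussian-splatting repo.

```python
import subprocess

PANOS = "pano_frames"   # equirectangular stills already extracted from the video
PROJECT = "my_scene"    # project root; split output goes into its "input" folder

# 1. Split each pano into perspective views (flag names are assumptions, see --help).
subprocess.run([
    "aliceVision_utils_split360Images",
    "--input", PANOS,
    "--output", f"{PROJECT}/input",
    "--splitMode", "equirectangular",
    "--equirectangularNbSplits", "8",
    "--equirectangularSplitResolution", "1200",
], check=True)

# 2. Run COLMAP feature extraction/matching/undistortion via the repo's helper.
subprocess.run(["python", "convert.py", "-s", PROJECT], check=True)

# 3. Optimize the 3D Gaussian Splatting model.
subprocess.run(["python", "train.py", "-s", PROJECT], check=True)
```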
Fantastic technique
Thank you so much
Would this work using Google Street view 360 images?
I have not tried it. Can you get a clean image extract from Google?
Yes, there is a way you can download and view them. I took 1080x1920 stills and fed them into photogrammetry software, but the result was a sphere with the image projected onto it.
Feed in all the Street View data from Google Maps.
I don't know how to scrape all of the street view data, but yes that would technically work.
Hi, do I need GPS data in my photos for this? The QooCam 3 can only do this by pairing with my phone.
You do not need GPS data.
OK, so I bought one and tried this, and my resulting GS looked as if it were a single frame, just a tiny section of the total recorded space. Any ideas why this may happen? I might be doing something wrong, this is my first attempt ever.
I have all my 360 frames. I split them with ffmpeg, I see all the split frames, and I put them into the "input" folder of my COLMAP root. But after it's done, I see only 3 images in the COLMAP "images" folder, and that is the spot I see in my GS. It only processed 3 of the 4600 images.
Are you attempting to work with the equirectangular images or splitting them with Meshroom?
@@thenerfguru splitting them with meshroom
I tried with an all new data set and got the same result. I must be missing something
Have you solved your problem?
@user-kd2uw1oy1d the entire splat needs to fit within your VRAM; that was the issue. I bought an XGRIDS K1 scanner, boom, problem solved, insane quality.
Does anyone know how to use equirectangular images without breaking them into separate FOVs? That would seem to be the best use of the data.
Perhaps your best bet is to try Nerfstudio's 360 image supported training. Then, convert it to 3D Gaussian Splatting format. I don't have a tutorial for this though.
So after we get a Gaussian splat, where can we even use it? No Adobe programs can run them, DaVinci can't, Blender does it very poorly, the UE5 plugin costs $100, and I think maybe Unity is the only program that can use a Gaussian splat. They are awesome, but it's like having 8K video when YouTube only plays 1080p. Where can I actually use these splats to make a cool video?
I believe UE5 has some free options now!
@@thenerfguru thanks
Can you multicam nerfs and splats?
Do you mean record with multiple cameras at once? Could be achieved if all of the cameras were the same model/lens
@@thenerfguru Thank you. I'm picturing two 360 cameras: perhaps one on a stick for sweeping around and one on a pole sticking up from a backpack? Or two at different heights on a walking stick. Do you have any guesses as to how two Insta360 X3s used like that would do vs. a single RS ONE 360 edition? Also imagining a frame to hold 3 of them for quick one-pass scanning of cooperative humans.
Can you set a custom FOV? I'd like to include more of the top in the exported frames.
Maybe, I have not looked into the python scripts provided by Meshroom. However, you may be able to modify them.
oh nice!
What's the camera name?
Is it possible to extract a point cloud?
Not currently. I wouldn’t be surprised if a new project comes out where geometry is exportable. I’ve seen a paper on it and a demo code, but it’s not usable today.
WHAT KIND OF CAMERA?
In this video I used an Insta360 One RS 1-Inch Edition.
@@thenerfguru thanks dude
@@thenerfguru Hey, I got the same device and wanted to reproduce what you did, but I could only generate an almost-single-frame result after rendering, even though aliceVision_utils_split360Images produced a lot of "subimages". I checked the resulting "output" directory and only a few images were actually used.
Do you have any idea what my problem might be?
I just tried a dataset of 1456 images (1200x1200) and my 24 GB of VRAM wasn't large enough; going for 728 (half) now to be safe.
727 of the 728 images registered, and it uses around 18 GB of dedicated VRAM.
How does the model look? @@lodewijkluijt5793