How to Use 360 Video for 3D Gaussian Splatting (and NeRFs!)

  • Published Nov 27, 2024

COMMENTS • 156

  • @jimj2683 · 1 month ago +1

    There is so much depth information with the parallax effect and lighting/shadows.

  • @mcmulla2 · 1 year ago +6

    Perfect! I've been messing with this thanks to your Splatting tutorial; excited to mess with some 360 footage I captured too!

    • @thenerfguru · 1 year ago

      Awesome! Follow me on social. If you share anything you come up with, tag me and I’ll repost it.

  • @secondfavorite · 1 year ago +3

    Thanks a bunch! This is what I needed. I have an Insta360 but didn't know where to start.

    • @thenerfguru · 1 year ago

      Great! Let me know if you run into any roadblocks.

  • @marcomoscoso7402 · 1 year ago +3

    Looks so straightforward. I wonder what this technology will look like in 5 years.

    • @c0nsumption · 1 year ago +2

      Can’t help but think that gaming and sims are going to change dramatically 🤔
      Like is this the future of memory? 🤷🏽‍♂️

    • @marcomoscoso7402 · 1 year ago

      @c0nsumption There are already implementations of this technology in Unreal Engine. I think this is the future of games.

    • @c0nsumption · 1 year ago

      @marcomoscoso7402 Dang, I've been switching over to Unreal for about a year now because I had iffy feelings after being a Unity dev for 10 years. Thank God I did. I gotta try them out; researching over the weekend.

  • @Thats_Cool_Jack · 1 year ago +10

    I find Meshroom's image outputs from 360 video very limiting: it only samples along the middle of the frame, so you miss out on up-close things and only capture what sits on the horizon. My solution was to put the video on an inverted sphere in Blender, with 12 cameras facing outward from the center at varying angles, and then create a bunch of camera markers (Ctrl+B) that switch between all the cameras every frame. I got way better results this way, especially because I have a lower-end 360 camera that's only 5K res. Hope this helps someone (a rough sketch of this setup follows this thread).

    • @Thats_Cool_Jack · 1 year ago +1

      You want to avoid a high FOV to minimize distorted edges, which tend to be useless in photogrammetry.

    • @thenerfguru · 1 year ago

      @Thats_Cool_Jack Interesting. I have almost zero experience with Blender. What is your experience with your method being trained into a NeRF or used for photogrammetry output?

    • @Thats_Cool_Jack · 1 year ago +1

      @thenerfguru It works really well. The images are the same quality as with the Meshroom method, but you can choose the angles the cameras are looking at. When I record the 360 video, I sway the camera on the end of a camera stick back and forth while walking to create as much parallax as possible, which gets the best depth information but can be somewhat blurry in low-light situations. I've done both NeRF and photogrammetry. I made a VRChat world of a graffiti alleyway using this method.

    • @jiennyteng · 11 months ago

      Thanks for your awesome experiment! Could you share more details about how to import a 360 video into Blender and output multi-view perspective images?

    • @LukeMor · 4 months ago

      @Thats_Cool_Jack Would it be possible to get your Blender file? Thank you :)
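
A rough sketch of the Blender setup described above, in Blender Python (bpy). The camera count, ring angles, and FOV are assumptions drawn from the comment, not the commenter's actual file, and the equirectangular video still has to be assigned to the sphere as an emissive material by hand:

    import math
    import bpy

    scene = bpy.context.scene

    # Inverted sphere that will carry the 360 video (map the
    # equirectangular footage onto it as an emissive texture).
    bpy.ops.mesh.primitive_uv_sphere_add(radius=10.0, location=(0.0, 0.0, 0.0))
    bpy.ops.object.mode_set(mode='EDIT')
    bpy.ops.mesh.select_all(action='SELECT')
    bpy.ops.mesh.flip_normals()  # make the inside face the cameras
    bpy.ops.object.mode_set(mode='OBJECT')

    # 12 outward-facing cameras at the center: two rings of 6,
    # one tilted slightly up and one slightly down.
    cams = []
    for i in range(12):
        yaw = math.radians(60.0 * (i % 6))
        pitch = math.radians(20.0 if i < 6 else -20.0)
        bpy.ops.object.camera_add(location=(0.0, 0.0, 0.0),
                                  rotation=(math.pi / 2.0 + pitch, 0.0, yaw))
        cam = bpy.context.object
        cam.data.angle = math.radians(60.0)  # modest FOV, less edge warp
        cams.append(cam)

    # One timeline marker per frame, each bound to a camera (what Ctrl+B
    # does by hand), so rendering cycles through all 12 view directions.
    for f in range(scene.frame_start, scene.frame_end + 1):
        marker = scene.timeline_markers.new("cam%02d" % (f % 12), frame=f)
        marker.camera = cams[f % 12]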

  • @RelicRenditions · 1 year ago +5

    Such a great video. Thank you. I have been doing historic site and relic capture for a while now using photogrammetry and different NeRF solutions like Luma AI. I am excited to get started with Gaussian Splatting because: 1. it should render a lot faster for my clients, 2. it may look better, and 3. it honestly seems easier to set up than many of the cutting-edge NeRF frameworks I've been experimenting with that require Linux. Much of my workflow involves Windows because I also do a lot of Insta360 captures, Omniverse, etc. This is great stuff!

  • @caedicoes · 1 year ago +3

    This is such an important video you can't even imagine!

  • @choiceillusion · 1 year ago +1

    Very cool. I'm headed to a cabin on top of a mountain and I'm going to do some loops with a drone in an attempt to turn it into some sort of radiance field. Thank you for this tutorial.

    • @thenerfguru · 1 year ago

      Loops are amazing for this technology!

  • @Aero3D · 10 months ago

    Buying one of these today just for GS generation! I am super excited to try this out!!!

  • @360_SA · 1 year ago +2

    Thank you, much-needed video!

    • @thenerfguru · 1 year ago +1

      I was rushing this one out for you! I could use tips on how to get better 360 video footage. I have the two cameras shown at the start of the video: an Insta360 Pro II and a One RS 1-Inch.

  • @Photonees · 1 year ago +2

    Awesome, definitely gonna try and play with it. But how do you get the sky/ceiling rendered, since you said you didn't include the top? Also wondering how you can remove yourself if you use a 360 camera. I wonder if this would work with a fisheye and 4K video; then you are always out of the image and can get very high-res images, or just pictures, on my Canon R5 with a fisheye. Any idea what command you would need then?

  • @bradmoore3778 · 1 year ago +2

    Really great! If you mounted three cameras to one post at different heights, could you combine the three videos to make a better result? Or does the source have to come from the same device, moving the one device to different heights? Thanks

    • @thenerfguru · 1 year ago

      That could work. However, I would want all 3 cameras to be the same camera model.

  • @benjaminwoite6136 · 1 year ago

    Can't wait to see how you make a Gaussian Splatting scene from Insta360 Pro footage.

  • @benbork9835 · 1 year ago +2

    Wow, this is epic! What I did not quite understand: for this training data, did you record the alleyway only once, or did you record it multiple times walking different paths, as you told us to do?

    • @thenerfguru · 1 year ago

      This was a single walkthrough. You can see that I didn't have the best freedom of movement in the end; unless I stuck to the single trajectory, the result falls apart fast.

  • @deniaq1843 · 7 months ago

    Thanks for your time and effort. I want to try it out myself soon. Was the whole process in real time? I especially mean the creation of the 3D Gaussian file; I just wonder how fast this can be. Thanks so far and best wishes :)

  • @brettcameratraveler · 1 year ago +3

    When it comes to NeRFs and GS, can you foresee any advantage to shooting with that larger 360 camera in 3D mode? I have the Canon dual fisheye 3D 180 8K video camera and am hoping to take advantage of it in new, unintended ways, but it seems like stereoscopic wouldn't help for this purpose, as you could just take more pictures with a single lens, no?

    • @Thats_Cool_Jack · 1 year ago +1

      It can help, but what helps the most is constantly moving the camera. I have my camera at the end of a stick and sway it back and forth as I walk to create the most parallax.

  • @John-b3k7c · 1 year ago +2

    Thanks for your detailed and professional video. We followed your steps and did get Gaussian Splatting results, but we also found that when 6K panoramic video (200 Mbps bitrate, H.265) shot with the Insta360 One RS is converted into 360 images by ffmpeg and then into perspective images using AliceVision, the images are not very clear. Could you please give us some guidance on how to improve the clarity of the pictures? (A related ffmpeg note follows this thread.)

    • @RobertWildling · 1 year ago

      Having the same problem. But it seems to be the Insta360 One RS that simply does not deliver good image quality.
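
On the clarity question above, one generic knob worth checking (a plain ffmpeg note, not specific to this camera): frame extraction re-encodes to JPEG at a default quality, which adds its own softness, so export at maximum JPEG quality or to PNG. A sketch with placeholder file names:

    import os
    import subprocess

    os.makedirs("frames", exist_ok=True)
    # -qscale:v 1 is the highest JPEG quality; write .png instead to
    # skip lossy re-encoding entirely.
    subprocess.run([
        "ffmpeg", "-i", "pano_6k.mp4",
        "-vf", "fps=2",
        "-qscale:v", "1",
        "frames/frame_%05d.jpg",
    ], check=True)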

  • @mattizzle81 · 1 year ago +1

    Insane idea. I was thinking of using iPhone LiDAR to capture point clouds, but that has a limited field of view and hence more waving the camera around.
    Capturing in 360 could be much more efficient.

  • @mankit.mp4 · 1 year ago +1

    Amazing tutorial, thanks! While I don't have a 360 camera, I do have a full-frame camera and a fisheye lens. How would this workflow compare if I were to take 4K video with the fisheye and walk back and forth multiple times at different heights?

    • @thenerfguru · 1 year ago +1

      Walking back and forth works. Just make sure you don't make any sharp rotations with the camera.

  • @roscho-dev · 10 months ago

    Just found you on YouTube after following you on LinkedIn for a while now! Great stuff! One question: do the scans have correct real-world measurements? For example, could I measure a scanned kitchen counter and have it be correct?

  • @darrendavid9758 · 4 months ago

    Awesome, exactly what I was looking for! I do want my reconstruction to have the views looking up and down though, not just 360 horizontally. Is there a way to extract that data from a spherical 360 video?

  • @JustThomas1 · 1 year ago +4

    Regarding the conversion of the equirectangular images to cubemaps: I'm afraid I don't understand the need for this.
    My experience with COLMAP is intermediate, but I typically experienced fewer camera-pose misalignment issues when I didn't perform any operations on the input images. Not to mention the extreme slowdown in bundle adjustment and block matching when you start having tons of image tiles.
    Does Insta360 Studio not allow you to export the raw video from each camera independently? Or are you performing this workflow for some other reason?
    Additionally, I'd love to hear why you're using Meshroom for the cubemaps instead of something like 'ffmpeg -i input_equirectangular.mp4 -vf "v360=e:ih_fov=90:iv_fov=90:output_layout=cubemap_16" cubemap_output.mp4'

    • @thenerfguru · 1 year ago +3

      Great questions:
      1. I cannot export raw images from each lens. I use that workflow with my Insta360 Pro II, but I still drop a lot of the extremely warped sections of the images.
      2. As for FFMPEG, that shows how far behind on updates I've been with this software! After a few comments, I have written a Python script to extract 8 images, with some additional controls for optimization (a sketch of what such a script does follows this thread).
      3. For getting 8 cubemapped images, I'm going off what I have tested in the past and what works best. Using just the front, back, left, right, up, and down images does not yield a great result.

    • @JustThomas1 · 1 year ago

      @thenerfguru Thank you very much for the clarification.

    • @jtogle · 1 year ago +1

      @thenerfguru I have an Insta360 Pro II also and would like to try your workflow out! Other than dealing with a bowling ball on a pole overhead, does the workflow for a Pro II differ from this video?

    • @panonesia · 9 months ago

      @thenerfguru If you don't mind, can you please share your Python script to extract the 8 images?
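
For anyone curious, this is roughly what such a splitting script has to do. A minimal sketch with numpy and OpenCV, not the author's actual code; the 90-degree FOV, 1200 px output size, and eight 45-degree-spaced yaws are illustrative assumptions:

    import cv2
    import numpy as np

    def equirect_to_perspective(equi, yaw_deg, pitch_deg=0.0, fov_deg=90.0, size=1200):
        h, w = equi.shape[:2]
        # Pinhole focal length for the requested FOV and output size.
        f = 0.5 * size / np.tan(np.radians(fov_deg) / 2.0)
        # Ray direction for every output pixel (camera looks down +z).
        xs, ys = np.meshgrid(np.arange(size) - size / 2.0,
                             np.arange(size) - size / 2.0)
        dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
        dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
        # Rotate the rays: pitch about x, then yaw about the vertical axis.
        yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(pitch), -np.sin(pitch)],
                       [0, np.sin(pitch), np.cos(pitch)]])
        Ry = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                       [0, 1, 0],
                       [-np.sin(yaw), 0, np.cos(yaw)]])
        dirs = dirs @ (Ry @ Rx).T
        # Ray direction -> longitude/latitude -> source pixel in the pano.
        lon = np.arctan2(dirs[..., 0], dirs[..., 2])
        lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))
        map_x = ((lon / np.pi + 1.0) / 2.0 * w).astype(np.float32)
        map_y = ((lat / (np.pi / 2.0) + 1.0) / 2.0 * h).astype(np.float32)
        return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR)

    equi = cv2.imread("frame_0001.jpg")
    for i in range(8):  # 8 views, 45 degrees apart around the horizon
        cv2.imwrite("view_%d.jpg" % i, equirect_to_perspective(equi, yaw_deg=i * 45.0))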

  • @chithanhle3404 · 1 month ago

    Hi, thank you for your awesome work. Have you ever tried using 360 video of an indoor environment for Gaussian Splatting, and is the output quality OK?

  • @deniaq1843 · 2 months ago

    Dear Jonathan, I have a question. When you cut the pano images into cube maps with Meshroom, they come out 1200x1200 with your line of code. I wonder if there is a formula to calculate the maximum useful size for a given input pano. I, for example, will be able to use an 8K 360 camera soon, and I wonder what the ideal cube map size would be for that input material. Do you have any idea how to calculate or figure this out, or is it simply trial and error? Thanks :-)
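
A back-of-envelope answer (my own rule of thumb, not from the video): a crop with horizontal FOV f covers roughly f/360 of the equirectangular width, so that fraction of the pano width is about the largest face size you can pick without upsampling:

    def max_face_px(pano_width_px, fov_deg=90.0):
        # A fov_deg-wide crop spans fov_deg/360 of the pano's width.
        return int(pano_width_px * fov_deg / 360.0)

    print(max_face_px(7680))  # 8K pano, 90-degree faces -> 1920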

  • @frankricardocarrillo1094 · 1 year ago +2

    Hello Jonathan, I already have a NeRF and a Gaussian Splatting of the same scene, and I would like to make a video comparison to show how much better the GS is. Any recommendations on how to do it?
    Thanks

    • @thenerfguru · 1 year ago +1

      You bet! You can either manually resize all of your photos ahead of time, or, when you prep the images, it should make half-, quarter-, and eighth-scale versions.
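
For reference, in the graphdeco-inria/gaussian-splatting repo those scaled copies come from the --resize flag on convert.py, which writes images_2/, images_4/, and images_8/ next to images/; train.py can then train at a reduced scale with -r. This is from memory of that repo, so double-check its README:

    import subprocess

    # Undistort and write half/quarter/eighth-scale copies of the inputs.
    subprocess.run(["python", "convert.py", "-s", "scene_dir", "--resize"], check=True)
    # Train at half resolution.
    subprocess.run(["python", "train.py", "-s", "scene_dir", "-r", "2"], check=True)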

  • @DanyDinho91 · 1 year ago +1

    Hi, thanks for these tutorials. Is it possible to export the point clouds or a 3D model of these results? Thanks

  • @AD34534 · 1 year ago +1

    This is freaking amazing!

  • @JWPanimation · 5 months ago

    Thanks for posting! Would it have been better to shoot stills every 5 ft or 2 m with the One RS 1-Inch? As per your suggestion, would a higher-elevation pass walking one way and then a lower-elevation pass going back the other way be ideal?

  • @vassilisseferidis · 1 year ago

    Great video Jonathan. Thank you. Have you tried any footage with the Insta360 Pro to compare the results with the One RS 1-inch?

    • @thenerfguru · 1 year ago

      I would like that too! Do you mean the X3? If only Insta360 could send me a loaner :)

    • @vassilisseferidis · 1 year ago

      Hi Jonathan,
      When I follow your workflow, the quality of the generated Gaussian Splatting looks good only if you follow exactly the same path as the original (recording) camera.
      In your video you show the 6-camera Insta360 Pro model. Have you tried to create a Gaussian Splatting using that camera? I'd expect the higher resolution to produce better results(?).
      Keep up your excellent work.

  • @kawishraj3558 · 2 months ago

    Is it possible for you to share the 360 video for practice? I haven't been able to find good 360 videos to try Gaussian Splatting on. I have tried it successfully on a lot of 2D videos, but just can't seem to find a good 360 one. Thanks for the beginner guide; it was really helpful.

  • @loganliu1573 · 1 year ago

    Sorry, I am using machine-translated English. I hope it is understandable.
    Thank you very much for your video. I have learned a lot.
    Here is a small question: in a video I filmed, I created a .ply model and found that the ground was missing. I then filmed a video of the ground and created another .ply model.
    So how can we merge these two .ply models into a complete one? If I can merge them, then I can shoot more videos in segments, making a scene complete without dead corners.

  • @KeyPointProductionsVA · 1 year ago +2

    I'm still having issues just getting my computer to run Python and such so I can start making NeRFs. But I have a drone with 360 camera attachments I would love to start using for this.

    • @thenerfguru · 1 year ago

      What's happening with Python? Is it not added to your PATH?

    • @KeyPointProductionsVA · 1 year ago

      @thenerfguru I am not sure why it wasn't working on my C: drive, as that is where my OS is, but I put it on an old OS drive and now Python is working just fine. Technology, it's weird sometimes 😆

  • @christianfeldmannofficial · 2 months ago

    Hello, do you think it would be possible to create a complete race track and then map it using Blender etc.?

  • @narendramall85 · 1 year ago +2

    How can I download the 3D environment as a .glb or other file format?

    • @thenerfguru · 1 year ago

      Not possible with this current project. However, this workflow will get you okay results with software like Reality Capture or Object Capture.

  • @GooseMcdonald · 1 year ago +3

    Do you know a way to take a point cloud, e.g. some Leica scans, and use that point cloud for 3D Gaussian Splatting?

    • @thenerfguru · 1 year ago

      You need source images. I am not the most well versed in Leica solutions. Do you get both a point cloud and images from a scan station?

    • @RelicRenditions · 1 year ago

      I know that for the Leica BLK2GO, it captures both the LiDAR scans and the 360 panorama stills as you go. In the Leica SW, you can export the images every x feet that you want. The devices use both the laser and the RGB sensor to do SLAM as you move.

  • @EconaelGaming · 1 year ago +1

    Why do you split the images with Meshroom? Can't COLMAP deal with fisheye lenses?

    • @thenerfguru · 1 year ago

      That's a good question. Give it a shot. I bet you'll have a fun time with COLMAP 🙃. Also, I'm not sure how to export native fisheye images from this camera. I can do it with my Insta360 Pro II, but I still prefer using my own dewarp calibration.

  • @Povilaz · 1 year ago +1

    Very interesting!

  • @pixxelpusher · 1 year ago +2

    Asked on a previous video, but wondering if you'd know how to view these in VR?

    • @thenerfguru · 1 year ago +1

      My next video will be how to view these in Unity. I'm no Unity expert, but I think you can do it in there.

    • @pixxelpusher · 1 year ago

      @thenerfguru Sounds great, looking forward to it.

  • @monstercameron · 1 year ago +1

    I'm gonna give it a try

    • @thenerfguru · 1 year ago

      Comment if you get stuck! I was literally losing my voice while making the video. 😅

    • @monstercameron · 1 year ago

      @thenerfguru Well, I ran it last night and it was segfaulting. I think it's my CUDA toolkit version, I hope. Thanks for sharing; I'll reference your videos for help.

    • @thenerfguru · 1 year ago

      This also works for NeRFs and photogrammetry!

  • @animax-yz · 8 months ago +1

    What would be the result if you didn't move back or turn around while capturing the video? I tried to create a NeRF after capturing a video inside a room, moving from one end to the other, but it didn't work out. Why is that happening?

    • @thenerfguru · 8 months ago

      Was it a 360 camera? Rooms can be tough if the walls are bare. You end up with cubemapped images without unique features.

  • @lolo2k · 1 year ago +1

    I have worked with the Insta360 One RS 1-Inch, and it is not worth the price tag! The bigger sensor is great for low light and higher dynamic range, but this model has a few drawbacks: 1) the high price, 2) the flare seen in this video. I suggest buying a QooCam 3 at a much lower price and with better specs. It just released and will be on shelves soon.

    • @thenerfguru · 1 year ago

      That sun flare issue is terrible!

    • @RelicRenditions · 1 year ago

      I have been using the Insta360 One RS 1-Inch, the Insta360 X3, and the iPhone 13 Pro. All three have their place in captures. The higher resolution and larger sensor on the 1-Inch are great, but I really find the in-camera, real-time HDR video of the X3 helpful in outdoor scenes. If you can keep your subject in front of you as you orbit, even an older iPhone XR is worlds better than the Insta360s. If you need to get in somewhere tight, like a smaller building, out come the 360s. The 13 Pro has much better low-light and higher-pixel-density captures than either, if you can orbit your subject. This is especially true now that they added shooting in RAW as an option on the Pro phones. Keep capturing!!

    • @RelicRenditions · 1 year ago

      @thenerfguru Indeed. You already know this, but for others on here: try to walk in the shadows like a thief in Skyrim. You can often pull up a map of your target area, evaluate when you will be there to do the capture, and try to stay in the shade as the sun moves during the day. This is a little easier in towns and cities, since you can use the buildings' shadows. Sometimes you just need to sidestep a foot to the right or left and it makes all the difference. Not always an option, but it can help. You can also tape a piece of paper to the camera on the side with the sun (just wide enough) so it will keep the sun off the lens. You will lose some degrees of capture on the side with the sun, but what you do capture will be glare-free. Might be a fair trade.

  • @joselondono · 5 months ago

    Is there an option within the AliceVision command to also include the upward view?

  • @underbelly69 · 1 year ago +1

    Outstanding! See you in the next one.

  • @melkorvalar7645 · 1 year ago +1

    You're great!

  • @hangdu4417 · 1 year ago

    Can I measure relative widths in the Gaussian result? Which software do you suggest? Thank you!

  • @Instant_Nerf · 1 year ago

    It seems like the algorithm has a hard time with more data. Normally you go around something and have a small area to look at with a NeRF or Gaussian splat. But how do you combine data for a larger scene? You go around something, then expand by getting more footage and try to combine all the data so you have more to look at, or just create a larger scene. It seems to have problems with that. Any thoughts?

  • @JaanTalvet · 10 months ago

    You mentioned around the 15 min mark that you could have gone back over the scene again. Would that significantly increase processing time, but also significantly improve image quality (removing floaters, blur, etc.)?

    • @thenerfguru · 10 months ago +1

      It probably wouldn't make training time too much longer. However, it would reduce floaters and bad views. You basically would have a greater degree of freedom.

  • @alvydasjokubauskas2587 · 1 year ago +1

    How can you remove your head or body from all this?

    • @Thats_Cool_Jack · 1 year ago +2

      When recording the video, always have your body at the end of the camera stick, and turn off horizon stabilization.

  • @LuisGustavoJulio · 9 days ago

    Can I use this with three.js?

  • @hyunjincho5972 · 8 months ago +1

    Can I ask what GPU spec you used to build the Gaussian Splatting model? Thanks

    • @thenerfguru · 8 months ago

      I am using an RTX 3090 Ti.

  • @S41L0R · 1 year ago +2

    How long did this take to convert and train for you?

    • @thenerfguru · 1 year ago

      It really depends. Convert usually takes around 5-20 minutes depending on the scene; it could take longer with a lot of images. Training takes 30-45 minutes.

    • @S41L0R · 1 year ago

      @thenerfguru Hm, that's weird. Small videos I've done have taken hours and hours just to convert. Maybe I missed this in your tutorial video, but do I need to capture at a lower res?

    • @thenerfguru · 1 year ago

      Perhaps. Maybe use fewer total images in the end. Set the fps to 1 or 0.5 (see the example after this thread).

    • @S41L0R · 1 year ago

      @thenerfguru Ohh, OK. I've always done 30 fps.
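
For reference, pulling frames at a low rate looks something like this (a sketch; the file names are placeholders). Extracting 1 frame per second instead of all 30 cuts the image count 30x before COLMAP ever runs:

    import os
    import subprocess

    os.makedirs("frames", exist_ok=True)
    subprocess.run([
        "ffmpeg", "-i", "walk_360.mp4",
        "-vf", "fps=1",  # 1 frame per second; fps=0.5 for one every 2 s
        "frames/frame_%04d.jpg",
    ], check=True)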

  • @R.Akerblad · 10 months ago

    Looks well made 💪, but a bit unnecessary ;)
    I usually use a long screw, 40 mm: screw it 20 mm into the corner and stick the magnet to it. Completely hidden by the sensor 🤙

  • @Legnog822 · 1 year ago

    It would be nice if tools like this could eventually take 360 photos as input natively.

    • @thenerfguru · 1 year ago

      You could batch it and not have to deal with the different steps.

    • @foolishonboards · 8 months ago

      Apparently Luma AI allows you to do that via their cloud service.

  • @martondemeter4203 · 11 months ago

    Hi!
    What are the exact convert.py parameters you run for the 360 video?
    I tried with mine. I shoot with an Insta360 X3: good, slow recording, 4K equirects. I do exactly as you show, and COLMAP only finds 3-6 images... :S

    • @thenerfguru · 11 months ago

      Do you have plenty of parallax in the scene? If all of the objects are far away, there is not enough parallax and this can happen.

  • @kachuncheng-s1v · 1 year ago +1

    Thank you very much~!!

  • @tribaltheadventurer · 1 year ago

    Is anyone getting a "This app can't run on your PC, check software publisher" error, even though this has worked before?

  • @sashachechelnitsky1194 · 1 year ago +1

    @thenerfguru I wonder if, using this method, you can create stereoscopic 3D Gaussian Splatting using a VR180 camera? I have footage I can provide for testing purposes.

    • @thenerfguru · 1 year ago

      Interesting. My next video will be how to display this all in Unity. I bet it can be accomplished in there.

    • @sashachechelnitsky1194 · 1 year ago

      @thenerfguru Rad! I'll be on the lookout for that video. Keep crushing it, man.

  • @kyle.deisgn4626 · 8 months ago

    Hi, I went through the convert.py process, but "Mapper failed with code" showed up after hours of processing. 😢

  • @liquidmasl · 1 year ago +1

    It would be awesome if it could just process 360 pictures directly to get it all.

    • @thenerfguru · 1 year ago +2

      This all could be batch scripted so you don't have to go through the steps one by one.

  • @lucho3612 · 1 year ago

    Fantastic technique!

  • @tribaltheadventurer · 1 year ago

    Thank you so much

  • @briancunning423 · 1 year ago +1

    Would this work using Google Street View 360 images?

    • @thenerfguru · 1 year ago

      I have not tried it. Can you get a clean image extract from Google?

    • @briancunning423 · 1 year ago

      Yes, there is a way you can download and view them. I took 1080x1920 stills and fed them into photogrammetry software, but the result was a sphere with the image projected onto it.

  • @Moctop · 1 year ago +1

    Feed in all the Street View data from Google Maps.

    • @thenerfguru · 1 year ago

      I don't know how to scrape all of the Street View data, but yes, that would technically work.

  • @27klickslegend · 11 months ago

    Hi, do I need GPS data in my photos for this? The QooCam 3 can only do this by pairing with my phone.

    • @thenerfguru · 11 months ago

      You do not need GPS data.

  • @Aero3D · 9 months ago

    OK, so I bought one and tried this, and my resulting GS seemed to be as if it were a single frame: a tiny section of the total recorded space. Any ideas why this may happen? I might be doing something wrong; this is my first attempt ever.
    I have all my 360 frames. I split them with ffmpeg and can see all the split frames, and I put them into the "input" folder of my COLMAP root. But after it's done, I see only 3 images in the COLMAP "images" folder, and that is the spot I see in my GS. It only processed 3 of the 4600 images.

    • @thenerfguru · 9 months ago

      Are you attempting to work with the equirectangular images or splitting them with Meshroom?

    • @Aero3D · 9 months ago

      @thenerfguru Splitting them with Meshroom.

    • @Aero3D · 9 months ago

      I tried with an all-new dataset and got the same result. I must be missing something.

    • @方川-g8z · 3 months ago

      Have you solved your problem?

    • @Aero3D · 3 months ago

      @user-kd2uw1oy1d The entire splat needs to be handled in less than the entirety of your VRAM; that was the issue. I bought an XGRIDS K1 scanner. Boom, problem solved, insane quality.

  • @felixgeen6543 · 1 year ago

    Does anyone know how to use equirectangular images without breaking them into separate FOVs? That would seem the best use of the data.

    • @thenerfguru · 1 year ago

      Perhaps your best bet is to try Nerfstudio's 360 image supported training. Then, convert it to 3D Gaussian Splatting format. I don't have a tutorial for this though.
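
If anyone tries that route, the entry point is ns-process-data; as far as I recall, Nerfstudio added an equirectangular camera type that slices the pano into perspective views for you. Flags are from memory, so verify against ns-process-data --help:

    import subprocess

    subprocess.run([
        "ns-process-data", "video",
        "--data", "walk_360.mp4",
        "--output-dir", "processed/",
        "--camera-type", "equirectangular",
        "--images-per-equirect", "8",   # perspective crops per pano frame
        "--num-frames-target", "300",
    ], check=True)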

  • @wrillywonka1320 · 11 months ago

    So after we get a Gaussian splat, where can we even use it? No Adobe programs can run them, DaVinci can't, Blender does it very poorly, UE5 costs $100; I think maybe Unity is the only program that can use a Gaussian splat. They are awesome, but it's like having 8K video when YouTube only plays 1080p. Where can I actually use these splats to make a cool video?

    • @thenerfguru · 10 months ago +1

      I believe UE5 has some free options now!

    • @wrillywonka1320 · 10 months ago

      @thenerfguru Thanks!

  • @spaceghostcqc2137 · 1 year ago +1

    Can you multicam NeRFs and splats?

    • @thenerfguru · 1 year ago

      Do you mean record with multiple cameras at once? Could be achieved if all of the cameras were the same model/lens

    • @spaceghostcqc2137 · 1 year ago

      @thenerfguru Thank you. I'm picturing two 360 cameras: perhaps one on a stick for sweeping around and one on a pole sticking up from a backpack? Or two at different heights on a walking stick. Do you have any guesses as to how two Insta360 X3s used like that would do vs a single One RS 360 edition? I'm also imagining a frame holding 3 of them for quick one-pass scanning of cooperative humans.

  • @panonesia · 9 months ago

    Can you set a custom FOV? I would like to add more of the top part to the exported frame.

    • @thenerfguru · 9 months ago

      Maybe. I have not looked into the Python scripts provided by Meshroom. However, you may be able to modify them.

  • @allether5377 · 1 year ago

    Oh nice!

  • @XiaoyuXue-xw9wf · 1 year ago

    What's the camera name?

  • @CristianSanz520 · 1 year ago +1

    Is it possible to extract a point cloud?

    • @thenerfguru · 1 year ago +1

      Not currently. I wouldn't be surprised if a new project comes out where geometry is exportable. I've seen a paper on it and demo code, but it's not usable today.

  • @hasszhao · 1 year ago +1

    WHAT KIND OF CAMERA?

    • @thenerfguru · 1 year ago

      In this video I used an Insta360 One RS 1-Inch Edition.

    • @hasszhao · 1 year ago

      @thenerfguru Thanks, dude.

    • @hasszhao · 1 year ago

      @thenerfguru Hey, I got the same device and wanted to try reproducing what you did, but I could only generate an almost-single-frame result after rendering, even though aliceVision_utils_split360Images produced a lot of subimages. I checked the resulting "output" directory, and actually only a few images were used.
      Do you have any idea about the problem I had?

  • @lodewijkluijt5793 · 1 year ago

    I just tried a dataset of 1456 images (1200x1200) and my 24 GB of VRAM wasn't large enough; going for 728 (half) now to be safe.

    • @lodewijkluijt5793 · 1 year ago

      727 of the 728 images linked, and it uses around 18 GB of dedicated VRAM.

    • @foolishonboards · 8 months ago

      @lodewijkluijt5793 How does the model look?