The NeRF Guru
100x Your Image Prep Speed for Gaussian Splatting Using GLOMAP
In this video I show you how to install and run GLOMAP, a general-purpose global structure-from-motion pipeline for image-based reconstruction. GLOMAP is essentially a novel way to run sparse 3D reconstruction one to two orders of magnitude faster.
Link to GitHub Project: github.com/jonstephens85/glomap_windows
I show you:
How to Install COLMAP and GLOMAP
How to manually run GLOMAP
How to run my script that automates the process and prepares your data for using either 3DGS or Nerfstudio.
Tutorial Timeline
00:00 Intro
01:23 Installing COLMAP
04:52 Installing GLOMAP
06:25 Running GLOMAP Manually
14:00 Running GLOMAP with automated scripts for 3DGS
19:25 Running GLOMAP with automated scripts for Nerfstudio
24:27 Running GLOMAP with image intervals
If you run into bugs, PLEASE comment here or in the project.
If you enjoyed this video, please give my channel a follow.
Thanks to Dell for providing a Precision 3680 workstation for making this content.
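For reference, the manual steps from the 06:25 section of the video boil down to three commands: COLMAP feature extraction, COLMAP matching, then the GLOMAP mapper. Below is a minimal Python sketch of that sequence; the paths, function name, and dry-run behavior are my own illustration, not the project's automation script:

```python
import subprocess

def glomap_pipeline(image_dir, workspace, run=False):
    """Build (and optionally execute) the three-step reconstruction:
    COLMAP feature extraction, COLMAP matching, then GLOMAP mapping."""
    db = f"{workspace}/database.db"
    cmds = [
        ["colmap", "feature_extractor",
         "--database_path", db, "--image_path", image_dir],
        ["colmap", "exhaustive_matcher",
         "--database_path", db],
        ["glomap", "mapper",
         "--database_path", db, "--image_path", image_dir,
         "--output_path", f"{workspace}/sparse"],
    ]
    if run:  # only invoke the binaries when explicitly requested
        for cmd in cmds:
            subprocess.run(cmd, check=True)
    return cmds

# Dry run: print the commands without needing COLMAP/GLOMAP installed.
for cmd in glomap_pipeline("data/images", "data/workspace"):
    print(" ".join(cmd))
```

The GLOMAP mapper replaces COLMAP's incremental `mapper` step, which is where the speedup comes from.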
Views: 5,101

Videos

Edit, Measure, and Render Videos with Polycam's Gaussian Splat Update!
Views: 2.9K · 8 months ago
In this video, I review the latest updates to Polycam's gaussian splat viewer. They added cropping, measuring, rescaling, and rendering videos! Note, I did accidentally leave out that you can rotate your model which is very important for those random results that are tilted. You can view Florence, Italy splat from Polycam at: poly.cam/capture/da017733-540a-4acb-a353-30b73889dced Please follow m...
Nerfstudio Gaussian Splat Models Compared
Views: 3K · 8 months ago
In this video, I compare Nerfstudio's Splatfacto model versus their new Splatfacto-Big model to compare the differences in quality. Tell me what you think of the two different results! For background reference, I used a 4k video from a DJI Mavic 3 Classic for training. I processed both examples with Nerfstudio's defaults, no control over downscaling, etc. Please follow my channel for advanced t...
How to Create Gaussian Splats with Nerfstudio
Views: 16K · 10 months ago
In this video, I walk you through how to create gaussian splats in Nerfstudio. The code has been soft-launched and this is the unofficial guide. I show you how to create a gaussian splatting scene using demo data and then show you how to use your own data. If you have not installed Nerfstudio since late December 2023, you will need to install or re-install. I made a walk-through video for you: ...
How to Install Nerfstudio (2024)
Views: 12K · 10 months ago
In this video, I walk you through how to install Nerfstudio for Windows. This guide is meant for beginners and advanced users. If you have installed Nerfstudio in the past and want to get the latest updates for gaussian-splatting and the latest Nerf models, this will be a good refresher. For newbies, I do my best to make it easy to copy and paste codes to follow along. If you run into any issue...
What is Better? Polycam Gaussian Splats vs Original Gaussian Splats
Views: 12K · 1 year ago
In this video, I compare Polycam's new Gaussian Splatting output against the original 3D Gaussian Splatting for Real-Time Radiance Fields project. Both options have pros and cons. My goal is that this informs you better on what version to use. Show notes: 00:00 Intro 01:11 Fury Cat Scene from iPhone 04:22 Outdoor Scene from Drone 08:58 Small Bridge from iPhone 11:24 Pros of OG Gaussian Splats 1...
How to View 3D Gaussian Splatting Scenes in Unity
Views: 25K · 1 year ago
In this tutorial, I show you how to import 3D Gaussian Splatting scenes into Unity and view them in real time. From there, you can add post processing, effects, VR support, or wherever your imagination and skill take you. The first part of the video covers how to import the 3D Gaussian Splatting model. In the second half of the video, I show you how to add post processing effects like color gr...
How to Use 360 Video for 3D Gaussian Splatting (and NeRFs!)
Views: 27K · 1 year ago
In this video, I show you how to take 360 video or images and use them to train a 3D Gaussian Splatting scene. This is an absolute beginner's guide and is the easiest way to get started. However, it is not the BEST way to do this. I will make a second video in the future that is more involved. You will need the 2021 version of Meshroom. Download it here: www.fosshub.com/Meshroom-old.html?dwl=Me...
How to Use The Nerfstudio Viewer With 3D Gaussian Splatting
Views: 17K · 1 year ago
In this video I show you how to set up the Nerfstudio viewer to work with trained 3D Gaussian Splatting models. You can create camera animations and render flythroughs in seconds! Below are the necessary links: Nerfstudio / 3DGS code fork: github.com/yzslab/nerfstudio/tree/gaussian_splatting My How to use 3D Gaussian Splatting Fork: github.com/jonstephens85/gaussian-splatting-Windows The comman...
Does training 3D Gaussian Splats Longer Make a Difference?
Views: 25K · 1 year ago
In this video, I compare 3 different scenes trained at 7k iterations and 30k iterations. I also show you how to launch different checkpoints in the viewer and resize the rendering window. What do you think? Can you see the differences? Check out the show notes to jump between the different scenes: 00:00 Intro 00:20 How to view different training checkpoints 01:57 Screening Plant 7k iterations ...
How to Use Nerfstudio Data to Make 3D Gaussian Splats
Views: 7K · 1 year ago
In this video I show you how to convert images that you prepared in Nerfstudio's COLMAP format into usable files to train a 3D Gaussian Splat scene. This will save you time for larger scenes that took a long time to process with COLMAP. Find the GitHub repo here: github.com/jonstephens85/gaussian-splatting-Windows
Getting Started With 3D Gaussian Splatting for Windows (Beginner Tutorial)
Views: 191K · 1 year ago
In this video, I walk you through how to install 3D Gaussian Splatting for Real-Time Radiance Field Rendering. I also walk you through how to make your own scenes with 3D Gaussian Splats. You do not need any prior programming or command prompt experience. See below for links to the modified repository I reference in the video as well as helpful text links that you will use in the video. Link to...
The First Selfie Using 3D Gaussian Splats?
Views: 8K · 1 year ago
In this video I show off the detail captured on a highly complex scene...with me in the middle of it! The quality beats anything I have produced with Instant NeRF or Nerfstudio. Learn more about 3D Gaussian Splatting for Real-Time Radiance Field Rendering here: repo-sam.inria.fr/fungraph/3d-gaussian-splatting/
3D Gaussian Splatting First Impressions
Views: 93K · 1 year ago
In this video, I give you my first impressions on using 3D Gaussian Splatting for Real-Time Radiance Field Rendering. I used 289 images taken from a DJI Mavic 3 Enterprise flown in a helix pattern around a telecommunications tower. Processing Stats: Image preparation - 19 minutes Training to 30,000 iterations - 48 minutes 3D Gaussian Splatting for Real-Time Radiance Field Rendering Info: Project...
Movement in NeRFs
Views: 1.6K · 1 year ago
Movement in NeRFs

COMMENTS

  • @LuisGustavoJulio
    @LuisGustavoJulio 5 days ago

    Can I use this with three.js?

  • @jaredthebrown
    @jaredthebrown 5 days ago

    I saw the term Gaussian Splatt and looked it up. I came to this video and watched a few minutes of it. I understand that this chap is speaking English - I recognise some of the words - but honestly, on the whole it sounds like a sequence of random words "git, splat, hub, fork, repo". Great stuff!

  • @LuisGustavoJulio
    @LuisGustavoJulio 9 days ago

    Has anyone encountered this error?

        splatting>python train.py -s images/input
        Optimizing
        Output folder: ./output/be20adee-d [13/11 10:03:31]
        Tensorboard not available: not logging progress [13/11 10:03:31]
        Traceback (most recent call last):
          File "train.py", line 282, in <module>
            training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
          File "train.py", line 51, in training
            scene = Scene(dataset, gaussians)
          File "D:\Library\gaussian-splatting\scene\__init__.py", line 49, in __init__
            assert False, "Could not recognize scene type!"
        AssertionError: Could not recognize scene type!

  • @刘智宇-v1t
    @刘智宇-v1t 12 days ago

    Nice display, thank you.

  • @maxistm.711
    @maxistm.711 20 days ago

    I really enjoy your videos! Keep up the good work my friend.

  • @Krewov
    @Krewov 25 days ago

    For those who have problems with the right CUDA version, my advice is to downgrade your Visual Studio to 2019, since NOT all versions of Visual Studio 2022 have a supported _MSC_VER for CUDA 12.x or below

  • @wonglaihim4864
    @wonglaihim4864 25 days ago

    awesome and thanks

  • @mn04147
    @mn04147 1 month ago

    Can you make a tutorial about making dynamic gaussian splatting?

  • @josephdevelop3112
    @josephdevelop3112 1 month ago

    I get this while "Installing pip dependencies":

        Pip subprocess error:
        ERROR: Directory 'submodules/diff-gaussian-rasterization' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
        failed
        CondaEnvException: Pip failed

  • @jimj2683
    @jimj2683 1 month ago

    There is so much depth information with the parallax effect and lighting/shadows.

  • @SoftYoda
    @SoftYoda 1 month ago

    Do you also sometimes get the pinhole bug with GLOMAP when trying to train Gaussian splats?

    • @thenerfguru
      @thenerfguru 1 month ago

      Pinhole bug? Can you elaborate?

    • @SoftYoda
      @SoftYoda 1 month ago

      @@thenerfguru Finally, I fixed it; it was the COLMAP part when creating the database for points.

  • @chithanhle3404
    @chithanhle3404 1 month ago

    Hi, thank you for your awesome work. I want to know if you ever tried using 360 video of an indoor environment for Gaussian Splatting, and is it OK in terms of output quality?

  • @tarekabdelkader7047
    @tarekabdelkader7047 1 month ago

    Hi, I have a problem during installing the optimizer. When I run the second command (conda env create --file environment.yml) it starts OK but then gives an error while installing the pip dependencies, like below (any help is appreciated):

        Pip subprocess output:
        Processing c:\users\tarek\gaussian-splatting\submodules\diff-gaussian-rasterization
          Preparing metadata (setup.py): started
          Preparing metadata (setup.py): finished with status 'done'
        Processing c:\users\tarek\gaussian-splatting\submodules\simple-knn
          Preparing metadata (setup.py): started
          Preparing metadata (setup.py): finished with status 'done'
        Building wheels for collected packages: diff_gaussian_rasterization, simple_knn
          Building wheel for diff_gaussian_rasterization (setup.py): started
          Building wheel for diff_gaussian_rasterization (setup.py): finished with status 'error'
          Running setup.py clean for diff_gaussian_rasterization
          Building wheel for simple_knn (setup.py): started
          Building wheel for simple_knn (setup.py): finished with status 'error'
          Running setup.py clean for simple_knn
        Failed to build diff_gaussian_rasterization simple_knn
        Installing collected packages: simple_knn, diff_gaussian_rasterization
          Running setup.py install for simple_knn: started
          Running setup.py install for simple_knn: finished with status 'error'
        Pip subprocess error:
        error: subprocess-exited-with-error
        × python setup.py bdist_wheel did not run successfully.
        │ exit code: 1
        ╰─> [22 lines of output]

  • @bluebottle7835
    @bluebottle7835 1 month ago

    Thanks for the video. One quick question though: why is it that I can see the gaussian splat in the viewport but not in the main camera? In other words, Unity does not render the gaussian splat in any camera except the viewport. If you know, please share. Thanks

  • @GaidaCanix
    @GaidaCanix 1 month ago

    Heya, I've followed your tutorial until I had to run the command convert.py -s data/Videos/input. The directory exists; I'm just not sure what "Invalid options" I provided that caused the python command to fail.

        E20240928 01:38:44.370547 19964 logging.cc:56] [option_manager.cc:811] Check failed: ExistsDir(*image_path)
        E20240928 01:38:44.371256 19964 option_manager.cc:877] Invalid options provided.
        ERROR:root:Feature extraction failed with code 1. Exiting.

    • @GaidaCanix
      @GaidaCanix 1 month ago

      I've figured it out: there is a specific folder structure: input_data/subject_name/input/images####.jpg. You have to follow that exactly, or there will be directory problems.

  • @hewas321
    @hewas321 1 month ago

    Heyya! Thanks for providing such a great tutorial video. I wonder, do you have a video for macOS?

  • @deniaq1843
    @deniaq1843 1 month ago

    Dear Jonathan, I have a question. When you cut the pano images into cube maps with Meshroom, they will have the size 1200×1200 with your line of code. I wonder if there is a formula with which one can calculate the maximum possible size for the input pano. I, for example, will be able to use an 8K 360 camera soon, and I wonder what the ideal cube map size would be for the corresponding input material. Do you have any idea how to calculate or figure this out? Or is it a simple trial-and-error process? Thanks :-)

  • @21-CSE-29BHUSHANSINGH
    @21-CSE-29BHUSHANSINGH 2 months ago

    Please help, I get this:

        Building wheel for simple_knn (setup.py): started
        Building wheel for simple_knn (setup.py): finished with status 'error'
        Running setup.py clean for simple_knn
        Failed to build diff_gaussian_rasterization simple_knn
        Installing collected packages: simple_knn, diff_gaussian_rasterization
        Running setup.py install for simple_knn: started
        Running setup.py install for simple_knn: finished with status 'error'
        Pip subprocess error:
        error: subprocess-exited-with-error

  • @ISOH9
    @ISOH9 2 months ago

    Hello there, I recently followed your instructions, but I encountered an issue with Nerfstudio. It requires a YML file using the "--load-config" option, but as you know, the 3DGS process doesn't produce a YML file. As a result, it's difficult to follow along with your video now. Is there any way to open the 3DGS PLY file using Nerfstudio? I'm quite new to this field, and thank you for your great video regardless!

  • @LePetitBat
    @LePetitBat 2 months ago

    Can this be used with different feature extractors?

    • @thenerfguru
      @thenerfguru 1 month ago

      Not sure. Haven’t tried.

  • @kawishraj3558
    @kawishraj3558 2 months ago

    Is it possible for you to share the 360 video for practice? I haven't been able to find good 360 videos to try gaussian splatting on. I have tried it successfully on a lot of 2D videos but just can't seem to find a good 360 one. Thanks for the beginner guide, it was really helpful

  • @changeair-d3q
    @changeair-d3q 2 months ago

    Thank you! Well, could I use it to get a "transform_json" file? For example, for instant-ngp training

  • @dharmasai0
    @dharmasai0 2 months ago

    does it support dynamic scenes ?

  • @vladan.Poison
    @vladan.Poison 2 months ago

    Nice. Can anything that gets produced via the "dark magic" you've shown be used with Post Shot?

  • @prateeksrivastava9123
    @prateeksrivastava9123 2 months ago

    I always get the error "Building wheel for diff_gaussian_rasterization (setup.py): finished with status 'error'" while running conda env create --file environment.yml

  • @のりしお-p9d
    @のりしお-p9d 2 months ago

    Special thanks from Japan!!!!! Arigato

  • @wpftutorial
    @wpftutorial 2 months ago

    Thank you so much for sharing this stuff. Very powerful! I wish you would make a video describing how to get great-quality gaussian splatting for a room. Tips like what kind of videos or what kind of pictures. "How to take good-quality source material for great splats" or something like that. Thanks!

    • @thenerfguru
      @thenerfguru 1 month ago

      Sorry for the month long delay in responding! I think this is a great topic. I’ll get around to making a video soon.

  • @AntonioMac3301
    @AntonioMac3301 2 months ago

    hey Jonathan, great tutorial! I am having trouble when trying to run this with nerfstudio, I run the script you have and it works great and then I ran the same ns-process-data command. However, when I started training with ns-train, all the cameras were positioned in the same spot. Checking the transforms.json file showed that all these cameras were positioned at the same spot. Is there some way to fix this? wish you could show that training with nerfstudio part in your video

  • @srikanthhari8667
    @srikanthhari8667 2 months ago

    @thenerfguru You ran environment.yml directly, which installed submodules/diff-gaussian-rasterization and submodules/simple-knn. I am facing issues installing this setup. I ran nvcc --version, which shows the CUDA 11.8 toolkit is installed. Please advise on what changes I need to make; I have tried multiple times and gone through multiple open-source communities. Anyone who has done this successfully, can you answer?

  • @rafall1118
    @rafall1118 2 months ago

    To me the way glomap works is really similar how 3d scanners align scanned frames, or rather correct them after scanning with a global registration ICP algorithm. As shown in the video this will easily work with objects with distinct features, but if you tried it with geometrical shapes or featureless/smooth objects it most likely wouldn't. Still very interesting! Didn't think someone was insane enough to try matching "stereo frames" for photogrammetry but for GS it makes sense I guess as you don't need the mm accuracy. Or perhaps I'm completely wrong, gotta read that paper 😂

    • @thenerfguru
      @thenerfguru 2 months ago

      You are correct, it takes a global camera alignment approach! Also, it still relies on matching features. If you have blank or reflective surfaces you will have a hard time.

  • @christianfeldmannofficial
    @christianfeldmannofficial 2 months ago

    Hello, do you think it would be possible to create a complete race track and then map it using Blender etc.?

  • @sandyatef-l7c
    @sandyatef-l7c 2 months ago

    It took a lot of time and I kept doing the same steps over and over because you didn't speak in a clear way, so sometimes I didn't understand you. In general, I'm not a programmer or anything like that, and I hadn't installed Visual Studio before, but at the same time I didn't watch anyone else because I loved your way. I don't know why. Keep going, you are amazing, honey

  • @RishabhJha-s2p
    @RishabhJha-s2p 2 months ago

    This is really interesting. I'm excited to explore this! By the way, great job, @thenerfguru. I hope you create videos on dynamic scenes as well.😅

    • @thenerfguru
      @thenerfguru 2 months ago

      Like 4D GS?

    • @RishabhJha-s2p
      @RishabhJha-s2p 2 months ago

      @@thenerfguru Yes, especially a multi-view scenes implementation would be nice

  • @AhmedSaiedEissa-b5y
    @AhmedSaiedEissa-b5y 2 months ago

    (( CondaEnvException: Pip failed )) It keeps saying that every time and idk what the problem is >_<

    • @NeroPop
      @NeroPop 2 months ago

      I have the same problem, and what's even more annoying is that it used to work! It broke when I deleted the repo folders on my hard drive and re-installed them. I believe it's due to an update to Visual Studio 2022 meaning that it's no longer compatible with CUDA 11.8, but I could be very wrong, so please let me know if it works or not. I plan to test this myself tomorrow. Good luck!

  • @JordanKock-g9i
    @JordanKock-g9i 2 months ago

    You Rock Jonathan 💪🏽

  • @OlliHuttunen78
    @OlliHuttunen78 2 months ago

    Great tutorial Jonathan! Glomap seems to be very fast. Great work with the python script to make this even faster. Thanks for that. I have to check how this could be implemented with Postshot.

    • @thenerfguru
      @thenerfguru 2 months ago

      I don’t use Postshot often. If you can use COLMAP data with Postshot, this will work too. It’s the same output

    • @OlliHuttunen78
      @OlliHuttunen78 2 months ago

      @@thenerfguru Yeah! I managed to make it work with Postshot. But it seems that it works best with only smaller datasets where you have 120 to 250 images. When you have something bigger, with nearly 1000 images from different takes, it generates only a huge sparse cube and naturally does not lead to any reasonable outcome. It seems that the Glomap method works best for now with easier material where the images come from a single continuous shot. That partly explains why the speed of the process has been optimized to be so fast. But if there are more challenging scans shot with, for example, different cameras and FOVs, this early development phase of Glomap cannot solve them yet.

    • @freddiemercury5424
      @freddiemercury5424 2 months ago

      @@OlliHuttunen78 Thanks to you both for your nice content! 🤘 You already helped me so much. I'm a researcher at HSWT; we try to model complex structures like the crowns of trees. We use FPV and camera drones as well as a camera pole, similar to yours, Olli. Do you have any tips on how to get sharper Gaussian splats?

    • @thenerfguru
      @thenerfguru 2 months ago

      My script is definitely geared towards using one camera model; however, if you run the steps manually and have a solid grasp of COLMAP's functions and modifier flags, you can run huge image sets from multiple cameras. For example, you should have a folder for each camera and then put them all in a common image folder. The top-level image folder is your --image_path folder, and then pass this modifier to the feature extractor: --ImageReader.single_camera_per_folder 1. They call it out in the project too: github.com/colmap/glomap/blob/main/docs/getting_started.md
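A sketch of the per-camera folder layout and the matching feature-extractor call described above; the folder names and paths are illustrative, not from the video:

```python
from pathlib import Path

# Illustrative multi-camera layout: one subfolder per physical camera
# under a common image root, as GLOMAP's getting-started doc suggests.
root = Path("project/images")
for cam in ("camera_a", "camera_b", "camera_c"):  # hypothetical camera names
    (root / cam).mkdir(parents=True, exist_ok=True)

# With this flag, COLMAP assigns a separate camera model to each subfolder:
cmd = ["colmap", "feature_extractor",
       "--database_path", "project/database.db",
       "--image_path", str(root),
       "--ImageReader.single_camera_per_folder", "1"]
print(" ".join(cmd))
```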

    • @thenerfguru
      @thenerfguru 2 months ago

      @freddiemercury5424 Have you tried getting proper camera calibrations for your cameras and undistorting all the images first? Then you should be able to use a simple_pinhole camera for all of the various cameras.
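The undistort-first workflow suggested in the reply above corresponds to COLMAP's image_undistorter step; a sketched invocation, with illustrative paths:

```python
# Sketch: undistort images using an existing COLMAP calibration so that
# downstream tools can assume a plain pinhole camera model.
# All paths below are illustrative.
cmd = ["colmap", "image_undistorter",
       "--image_path", "project/images",        # original (distorted) images
       "--input_path", "project/sparse/0",      # calibrated sparse model
       "--output_path", "project/undistorted"]  # undistorted workspace
print(" ".join(cmd))
```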

  • @shiftsync9988
    @shiftsync9988 2 months ago

    Awesome video. 15:12 @anyone trying this: Be sure to not have whitespaces in your file path as it will not work (as of now)

    • @thenerfguru
      @thenerfguru 2 months ago

      Yea, never white space in your paths. EVER!

  • @trollenz
    @trollenz 2 months ago

    I'm grabbing 🍿 for this one... Thanks !!

  • @자기관리론-g3w
    @자기관리론-g3w 2 months ago

    Hi there, great video! I'm currently using a reality capture pipeline to generate point clouds. I'm curious to know more about how this method compares to traditional reality capture. Could you elaborate on the differences in terms of processing speed and quality of the resulting point cloud?

    • @thenerfguru
      @thenerfguru 2 months ago

      It should be on par with the speed and accuracy of reality capture. However, this is open source and can be built right into your workflows. It just depends on how you want to use the data. I will do a comparison video!

    • @자기관리론-g3w
      @자기관리론-g3w 2 months ago

      @@thenerfguru Thank you. Your videos have been incredibly helpful, and I really appreciate all the great content you share.

    • @thenerfguru
      @thenerfguru 2 months ago

      Thanks!

    • @deniaq1843
      @deniaq1843 2 months ago

      @@thenerfguru The glomap method is way faster than the alignment with RC, isn't it? So how can you say that it is on par with RC? I am curious! :-)

    • @thenerfguru
      @thenerfguru 2 months ago

      @@deniaq1843 I have found when testing many datasets that the speed seems about the same. I have not tried really large datasets. I think that's where GLOMAP may pull ahead.

  • @kiase978
    @kiase978 2 months ago

    you dropped this, 👑

    • @thenerfguru
      @thenerfguru 2 months ago

      Yup! It was a fun evening project. I wished GLOMAP was more robust. Not all datasets are successful.

  • @SamuelZiemian
    @SamuelZiemian 3 months ago

    can you export this into unreal engine ?

  • @jinggao-c9s
    @jinggao-c9s 3 months ago

    C:\Users\Administrator>conda create --name nerfstudio -y python=3.8
    'conda' is not recognized as an internal or external command, operable program or batch file.
    C:\Users\Administrator>

  • @hrishikeshkatkar7641
    @hrishikeshkatkar7641 3 months ago

    Hello there. It's a great video on how to use it, especially along with the installation one. Can we export a mesh from Gaussian splatting? There may be an option in the viewer under the point cloud. I haven't used it yet, so I reviewed your tutorial. Thanks.

  • @_SGTM_
    @_SGTM_ 3 months ago

    What happens if i run it on a gtx 1650???? will my pc explode??

  • @Daexx5
    @Daexx5 3 months ago

    I was here before it went mainstream. It's awesome.

  • @Pananl2three
    @Pananl2three 3 months ago

    Thanks for your tutorial!! Could 3D Gaussian splatting interact with light in the scene, and could it be captured by a reflection probe?

  • @renanmonteirobarbosa8129
    @renanmonteirobarbosa8129 3 months ago

    You need a tutorial building everything from source to run on Linux Distro.

  • @GMax17
    @GMax17 4 months ago

    Amazing effect for static backgrounds

  • @zhuhangwei
    @zhuhangwei 4 months ago

    During the process of using gaussian-splatting-Windows, when the image files originally in the 'input' folder are processed by python convert.py -s and output to the 'images' directory, they reduce in number from dozens to just a few, and these images now display a wide-angle effect. What could be the issue, and how should one go about troubleshooting this?